Compare commits


79 commits
main...dev

Author SHA1 Message Date
Virgil
5e14c79d64 Lock in io helper interfaces
Some checks failed
CI / test (push) Has been cancelled
CI / auto-fix (push) Has been cancelled
CI / auto-merge (push) Has been cancelled
2026-04-03 06:58:49 +00:00
Virgil
c95697e4f5 Sort local listings deterministically
Some checks are pending
CI / test (push) Waiting to run
CI / auto-fix (push) Waiting to run
CI / auto-merge (push) Waiting to run
2026-04-03 06:55:51 +00:00
Virgil
2f186d20ef Align workspace docs with AX examples
Some checks are pending
CI / test (push) Waiting to run
CI / auto-fix (push) Waiting to run
CI / auto-merge (push) Waiting to run
2026-04-03 06:53:25 +00:00
Virgil
c60c4d95f0 docs: add AX examples to memory medium
Some checks are pending
CI / test (push) Waiting to run
CI / auto-fix (push) Waiting to run
CI / auto-merge (push) Waiting to run
2026-04-03 06:49:39 +00:00
Virgil
ef587639cd Refine io memory helpers
Some checks are pending
CI / test (push) Waiting to run
CI / auto-fix (push) Waiting to run
CI / auto-merge (push) Waiting to run
2026-04-03 06:46:19 +00:00
Virgil
8994c8b464 Infer in-memory directory paths
Some checks are pending
CI / test (push) Waiting to run
CI / auto-fix (push) Waiting to run
CI / auto-merge (push) Waiting to run
2026-04-03 06:43:35 +00:00
Virgil
3efb43aaf7 Improve memory medium metadata
Some checks are pending
CI / test (push) Waiting to run
CI / auto-fix (push) Waiting to run
CI / auto-merge (push) Waiting to run
2026-04-03 05:13:09 +00:00
Virgil
3c8c16320a Polish io memory medium naming
Some checks are pending
CI / test (push) Waiting to run
CI / auto-fix (push) Waiting to run
CI / auto-merge (push) Waiting to run
2026-04-03 05:10:15 +00:00
Virgil
35b725d2b8 Preserve MemoryMedium file modes
Some checks failed
CI / test (push) Failing after 3s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
2026-04-01 09:50:24 +00:00
Virgil
cee004f426 feat(io): export memory file helpers
Some checks failed
CI / auto-fix (push) Failing after 0s
CI / test (push) Failing after 2s
CI / auto-merge (push) Failing after 0s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-01 05:22:22 +00:00
Snider
df9c443657 feat(workspace): encrypt workspace files using ChaChaPolySigil
Some checks failed
CI / test (push) Failing after 3s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
ReadWorkspaceFile and WriteWorkspaceFile now encrypt/decrypt file
content using XChaCha20-Poly1305 via the existing sigil pipeline.
A 32-byte symmetric key is derived by SHA-256-hashing the workspace's
stored private.key material so no new dependencies are required.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 16:14:43 +01:00
Virgil
c713bafd48 refactor(ax): align remaining AX examples and names
Some checks failed
CI / test (push) Failing after 3s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 14:27:58 +00:00
Virgil
15b6074e46 refactor(ax): align remaining AX surfaces
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 14:19:53 +00:00
Virgil
ede0c8bb49 refactor(ax): rename remaining test helpers and examples
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 14:13:15 +00:00
Virgil
bf4ba4141d refactor(ax): demote internal memory helpers and document sigil errors
Co-authored-by: Virgil <virgil@lethean.io>
2026-03-31 14:08:24 +00:00
Virgil
db6bbb650e refactor(ax): normalize interface compliance test names
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 14:04:07 +00:00
Virgil
9dbcc5d184 refactor(ax): rename medium test variables and examples
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 14:00:33 +00:00
Virgil
e922734c6e refactor(store): rename key-value store surface
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 13:54:58 +00:00
Virgil
45bd96387a refactor(workspace): harden path boundaries and naming
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 13:47:35 +00:00
Virgil
c6adf478d8 refactor(ax): rename nonce helper for clearer naming
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 13:41:04 +00:00
Virgil
50bb356c7c refactor(ax): align remaining AX naming surfaces
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 13:35:21 +00:00
Virgil
bd8d7c6975 refactor(ax): tighten local naming
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 13:25:00 +00:00
Virgil
eab112c7cf refactor(workspace): accept declarative root and medium options
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 13:20:09 +00:00
Virgil
e1efd3634c refactor(ax): align remaining AX docs and invalid-input errors
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 13:13:41 +00:00
Snider
702286a583 feat(ax): apply AX compliance sweep — usage examples and predictable names
Some checks failed
CI / test (push) Failing after 2s
CI / auto-merge (push) Failing after 0s
CI / auto-fix (push) Failing after 0s
- Add // Example: usage comments to all Medium interface methods in io.go
- Add // Example: comments to local, s3, sqlite, store, datanode, node medium methods
- Rename short variable `n` → `nodeTree` throughout node/node_test.go
- Rename short variable `s` → `keyValueStore` in store/store_test.go
- Rename counter variable `n` → `count` in store/store_test.go
- Rename `m` → `medium` in store/medium_test.go helper
- Remove redundant prose comments replaced by usage examples

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 12:19:56 +01:00
Virgil
378fc7c0de docs(ax): align sigil references with current surfaces
Some checks failed
CI / test (push) Failing after 2s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 07:24:17 +00:00
Virgil
48b777675e refactor(workspace): fail unsupported workspace messages explicitly
Some checks failed
CI / test (push) Failing after 2s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
Return explicit fs sentinels for workspace creation, switching, and inactive file access.

Unsupported command and message inputs now return a failed core.Result instead of a silent success, and tests cover the fallback path.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 07:17:59 +00:00
Virgil
cc2b553c94 docs(ax): align RFC API reference with current surfaces
Some checks failed
CI / auto-merge (push) Failing after 0s
CI / auto-fix (push) Failing after 0s
CI / test (push) Failing after 2s
2026-03-31 06:26:16 +00:00
Virgil
97535f650a docs(ax): align guidance with current medium surface
Some checks failed
CI / test (push) Failing after 2s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 06:17:48 +00:00
Virgil
3054217038 refactor(ax): remove workspace message compatibility map
Some checks failed
CI / test (push) Failing after 4s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 06:10:46 +00:00
Virgil
38066a6fae refactor(ax): rename workspace file helpers
Co-authored-by: Virgil <virgil@lethean.io>
2026-03-31 06:00:23 +00:00
Virgil
b3d12ce553 refactor(ax): remove fileget/fileset compatibility aliases
Some checks failed
CI / auto-fix (push) Failing after 0s
CI / test (push) Failing after 2s
CI / auto-merge (push) Failing after 0s
Co-authored-by: Virgil <virgil@lethean.io>
2026-03-31 05:57:21 +00:00
Virgil
a290cba908 refactor(ax): remove redundant compatibility surfaces
Some checks failed
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
CI / test (push) Failing after 2s
2026-03-31 05:50:19 +00:00
Virgil
bcf780c0ac refactor(ax): align memory medium test names
Some checks failed
CI / auto-merge (push) Failing after 0s
CI / auto-fix (push) Failing after 0s
CI / test (push) Failing after 2s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 05:46:33 +00:00
Virgil
9f0e155d62 refactor(ax): rename workspace provider surface
Some checks failed
CI / auto-fix (push) Failing after 0s
CI / test (push) Failing after 2s
CI / auto-merge (push) Failing after 1s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 05:42:12 +00:00
Virgil
619f731e5e refactor(ax): align remaining semantic names
Some checks failed
CI / auto-fix (push) Failing after 0s
CI / test (push) Failing after 3s
CI / auto-merge (push) Failing after 0s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 05:36:25 +00:00
Virgil
313b704f54 refactor(ax): trim test prose comments
Some checks failed
CI / test (push) Failing after 2s
CI / auto-merge (push) Failing after 0s
CI / auto-fix (push) Failing after 0s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 05:30:25 +00:00
Virgil
1cc185cb35 Align node and sigil APIs with AX principles
Some checks failed
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
CI / test (push) Failing after 4s
2026-03-31 05:24:39 +00:00
Virgil
6aa96dc7b7 refactor(ax): align remaining example names and walk APIs
Some checks failed
CI / test (push) Failing after 2s
CI / auto-fix (push) Failing after 1s
CI / auto-merge (push) Failing after 1s
2026-03-31 05:18:17 +00:00
Virgil
32cfabb5e0 refactor(ax): normalize remaining usage examples
Some checks failed
CI / test (push) Failing after 3s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 1s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 05:10:35 +00:00
Virgil
347c4b1b57 refactor(ax): trim prose comments to examples
Some checks failed
CI / auto-fix (push) Failing after 0s
CI / test (push) Failing after 2s
CI / auto-merge (push) Failing after 0s
2026-03-30 23:02:53 +00:00
Virgil
f8988c51cb refactor(ax): tighten naming and comment surfaces
Some checks failed
CI / test (push) Failing after 4s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
2026-03-30 22:56:51 +00:00
Virgil
b80a162373 refactor(ax): rename placeholder test cases
Some checks failed
CI / auto-fix (push) Failing after 0s
CI / test (push) Failing after 3s
CI / auto-merge (push) Failing after 0s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 22:52:35 +00:00
Virgil
3a5f9bb005 refactor(ax): encapsulate memory medium internals
Some checks failed
CI / test (push) Failing after 3s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
2026-03-30 22:47:27 +00:00
Virgil
64854a8268 refactor(ax): simplify workspace options 2026-03-30 22:46:05 +00:00
Virgil
64427aec1b refactor(ax): add semantic workspace message handler 2026-03-30 22:45:15 +00:00
Virgil
14418b7782 refactor: tighten AX-facing comments
Some checks failed
CI / test (push) Failing after 2s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
2026-03-30 22:41:48 +00:00
Virgil
fc34a75fb2 refactor(ax): continue AX surface alignment
Some checks failed
CI / auto-fix (push) Failing after 0s
CI / test (push) Failing after 3s
CI / auto-merge (push) Failing after 0s
2026-03-30 22:39:50 +00:00
Virgil
a8caedaf55 docs(local): convert constructor note to usage example
Some checks failed
CI / test (push) Failing after 3s
CI / auto-fix (push) Failing after 1s
CI / auto-merge (push) Failing after 1s
2026-03-30 22:33:41 +00:00
Virgil
0927aab29d refactor: align AX surfaces and semantic file names 2026-03-30 22:33:03 +00:00
Virgil
e8b87dfbee refactor(ax): make memory medium primary
Some checks failed
CI / test (push) Failing after 2s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 1s
2026-03-30 22:26:50 +00:00
Virgil
25b12a22a4 refactor(ax): add memory medium aliases
Some checks failed
CI / test (push) Failing after 2s
CI / auto-fix (push) Failing after 1s
CI / auto-merge (push) Failing after 0s
2026-03-30 22:00:45 +00:00
Virgil
c0ee58201b refactor(ax): expand semantic backend naming
Some checks failed
CI / auto-fix (push) Failing after 1s
CI / test (push) Failing after 3s
CI / auto-merge (push) Failing after 1s
2026-03-30 21:52:52 +00:00
Virgil
d4615a2ad8 refactor(ax): align backend names and examples
Some checks failed
CI / auto-fix (push) Failing after 0s
CI / test (push) Failing after 3s
CI / auto-merge (push) Failing after 0s
2026-03-30 21:48:42 +00:00
Virgil
bab889e9ac refactor(ax): clarify core storage names
Some checks failed
CI / auto-fix (push) Failing after 0s
CI / test (push) Failing after 2s
CI / auto-merge (push) Failing after 0s
2026-03-30 21:39:03 +00:00
Virgil
a8eaaa1581 refactor(ax): tighten AX-facing docs
Some checks failed
CI / auto-fix (push) Failing after 0s
CI / test (push) Failing after 3s
CI / auto-merge (push) Failing after 0s
2026-03-30 21:29:35 +00:00
Virgil
16d968b551 refactor(ax): make public docs example-driven
Some checks failed
CI / auto-fix (push) Failing after 0s
CI / test (push) Failing after 2s
CI / auto-merge (push) Failing after 0s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 21:23:35 +00:00
Virgil
41dd111072 refactor(ax): make exported docs example-driven
Some checks failed
CI / test (push) Failing after 2s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 21:17:43 +00:00
Virgil
d5b5915863 refactor(ax): make sigil names explicit
Some checks failed
CI / test (push) Failing after 2s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
2026-03-30 21:12:40 +00:00
Virgil
f0b828a7e3 refactor(ax): drop legacy compatibility shims
Some checks failed
CI / test (push) Failing after 3s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
2026-03-30 21:08:22 +00:00
Virgil
48c328f935 refactor(ax): tighten names and ipc keys
Some checks failed
CI / test (push) Failing after 3s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 1s
2026-03-30 21:04:19 +00:00
Virgil
d175fc2b6f refactor(ax): make names and errors explicit
Some checks failed
CI / auto-fix (push) Failing after 0s
CI / test (push) Failing after 2s
CI / auto-merge (push) Failing after 0s
2026-03-30 20:58:10 +00:00
Virgil
b0bcdadb2f refactor(ax): make store and traversal explicit
Some checks failed
CI / test (push) Failing after 3s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
2026-03-30 20:52:34 +00:00
Virgil
9fb978dc75 refactor(ax): make docs and helpers example-driven
Some checks failed
CI / test (push) Failing after 2s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
2026-03-30 20:47:41 +00:00
Virgil
b19617c371 refactor(ax): prune redundant api comments
Some checks failed
CI / test (push) Failing after 2s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 20:42:44 +00:00
Virgil
518309a022 refactor(ax): add explicit node traversal options
Some checks failed
CI / test (push) Failing after 2s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 20:37:40 +00:00
Virgil
d900a785e7 refactor(ax): replace placeholder doc comments
Some checks failed
CI / auto-fix (push) Failing after 0s
CI / test (push) Failing after 2s
CI / auto-merge (push) Failing after 0s
2026-03-30 20:31:12 +00:00
Virgil
0cb59850f5 refactor(ax): expand remaining API names
Some checks failed
CI / test (push) Failing after 3s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
2026-03-30 20:18:30 +00:00
Virgil
1743b9810e refactor(ax): remove remaining short names
Some checks failed
CI / test (push) Failing after 2s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
2026-03-30 20:10:24 +00:00
Virgil
5f780e6261 refactor(ax): normalize remaining agent-facing names
Some checks failed
CI / test (push) Failing after 4s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
2026-03-30 20:04:09 +00:00
Virgil
977218cdfe docs: align CLAUDE with s3 client rename
Some checks failed
CI / auto-fix (push) Failing after 0s
CI / test (push) Failing after 2s
CI / auto-merge (push) Failing after 0s
2026-03-30 19:36:39 +00:00
Virgil
d9f5b7101b refactor(ax): replace option chains with config structs 2026-03-30 19:36:30 +00:00
Snider
aaf0aca661 docs: add AX design principles RFC for agent dispatch
Some checks failed
CI / test (push) Failing after 2s
CI / auto-fix (push) Failing after 1s
CI / auto-merge (push) Failing after 1s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 20:27:13 +01:00
Virgil
61193c0b2f fix: use UK English spelling throughout
Some checks failed
CI / test (push) Failing after 2s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 14:04:36 +00:00
Virgil
bdd925e771 Add complete API reference
Some checks failed
CI / test (push) Failing after 2s
CI / auto-fix (push) Failing after 0s
CI / auto-merge (push) Failing after 0s
2026-03-30 09:14:45 +00:00
Virgil
5e4bc3b0ac test(ax): cover wrapper APIs and add package docs
Some checks failed
CI / auto-fix (push) Failing after 1s
CI / test (push) Failing after 3s
CI / auto-merge (push) Failing after 1s
2026-03-30 06:24:36 +00:00
Virgil
514ecd7e7a fix(io): enforce ax v0.8.0 polish spec
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 06:24:36 +00:00
Virgil
238d6c6b91 chore(ax): align imports, tests, and usage comments
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 06:22:48 +00:00
Virgil
6b74ae2afe fix(io): address audit issue 4 findings
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 06:21:35 +00:00
40 changed files with 8980 additions and 5389 deletions


@@ -34,7 +34,7 @@ GOWORK=off go test -cover ./...
### Core Interface
`io.Medium` — 18 methods: Read, Write, EnsureDir, IsFile, FileGet, FileSet, Delete, DeleteAll, Rename, List, Stat, Open, Create, Append, ReadStream, WriteStream, Exists, IsDir.
`io.Medium` — 17 methods: Read, Write, WriteMode, EnsureDir, IsFile, Delete, DeleteAll, Rename, List, Stat, Open, Create, Append, ReadStream, WriteStream, Exists, IsDir.
```go
// Sandboxed to a project directory
@@ -60,7 +60,7 @@ io.Copy(s3Medium, "backup.tar", localMedium, "restore/backup.tar")
| `datanode` | Borg DataNode | Thread-safe (RWMutex) in-memory, snapshot/restore via tar |
| `store` | SQLite KV store | Group-namespaced key-value with Go template rendering |
| `workspace` | Core service | Encrypted workspaces, SHA-256 IDs, PGP keypairs |
| `MockMedium` | In-memory map | Testing — no filesystem needed |
| `MemoryMedium` | In-memory map | Testing — no filesystem needed |
`store.Medium` maps filesystem paths as `group/key` — first path segment is the group, remainder is the key. `List("")` returns groups as directories.
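The group/key mapping described above can be sketched with a small helper. This is an illustration of the stated path convention, not the repository's actual implementation; the helper name is hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// splitGroupKey illustrates the documented mapping: the first path
// segment is the group, the remainder is the key.
func splitGroupKey(p string) (group, key string) {
	group, key, _ = strings.Cut(strings.TrimPrefix(p, "/"), "/")
	return group, key
}

func main() {
	g, k := splitGroupKey("config/server/port")
	fmt.Println(g, k) // config server/port
}
```

Under this convention, `List("")` naturally enumerates the groups themselves, rendered as directories.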
@@ -128,8 +128,8 @@ Backend packages use `var _ io.Medium = (*Medium)(nil)` to verify interface comp
### Sentinel Errors
Sentinel errors (`var ErrNotFound`, `var ErrInvalidKey`, etc.) use standard `errors.New()` — this is correct Go convention. Only inline error returns in functions should use `coreerr.E()`.
Sentinel errors (`var NotFoundError`, `var InvalidKeyError`, etc.) use standard `errors.New()` — this is correct Go convention. Only inline error returns in functions should use `coreerr.E()`.
## Testing
Use `io.MockMedium` or `io.NewSandboxed(t.TempDir())` in tests — never hit real S3/SQLite unless integration testing. S3 tests use an interface-based mock (`s3API`).
Use `io.NewMemoryMedium()` or `io.NewSandboxed(t.TempDir())` in tests — never hit real S3/SQLite unless integration testing. S3 tests use an interface-based mock (`s3.Client`).
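The testing pattern above — swapping a map-backed medium in for the real filesystem — can be sketched generically. The trimmed two-method interface and type names here are hypothetical stand-ins for the repository's `io.Medium` and `NewMemoryMedium`, just to show the shape.

```go
package main

import "fmt"

// Medium is a trimmed stand-in for the io.Medium interface
// (the real surface has 17 methods).
type Medium interface {
	Write(path, content string) error
	Read(path string) (string, error)
}

// memoryMedium mirrors the idea behind NewMemoryMedium: a map keyed by
// path, so tests never touch real S3 or SQLite.
type memoryMedium struct{ files map[string]string }

func newMemoryMedium() *memoryMedium {
	return &memoryMedium{files: map[string]string{}}
}

func (m *memoryMedium) Write(p, c string) error {
	m.files[p] = c
	return nil
}

func (m *memoryMedium) Read(p string) (string, error) {
	c, ok := m.files[p]
	if !ok {
		return "", fmt.Errorf("not found: %s", p)
	}
	return c, nil
}

func main() {
	var medium Medium = newMemoryMedium()
	_ = medium.Write("test.txt", "hello")
	content, _ := medium.Read("test.txt")
	fmt.Println(content) // hello
}
```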


@@ -4,31 +4,31 @@ import (
"testing"
)
func BenchmarkMockMedium_Write(b *testing.B) {
m := NewMockMedium()
func BenchmarkMemoryMedium_Write(b *testing.B) {
medium := NewMemoryMedium()
b.ResetTimer()
for i := 0; i < b.N; i++ {
_ = m.Write("test.txt", "some content")
_ = medium.Write("test.txt", "some content")
}
}
func BenchmarkMockMedium_Read(b *testing.B) {
m := NewMockMedium()
_ = m.Write("test.txt", "some content")
func BenchmarkMemoryMedium_Read(b *testing.B) {
medium := NewMemoryMedium()
_ = medium.Write("test.txt", "some content")
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, _ = m.Read("test.txt")
_, _ = medium.Read("test.txt")
}
}
func BenchmarkMockMedium_List(b *testing.B) {
m := NewMockMedium()
_ = m.EnsureDir("dir")
func BenchmarkMemoryMedium_List(b *testing.B) {
medium := NewMemoryMedium()
_ = medium.EnsureDir("dir")
for i := 0; i < 100; i++ {
_ = m.Write("dir/file"+string(rune(i))+".txt", "content")
_ = medium.Write("dir/file"+string(rune(i))+".txt", "content")
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, _ = m.List("dir")
_, _ = medium.List("dir")
}
}


@@ -1,260 +0,0 @@
package io
import (
"testing"
"github.com/stretchr/testify/assert"
)
// --- MockMedium Tests ---
func TestNewMockMedium_Good(t *testing.T) {
m := NewMockMedium()
assert.NotNil(t, m)
assert.NotNil(t, m.Files)
assert.NotNil(t, m.Dirs)
assert.Empty(t, m.Files)
assert.Empty(t, m.Dirs)
}
func TestMockMedium_Read_Good(t *testing.T) {
m := NewMockMedium()
m.Files["test.txt"] = "hello world"
content, err := m.Read("test.txt")
assert.NoError(t, err)
assert.Equal(t, "hello world", content)
}
func TestMockMedium_Read_Bad(t *testing.T) {
m := NewMockMedium()
_, err := m.Read("nonexistent.txt")
assert.Error(t, err)
}
func TestMockMedium_Write_Good(t *testing.T) {
m := NewMockMedium()
err := m.Write("test.txt", "content")
assert.NoError(t, err)
assert.Equal(t, "content", m.Files["test.txt"])
// Overwrite existing file
err = m.Write("test.txt", "new content")
assert.NoError(t, err)
assert.Equal(t, "new content", m.Files["test.txt"])
}
func TestMockMedium_EnsureDir_Good(t *testing.T) {
m := NewMockMedium()
err := m.EnsureDir("/path/to/dir")
assert.NoError(t, err)
assert.True(t, m.Dirs["/path/to/dir"])
}
func TestMockMedium_IsFile_Good(t *testing.T) {
m := NewMockMedium()
m.Files["exists.txt"] = "content"
assert.True(t, m.IsFile("exists.txt"))
assert.False(t, m.IsFile("nonexistent.txt"))
}
func TestMockMedium_FileGet_Good(t *testing.T) {
m := NewMockMedium()
m.Files["test.txt"] = "content"
content, err := m.FileGet("test.txt")
assert.NoError(t, err)
assert.Equal(t, "content", content)
}
func TestMockMedium_FileSet_Good(t *testing.T) {
m := NewMockMedium()
err := m.FileSet("test.txt", "content")
assert.NoError(t, err)
assert.Equal(t, "content", m.Files["test.txt"])
}
func TestMockMedium_Delete_Good(t *testing.T) {
m := NewMockMedium()
m.Files["test.txt"] = "content"
err := m.Delete("test.txt")
assert.NoError(t, err)
assert.False(t, m.IsFile("test.txt"))
}
func TestMockMedium_Delete_Bad_NotFound(t *testing.T) {
m := NewMockMedium()
err := m.Delete("nonexistent.txt")
assert.Error(t, err)
}
func TestMockMedium_Delete_Bad_DirNotEmpty(t *testing.T) {
m := NewMockMedium()
m.Dirs["mydir"] = true
m.Files["mydir/file.txt"] = "content"
err := m.Delete("mydir")
assert.Error(t, err)
}
func TestMockMedium_DeleteAll_Good(t *testing.T) {
m := NewMockMedium()
m.Dirs["mydir"] = true
m.Dirs["mydir/subdir"] = true
m.Files["mydir/file.txt"] = "content"
m.Files["mydir/subdir/nested.txt"] = "nested"
err := m.DeleteAll("mydir")
assert.NoError(t, err)
assert.Empty(t, m.Dirs)
assert.Empty(t, m.Files)
}
func TestMockMedium_Rename_Good(t *testing.T) {
m := NewMockMedium()
m.Files["old.txt"] = "content"
err := m.Rename("old.txt", "new.txt")
assert.NoError(t, err)
assert.False(t, m.IsFile("old.txt"))
assert.True(t, m.IsFile("new.txt"))
assert.Equal(t, "content", m.Files["new.txt"])
}
func TestMockMedium_Rename_Good_Dir(t *testing.T) {
m := NewMockMedium()
m.Dirs["olddir"] = true
m.Files["olddir/file.txt"] = "content"
err := m.Rename("olddir", "newdir")
assert.NoError(t, err)
assert.False(t, m.Dirs["olddir"])
assert.True(t, m.Dirs["newdir"])
assert.Equal(t, "content", m.Files["newdir/file.txt"])
}
func TestMockMedium_List_Good(t *testing.T) {
m := NewMockMedium()
m.Dirs["mydir"] = true
m.Files["mydir/file1.txt"] = "content1"
m.Files["mydir/file2.txt"] = "content2"
m.Dirs["mydir/subdir"] = true
entries, err := m.List("mydir")
assert.NoError(t, err)
assert.Len(t, entries, 3)
names := make(map[string]bool)
for _, e := range entries {
names[e.Name()] = true
}
assert.True(t, names["file1.txt"])
assert.True(t, names["file2.txt"])
assert.True(t, names["subdir"])
}
func TestMockMedium_Stat_Good(t *testing.T) {
m := NewMockMedium()
m.Files["test.txt"] = "hello world"
info, err := m.Stat("test.txt")
assert.NoError(t, err)
assert.Equal(t, "test.txt", info.Name())
assert.Equal(t, int64(11), info.Size())
assert.False(t, info.IsDir())
}
func TestMockMedium_Stat_Good_Dir(t *testing.T) {
m := NewMockMedium()
m.Dirs["mydir"] = true
info, err := m.Stat("mydir")
assert.NoError(t, err)
assert.Equal(t, "mydir", info.Name())
assert.True(t, info.IsDir())
}
func TestMockMedium_Exists_Good(t *testing.T) {
m := NewMockMedium()
m.Files["file.txt"] = "content"
m.Dirs["mydir"] = true
assert.True(t, m.Exists("file.txt"))
assert.True(t, m.Exists("mydir"))
assert.False(t, m.Exists("nonexistent"))
}
func TestMockMedium_IsDir_Good(t *testing.T) {
m := NewMockMedium()
m.Files["file.txt"] = "content"
m.Dirs["mydir"] = true
assert.False(t, m.IsDir("file.txt"))
assert.True(t, m.IsDir("mydir"))
assert.False(t, m.IsDir("nonexistent"))
}
// --- Wrapper Function Tests ---
func TestRead_Good(t *testing.T) {
m := NewMockMedium()
m.Files["test.txt"] = "hello"
content, err := Read(m, "test.txt")
assert.NoError(t, err)
assert.Equal(t, "hello", content)
}
func TestWrite_Good(t *testing.T) {
m := NewMockMedium()
err := Write(m, "test.txt", "hello")
assert.NoError(t, err)
assert.Equal(t, "hello", m.Files["test.txt"])
}
func TestEnsureDir_Good(t *testing.T) {
m := NewMockMedium()
err := EnsureDir(m, "/my/dir")
assert.NoError(t, err)
assert.True(t, m.Dirs["/my/dir"])
}
func TestIsFile_Good(t *testing.T) {
m := NewMockMedium()
m.Files["exists.txt"] = "content"
assert.True(t, IsFile(m, "exists.txt"))
assert.False(t, IsFile(m, "nonexistent.txt"))
}
func TestCopy_Good(t *testing.T) {
source := NewMockMedium()
dest := NewMockMedium()
source.Files["test.txt"] = "hello"
err := Copy(source, "test.txt", dest, "test.txt")
assert.NoError(t, err)
assert.Equal(t, "hello", dest.Files["test.txt"])
// Copy to different path
source.Files["original.txt"] = "content"
err = Copy(source, "original.txt", dest, "copied.txt")
assert.NoError(t, err)
assert.Equal(t, "content", dest.Files["copied.txt"])
}
func TestCopy_Bad(t *testing.T) {
source := NewMockMedium()
dest := NewMockMedium()
err := Copy(source, "nonexistent.txt", dest, "dest.txt")
assert.Error(t, err)
}
// --- Local Global Tests ---
func TestLocalGlobal_Good(t *testing.T) {
// io.Local should be initialised by init()
assert.NotNil(t, Local, "io.Local should be initialised")
// Should be able to use it as a Medium
var m = Local
assert.NotNil(t, m)
}


@@ -1,580 +0,0 @@
// Package datanode provides an in-memory io.Medium backed by Borg's DataNode.
//
// DataNode is an in-memory fs.FS that serializes to tar. Wrapping it as a
// Medium lets any code that works with io.Medium transparently operate on
// an in-memory filesystem that can be snapshotted, shipped as a crash report,
// or wrapped in a TIM container for runc execution.
package datanode
import (
"cmp"
goio "io"
"io/fs"
"os"
"path"
"slices"
"strings"
"sync"
"time"
coreerr "forge.lthn.ai/core/go-log"
"forge.lthn.ai/Snider/Borg/pkg/datanode"
)
// Medium is an in-memory storage backend backed by a Borg DataNode.
// All paths are relative (no leading slash). Thread-safe via RWMutex.
type Medium struct {
dn *datanode.DataNode
dirs map[string]bool // explicit directory tracking
mu sync.RWMutex
}
// New creates a new empty DataNode Medium.
func New() *Medium {
return &Medium{
dn: datanode.New(),
dirs: make(map[string]bool),
}
}
// FromTar creates a Medium from a tarball, restoring all files.
func FromTar(data []byte) (*Medium, error) {
dn, err := datanode.FromTar(data)
if err != nil {
return nil, coreerr.E("datanode.FromTar", "failed to restore", err)
}
return &Medium{
dn: dn,
dirs: make(map[string]bool),
}, nil
}
// Snapshot serializes the entire filesystem to a tarball.
// Use this for crash reports, workspace packaging, or TIM creation.
func (m *Medium) Snapshot() ([]byte, error) {
m.mu.RLock()
defer m.mu.RUnlock()
data, err := m.dn.ToTar()
if err != nil {
return nil, coreerr.E("datanode.Snapshot", "tar failed", err)
}
return data, nil
}
// Restore replaces the filesystem contents from a tarball.
func (m *Medium) Restore(data []byte) error {
dn, err := datanode.FromTar(data)
if err != nil {
return coreerr.E("datanode.Restore", "tar failed", err)
}
m.mu.Lock()
defer m.mu.Unlock()
m.dn = dn
m.dirs = make(map[string]bool)
return nil
}
// DataNode returns the underlying Borg DataNode.
// Use this to wrap the filesystem in a TIM container.
func (m *Medium) DataNode() *datanode.DataNode {
m.mu.RLock()
defer m.mu.RUnlock()
return m.dn
}
// clean normalises a path: strips leading slash, cleans traversal.
func clean(p string) string {
p = strings.TrimPrefix(p, "/")
p = path.Clean(p)
if p == "." {
return ""
}
return p
}
// --- io.Medium interface ---
func (m *Medium) Read(p string) (string, error) {
m.mu.RLock()
defer m.mu.RUnlock()
p = clean(p)
f, err := m.dn.Open(p)
if err != nil {
return "", coreerr.E("datanode.Read", "not found: "+p, os.ErrNotExist)
}
defer f.Close()
info, err := f.Stat()
if err != nil {
return "", coreerr.E("datanode.Read", "stat failed: "+p, err)
}
if info.IsDir() {
return "", coreerr.E("datanode.Read", "is a directory: "+p, os.ErrInvalid)
}
data, err := goio.ReadAll(f)
if err != nil {
return "", coreerr.E("datanode.Read", "read failed: "+p, err)
}
return string(data), nil
}
func (m *Medium) Write(p, content string) error {
m.mu.Lock()
defer m.mu.Unlock()
p = clean(p)
if p == "" {
return coreerr.E("datanode.Write", "empty path", os.ErrInvalid)
}
m.dn.AddData(p, []byte(content))
// ensure parent dirs are tracked
m.ensureDirsLocked(path.Dir(p))
return nil
}
func (m *Medium) WriteMode(p, content string, mode os.FileMode) error {
return m.Write(p, content)
}
func (m *Medium) EnsureDir(p string) error {
m.mu.Lock()
defer m.mu.Unlock()
p = clean(p)
if p == "" {
return nil
}
m.ensureDirsLocked(p)
return nil
}
// ensureDirsLocked marks a directory and all ancestors as existing.
// Caller must hold m.mu.
func (m *Medium) ensureDirsLocked(p string) {
for p != "" && p != "." {
m.dirs[p] = true
p = path.Dir(p)
if p == "." {
break
}
}
}
func (m *Medium) IsFile(p string) bool {
m.mu.RLock()
defer m.mu.RUnlock()
p = clean(p)
info, err := m.dn.Stat(p)
return err == nil && !info.IsDir()
}
func (m *Medium) FileGet(p string) (string, error) {
return m.Read(p)
}
func (m *Medium) FileSet(p, content string) error {
return m.Write(p, content)
}
func (m *Medium) Delete(p string) error {
m.mu.Lock()
defer m.mu.Unlock()
p = clean(p)
if p == "" {
return coreerr.E("datanode.Delete", "cannot delete root", os.ErrPermission)
}
// Check if it's a file in the DataNode
info, err := m.dn.Stat(p)
if err != nil {
// Check explicit dirs
if m.dirs[p] {
// Check if dir is empty
if m.hasPrefixLocked(p + "/") {
return coreerr.E("datanode.Delete", "directory not empty: "+p, os.ErrExist)
}
delete(m.dirs, p)
return nil
}
return coreerr.E("datanode.Delete", "not found: "+p, os.ErrNotExist)
}
if info.IsDir() {
if m.hasPrefixLocked(p + "/") {
return coreerr.E("datanode.Delete", "directory not empty: "+p, os.ErrExist)
}
delete(m.dirs, p)
return nil
}
// Remove the file by creating a new DataNode without it
m.removeFileLocked(p)
return nil
}
func (m *Medium) DeleteAll(p string) error {
m.mu.Lock()
defer m.mu.Unlock()
p = clean(p)
if p == "" {
return coreerr.E("datanode.DeleteAll", "cannot delete root", os.ErrPermission)
}
prefix := p + "/"
found := false
// Check if p itself is a file
info, err := m.dn.Stat(p)
if err == nil && !info.IsDir() {
m.removeFileLocked(p)
found = true
}
// Remove all files under prefix
entries, _ := m.collectAllLocked()
for _, name := range entries {
if name == p || strings.HasPrefix(name, prefix) {
m.removeFileLocked(name)
found = true
}
}
// Remove explicit dirs under prefix
for d := range m.dirs {
if d == p || strings.HasPrefix(d, prefix) {
delete(m.dirs, d)
found = true
}
}
if !found {
return coreerr.E("datanode.DeleteAll", "not found: "+p, os.ErrNotExist)
}
return nil
}
func (m *Medium) Rename(oldPath, newPath string) error {
m.mu.Lock()
defer m.mu.Unlock()
oldPath = clean(oldPath)
newPath = clean(newPath)
// Check if source is a file
info, err := m.dn.Stat(oldPath)
if err != nil {
return coreerr.E("datanode.Rename", "not found: "+oldPath, os.ErrNotExist)
}
if !info.IsDir() {
// Read old, write new, delete old
f, err := m.dn.Open(oldPath)
if err != nil {
return coreerr.E("datanode.Rename", "open failed: "+oldPath, err)
}
data, err := goio.ReadAll(f)
f.Close()
if err != nil {
return coreerr.E("datanode.Rename", "read failed: "+oldPath, err)
}
m.dn.AddData(newPath, data)
m.ensureDirsLocked(path.Dir(newPath))
m.removeFileLocked(oldPath)
return nil
}
// Directory rename: move all files under oldPath to newPath
oldPrefix := oldPath + "/"
newPrefix := newPath + "/"
entries, _ := m.collectAllLocked()
for _, name := range entries {
if strings.HasPrefix(name, oldPrefix) {
newName := newPrefix + strings.TrimPrefix(name, oldPrefix)
f, err := m.dn.Open(name)
if err != nil {
continue
}
data, _ := goio.ReadAll(f)
f.Close()
m.dn.AddData(newName, data)
m.removeFileLocked(name)
}
}
// Move explicit dirs
dirsToMove := make(map[string]string)
for d := range m.dirs {
if d == oldPath || strings.HasPrefix(d, oldPrefix) {
newD := newPath + strings.TrimPrefix(d, oldPath)
dirsToMove[d] = newD
}
}
for old, nw := range dirsToMove {
delete(m.dirs, old)
m.dirs[nw] = true
}
return nil
}
func (m *Medium) List(p string) ([]fs.DirEntry, error) {
m.mu.RLock()
defer m.mu.RUnlock()
p = clean(p)
entries, err := m.dn.ReadDir(p)
if err != nil {
// Check explicit dirs
if p == "" || m.dirs[p] {
return []fs.DirEntry{}, nil
}
return nil, coreerr.E("datanode.List", "not found: "+p, os.ErrNotExist)
}
// Also include explicit subdirectories not discovered via files
prefix := p
if prefix != "" {
prefix += "/"
}
seen := make(map[string]bool)
for _, e := range entries {
seen[e.Name()] = true
}
for d := range m.dirs {
if !strings.HasPrefix(d, prefix) {
continue
}
rest := strings.TrimPrefix(d, prefix)
if rest == "" {
continue
}
first := strings.SplitN(rest, "/", 2)[0]
if !seen[first] {
seen[first] = true
entries = append(entries, &dirEntry{name: first})
}
}
slices.SortFunc(entries, func(a, b fs.DirEntry) int {
return cmp.Compare(a.Name(), b.Name())
})
return entries, nil
}
func (m *Medium) Stat(p string) (fs.FileInfo, error) {
m.mu.RLock()
defer m.mu.RUnlock()
p = clean(p)
if p == "" {
return &fileInfo{name: ".", isDir: true, mode: fs.ModeDir | 0755}, nil
}
info, err := m.dn.Stat(p)
if err == nil {
return info, nil
}
if m.dirs[p] {
return &fileInfo{name: path.Base(p), isDir: true, mode: fs.ModeDir | 0755}, nil
}
return nil, coreerr.E("datanode.Stat", "not found: "+p, os.ErrNotExist)
}
func (m *Medium) Open(p string) (fs.File, error) {
m.mu.RLock()
defer m.mu.RUnlock()
p = clean(p)
return m.dn.Open(p)
}
func (m *Medium) Create(p string) (goio.WriteCloser, error) {
p = clean(p)
if p == "" {
return nil, coreerr.E("datanode.Create", "empty path", os.ErrInvalid)
}
return &writeCloser{m: m, path: p}, nil
}
func (m *Medium) Append(p string) (goio.WriteCloser, error) {
p = clean(p)
if p == "" {
return nil, coreerr.E("datanode.Append", "empty path", os.ErrInvalid)
}
// Read existing content
var existing []byte
m.mu.RLock()
f, err := m.dn.Open(p)
if err == nil {
existing, _ = goio.ReadAll(f)
f.Close()
}
m.mu.RUnlock()
return &writeCloser{m: m, path: p, buf: existing}, nil
}
func (m *Medium) ReadStream(p string) (goio.ReadCloser, error) {
m.mu.RLock()
defer m.mu.RUnlock()
p = clean(p)
f, err := m.dn.Open(p)
if err != nil {
return nil, coreerr.E("datanode.ReadStream", "not found: "+p, os.ErrNotExist)
}
return f.(goio.ReadCloser), nil
}
func (m *Medium) WriteStream(p string) (goio.WriteCloser, error) {
return m.Create(p)
}
func (m *Medium) Exists(p string) bool {
m.mu.RLock()
defer m.mu.RUnlock()
p = clean(p)
if p == "" {
return true // root always exists
}
_, err := m.dn.Stat(p)
if err == nil {
return true
}
return m.dirs[p]
}
func (m *Medium) IsDir(p string) bool {
m.mu.RLock()
defer m.mu.RUnlock()
p = clean(p)
if p == "" {
return true
}
info, err := m.dn.Stat(p)
if err == nil {
return info.IsDir()
}
return m.dirs[p]
}
// --- internal helpers ---
// hasPrefixLocked checks if any file path starts with prefix. Caller holds lock.
func (m *Medium) hasPrefixLocked(prefix string) bool {
entries, _ := m.collectAllLocked()
for _, name := range entries {
if strings.HasPrefix(name, prefix) {
return true
}
}
for d := range m.dirs {
if strings.HasPrefix(d, prefix) {
return true
}
}
return false
}
// collectAllLocked returns all file paths in the DataNode. Caller holds lock.
func (m *Medium) collectAllLocked() ([]string, error) {
var names []string
err := fs.WalkDir(m.dn, ".", func(p string, d fs.DirEntry, err error) error {
if err != nil {
return nil
}
if !d.IsDir() {
names = append(names, p)
}
return nil
})
return names, err
}
// removeFileLocked removes a single file by rebuilding the DataNode.
// This is necessary because Borg's DataNode doesn't expose a Remove method.
// Caller must hold m.mu write lock.
func (m *Medium) removeFileLocked(target string) {
entries, _ := m.collectAllLocked()
newDN := datanode.New()
for _, name := range entries {
if name == target {
continue
}
f, err := m.dn.Open(name)
if err != nil {
continue
}
data, err := goio.ReadAll(f)
f.Close()
if err != nil {
continue
}
newDN.AddData(name, data)
}
m.dn = newDN
}
// --- writeCloser buffers writes and flushes to DataNode on Close ---
type writeCloser struct {
m *Medium
path string
buf []byte
}
func (w *writeCloser) Write(p []byte) (int, error) {
w.buf = append(w.buf, p...)
return len(p), nil
}
func (w *writeCloser) Close() error {
w.m.mu.Lock()
defer w.m.mu.Unlock()
w.m.dn.AddData(w.path, w.buf)
w.m.ensureDirsLocked(path.Dir(w.path))
return nil
}
// --- fs types for explicit directories ---
type dirEntry struct {
name string
}
func (d *dirEntry) Name() string { return d.name }
func (d *dirEntry) IsDir() bool { return true }
func (d *dirEntry) Type() fs.FileMode { return fs.ModeDir }
func (d *dirEntry) Info() (fs.FileInfo, error) {
return &fileInfo{name: d.name, isDir: true, mode: fs.ModeDir | 0755}, nil
}
type fileInfo struct {
name string
size int64
mode fs.FileMode
modTime time.Time
isDir bool
}
func (fi *fileInfo) Name() string { return fi.name }
func (fi *fileInfo) Size() int64 { return fi.size }
func (fi *fileInfo) Mode() fs.FileMode { return fi.mode }
func (fi *fileInfo) ModTime() time.Time { return fi.modTime }
func (fi *fileInfo) IsDir() bool { return fi.isDir }
func (fi *fileInfo) Sys() any { return nil }


@@ -1,352 +0,0 @@
package datanode
import (
"io"
"testing"
coreio "dappco.re/go/core/io"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Compile-time check: Medium implements io.Medium.
var _ coreio.Medium = (*Medium)(nil)
func TestReadWrite_Good(t *testing.T) {
m := New()
err := m.Write("hello.txt", "world")
require.NoError(t, err)
got, err := m.Read("hello.txt")
require.NoError(t, err)
assert.Equal(t, "world", got)
}
func TestReadWrite_Bad(t *testing.T) {
m := New()
_, err := m.Read("missing.txt")
assert.Error(t, err)
err = m.Write("", "content")
assert.Error(t, err)
}
func TestNestedPaths_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("a/b/c/deep.txt", "deep"))
got, err := m.Read("a/b/c/deep.txt")
require.NoError(t, err)
assert.Equal(t, "deep", got)
assert.True(t, m.IsDir("a"))
assert.True(t, m.IsDir("a/b"))
assert.True(t, m.IsDir("a/b/c"))
}
func TestLeadingSlash_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("/leading/file.txt", "stripped"))
got, err := m.Read("leading/file.txt")
require.NoError(t, err)
assert.Equal(t, "stripped", got)
got, err = m.Read("/leading/file.txt")
require.NoError(t, err)
assert.Equal(t, "stripped", got)
}
func TestIsFile_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("file.go", "package main"))
assert.True(t, m.IsFile("file.go"))
assert.False(t, m.IsFile("missing.go"))
assert.False(t, m.IsFile("")) // empty path
}
func TestEnsureDir_Good(t *testing.T) {
m := New()
require.NoError(t, m.EnsureDir("foo/bar/baz"))
assert.True(t, m.IsDir("foo"))
assert.True(t, m.IsDir("foo/bar"))
assert.True(t, m.IsDir("foo/bar/baz"))
assert.True(t, m.Exists("foo/bar/baz"))
}
func TestDelete_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("delete-me.txt", "bye"))
assert.True(t, m.Exists("delete-me.txt"))
require.NoError(t, m.Delete("delete-me.txt"))
assert.False(t, m.Exists("delete-me.txt"))
}
func TestDelete_Bad(t *testing.T) {
m := New()
// Delete non-existent
assert.Error(t, m.Delete("ghost.txt"))
// Delete non-empty dir
require.NoError(t, m.Write("dir/file.txt", "content"))
assert.Error(t, m.Delete("dir"))
}
func TestDeleteAll_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("tree/a.txt", "a"))
require.NoError(t, m.Write("tree/sub/b.txt", "b"))
require.NoError(t, m.Write("keep.txt", "keep"))
require.NoError(t, m.DeleteAll("tree"))
assert.False(t, m.Exists("tree/a.txt"))
assert.False(t, m.Exists("tree/sub/b.txt"))
assert.True(t, m.Exists("keep.txt"))
}
func TestRename_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("old.txt", "content"))
require.NoError(t, m.Rename("old.txt", "new.txt"))
assert.False(t, m.Exists("old.txt"))
got, err := m.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "content", got)
}
func TestRenameDir_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("src/a.go", "package a"))
require.NoError(t, m.Write("src/sub/b.go", "package b"))
require.NoError(t, m.Rename("src", "dst"))
assert.False(t, m.Exists("src/a.go"))
got, err := m.Read("dst/a.go")
require.NoError(t, err)
assert.Equal(t, "package a", got)
got, err = m.Read("dst/sub/b.go")
require.NoError(t, err)
assert.Equal(t, "package b", got)
}
func TestList_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("root.txt", "r"))
require.NoError(t, m.Write("pkg/a.go", "a"))
require.NoError(t, m.Write("pkg/b.go", "b"))
require.NoError(t, m.Write("pkg/sub/c.go", "c"))
entries, err := m.List("")
require.NoError(t, err)
names := make([]string, len(entries))
for i, e := range entries {
names[i] = e.Name()
}
assert.Contains(t, names, "root.txt")
assert.Contains(t, names, "pkg")
entries, err = m.List("pkg")
require.NoError(t, err)
names = make([]string, len(entries))
for i, e := range entries {
names[i] = e.Name()
}
assert.Contains(t, names, "a.go")
assert.Contains(t, names, "b.go")
assert.Contains(t, names, "sub")
}
func TestStat_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("stat.txt", "hello"))
info, err := m.Stat("stat.txt")
require.NoError(t, err)
assert.Equal(t, int64(5), info.Size())
assert.False(t, info.IsDir())
// Root stat
info, err = m.Stat("")
require.NoError(t, err)
assert.True(t, info.IsDir())
}
func TestOpen_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("open.txt", "opened"))
f, err := m.Open("open.txt")
require.NoError(t, err)
defer f.Close()
data, err := io.ReadAll(f)
require.NoError(t, err)
assert.Equal(t, "opened", string(data))
}
func TestCreateAppend_Good(t *testing.T) {
m := New()
// Create
w, err := m.Create("new.txt")
require.NoError(t, err)
w.Write([]byte("hello"))
w.Close()
got, err := m.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "hello", got)
// Append
w, err = m.Append("new.txt")
require.NoError(t, err)
w.Write([]byte(" world"))
w.Close()
got, err = m.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "hello world", got)
}
func TestStreams_Good(t *testing.T) {
m := New()
// WriteStream
ws, err := m.WriteStream("stream.txt")
require.NoError(t, err)
ws.Write([]byte("streamed"))
ws.Close()
// ReadStream
rs, err := m.ReadStream("stream.txt")
require.NoError(t, err)
data, err := io.ReadAll(rs)
require.NoError(t, err)
assert.Equal(t, "streamed", string(data))
rs.Close()
}
func TestFileGetFileSet_Good(t *testing.T) {
m := New()
require.NoError(t, m.FileSet("alias.txt", "via set"))
got, err := m.FileGet("alias.txt")
require.NoError(t, err)
assert.Equal(t, "via set", got)
}
func TestSnapshotRestore_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("a.txt", "alpha"))
require.NoError(t, m.Write("b/c.txt", "charlie"))
snap, err := m.Snapshot()
require.NoError(t, err)
assert.NotEmpty(t, snap)
// Restore into a new Medium
m2, err := FromTar(snap)
require.NoError(t, err)
got, err := m2.Read("a.txt")
require.NoError(t, err)
assert.Equal(t, "alpha", got)
got, err = m2.Read("b/c.txt")
require.NoError(t, err)
assert.Equal(t, "charlie", got)
}
func TestRestore_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("original.txt", "before"))
snap, err := m.Snapshot()
require.NoError(t, err)
// Modify
require.NoError(t, m.Write("original.txt", "after"))
require.NoError(t, m.Write("extra.txt", "extra"))
// Restore to snapshot
require.NoError(t, m.Restore(snap))
got, err := m.Read("original.txt")
require.NoError(t, err)
assert.Equal(t, "before", got)
assert.False(t, m.Exists("extra.txt"))
}
func TestDataNode_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("test.txt", "borg"))
dn := m.DataNode()
assert.NotNil(t, dn)
// Verify we can use the DataNode directly
f, err := dn.Open("test.txt")
require.NoError(t, err)
defer f.Close()
data, err := io.ReadAll(f)
require.NoError(t, err)
assert.Equal(t, "borg", string(data))
}
func TestOverwrite_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("file.txt", "v1"))
require.NoError(t, m.Write("file.txt", "v2"))
got, err := m.Read("file.txt")
require.NoError(t, err)
assert.Equal(t, "v2", got)
}
func TestExists_Good(t *testing.T) {
m := New()
assert.True(t, m.Exists("")) // root
assert.False(t, m.Exists("x"))
require.NoError(t, m.Write("x", "y"))
assert.True(t, m.Exists("x"))
}
func TestReadDir_Ugly(t *testing.T) {
m := New()
// Reading a regular file path should succeed even though it is not a directory
require.NoError(t, m.Write("file.txt", "content"))
_, err := m.Read("file.txt")
require.NoError(t, err)
}

datanode/medium.go Normal file

@@ -0,0 +1,597 @@
// Example: medium := datanode.New()
// Example: _ = medium.Write("jobs/run.log", "started")
// Example: snapshot, _ := medium.Snapshot()
// Example: restored, _ := datanode.FromTar(snapshot)
package datanode
import (
"cmp"
goio "io"
"io/fs"
"path"
"slices"
"sync"
"time"
core "dappco.re/go/core"
borgdatanode "forge.lthn.ai/Snider/Borg/pkg/datanode"
)
var (
dataNodeWalkDir = func(fileSystem fs.FS, root string, callback fs.WalkDirFunc) error {
return fs.WalkDir(fileSystem, root, callback)
}
dataNodeOpen = func(dataNode *borgdatanode.DataNode, filePath string) (fs.File, error) {
return dataNode.Open(filePath)
}
dataNodeReadAll = func(reader goio.Reader) ([]byte, error) {
return goio.ReadAll(reader)
}
)
// Example: medium := datanode.New()
// Example: _ = medium.Write("jobs/run.log", "started")
// Example: snapshot, _ := medium.Snapshot()
type Medium struct {
dataNode *borgdatanode.DataNode
directorySet map[string]bool
lock sync.RWMutex
}
// Example: medium := datanode.New()
// Example: _ = medium.Write("jobs/run.log", "started")
func New() *Medium {
return &Medium{
dataNode: borgdatanode.New(),
directorySet: make(map[string]bool),
}
}
// Example: sourceMedium := datanode.New()
// Example: snapshot, _ := sourceMedium.Snapshot()
// Example: restored, _ := datanode.FromTar(snapshot)
func FromTar(data []byte) (*Medium, error) {
dataNode, err := borgdatanode.FromTar(data)
if err != nil {
return nil, core.E("datanode.FromTar", "failed to restore", err)
}
return &Medium{
dataNode: dataNode,
directorySet: make(map[string]bool),
}, nil
}
// Example: snapshot, _ := medium.Snapshot()
func (medium *Medium) Snapshot() ([]byte, error) {
medium.lock.RLock()
defer medium.lock.RUnlock()
data, err := medium.dataNode.ToTar()
if err != nil {
return nil, core.E("datanode.Snapshot", "tar failed", err)
}
return data, nil
}
// Example: _ = medium.Restore(snapshot)
func (medium *Medium) Restore(data []byte) error {
dataNode, err := borgdatanode.FromTar(data)
if err != nil {
return core.E("datanode.Restore", "tar failed", err)
}
medium.lock.Lock()
defer medium.lock.Unlock()
medium.dataNode = dataNode
medium.directorySet = make(map[string]bool)
return nil
}
// Example: dataNode := medium.DataNode()
func (medium *Medium) DataNode() *borgdatanode.DataNode {
medium.lock.RLock()
defer medium.lock.RUnlock()
return medium.dataNode
}
func normaliseEntryPath(filePath string) string {
filePath = core.TrimPrefix(filePath, "/")
filePath = path.Clean(filePath)
if filePath == "." {
return ""
}
return filePath
}
func (medium *Medium) Read(filePath string) (string, error) {
medium.lock.RLock()
defer medium.lock.RUnlock()
filePath = normaliseEntryPath(filePath)
file, err := medium.dataNode.Open(filePath)
if err != nil {
return "", core.E("datanode.Read", core.Concat("not found: ", filePath), fs.ErrNotExist)
}
defer file.Close()
info, err := file.Stat()
if err != nil {
return "", core.E("datanode.Read", core.Concat("stat failed: ", filePath), err)
}
if info.IsDir() {
return "", core.E("datanode.Read", core.Concat("is a directory: ", filePath), fs.ErrInvalid)
}
data, err := goio.ReadAll(file)
if err != nil {
return "", core.E("datanode.Read", core.Concat("read failed: ", filePath), err)
}
return string(data), nil
}
func (medium *Medium) Write(filePath, content string) error {
medium.lock.Lock()
defer medium.lock.Unlock()
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return core.E("datanode.Write", "empty path", fs.ErrInvalid)
}
medium.dataNode.AddData(filePath, []byte(content))
medium.ensureDirsLocked(path.Dir(filePath))
return nil
}
// WriteMode currently ignores mode: the backing DataNode stores content bytes only.
func (medium *Medium) WriteMode(filePath, content string, mode fs.FileMode) error {
return medium.Write(filePath, content)
}
func (medium *Medium) EnsureDir(filePath string) error {
medium.lock.Lock()
defer medium.lock.Unlock()
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return nil
}
medium.ensureDirsLocked(filePath)
return nil
}
// ensureDirsLocked marks a directory and all of its ancestors as existing.
// Caller must hold medium.lock for writing.
func (medium *Medium) ensureDirsLocked(directoryPath string) {
for directoryPath != "" && directoryPath != "." {
medium.directorySet[directoryPath] = true
directoryPath = path.Dir(directoryPath)
}
}
func (medium *Medium) IsFile(filePath string) bool {
medium.lock.RLock()
defer medium.lock.RUnlock()
filePath = normaliseEntryPath(filePath)
info, err := medium.dataNode.Stat(filePath)
return err == nil && !info.IsDir()
}
func (medium *Medium) Delete(filePath string) error {
medium.lock.Lock()
defer medium.lock.Unlock()
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return core.E("datanode.Delete", "cannot delete root", fs.ErrPermission)
}
info, err := medium.dataNode.Stat(filePath)
if err != nil {
if medium.directorySet[filePath] {
hasChildren, err := medium.hasPrefixLocked(filePath + "/")
if err != nil {
return core.E("datanode.Delete", core.Concat("failed to inspect directory: ", filePath), err)
}
if hasChildren {
return core.E("datanode.Delete", core.Concat("directory not empty: ", filePath), fs.ErrExist)
}
delete(medium.directorySet, filePath)
return nil
}
return core.E("datanode.Delete", core.Concat("not found: ", filePath), fs.ErrNotExist)
}
if info.IsDir() {
hasChildren, err := medium.hasPrefixLocked(filePath + "/")
if err != nil {
return core.E("datanode.Delete", core.Concat("failed to inspect directory: ", filePath), err)
}
if hasChildren {
return core.E("datanode.Delete", core.Concat("directory not empty: ", filePath), fs.ErrExist)
}
delete(medium.directorySet, filePath)
return nil
}
if err := medium.removeFileLocked(filePath); err != nil {
return core.E("datanode.Delete", core.Concat("failed to delete file: ", filePath), err)
}
return nil
}
func (medium *Medium) DeleteAll(filePath string) error {
medium.lock.Lock()
defer medium.lock.Unlock()
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return core.E("datanode.DeleteAll", "cannot delete root", fs.ErrPermission)
}
prefix := filePath + "/"
found := false
info, err := medium.dataNode.Stat(filePath)
if err == nil && !info.IsDir() {
if err := medium.removeFileLocked(filePath); err != nil {
return core.E("datanode.DeleteAll", core.Concat("failed to delete file: ", filePath), err)
}
found = true
}
entries, err := medium.collectAllLocked()
if err != nil {
return core.E("datanode.DeleteAll", core.Concat("failed to inspect tree: ", filePath), err)
}
for _, name := range entries {
if name == filePath || core.HasPrefix(name, prefix) {
if err := medium.removeFileLocked(name); err != nil {
return core.E("datanode.DeleteAll", core.Concat("failed to delete file: ", name), err)
}
found = true
}
}
for directoryPath := range medium.directorySet {
if directoryPath == filePath || core.HasPrefix(directoryPath, prefix) {
delete(medium.directorySet, directoryPath)
found = true
}
}
if !found {
return core.E("datanode.DeleteAll", core.Concat("not found: ", filePath), fs.ErrNotExist)
}
return nil
}
func (medium *Medium) Rename(oldPath, newPath string) error {
medium.lock.Lock()
defer medium.lock.Unlock()
oldPath = normaliseEntryPath(oldPath)
newPath = normaliseEntryPath(newPath)
info, err := medium.dataNode.Stat(oldPath)
if err != nil {
return core.E("datanode.Rename", core.Concat("not found: ", oldPath), fs.ErrNotExist)
}
if !info.IsDir() {
data, err := medium.readFileLocked(oldPath)
if err != nil {
return core.E("datanode.Rename", core.Concat("failed to read source file: ", oldPath), err)
}
medium.dataNode.AddData(newPath, data)
medium.ensureDirsLocked(path.Dir(newPath))
if err := medium.removeFileLocked(oldPath); err != nil {
return core.E("datanode.Rename", core.Concat("failed to remove source file: ", oldPath), err)
}
return nil
}
oldPrefix := oldPath + "/"
newPrefix := newPath + "/"
entries, err := medium.collectAllLocked()
if err != nil {
return core.E("datanode.Rename", core.Concat("failed to inspect tree: ", oldPath), err)
}
for _, name := range entries {
if core.HasPrefix(name, oldPrefix) {
newName := core.Concat(newPrefix, core.TrimPrefix(name, oldPrefix))
data, err := medium.readFileLocked(name)
if err != nil {
return core.E("datanode.Rename", core.Concat("failed to read source file: ", name), err)
}
medium.dataNode.AddData(newName, data)
if err := medium.removeFileLocked(name); err != nil {
return core.E("datanode.Rename", core.Concat("failed to remove source file: ", name), err)
}
}
}
dirsToMove := make(map[string]string)
for directoryPath := range medium.directorySet {
if directoryPath == oldPath || core.HasPrefix(directoryPath, oldPrefix) {
newDirectoryPath := core.Concat(newPath, core.TrimPrefix(directoryPath, oldPath))
dirsToMove[directoryPath] = newDirectoryPath
}
}
for oldDirectoryPath, newDirectoryPath := range dirsToMove {
delete(medium.directorySet, oldDirectoryPath)
medium.directorySet[newDirectoryPath] = true
}
return nil
}
func (medium *Medium) List(filePath string) ([]fs.DirEntry, error) {
medium.lock.RLock()
defer medium.lock.RUnlock()
filePath = normaliseEntryPath(filePath)
entries, err := medium.dataNode.ReadDir(filePath)
if err != nil {
if filePath == "" || medium.directorySet[filePath] {
return []fs.DirEntry{}, nil
}
return nil, core.E("datanode.List", core.Concat("not found: ", filePath), fs.ErrNotExist)
}
prefix := filePath
if prefix != "" {
prefix += "/"
}
seen := make(map[string]bool)
for _, entry := range entries {
seen[entry.Name()] = true
}
for directoryPath := range medium.directorySet {
if !core.HasPrefix(directoryPath, prefix) {
continue
}
rest := core.TrimPrefix(directoryPath, prefix)
if rest == "" {
continue
}
first := core.SplitN(rest, "/", 2)[0]
if !seen[first] {
seen[first] = true
entries = append(entries, &dirEntry{name: first})
}
}
slices.SortFunc(entries, func(a, b fs.DirEntry) int {
return cmp.Compare(a.Name(), b.Name())
})
return entries, nil
}
func (medium *Medium) Stat(filePath string) (fs.FileInfo, error) {
medium.lock.RLock()
defer medium.lock.RUnlock()
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return &fileInfo{name: ".", isDir: true, mode: fs.ModeDir | 0755}, nil
}
info, err := medium.dataNode.Stat(filePath)
if err == nil {
return info, nil
}
if medium.directorySet[filePath] {
return &fileInfo{name: path.Base(filePath), isDir: true, mode: fs.ModeDir | 0755}, nil
}
return nil, core.E("datanode.Stat", core.Concat("not found: ", filePath), fs.ErrNotExist)
}
func (medium *Medium) Open(filePath string) (fs.File, error) {
medium.lock.RLock()
defer medium.lock.RUnlock()
filePath = normaliseEntryPath(filePath)
return medium.dataNode.Open(filePath)
}
func (medium *Medium) Create(filePath string) (goio.WriteCloser, error) {
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return nil, core.E("datanode.Create", "empty path", fs.ErrInvalid)
}
return &writeCloser{medium: medium, path: filePath}, nil
}
func (medium *Medium) Append(filePath string) (goio.WriteCloser, error) {
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return nil, core.E("datanode.Append", "empty path", fs.ErrInvalid)
}
var existing []byte
medium.lock.RLock()
// Stat directly rather than calling IsFile: IsFile re-acquires the read
// lock, which can deadlock if a writer is queued between the two RLocks.
info, err := medium.dataNode.Stat(filePath)
if err == nil && !info.IsDir() {
data, err := medium.readFileLocked(filePath)
if err != nil {
medium.lock.RUnlock()
return nil, core.E("datanode.Append", core.Concat("failed to read existing content: ", filePath), err)
}
existing = data
}
medium.lock.RUnlock()
return &writeCloser{medium: medium, path: filePath, buffer: existing}, nil
}
func (medium *Medium) ReadStream(filePath string) (goio.ReadCloser, error) {
medium.lock.RLock()
defer medium.lock.RUnlock()
filePath = normaliseEntryPath(filePath)
file, err := medium.dataNode.Open(filePath)
if err != nil {
return nil, core.E("datanode.ReadStream", core.Concat("not found: ", filePath), fs.ErrNotExist)
}
// fs.File already includes Read and Close, so this assertion cannot fail.
return file.(goio.ReadCloser), nil
}
func (medium *Medium) WriteStream(filePath string) (goio.WriteCloser, error) {
return medium.Create(filePath)
}
func (medium *Medium) Exists(filePath string) bool {
medium.lock.RLock()
defer medium.lock.RUnlock()
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return true
}
_, err := medium.dataNode.Stat(filePath)
if err == nil {
return true
}
return medium.directorySet[filePath]
}
func (medium *Medium) IsDir(filePath string) bool {
medium.lock.RLock()
defer medium.lock.RUnlock()
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return true
}
info, err := medium.dataNode.Stat(filePath)
if err == nil {
return info.IsDir()
}
return medium.directorySet[filePath]
}
// hasPrefixLocked reports whether any file or explicit directory path
// starts with prefix. Caller must hold medium.lock.
func (medium *Medium) hasPrefixLocked(prefix string) (bool, error) {
entries, err := medium.collectAllLocked()
if err != nil {
return false, err
}
for _, name := range entries {
if core.HasPrefix(name, prefix) {
return true, nil
}
}
for directoryPath := range medium.directorySet {
if core.HasPrefix(directoryPath, prefix) {
return true, nil
}
}
return false, nil
}
// collectAllLocked returns every file path stored in the DataNode.
// Caller must hold medium.lock.
func (medium *Medium) collectAllLocked() ([]string, error) {
var names []string
err := dataNodeWalkDir(medium.dataNode, ".", func(filePath string, entry fs.DirEntry, err error) error {
if err != nil {
return err
}
if !entry.IsDir() {
names = append(names, filePath)
}
return nil
})
return names, err
}
// readFileLocked reads a file's contents, surfacing the read error before
// any close error. Caller must hold medium.lock.
func (medium *Medium) readFileLocked(filePath string) ([]byte, error) {
file, err := dataNodeOpen(medium.dataNode, filePath)
if err != nil {
return nil, err
}
data, readErr := dataNodeReadAll(file)
closeErr := file.Close()
if readErr != nil {
return nil, readErr
}
if closeErr != nil {
return nil, closeErr
}
return data, nil
}
// removeFileLocked removes a single file by rebuilding the DataNode without
// it, because Borg's DataNode exposes no Remove method. Caller must hold
// medium.lock for writing.
func (medium *Medium) removeFileLocked(target string) error {
entries, err := medium.collectAllLocked()
if err != nil {
return err
}
newDataNode := borgdatanode.New()
for _, name := range entries {
if name == target {
continue
}
data, err := medium.readFileLocked(name)
if err != nil {
return err
}
newDataNode.AddData(name, data)
}
medium.dataNode = newDataNode
return nil
}
type writeCloser struct {
medium *Medium
path string
buffer []byte
}
func (writer *writeCloser) Write(data []byte) (int, error) {
writer.buffer = append(writer.buffer, data...)
return len(data), nil
}
func (writer *writeCloser) Close() error {
writer.medium.lock.Lock()
defer writer.medium.lock.Unlock()
writer.medium.dataNode.AddData(writer.path, writer.buffer)
writer.medium.ensureDirsLocked(path.Dir(writer.path))
return nil
}
type dirEntry struct {
name string
}
func (entry *dirEntry) Name() string { return entry.name }
func (entry *dirEntry) IsDir() bool { return true }
func (entry *dirEntry) Type() fs.FileMode { return fs.ModeDir }
func (entry *dirEntry) Info() (fs.FileInfo, error) {
return &fileInfo{name: entry.name, isDir: true, mode: fs.ModeDir | 0755}, nil
}
type fileInfo struct {
name string
size int64
mode fs.FileMode
modTime time.Time
isDir bool
}
func (info *fileInfo) Name() string { return info.name }
func (info *fileInfo) Size() int64 { return info.size }
func (info *fileInfo) Mode() fs.FileMode { return info.mode }
func (info *fileInfo) ModTime() time.Time { return info.modTime }
func (info *fileInfo) IsDir() bool { return info.isDir }
func (info *fileInfo) Sys() any { return nil }

datanode/medium_test.go Normal file

@@ -0,0 +1,418 @@
package datanode
import (
"io"
"io/fs"
"testing"
core "dappco.re/go/core"
coreio "dappco.re/go/core/io"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var _ coreio.Medium = (*Medium)(nil)
func TestDataNode_ReadWrite_Good(t *testing.T) {
dataNodeMedium := New()
err := dataNodeMedium.Write("hello.txt", "world")
require.NoError(t, err)
got, err := dataNodeMedium.Read("hello.txt")
require.NoError(t, err)
assert.Equal(t, "world", got)
}
func TestDataNode_ReadWrite_Bad(t *testing.T) {
dataNodeMedium := New()
_, err := dataNodeMedium.Read("missing.txt")
assert.Error(t, err)
err = dataNodeMedium.Write("", "content")
assert.Error(t, err)
}
func TestDataNode_NestedPaths_Good(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("a/b/c/deep.txt", "deep"))
	got, err := dataNodeMedium.Read("a/b/c/deep.txt")
	require.NoError(t, err)
	assert.Equal(t, "deep", got)
	assert.True(t, dataNodeMedium.IsDir("a"))
	assert.True(t, dataNodeMedium.IsDir("a/b"))
	assert.True(t, dataNodeMedium.IsDir("a/b/c"))
}

func TestDataNode_LeadingSlash_Good(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("/leading/file.txt", "stripped"))
	got, err := dataNodeMedium.Read("leading/file.txt")
	require.NoError(t, err)
	assert.Equal(t, "stripped", got)
	got, err = dataNodeMedium.Read("/leading/file.txt")
	require.NoError(t, err)
	assert.Equal(t, "stripped", got)
}

func TestDataNode_IsFile_Good(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("file.go", "package main"))
	assert.True(t, dataNodeMedium.IsFile("file.go"))
	assert.False(t, dataNodeMedium.IsFile("missing.go"))
	assert.False(t, dataNodeMedium.IsFile(""))
}

func TestDataNode_EnsureDir_Good(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.EnsureDir("foo/bar/baz"))
	assert.True(t, dataNodeMedium.IsDir("foo"))
	assert.True(t, dataNodeMedium.IsDir("foo/bar"))
	assert.True(t, dataNodeMedium.IsDir("foo/bar/baz"))
	assert.True(t, dataNodeMedium.Exists("foo/bar/baz"))
}

func TestDataNode_Delete_Good(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("delete-me.txt", "bye"))
	assert.True(t, dataNodeMedium.Exists("delete-me.txt"))
	require.NoError(t, dataNodeMedium.Delete("delete-me.txt"))
	assert.False(t, dataNodeMedium.Exists("delete-me.txt"))
}

func TestDataNode_Delete_Bad(t *testing.T) {
	dataNodeMedium := New()
	assert.Error(t, dataNodeMedium.Delete("ghost.txt"))
	require.NoError(t, dataNodeMedium.Write("dir/file.txt", "content"))
	assert.Error(t, dataNodeMedium.Delete("dir"))
}

func TestDataNode_Delete_DirectoryInspectionFailure_Bad(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("dir/file.txt", "content"))
	original := dataNodeWalkDir
	dataNodeWalkDir = func(_ fs.FS, _ string, _ fs.WalkDirFunc) error {
		return core.NewError("walk failed")
	}
	t.Cleanup(func() {
		dataNodeWalkDir = original
	})
	err := dataNodeMedium.Delete("dir")
	require.Error(t, err)
	assert.Contains(t, err.Error(), "failed to inspect directory")
}

func TestDataNode_DeleteAll_Good(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("tree/a.txt", "a"))
	require.NoError(t, dataNodeMedium.Write("tree/sub/b.txt", "b"))
	require.NoError(t, dataNodeMedium.Write("keep.txt", "keep"))
	require.NoError(t, dataNodeMedium.DeleteAll("tree"))
	assert.False(t, dataNodeMedium.Exists("tree/a.txt"))
	assert.False(t, dataNodeMedium.Exists("tree/sub/b.txt"))
	assert.True(t, dataNodeMedium.Exists("keep.txt"))
}

func TestDataNode_DeleteAll_WalkFailure_Bad(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("tree/a.txt", "a"))
	original := dataNodeWalkDir
	dataNodeWalkDir = func(_ fs.FS, _ string, _ fs.WalkDirFunc) error {
		return core.NewError("walk failed")
	}
	t.Cleanup(func() {
		dataNodeWalkDir = original
	})
	err := dataNodeMedium.DeleteAll("tree")
	require.Error(t, err)
	assert.Contains(t, err.Error(), "failed to inspect tree")
}

func TestDataNode_Delete_RemoveFailure_Bad(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("keep.txt", "keep"))
	require.NoError(t, dataNodeMedium.Write("bad.txt", "bad"))
	original := dataNodeReadAll
	dataNodeReadAll = func(_ io.Reader) ([]byte, error) {
		return nil, core.NewError("read failed")
	}
	t.Cleanup(func() {
		dataNodeReadAll = original
	})
	err := dataNodeMedium.Delete("bad.txt")
	require.Error(t, err)
	assert.Contains(t, err.Error(), "failed to delete file")
}

func TestDataNode_Rename_Good(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("old.txt", "content"))
	require.NoError(t, dataNodeMedium.Rename("old.txt", "new.txt"))
	assert.False(t, dataNodeMedium.Exists("old.txt"))
	got, err := dataNodeMedium.Read("new.txt")
	require.NoError(t, err)
	assert.Equal(t, "content", got)
}

func TestDataNode_RenameDir_Good(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("src/a.go", "package a"))
	require.NoError(t, dataNodeMedium.Write("src/sub/b.go", "package b"))
	require.NoError(t, dataNodeMedium.Rename("src", "destination"))
	assert.False(t, dataNodeMedium.Exists("src/a.go"))
	got, err := dataNodeMedium.Read("destination/a.go")
	require.NoError(t, err)
	assert.Equal(t, "package a", got)
	got, err = dataNodeMedium.Read("destination/sub/b.go")
	require.NoError(t, err)
	assert.Equal(t, "package b", got)
}

func TestDataNode_RenameDir_ReadFailure_Bad(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("src/a.go", "package a"))
	original := dataNodeReadAll
	dataNodeReadAll = func(_ io.Reader) ([]byte, error) {
		return nil, core.NewError("read failed")
	}
	t.Cleanup(func() {
		dataNodeReadAll = original
	})
	err := dataNodeMedium.Rename("src", "destination")
	require.Error(t, err)
	assert.Contains(t, err.Error(), "failed to read source file")
}

func TestDataNode_List_Good(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("root.txt", "r"))
	require.NoError(t, dataNodeMedium.Write("pkg/a.go", "a"))
	require.NoError(t, dataNodeMedium.Write("pkg/b.go", "b"))
	require.NoError(t, dataNodeMedium.Write("pkg/sub/c.go", "c"))
	entries, err := dataNodeMedium.List("")
	require.NoError(t, err)
	names := make([]string, len(entries))
	for index, entry := range entries {
		names[index] = entry.Name()
	}
	assert.Contains(t, names, "root.txt")
	assert.Contains(t, names, "pkg")
	entries, err = dataNodeMedium.List("pkg")
	require.NoError(t, err)
	names = make([]string, len(entries))
	for index, entry := range entries {
		names[index] = entry.Name()
	}
	assert.Contains(t, names, "a.go")
	assert.Contains(t, names, "b.go")
	assert.Contains(t, names, "sub")
}

func TestDataNode_Stat_Good(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("stat.txt", "hello"))
	info, err := dataNodeMedium.Stat("stat.txt")
	require.NoError(t, err)
	assert.Equal(t, int64(5), info.Size())
	assert.False(t, info.IsDir())
	info, err = dataNodeMedium.Stat("")
	require.NoError(t, err)
	assert.True(t, info.IsDir())
}

func TestDataNode_Open_Good(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("open.txt", "opened"))
	file, err := dataNodeMedium.Open("open.txt")
	require.NoError(t, err)
	defer file.Close()
	data, err := io.ReadAll(file)
	require.NoError(t, err)
	assert.Equal(t, "opened", string(data))
}

func TestDataNode_CreateAppend_Good(t *testing.T) {
	dataNodeMedium := New()
	writer, err := dataNodeMedium.Create("new.txt")
	require.NoError(t, err)
	_, _ = writer.Write([]byte("hello"))
	require.NoError(t, writer.Close())
	got, err := dataNodeMedium.Read("new.txt")
	require.NoError(t, err)
	assert.Equal(t, "hello", got)
	writer, err = dataNodeMedium.Append("new.txt")
	require.NoError(t, err)
	_, _ = writer.Write([]byte(" world"))
	require.NoError(t, writer.Close())
	got, err = dataNodeMedium.Read("new.txt")
	require.NoError(t, err)
	assert.Equal(t, "hello world", got)
}

func TestDataNode_Append_ReadFailure_Bad(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("new.txt", "hello"))
	original := dataNodeReadAll
	dataNodeReadAll = func(_ io.Reader) ([]byte, error) {
		return nil, core.NewError("read failed")
	}
	t.Cleanup(func() {
		dataNodeReadAll = original
	})
	_, err := dataNodeMedium.Append("new.txt")
	require.Error(t, err)
	assert.Contains(t, err.Error(), "failed to read existing content")
}

func TestDataNode_Streams_Good(t *testing.T) {
	dataNodeMedium := New()
	writeStream, err := dataNodeMedium.WriteStream("stream.txt")
	require.NoError(t, err)
	_, _ = writeStream.Write([]byte("streamed"))
	require.NoError(t, writeStream.Close())
	readStream, err := dataNodeMedium.ReadStream("stream.txt")
	require.NoError(t, err)
	data, err := io.ReadAll(readStream)
	require.NoError(t, err)
	assert.Equal(t, "streamed", string(data))
	require.NoError(t, readStream.Close())
}

func TestDataNode_SnapshotRestore_Good(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("a.txt", "alpha"))
	require.NoError(t, dataNodeMedium.Write("b/c.txt", "charlie"))
	snapshotData, err := dataNodeMedium.Snapshot()
	require.NoError(t, err)
	assert.NotEmpty(t, snapshotData)
	restoredNode, err := FromTar(snapshotData)
	require.NoError(t, err)
	got, err := restoredNode.Read("a.txt")
	require.NoError(t, err)
	assert.Equal(t, "alpha", got)
	got, err = restoredNode.Read("b/c.txt")
	require.NoError(t, err)
	assert.Equal(t, "charlie", got)
}

func TestDataNode_Restore_Good(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("original.txt", "before"))
	snapshotData, err := dataNodeMedium.Snapshot()
	require.NoError(t, err)
	require.NoError(t, dataNodeMedium.Write("original.txt", "after"))
	require.NoError(t, dataNodeMedium.Write("extra.txt", "extra"))
	require.NoError(t, dataNodeMedium.Restore(snapshotData))
	got, err := dataNodeMedium.Read("original.txt")
	require.NoError(t, err)
	assert.Equal(t, "before", got)
	assert.False(t, dataNodeMedium.Exists("extra.txt"))
}

func TestDataNode_DataNode_Good(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("test.txt", "borg"))
	dataNode := dataNodeMedium.DataNode()
	assert.NotNil(t, dataNode)
	file, err := dataNode.Open("test.txt")
	require.NoError(t, err)
	defer file.Close()
	data, err := io.ReadAll(file)
	require.NoError(t, err)
	assert.Equal(t, "borg", string(data))
}

func TestDataNode_Overwrite_Good(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("file.txt", "v1"))
	require.NoError(t, dataNodeMedium.Write("file.txt", "v2"))
	got, err := dataNodeMedium.Read("file.txt")
	require.NoError(t, err)
	assert.Equal(t, "v2", got)
}

func TestDataNode_Exists_Good(t *testing.T) {
	dataNodeMedium := New()
	assert.True(t, dataNodeMedium.Exists(""))
	assert.False(t, dataNodeMedium.Exists("x"))
	require.NoError(t, dataNodeMedium.Write("x", "y"))
	assert.True(t, dataNodeMedium.Exists("x"))
}

func TestDataNode_ReadExistingFile_Good(t *testing.T) {
	dataNodeMedium := New()
	require.NoError(t, dataNodeMedium.Write("file.txt", "content"))
	got, err := dataNodeMedium.Read("file.txt")
	require.NoError(t, err)
	assert.Equal(t, "content", got)
}

doc.go (new file, 5 lines)

@@ -0,0 +1,5 @@
// Example: medium, _ := io.NewSandboxed("/srv/app")
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
// Example: backup, _ := io.NewSandboxed("/srv/backup")
// Example: _ = io.Copy(medium, "data/report.json", backup, "daily/report.json")
package io


@@ -0,0 +1,440 @@
# RFC-025: Agent Experience (AX) Design Principles
- **Status:** Draft
- **Authors:** Snider, Cladius
- **Date:** 2026-03-19
- **Applies to:** All Core ecosystem packages (CoreGO, CorePHP, CoreTS, core-agent)
## Abstract
Agent Experience (AX) is a design paradigm for software systems where the primary code consumer is an AI agent, not a human developer. AX sits alongside User Experience (UX) and Developer Experience (DX) as the third era of interface design.
This RFC establishes AX as a formal design principle for the Core ecosystem and defines the conventions that follow from it.
## Motivation
As of early 2026, AI agents write, review, and maintain the majority of code in the Core ecosystem. The original author has not manually edited code (outside of Core struct design) since October 2025. Code is processed semantically — agents reason about intent, not characters.
Design patterns inherited from the human-developer era optimise for the wrong consumer:
- **Short names** save keystrokes but increase semantic ambiguity
- **Functional option chains** are fluent for humans but opaque for agents tracing configuration
- **Error-at-every-call-site** produces 50% boilerplate that obscures intent
- **Generic type parameters** force agents to carry type context that the runtime already has
- **Panic-hiding conventions** (`Must*`) create implicit control flow that agents must special-case
AX acknowledges this shift and provides principles for designing code, APIs, file structures, and conventions that serve AI agents as first-class consumers.
## The Three Eras
| Era | Primary Consumer | Optimises For | Key Metric |
|-----|-----------------|---------------|------------|
| UX | End users | Discoverability, forgiveness, visual clarity | Task completion time |
| DX | Developers | Typing speed, IDE support, convention familiarity | Time to first commit |
| AX | AI agents | Predictability, composability, semantic navigation | Correct-on-first-pass rate |
AX does not replace UX or DX. End users still need good UX. Developers still need good DX. But when the primary code author and maintainer is an AI agent, the codebase should be designed for that consumer first.
## Principles
### 1. Predictable Names Over Short Names
Names are tokens that agents pattern-match across languages and contexts. Abbreviations introduce mapping overhead.
```
Config not Cfg
Service not Srv
Embed not Emb
Error not Err (as a subsystem name; err for local variables is fine)
Options not Opts
```
**Rule:** If a name would require a comment to explain, it is too short.
**Exception:** Industry-standard abbreviations that are universally understood (`HTTP`, `URL`, `ID`, `IPC`, `I18n`) are acceptable. The test: would an agent trained on any mainstream language recognise it without context?
### 2. Comments as Usage Examples
The function signature tells WHAT. The comment shows HOW with real values.
```go
// Detect the project type from files present
setup.Detect("/path/to/project")
// Set up a workspace with auto-detected template
setup.Run(setup.Options{Path: ".", Template: "auto"})
// Scaffold a PHP module workspace
setup.Run(setup.Options{Path: "./my-module", Template: "php"})
```
**Rule:** If a comment restates what the type signature already says, delete it. If a comment shows a concrete usage with realistic values, keep it.
**Rationale:** Agents learn from examples more effectively than from descriptions. A comment like "Run executes the setup process" adds zero information. A comment like `setup.Run(setup.Options{Path: ".", Template: "auto"})` teaches an agent exactly how to call the function.
### 3. Path Is Documentation
File and directory paths should be self-describing. An agent navigating the filesystem should understand what it is looking at without reading a README.
```
flow/deploy/to/homelab.yaml — deploy TO the homelab
flow/deploy/from/github.yaml — deploy FROM GitHub
flow/code/review.yaml — code review flow
template/file/go/struct.go.tmpl — Go struct file template
template/dir/workspace/php/ — PHP workspace scaffold
```
**Rule:** If an agent needs to read a file to understand what a directory contains, the directory naming has failed.
**Corollary:** The unified path convention (folder structure = HTTP route = CLI command = test path) is AX-native. One path, every surface.
### 4. Templates Over Freeform
When an agent generates code from a template, the output is constrained to known-good shapes. When an agent writes freeform, the output varies.
```go
// Template-driven — consistent output
lib.RenderFile("php/action", data)
lib.ExtractDir("php", targetDir, data)
// Freeform — variance in output
"write a PHP action class that..."
```
**Rule:** For any code pattern that recurs, provide a template. Templates are guardrails for agents.
**Scope:** Templates apply to file generation, workspace scaffolding, config generation, and commit messages. They do NOT apply to novel logic — agents should write business logic freeform with the domain knowledge available.
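The constrained-shape idea can be sketched with the standard library alone. `renderStruct` below is a hypothetical helper, not the ecosystem's `lib.RenderFile`; it shows how a template fixes the output shape while only the data varies:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderStruct renders a fixed-shape Go struct declaration from a name.
// The template is the guardrail: every output has the same known-good shape.
func renderStruct(name string) string {
	tmpl := template.Must(template.New("struct").Parse(
		"type {{.Name}} struct {\n\tPath string\n}\n"))
	var buf bytes.Buffer
	_ = tmpl.Execute(&buf, map[string]string{"Name": name})
	return buf.String()
}

func main() {
	fmt.Print(renderStruct("Options"))
}
```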
### 5. Declarative Over Imperative
Agents reason better about declarations of intent than sequences of operations.
```yaml
# Declarative — agent sees what should happen
steps:
  - name: build
    flow: tools/docker-build
    with:
      context: "{{ .app_dir }}"
      image_name: "{{ .image_name }}"
  - name: deploy
    flow: deploy/with/docker
    with:
      host: "{{ .host }}"
```
```go
// Imperative — agent must trace execution
cmd := exec.Command("docker", "build", "--platform", "linux/amd64", "-t", imageName, ".")
cmd.Dir = appDir
if err := cmd.Run(); err != nil {
	return fmt.Errorf("docker build: %w", err)
}
```
**Rule:** Orchestration, configuration, and pipeline logic should be declarative (YAML/JSON). Implementation logic should be imperative (Go/PHP/TS). The boundary is: if an agent needs to compose or modify the logic, make it declarative.
### 6. Universal Types (Core Primitives)
Every component in the ecosystem accepts and returns the same primitive types. An agent processing any level of the tree sees identical shapes.
```go
// Universal contract
setup.Run(core.Options{Path: ".", Template: "auto"})
brain.New(core.Options{Name: "openbrain"})
deploy.Run(core.Options{Flow: "deploy/to/homelab"})
// Fractal — Core itself is a Service
core.New(core.Options{
	Services: []core.Service{
		process.New(core.Options{Name: "process"}),
		brain.New(core.Options{Name: "brain"}),
	},
})
```
**Core primitive types:**
| Type | Purpose |
|------|---------|
| `core.Options` | Input configuration (what you want) |
| `core.Config` | Runtime settings (what is active) |
| `core.Data` | Embedded or stored content |
| `core.Service` | A managed component with lifecycle |
| `core.Result[T]` | Return value with OK/fail state |
**What this replaces:**
| Go Convention | Core AX | Why |
|--------------|---------|-----|
| `func With*(v) Option` | `core.Options{Field: v}` | Struct literal is parseable; option chain requires tracing |
| `func Must*(v) T` | `core.Result[T]` | No hidden panics; errors flow through Core |
| `func *For[T](c) T` | `c.Service("name")` | String lookup is greppable; generics require type context |
| `val, err :=` everywhere | Single return via `core.Result` | Intent not obscured by error handling |
| `_ = err` | Never needed | Core handles all errors internally |
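A minimal sketch of the `core.Result[T]` shape the table describes — the actual Core definition may differ; `OK`, `Fail`, and `lookup` are illustrative names, not the real API:

```go
package main

import (
	"errors"
	"fmt"
)

// Result carries a value or an error in one return — no hidden panics,
// no val-err pair at every call site.
type Result[T any] struct {
	value T
	err   error
}

func OK[T any](v T) Result[T]         { return Result[T]{value: v} }
func Fail[T any](err error) Result[T] { return Result[T]{err: err} }

func (r Result[T]) IsOK() bool { return r.err == nil }
func (r Result[T]) Value() T   { return r.value }
func (r Result[T]) Err() error { return r.err }

// lookup stands in for a greppable string-keyed service lookup.
func lookup(name string) Result[string] {
	if name == "brain" {
		return OK("brain-service")
	}
	return Fail[string](errors.New("unknown service: " + name))
}

func main() {
	r := lookup("brain")
	fmt.Println(r.IsOK(), r.Value())
}
```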
### 7. Directory as Semantics
The directory structure tells an agent the intent before it reads a word. Top-level directories are semantic categories, not organisational bins.
```
plans/
├── code/ # Pure primitives — read for WHAT exists
├── project/ # Products — read for WHAT we're building and WHY
└── rfc/ # Contracts — read for constraints and rules
```
**Rule:** An agent should know what kind of document it's reading from the path alone. `code/core/go/io/RFC.md` = a lib primitive spec. `project/ofm/RFC.md` = a product spec that cross-references code/. `rfc/snider/borg/RFC-BORG-006-SMSG-FORMAT.md` = an immutable contract for the Borg SMSG protocol.
**Corollary:** The three-way split (code/project/rfc) extends principle 3 (Path Is Documentation) from files to entire subtrees. The path IS the metadata.
### 8. Lib Never Imports Consumer
Dependency flows one direction. Libraries define primitives. Consumers compose from them. A new feature in a consumer can never break a library.
```
code/core/go/* → lib tier (stable foundation)
code/core/agent/ → consumer tier (composes from go/*)
code/core/cli/ → consumer tier (composes from go/*)
code/core/gui/ → consumer tier (composes from go/*)
```
**Rule:** If package A is in `go/` and package B is in the consumer tier, B may import A but A must never import B. The repo naming convention enforces this: `go-{name}` = lib, bare `{name}` = consumer.
**Why this matters for agents:** When an agent is dispatched to implement a feature in `core/agent`, it can freely import from `go-io`, `go-scm`, `go-process`. But if an agent is dispatched to `go-io`, it knows its changes are foundational — every consumer depends on it, so the contract must not break.
### 9. Issues Are N+(rounds) Deep
Problems in code and specs are layered. Surface issues mask deeper issues. Fixing the surface reveals the next layer. This is not a failure mode — it is the discovery process.
```
Pass 1: Find 16 issues (surface — naming, imports, obvious errors)
Pass 2: Find 11 issues (structural — contradictions, missing types)
Pass 3: Find 5 issues (architectural — signature mismatches, registration gaps)
Pass 4: Find 4 issues (contract — cross-spec API mismatches)
Pass 5: Find 2 issues (mechanical — path format, nil safety)
Pass N: Findings are trivial → spec/code is complete
```
**Rule:** Iteration is required, not a failure. Each pass sees what the previous pass could not, because the context changed. An agent dispatched with the same task on the same repo will find different things each time — this is correct behaviour.
**Corollary:** The cheapest model should do the most passes (surface work). The frontier model should arrive last, when only deep issues remain. Tiered iteration: grunt model grinds → mid model pre-warms → frontier model polishes.
**Anti-pattern:** One-shot generation expecting valid output. No model, no human, produces correct-on-first-pass for non-trivial work. Expecting it wastes the first pass on surface issues that a cheaper pass would have caught.
### 10. CLI Tests as Artifact Validation
Unit tests verify the code. CLI tests verify the binary. The directory structure IS the command structure — path maps to command, Taskfile runs the test.
```
tests/cli/
├── core/
│   └── lint/
│       ├── Taskfile.yaml        ← test `core-lint` (root)
│       ├── run/
│       │   ├── Taskfile.yaml    ← test `core-lint run`
│       │   └── fixtures/
│       ├── go/
│       │   ├── Taskfile.yaml    ← test `core-lint go`
│       │   └── fixtures/
│       └── security/
│           ├── Taskfile.yaml    ← test `core-lint security`
│           └── fixtures/
```
**Rule:** Every CLI command has a matching `tests/cli/{path}/Taskfile.yaml`. The Taskfile runs the compiled binary against fixtures with known inputs and validates the output. If the CLI test passes, the underlying actions work — because CLI commands call actions, MCP tools call actions, API endpoints call actions. Test the CLI, trust the rest.
**Pattern:**
```yaml
# tests/cli/core/lint/go/Taskfile.yaml
version: '3'
tasks:
  test:
    cmds:
      - core-lint go --output json fixtures/ > /tmp/result.json
      - jq -e '.findings | length > 0' /tmp/result.json
      - jq -e '.summary.passed == false' /tmp/result.json
```
**Why this matters for agents:** An agent can validate its own work by running `task test` in the matching `tests/cli/` directory. No test framework, no mocking, no setup — just the binary, fixtures, and `jq` assertions. The agent builds the binary, runs the test, sees the result. If it fails, the agent can read the fixture, read the output, and fix the code.
**Corollary:** Fixtures are planted bugs. Each fixture file has a known issue that the linter must find. If the linter doesn't find it, the test fails. Fixtures are the spec for what the tool must detect — they ARE the test cases, not descriptions of test cases.
## Applying AX to Existing Patterns
### File Structure
```
# AX-native: path describes content
core/agent/
├── go/ # Go source
├── php/ # PHP source
├── ui/ # Frontend source
├── claude/ # Claude Code plugin
└── codex/ # Codex plugin
# Not AX: generic names requiring README
src/
├── lib/
├── utils/
└── helpers/
```
### Error Handling
```go
// AX-native: errors are infrastructure, not application logic
svc := c.Service("brain")
cfg := c.Config().Get("database.host")
// Errors logged by Core. Code reads like a spec.
// Not AX: errors dominate the code
svc, err := c.ServiceFor[brain.Service]()
if err != nil {
	return fmt.Errorf("get brain service: %w", err)
}
cfg, err := c.Config().Get("database.host")
if err != nil {
	_ = err // silenced because "it'll be fine"
}
```
### API Design
```go
// AX-native: one shape, every surface
core.New(core.Options{
	Name:     "my-app",
	Services: []core.Service{...},
	Config:   core.Config{...},
})
// Not AX: multiple patterns for the same thing
core.New(
	core.WithName("my-app"),
	core.WithService(factory1),
	core.WithService(factory2),
	core.WithConfig(cfg),
)
```
## The Plans Convention — AX Development Lifecycle
The `plans/` directory structure encodes a development methodology designed for how generative AI actually works: iterative refinement across structured phases, not one-shot generation.
### The Three-Way Split
```
plans/
├── project/ # 1. WHAT and WHY — start here
├── rfc/ # 2. CONSTRAINTS — immutable contracts
└── code/ # 3. HOW — implementation specs
```
Each directory is a phase. Work flows from project → rfc → code. Each transition forces a refinement pass — you cannot write a code spec without discovering gaps in the project spec, and you cannot write an RFC without discovering assumptions in both.
**Three places for data that can't be written simultaneously = three guaranteed iterations of "actually, this needs changing."** Refinement is baked into the structure, not bolted on as a review step.
### Phase 1: Project (Vision)
Start with `project/`. No code exists yet. Define:
- What the product IS and who it serves
- What existing primitives it consumes (cross-ref to `code/`)
- What constraints it operates under (cross-ref to `rfc/`)
This is where creativity lives. Map features to building blocks. Connect systems. The project spec is integrative — it references everything else.
### Phase 2: RFC (Contracts)
Extract the immutable rules into `rfc/`. These are constraints that don't change with implementation:
- Wire formats, protocols, hash algorithms
- Security properties that must hold
- Compatibility guarantees
RFCs are numbered per component (`RFC-BORG-006-SMSG-FORMAT.md`) and never modified after acceptance. If the contract changes, write a new RFC.
### Phase 3: Code (Implementation Specs)
Define the implementation in `code/`. Each component gets an RFC.md that an agent can implement from:
- Struct definitions (the DTOs — see principle 6)
- Method signatures and behaviour
- Error conditions and edge cases
- Cross-references to other code/ specs
The code spec IS the product. Write the spec → dispatch to an agent → review output → iterate.
### Pre-Launch: Alignment Protocol
Before dispatching for implementation, verify spec-model alignment:
```
1. REVIEW — The implementation model (Codex/Jules) reads the spec
   and reports missing elements. This surfaces the delta between
   the model's training and the spec's assumptions.
   "I need X, Y, Z to implement this" is the model saying
   "I hear you but I'm missing context" — without asking.

2. ADJUST — Update the spec to close the gaps. Add examples,
   clarify ambiguities, provide the context the model needs.
   This is shared alignment, not compromise.

3. VERIFY — A different model (or sub-agent) reviews the adjusted
   spec without the planner's bias. Fresh eyes on the contract.
   "Does this make sense to someone who wasn't in the room?"

4. READY — When the review findings are trivial or deployment-related
   (not architectural), the spec is ready to dispatch.
```
### Implementation: Iterative Dispatch
Same prompt, multiple runs. Each pass sees deeper because the context evolved:
```
Round 1: Build features (the obvious gaps)
Round 2: Write tests (verify what was built)
Round 3: Harden security (what can go wrong?)
Round 4: Next RFC section (what's still missing?)
Round N: Findings are trivial → implementation is complete
```
Re-running is not failure. It is the process. Each pass changes the codebase, which changes what the next pass can see. The iteration IS the refinement.
### Post-Implementation: Auto-Documentation
The QA/verify chain produces artefacts that feed forward:
- Test results document the contract (what works, what doesn't)
- Coverage reports surface untested paths
- Diff summaries prep the changelog for the next release
- Doc site updates from the spec (the spec IS the documentation)
The output of one cycle is the input to the next. The plans repo stays current because the specs drive the code, not the other way round.
## Compatibility
AX conventions are valid, idiomatic Go/PHP/TS. They do not require language extensions, code generation, or non-standard tooling. An AX-designed codebase compiles, tests, and deploys with standard toolchains.
The conventions diverge from community patterns (functional options, Must/For, etc.) but do not violate language specifications. This is a style choice, not a fork.
## Adoption
AX applies to all new code in the Core ecosystem. Existing code migrates incrementally as it is touched — no big-bang rewrite.
Priority order:
1. **Public APIs** (package-level functions, struct constructors)
2. **File structure** (path naming, template locations)
3. **Internal fields** (struct field names, local variables)
## References
- dAppServer unified path convention (2024)
- CoreGO DTO pattern refactor (2026-03-18)
- Core primitives design (2026-03-19)
- Go Proverbs, Rob Pike (2015) — AX provides an updated lens
## Changelog
- 2026-03-19: Initial draft

docs/RFC.md (new file, 2516 lines)

File diff suppressed because it is too large.


@@ -25,7 +25,7 @@ The `Medium` interface is defined in `io.go`. It is the only type that consuming
- **`io.Local`** — a package-level variable initialised in `init()` via `local.New("/")`. This gives unsandboxed access to the host filesystem, mirroring the behaviour of the standard `os` package.
- **`io.NewSandboxed(root)`** — creates a `local.Medium` restricted to `root`. All path resolution is confined within that directory.
- **`io.Copy(src, srcPath, dst, dstPath)`** — copies a file between any two mediums by reading from one and writing to the other.
- **`io.MockMedium`** — a fully functional in-memory implementation for unit tests. It tracks files, directories, and modification times in plain maps.
- **`io.NewMemoryMedium()`** — a fully functional in-memory implementation for unit tests. It tracks files, directories, and modification times in plain maps.
### FileInfo and DirEntry (root package)
@@ -36,7 +36,7 @@ Simple struct implementations of `fs.FileInfo` and `fs.DirEntry` are exported fr
### local.Medium
**File:** `local/client.go`
**File:** `local/medium.go`
The local backend wraps the standard `os` package with two layers of path protection:
@@ -60,7 +60,7 @@ The S3 backend translates `Medium` operations into AWS SDK calls. Key design dec
- **Directory semantics:** S3 has no real directories. `EnsureDir` is a no-op. `IsDir` and `Exists` for directory-like paths use `ListObjectsV2` with `MaxKeys: 1` to check for objects under the prefix.
- **Rename:** Implemented as copy-then-delete, since S3 has no atomic rename.
- **Append:** Downloads existing content, appends in memory, re-uploads on `Close()`. This is the only viable approach given S3's immutable-object model.
- **Testability:** The `s3API` interface (unexported) abstracts the six SDK methods used. Tests inject a `mockS3` that stores objects in a `map[string][]byte` with a `sync.RWMutex`.
- **Testability:** The `Client` interface abstracts the six SDK methods used. Tests inject a `mockS3` that stores objects in a `map[string][]byte` with a `sync.RWMutex`.
### sqlite.Medium
@@ -81,7 +81,7 @@ CREATE TABLE IF NOT EXISTS files (
- **WAL mode** is enabled at connection time for better concurrent read performance.
- **Path cleaning** uses the same `path.Clean("/" + p)` pattern as other backends.
- **Rename** is transactional: it reads the source row, inserts at the destination, deletes the source, and moves all children (if it is a directory) within a single transaction.
- **Custom tables** are supported via `WithTable("name")` to allow multiple logical filesystems in one database.
- **Custom tables** are supported via `sqlite.Options{Path: ":memory:", Table: "name"}` to allow multiple logical filesystems in one database.
- **`:memory:`** databases work out of the box for tests.
### node.Node
@@ -100,7 +100,7 @@ Key capabilities beyond `Medium`:
### datanode.Medium
**File:** `datanode/client.go`
**File:** `datanode/medium.go`
A thread-safe `Medium` backed by Borg's `DataNode` (an in-memory `fs.FS` with tar serialisation). It adds:
@ -117,7 +117,7 @@ A thread-safe `Medium` backed by Borg's `DataNode` (an in-memory `fs.FS` with ta
The store package provides two complementary APIs:
### Store (key-value)
### KeyValueStore (key-value)
A group-namespaced key-value store backed by SQLite:
@@ -135,22 +135,23 @@ Operations: `Get`, `Set`, `Delete`, `Count`, `DeleteGroup`, `GetAll`, `Render`.
The `Render` method loads all key-value pairs from a group into a `map[string]string` and executes a Go `text/template` against them:
```go
s.Set("user", "pool", "pool.lthn.io:3333")
s.Set("user", "wallet", "iz...")
out, _ := s.Render(`{"pool":"{{ .pool }}"}`, "user")
// out: {"pool":"pool.lthn.io:3333"}
keyValueStore, _ := store.New(store.Options{Path: ":memory:"})
keyValueStore.Set("user", "pool", "pool.lthn.io:3333")
keyValueStore.Set("user", "wallet", "iz...")
renderedText, _ := keyValueStore.Render(`{"pool":"{{ .pool }}"}`, "user")
assert.Equal(t, `{"pool":"pool.lthn.io:3333"}`, renderedText)
```
### store.Medium (Medium adapter)
Wraps a `Store` to satisfy the `Medium` interface. Paths are split as `group/key`:
Wraps a `KeyValueStore` to satisfy the `Medium` interface. Paths are split as `group/key`:
- `Read("config/theme")` calls `Get("config", "theme")`
- `List("")` returns all groups as directories
- `List("config")` returns all keys in the `config` group as files
- `IsDir("config")` returns true if the group has entries
You can create it directly (`NewMedium(":memory:")`) or adapt an existing store (`store.AsMedium()`).
You can create it directly (`store.NewMedium(store.Options{Path: ":memory:"})`) or adapt an existing store (`keyValueStore.AsMedium()`).
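The `group/key` split described above can be sketched with the standard library; `splitGroupKey` is a hypothetical helper for illustration, not the adapter's actual code:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// splitGroupKey maps a Medium path onto the key-value store's namespace:
// "config/theme" → group "config", key "theme". The same Clean("/"+p)
// normalisation used by the backends strips leading slashes first.
func splitGroupKey(p string) (group, key string) {
	p = strings.TrimPrefix(path.Clean("/"+p), "/")
	parts := strings.SplitN(p, "/", 2)
	if len(parts) == 1 {
		return parts[0], ""
	}
	return parts[0], parts[1]
}

func main() {
	g, k := splitGroupKey("config/theme")
	fmt.Println(g, k)
}
```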
## sigil Package
@@ -163,8 +164,8 @@ The sigil package implements composable, reversible data transformations.
```go
type Sigil interface {
	In(data []byte) ([]byte, error)  // forward transform
	Out(data []byte) ([]byte, error) // reverse transform
	In(data []byte) ([]byte, error)
	Out(data []byte) ([]byte, error)
}
```
@@ -198,10 +199,8 @@ Created via `NewSigil(name)`:
### Pipeline Functions
```go
// Apply sigils left-to-right.
encoded, _ := sigil.Transmute(data, []sigil.Sigil{gzipSigil, hexSigil})
// Reverse sigils right-to-left.
original, _ := sigil.Untransmute(encoded, []sigil.Sigil{gzipSigil, hexSigil})
```
@@ -230,12 +229,11 @@ The pre-obfuscation layer ensures that raw plaintext patterns are never sent dir
key := make([]byte, 32)
rand.Read(key)
s, _ := sigil.NewChaChaPolySigil(key)
ciphertext, _ := s.In([]byte("secret"))
plaintext, _ := s.Out(ciphertext)
cipherSigil, _ := sigil.NewChaChaPolySigil(key, nil)
ciphertext, _ := cipherSigil.In([]byte("secret"))
plaintext, _ := cipherSigil.Out(ciphertext)
// With stronger obfuscation:
s2, _ := sigil.NewChaChaPolySigilWithObfuscator(key, &sigil.ShuffleMaskObfuscator{})
shuffleCipherSigil, _ := sigil.NewChaChaPolySigil(key, &sigil.ShuffleMaskObfuscator{})
```
Each call to `In` generates a fresh random nonce, so encrypting the same plaintext twice produces different ciphertexts.
@@ -270,8 +268,8 @@ Application code
+-- sqlite.Medium --> modernc.org/sqlite
+-- node.Node --> in-memory map + tar serialisation
+-- datanode.Medium --> Borg DataNode + sync.RWMutex
+-- store.Medium --> store.Store (SQLite KV) --> Medium adapter
+-- MockMedium --> map[string]string (for tests)
+-- store.Medium --> store.KeyValueStore (SQLite KV) --> Medium adapter
+-- MemoryMedium --> map[string]string (for tests)
```
Every backend normalises paths using the same `path.Clean("/" + p)` pattern, ensuring consistent behaviour regardless of which backend is in use.


@@ -88,30 +88,31 @@ func TestDelete_Bad_DirNotEmpty(t *testing.T) { /* returns error for non-empty d
## Writing Tests Against Medium
Use `MockMedium` from the root package for unit tests that need a storage backend but should not touch disk:
Use `MemoryMedium` from the root package for unit tests that need a storage backend but should not touch disk:
```go
func TestMyFeature(t *testing.T) {
m := io.NewMockMedium()
m.Files["config.yaml"] = "key: value"
m.Dirs["data"] = true
memoryMedium := io.NewMemoryMedium()
_ = memoryMedium.Write("config.yaml", "key: value")
_ = memoryMedium.EnsureDir("data")
// Your code under test receives the medium as an io.Medium
result, err := myFunction(m)
result, err := myFunction(memoryMedium)
assert.NoError(t, err)
assert.Equal(t, "expected", m.Files["output.txt"])
output, err := memoryMedium.Read("output.txt")
require.NoError(t, err)
assert.Equal(t, "expected", output)
}
```
For tests that need a real but ephemeral filesystem, use `local.New` with `t.TempDir()`:
For tests that need a temporary filesystem, use `local.New` with `t.TempDir()`:
```go
func TestWithRealFS(t *testing.T) {
m, err := local.New(t.TempDir())
func TestLocalMedium_RoundTrip_Good(t *testing.T) {
localMedium, err := local.New(t.TempDir())
require.NoError(t, err)
_ = m.Write("file.txt", "hello")
content, _ := m.Read("file.txt")
_ = localMedium.Write("file.txt", "hello")
content, _ := localMedium.Read("file.txt")
assert.Equal(t, "hello", content)
}
```
@@ -119,12 +120,12 @@ func TestWithRealFS(t *testing.T) {
For SQLite-backed tests, use `:memory:`:
```go
func TestWithSQLite(t *testing.T) {
m, err := sqlite.New(":memory:")
func TestSqliteMedium_RoundTrip_Good(t *testing.T) {
sqliteMedium, err := sqlite.New(sqlite.Options{Path: ":memory:"})
require.NoError(t, err)
defer m.Close()
defer sqliteMedium.Close()
_ = m.Write("file.txt", "hello")
_ = sqliteMedium.Write("file.txt", "hello")
}
```
@@ -134,7 +135,7 @@ func TestWithSQLite(t *testing.T) {
To add a new `Medium` implementation:
1. Create a new package directory (e.g., `sftp/`).
2. Define a struct that implements all 18 methods of `io.Medium`.
2. Define a struct that implements all 17 methods of `io.Medium`.
3. Add a compile-time check at the top of your file:
```go
@@ -142,7 +143,7 @@ var _ coreio.Medium = (*Medium)(nil)
```
4. Normalise paths using `path.Clean("/" + p)` to prevent traversal escapes. This is the convention followed by every existing backend.
5. Handle `nil` and empty input consistently: check how `MockMedium` and `local.Medium` behave and match that behaviour.
5. Handle `nil` and empty input consistently: check how `MemoryMedium` and `local.Medium` behave and match that behaviour.
6. Write tests using the `_Good` / `_Bad` / `_Ugly` naming convention.
7. Add your package to the table in `docs/index.md`.
@@ -171,13 +172,13 @@ To add a new data transformation:
```
go-io/
├── io.go # Medium interface, helpers, MockMedium
├── client_test.go # Tests for MockMedium and helpers
├── io.go # Medium interface, helpers, MemoryMedium
├── medium_test.go # Tests for MemoryMedium and helpers
├── bench_test.go # Benchmarks
├── go.mod
├── local/
│ ├── client.go # Local filesystem backend
│ └── client_test.go
│ ├── medium.go # Local filesystem backend
│ └── medium_test.go
├── s3/
│ ├── s3.go # S3 backend
│ └── s3_test.go
@@ -188,8 +189,8 @@ go-io/
│ ├── node.go # In-memory fs.FS + Medium
│ └── node_test.go
├── datanode/
│ ├── client.go # Borg DataNode Medium wrapper
│ └── client_test.go
│ ├── medium.go # Borg DataNode Medium wrapper
│ └── medium_test.go
├── store/
│ ├── store.go # KV store
│ ├── medium.go # Medium adapter for KV store


@@ -19,21 +19,17 @@ import (
"forge.lthn.ai/core/go-io/node"
)
// Use the pre-initialised local filesystem (unsandboxed, rooted at "/").
content, _ := io.Local.Read("/etc/hostname")
// Create a sandboxed medium restricted to a single directory.
sandbox, _ := io.NewSandboxed("/var/data/myapp")
_ = sandbox.Write("config.yaml", "key: value")
sandboxMedium, _ := io.NewSandboxed("/var/data/myapp")
_ = sandboxMedium.Write("config.yaml", "key: value")
// In-memory filesystem with tar serialisation.
mem := node.New()
mem.AddData("hello.txt", []byte("world"))
tarball, _ := mem.ToTar()
nodeTree := node.New()
nodeTree.AddData("hello.txt", []byte("world"))
tarball, _ := nodeTree.ToTar()
// S3 backend (requires an *s3.Client from the AWS SDK).
bucket, _ := s3.New("my-bucket", s3.WithClient(awsClient), s3.WithPrefix("uploads/"))
_ = bucket.Write("photo.jpg", rawData)
s3Medium, _ := s3.New(s3.Options{Bucket: "my-bucket", Client: awsClient, Prefix: "uploads/"})
_ = s3Medium.Write("photo.jpg", rawData)
```
@@ -41,7 +37,7 @@ _ = bucket.Write("photo.jpg", rawData)
| Package | Import Path | Purpose |
|---------|-------------|---------|
| `io` (root) | `forge.lthn.ai/core/go-io` | `Medium` interface, helper functions, `MockMedium` for tests |
| `io` (root) | `forge.lthn.ai/core/go-io` | `Medium` interface, helper functions, `MemoryMedium` for tests |
| `local` | `forge.lthn.ai/core/go-io/local` | Local filesystem backend with path sandboxing and symlink-escape protection |
| `s3` | `forge.lthn.ai/core/go-io/s3` | Amazon S3 / S3-compatible backend (Garage, MinIO, etc.) |
| `sqlite` | `forge.lthn.ai/core/go-io/sqlite` | SQLite-backed virtual filesystem (pure Go driver, no CGO) |
@@ -54,34 +50,28 @@ _ = bucket.Write("photo.jpg", rawData)
## The Medium Interface
Every storage backend implements the same 18-method interface:
Every storage backend implements the same 17-method interface:
```go
type Medium interface {
// Content operations
Read(path string) (string, error)
Write(path, content string) error
FileGet(path string) (string, error) // alias for Read
FileSet(path, content string) error // alias for Write
WriteMode(path, content string, mode fs.FileMode) error
// Streaming (for large files)
ReadStream(path string) (io.ReadCloser, error)
WriteStream(path string) (io.WriteCloser, error)
Open(path string) (fs.File, error)
Create(path string) (io.WriteCloser, error)
Append(path string) (io.WriteCloser, error)
// Directory operations
EnsureDir(path string) error
List(path string) ([]fs.DirEntry, error)
// Metadata
Stat(path string) (fs.FileInfo, error)
Exists(path string) bool
IsFile(path string) bool
IsDir(path string) bool
// Mutation
Delete(path string) error
DeleteAll(path string) error
Rename(oldPath, newPath string) error
@@ -96,12 +86,12 @@ All backends implement this interface fully. Backends where a method has no natu
The root package provides helper functions that accept any `Medium`:
```go
// Copy a file between any two backends.
err := io.Copy(localMedium, "source.txt", s3Medium, "dest.txt")
sourceMedium := io.Local
destinationMedium := io.NewMemoryMedium()
err := io.Copy(sourceMedium, "source.txt", destinationMedium, "dest.txt")
// Read/Write wrappers that take an explicit medium.
content, err := io.Read(medium, "path")
err := io.Write(medium, "path", "content")
content, err := io.Read(destinationMedium, "path")
err = io.Write(destinationMedium, "path", "content")
```

go.mod

@@ -3,10 +3,8 @@ module dappco.re/go/core/io
go 1.26.0
require (
dappco.re/go/core v0.4.7
dappco.re/go/core v0.8.0-alpha.1
forge.lthn.ai/Snider/Borg v0.3.1
forge.lthn.ai/core/go-crypt v0.1.6
forge.lthn.ai/core/go-log v0.0.4
github.com/aws/aws-sdk-go-v2 v1.41.4
github.com/aws/aws-sdk-go-v2/service/s3 v1.97.1
github.com/stretchr/testify v1.11.1
@@ -15,8 +13,6 @@ )
)
require (
forge.lthn.ai/core/go v0.3.0 // indirect
github.com/ProtonMail/go-crypto v1.4.0 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.7 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.20 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.20 // indirect
@@ -26,7 +22,6 @@
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.20 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.20 // indirect
github.com/aws/smithy-go v1.24.2 // indirect
github.com/cloudflare/circl v1.6.3 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/google/uuid v1.6.0 // indirect

go.sum

@@ -1,15 +1,7 @@
dappco.re/go/core v0.4.7 h1:KmIA/2lo6rl1NMtLrKqCWfMlUqpDZYH3q0/d10dTtGA=
dappco.re/go/core v0.4.7/go.mod h1:f2/tBZ3+3IqDrg2F5F598llv0nmb/4gJVCFzM5geE4A=
dappco.re/go/core v0.8.0-alpha.1 h1:gj7+Scv+L63Z7wMxbJYHhaRFkHJo2u4MMPuUSv/Dhtk=
dappco.re/go/core v0.8.0-alpha.1/go.mod h1:f2/tBZ3+3IqDrg2F5F598llv0nmb/4gJVCFzM5geE4A=
forge.lthn.ai/Snider/Borg v0.3.1 h1:gfC1ZTpLoZai07oOWJiVeQ8+qJYK8A795tgVGJHbVL8=
forge.lthn.ai/Snider/Borg v0.3.1/go.mod h1:Z7DJD0yHXsxSyM7Mjl6/g4gH1NBsIz44Bf5AFlV76Wg=
forge.lthn.ai/core/go v0.3.0 h1:mOG97ApMprwx9Ked62FdWVwXTGSF6JO6m0DrVpoH2Q4=
forge.lthn.ai/core/go v0.3.0/go.mod h1:gE6c8h+PJ2287qNhVUJ5SOe1kopEwHEquvinstpuyJc=
forge.lthn.ai/core/go-crypt v0.1.6 h1:jB7L/28S1NR+91u3GcOYuKfBLzPhhBUY1fRe6WkGVns=
forge.lthn.ai/core/go-crypt v0.1.6/go.mod h1:4VZAGqxlbadhSB66sJkdj54/HSJ+bSxVgwWK5kMMYDo=
forge.lthn.ai/core/go-log v0.0.4 h1:KTuCEPgFmuM8KJfnyQ8vPOU1Jg654W74h8IJvfQMfv0=
forge.lthn.ai/core/go-log v0.0.4/go.mod h1:r14MXKOD3LF/sI8XUJQhRk/SZHBE7jAFVuCfgkXoZPw=
github.com/ProtonMail/go-crypto v1.4.0 h1:Zq/pbM3F5DFgJiMouxEdSVY44MVoQNEKp5d5QxIQceQ=
github.com/ProtonMail/go-crypto v1.4.0/go.mod h1:e1OaTyu5SYVrO9gKOEhTc+5UcXtTUa+P3uLudwcgPqo=
github.com/aws/aws-sdk-go-v2 v1.41.4 h1:10f50G7WyU02T56ox1wWXq+zTX9I1zxG46HYuG1hH/k=
github.com/aws/aws-sdk-go-v2 v1.41.4/go.mod h1:mwsPRE8ceUUpiTgF7QmQIJ7lgsKUPQOUl3o72QBrE1o=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.7 h1:3kGOqnh1pPeddVa/E37XNTaWJ8W6vrbYV9lJEkCnhuY=
@@ -32,8 +24,6 @@ github.com/aws/aws-sdk-go-v2/service/s3 v1.97.1 h1:csi9NLpFZXb9fxY7rS1xVzgPRGMt7
github.com/aws/aws-sdk-go-v2/service/s3 v1.97.1/go.mod h1:qXVal5H0ChqXP63t6jze5LmFalc7+ZE7wOdLtZ0LCP0=
github.com/aws/smithy-go v1.24.2 h1:FzA3bu/nt/vDvmnkg+R8Xl46gmzEDam6mZ1hzmwXFng=
github.com/aws/smithy-go v1.24.2/go.mod h1:YE2RhdIuDbA5E5bTdciG9KrW3+TiEONeUWCqxX9i1Fc=
github.com/cloudflare/circl v1.6.3 h1:9GPOhQGF9MCYUeXyMYlqTR6a5gTrgR/fBLXvUgtVcg8=
github.com/cloudflare/circl v1.6.3/go.mod h1:2eXP6Qfat4O/Yhh8BznvKnJ+uzEoTQ6jVKJRn81BiS4=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=

io.go

File diff suppressed because it is too large.


@@ -1,307 +0,0 @@
// Package local provides a local filesystem implementation of the io.Medium interface.
package local
import (
"fmt"
goio "io"
"io/fs"
"os"
"os/user"
"path/filepath"
"strings"
"time"
coreerr "forge.lthn.ai/core/go-log"
)
// Medium is a local filesystem storage backend.
type Medium struct {
root string
}
// New creates a new local Medium rooted at the given directory.
// Pass "/" for full filesystem access, or a specific path to sandbox.
func New(root string) (*Medium, error) {
abs, err := filepath.Abs(root)
if err != nil {
return nil, err
}
// Resolve symlinks so sandbox checks compare like-for-like.
// On macOS, /var is a symlink to /private/var — without this,
// EvalSymlinks on child paths resolves to /private/var/... while
// root stays /var/..., causing false sandbox escape detections.
if resolved, err := filepath.EvalSymlinks(abs); err == nil {
abs = resolved
}
return &Medium{root: abs}, nil
}
// path sanitises and returns the full path.
// Absolute paths are sandboxed under root (unless root is "/").
func (m *Medium) path(p string) string {
if p == "" {
return m.root
}
// If the path is relative and the medium is rooted at "/",
// treat it as relative to the current working directory.
// This makes io.Local behave more like the standard 'os' package.
if m.root == "/" && !filepath.IsAbs(p) {
cwd, _ := os.Getwd()
return filepath.Join(cwd, p)
}
// Use filepath.Clean with a leading slash to resolve all .. and . internally
// before joining with the root. This is a standard way to sandbox paths.
clean := filepath.Clean("/" + p)
// If root is "/", allow absolute paths through
if m.root == "/" {
return clean
}
// Join cleaned relative path with root
return filepath.Join(m.root, clean)
}
// validatePath ensures the path is within the sandbox, following symlinks if they exist.
func (m *Medium) validatePath(p string) (string, error) {
if m.root == "/" {
return m.path(p), nil
}
// Split the cleaned path into components
parts := strings.Split(filepath.Clean("/"+p), string(os.PathSeparator))
current := m.root
for _, part := range parts {
if part == "" {
continue
}
next := filepath.Join(current, part)
realNext, err := filepath.EvalSymlinks(next)
if err != nil {
if os.IsNotExist(err) {
// Part doesn't exist, we can't follow symlinks anymore.
// Since the path is already Cleaned and current is safe,
// appending a component to current will not escape.
current = next
continue
}
return "", err
}
// Verify the resolved part is still within the root
rel, err := filepath.Rel(m.root, realNext)
if err != nil || strings.HasPrefix(rel, "..") {
// Security event: sandbox escape attempt
username := "unknown"
if u, err := user.Current(); err == nil {
username = u.Username
}
fmt.Fprintf(os.Stderr, "[%s] SECURITY sandbox escape detected root=%s path=%s attempted=%s user=%s\n",
time.Now().Format(time.RFC3339), m.root, p, realNext, username)
return "", os.ErrPermission // Path escapes sandbox
}
current = realNext
}
return current, nil
}
// Read returns file contents as string.
func (m *Medium) Read(p string) (string, error) {
full, err := m.validatePath(p)
if err != nil {
return "", err
}
data, err := os.ReadFile(full)
if err != nil {
return "", err
}
return string(data), nil
}
// Write saves content to file, creating parent directories as needed.
// Files are created with mode 0644. For sensitive files (keys, secrets),
// use WriteMode with 0600.
func (m *Medium) Write(p, content string) error {
return m.WriteMode(p, content, 0644)
}
// WriteMode saves content to file with explicit permissions.
// Use 0600 for sensitive files (encryption output, private keys, auth hashes).
func (m *Medium) WriteMode(p, content string, mode os.FileMode) error {
full, err := m.validatePath(p)
if err != nil {
return err
}
if err := os.MkdirAll(filepath.Dir(full), 0755); err != nil {
return err
}
return os.WriteFile(full, []byte(content), mode)
}
// EnsureDir creates directory if it doesn't exist.
func (m *Medium) EnsureDir(p string) error {
full, err := m.validatePath(p)
if err != nil {
return err
}
return os.MkdirAll(full, 0755)
}
// IsDir returns true if path is a directory.
func (m *Medium) IsDir(p string) bool {
if p == "" {
return false
}
full, err := m.validatePath(p)
if err != nil {
return false
}
info, err := os.Stat(full)
return err == nil && info.IsDir()
}
// IsFile returns true if path is a regular file.
func (m *Medium) IsFile(p string) bool {
if p == "" {
return false
}
full, err := m.validatePath(p)
if err != nil {
return false
}
info, err := os.Stat(full)
return err == nil && info.Mode().IsRegular()
}
// Exists returns true if path exists.
func (m *Medium) Exists(p string) bool {
full, err := m.validatePath(p)
if err != nil {
return false
}
_, err = os.Stat(full)
return err == nil
}
// List returns directory entries.
func (m *Medium) List(p string) ([]fs.DirEntry, error) {
full, err := m.validatePath(p)
if err != nil {
return nil, err
}
return os.ReadDir(full)
}
// Stat returns file info.
func (m *Medium) Stat(p string) (fs.FileInfo, error) {
full, err := m.validatePath(p)
if err != nil {
return nil, err
}
return os.Stat(full)
}
// Open opens the named file for reading.
func (m *Medium) Open(p string) (fs.File, error) {
full, err := m.validatePath(p)
if err != nil {
return nil, err
}
return os.Open(full)
}
// Create creates or truncates the named file.
func (m *Medium) Create(p string) (goio.WriteCloser, error) {
full, err := m.validatePath(p)
if err != nil {
return nil, err
}
if err := os.MkdirAll(filepath.Dir(full), 0755); err != nil {
return nil, err
}
return os.Create(full)
}
// Append opens the named file for appending, creating it if it doesn't exist.
func (m *Medium) Append(p string) (goio.WriteCloser, error) {
full, err := m.validatePath(p)
if err != nil {
return nil, err
}
if err := os.MkdirAll(filepath.Dir(full), 0755); err != nil {
return nil, err
}
return os.OpenFile(full, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
}
// ReadStream returns a reader for the file content.
//
// This is a convenience wrapper around Open that exposes a streaming-oriented
// API, as required by the io.Medium interface, while Open provides the more
// general filesystem-level operation. Both methods are kept for semantic
// clarity and backward compatibility.
func (m *Medium) ReadStream(path string) (goio.ReadCloser, error) {
return m.Open(path)
}
// WriteStream returns a writer for the file content.
//
// This is a convenience wrapper around Create that exposes a streaming-oriented
// API, as required by the io.Medium interface, while Create provides the more
// general filesystem-level operation. Both methods are kept for semantic
// clarity and backward compatibility.
func (m *Medium) WriteStream(path string) (goio.WriteCloser, error) {
return m.Create(path)
}
// Delete removes a file or empty directory.
func (m *Medium) Delete(p string) error {
full, err := m.validatePath(p)
if err != nil {
return err
}
if full == "/" || full == os.Getenv("HOME") {
return coreerr.E("local.Delete", "refusing to delete protected path: "+full, nil)
}
return os.Remove(full)
}
// DeleteAll removes a file or directory recursively.
func (m *Medium) DeleteAll(p string) error {
full, err := m.validatePath(p)
if err != nil {
return err
}
if full == "/" || full == os.Getenv("HOME") {
return coreerr.E("local.DeleteAll", "refusing to delete protected path: "+full, nil)
}
return os.RemoveAll(full)
}
// Rename moves a file or directory.
func (m *Medium) Rename(oldPath, newPath string) error {
oldFull, err := m.validatePath(oldPath)
if err != nil {
return err
}
newFull, err := m.validatePath(newPath)
if err != nil {
return err
}
return os.Rename(oldFull, newFull)
}
// FileGet is an alias for Read.
func (m *Medium) FileGet(p string) (string, error) {
return m.Read(p)
}
// FileSet is an alias for Write.
func (m *Medium) FileSet(p, content string) error {
return m.Write(p, content)
}


@@ -1,513 +0,0 @@
package local
import (
"io"
"os"
"path/filepath"
"strings"
"testing"
"github.com/stretchr/testify/assert"
)
func TestNew(t *testing.T) {
root := t.TempDir()
m, err := New(root)
assert.NoError(t, err)
// New() resolves symlinks (macOS /var → /private/var), so compare resolved paths.
resolved, _ := filepath.EvalSymlinks(root)
assert.Equal(t, resolved, m.root)
}
func TestPath(t *testing.T) {
m := &Medium{root: "/home/user"}
// Normal paths
assert.Equal(t, "/home/user/file.txt", m.path("file.txt"))
assert.Equal(t, "/home/user/dir/file.txt", m.path("dir/file.txt"))
// Empty returns root
assert.Equal(t, "/home/user", m.path(""))
// Traversal attempts get sanitised
assert.Equal(t, "/home/user/file.txt", m.path("../file.txt"))
assert.Equal(t, "/home/user/file.txt", m.path("dir/../file.txt"))
// Absolute paths are constrained to sandbox (no escape)
assert.Equal(t, "/home/user/etc/passwd", m.path("/etc/passwd"))
}
func TestPath_RootFilesystem(t *testing.T) {
m := &Medium{root: "/"}
// When root is "/", absolute paths pass through
assert.Equal(t, "/etc/passwd", m.path("/etc/passwd"))
assert.Equal(t, "/home/user/file.txt", m.path("/home/user/file.txt"))
// Relative paths are relative to CWD when root is "/"
cwd, _ := os.Getwd()
assert.Equal(t, filepath.Join(cwd, "file.txt"), m.path("file.txt"))
}
func TestReadWrite(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
// Write and read back
err := m.Write("test.txt", "hello")
assert.NoError(t, err)
content, err := m.Read("test.txt")
assert.NoError(t, err)
assert.Equal(t, "hello", content)
// Write creates parent dirs
err = m.Write("a/b/c.txt", "nested")
assert.NoError(t, err)
content, err = m.Read("a/b/c.txt")
assert.NoError(t, err)
assert.Equal(t, "nested", content)
// Read nonexistent
_, err = m.Read("nope.txt")
assert.Error(t, err)
}
func TestEnsureDir(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
err := m.EnsureDir("one/two/three")
assert.NoError(t, err)
info, err := os.Stat(filepath.Join(root, "one/two/three"))
assert.NoError(t, err)
assert.True(t, info.IsDir())
}
func TestIsDir(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
_ = os.Mkdir(filepath.Join(root, "mydir"), 0755)
_ = os.WriteFile(filepath.Join(root, "myfile"), []byte("x"), 0644)
assert.True(t, m.IsDir("mydir"))
assert.False(t, m.IsDir("myfile"))
assert.False(t, m.IsDir("nope"))
assert.False(t, m.IsDir(""))
}
func TestIsFile(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
_ = os.Mkdir(filepath.Join(root, "mydir"), 0755)
_ = os.WriteFile(filepath.Join(root, "myfile"), []byte("x"), 0644)
assert.True(t, m.IsFile("myfile"))
assert.False(t, m.IsFile("mydir"))
assert.False(t, m.IsFile("nope"))
assert.False(t, m.IsFile(""))
}
func TestExists(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
_ = os.WriteFile(filepath.Join(root, "exists"), []byte("x"), 0644)
assert.True(t, m.Exists("exists"))
assert.False(t, m.Exists("nope"))
}
func TestList(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
_ = os.WriteFile(filepath.Join(root, "a.txt"), []byte("a"), 0644)
_ = os.WriteFile(filepath.Join(root, "b.txt"), []byte("b"), 0644)
_ = os.Mkdir(filepath.Join(root, "subdir"), 0755)
entries, err := m.List("")
assert.NoError(t, err)
assert.Len(t, entries, 3)
}
func TestStat(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
_ = os.WriteFile(filepath.Join(root, "file"), []byte("content"), 0644)
info, err := m.Stat("file")
assert.NoError(t, err)
assert.Equal(t, int64(7), info.Size())
}
func TestDelete(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
_ = os.WriteFile(filepath.Join(root, "todelete"), []byte("x"), 0644)
assert.True(t, m.Exists("todelete"))
err := m.Delete("todelete")
assert.NoError(t, err)
assert.False(t, m.Exists("todelete"))
}
func TestDeleteAll(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
_ = os.MkdirAll(filepath.Join(root, "dir/sub"), 0755)
_ = os.WriteFile(filepath.Join(root, "dir/sub/file"), []byte("x"), 0644)
err := m.DeleteAll("dir")
assert.NoError(t, err)
assert.False(t, m.Exists("dir"))
}
func TestRename(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
_ = os.WriteFile(filepath.Join(root, "old"), []byte("x"), 0644)
err := m.Rename("old", "new")
assert.NoError(t, err)
assert.False(t, m.Exists("old"))
assert.True(t, m.Exists("new"))
}
func TestFileGetFileSet(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
err := m.FileSet("data", "value")
assert.NoError(t, err)
val, err := m.FileGet("data")
assert.NoError(t, err)
assert.Equal(t, "value", val)
}
func TestDelete_Good(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_delete_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
// Create and delete a file
err = medium.Write("file.txt", "content")
assert.NoError(t, err)
assert.True(t, medium.IsFile("file.txt"))
err = medium.Delete("file.txt")
assert.NoError(t, err)
assert.False(t, medium.IsFile("file.txt"))
// Create and delete an empty directory
err = medium.EnsureDir("emptydir")
assert.NoError(t, err)
err = medium.Delete("emptydir")
assert.NoError(t, err)
assert.False(t, medium.IsDir("emptydir"))
}
func TestDelete_Bad_NotEmpty(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_delete_notempty_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
// Create a directory with a file
err = medium.Write("mydir/file.txt", "content")
assert.NoError(t, err)
// Try to delete non-empty directory
err = medium.Delete("mydir")
assert.Error(t, err)
}
func TestDeleteAll_Good(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_deleteall_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
// Create nested structure
err = medium.Write("mydir/file1.txt", "content1")
assert.NoError(t, err)
err = medium.Write("mydir/subdir/file2.txt", "content2")
assert.NoError(t, err)
// Delete all
err = medium.DeleteAll("mydir")
assert.NoError(t, err)
assert.False(t, medium.Exists("mydir"))
assert.False(t, medium.Exists("mydir/file1.txt"))
assert.False(t, medium.Exists("mydir/subdir/file2.txt"))
}
func TestRename_Good(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_rename_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
// Rename a file
err = medium.Write("old.txt", "content")
assert.NoError(t, err)
err = medium.Rename("old.txt", "new.txt")
assert.NoError(t, err)
assert.False(t, medium.IsFile("old.txt"))
assert.True(t, medium.IsFile("new.txt"))
content, err := medium.Read("new.txt")
assert.NoError(t, err)
assert.Equal(t, "content", content)
}
func TestRename_Traversal_Sanitised(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_rename_traversal_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
err = medium.Write("file.txt", "content")
assert.NoError(t, err)
// Traversal attempts are sanitised (.. becomes .), so this renames to "./escaped.txt"
// which is just "escaped.txt" in the root
err = medium.Rename("file.txt", "../escaped.txt")
assert.NoError(t, err)
assert.False(t, medium.Exists("file.txt"))
assert.True(t, medium.Exists("escaped.txt"))
}
func TestList_Good(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_list_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
// Create some files and directories
err = medium.Write("file1.txt", "content1")
assert.NoError(t, err)
err = medium.Write("file2.txt", "content2")
assert.NoError(t, err)
err = medium.EnsureDir("subdir")
assert.NoError(t, err)
// List root
entries, err := medium.List(".")
assert.NoError(t, err)
assert.Len(t, entries, 3)
names := make(map[string]bool)
for _, e := range entries {
names[e.Name()] = true
}
assert.True(t, names["file1.txt"])
assert.True(t, names["file2.txt"])
assert.True(t, names["subdir"])
}
func TestStat_Good(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_stat_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
// Stat a file
err = medium.Write("file.txt", "hello world")
assert.NoError(t, err)
info, err := medium.Stat("file.txt")
assert.NoError(t, err)
assert.Equal(t, "file.txt", info.Name())
assert.Equal(t, int64(11), info.Size())
assert.False(t, info.IsDir())
// Stat a directory
err = medium.EnsureDir("mydir")
assert.NoError(t, err)
info, err = medium.Stat("mydir")
assert.NoError(t, err)
assert.Equal(t, "mydir", info.Name())
assert.True(t, info.IsDir())
}
func TestExists_Good(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_exists_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
assert.False(t, medium.Exists("nonexistent"))
err = medium.Write("file.txt", "content")
assert.NoError(t, err)
assert.True(t, medium.Exists("file.txt"))
err = medium.EnsureDir("mydir")
assert.NoError(t, err)
assert.True(t, medium.Exists("mydir"))
}
func TestIsDir_Good(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_isdir_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
err = medium.Write("file.txt", "content")
assert.NoError(t, err)
assert.False(t, medium.IsDir("file.txt"))
err = medium.EnsureDir("mydir")
assert.NoError(t, err)
assert.True(t, medium.IsDir("mydir"))
assert.False(t, medium.IsDir("nonexistent"))
}
func TestReadStream(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
content := "streaming content"
err := m.Write("stream.txt", content)
assert.NoError(t, err)
reader, err := m.ReadStream("stream.txt")
assert.NoError(t, err)
defer reader.Close()
// Read only first 9 bytes
limitReader := io.LimitReader(reader, 9)
data, err := io.ReadAll(limitReader)
assert.NoError(t, err)
assert.Equal(t, "streaming", string(data))
}
func TestWriteStream(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
writer, err := m.WriteStream("output.txt")
assert.NoError(t, err)
_, err = io.Copy(writer, strings.NewReader("piped data"))
assert.NoError(t, err)
err = writer.Close()
assert.NoError(t, err)
content, err := m.Read("output.txt")
assert.NoError(t, err)
assert.Equal(t, "piped data", content)
}
func TestPath_Traversal_Advanced(t *testing.T) {
m := &Medium{root: "/sandbox"}
// Multiple levels of traversal
assert.Equal(t, "/sandbox/file.txt", m.path("../../../file.txt"))
assert.Equal(t, "/sandbox/target", m.path("dir/../../target"))
// Traversal with hidden files
assert.Equal(t, "/sandbox/.ssh/id_rsa", m.path(".ssh/id_rsa"))
assert.Equal(t, "/sandbox/id_rsa", m.path(".ssh/../id_rsa"))
// Null bytes (Go's filepath.Clean handles them, but good to check)
assert.Equal(t, "/sandbox/file\x00.txt", m.path("file\x00.txt"))
}
func TestValidatePath_Security(t *testing.T) {
root := t.TempDir()
m, err := New(root)
assert.NoError(t, err)
// Create a directory outside the sandbox
outside := t.TempDir()
outsideFile := filepath.Join(outside, "secret.txt")
err = os.WriteFile(outsideFile, []byte("secret"), 0644)
assert.NoError(t, err)
// Test 1: Simple traversal
_, err = m.validatePath("../outside.txt")
assert.NoError(t, err) // path() sanitises to root, so this shouldn't escape
// Test 2: Symlink escape
// Create a symlink inside the sandbox pointing outside
linkPath := filepath.Join(root, "evil_link")
err = os.Symlink(outside, linkPath)
assert.NoError(t, err)
// Try to access a file through the symlink
_, err = m.validatePath("evil_link/secret.txt")
assert.Error(t, err)
assert.ErrorIs(t, err, os.ErrPermission)
// Test 3: Nested symlink escape
innerDir := filepath.Join(root, "inner")
err = os.Mkdir(innerDir, 0755)
assert.NoError(t, err)
nestedLink := filepath.Join(innerDir, "nested_evil")
err = os.Symlink(outside, nestedLink)
assert.NoError(t, err)
_, err = m.validatePath("inner/nested_evil/secret.txt")
assert.Error(t, err)
assert.ErrorIs(t, err, os.ErrPermission)
}
func TestEmptyPaths(t *testing.T) {
root := t.TempDir()
m, err := New(root)
assert.NoError(t, err)
// Read empty path (should fail as it's a directory)
_, err = m.Read("")
assert.Error(t, err)
// Write empty path (should fail as it's a directory)
err = m.Write("", "content")
assert.Error(t, err)
// EnsureDir empty path (should be ok, it's just the root)
err = m.EnsureDir("")
assert.NoError(t, err)
// IsDir empty path (current implementation returns false for "")
assert.False(t, m.IsDir(""))
// Exists empty path (root exists)
assert.True(t, m.Exists(""))
// List empty path (lists root)
entries, err := m.List("")
assert.NoError(t, err)
assert.NotNil(t, entries)
}

local/medium.go Normal file

@@ -0,0 +1,482 @@
// Example: medium, _ := local.New("/srv/app")
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
// Example: content, _ := medium.Read("config/app.yaml")
package local
import (
"cmp"
goio "io"
"io/fs"
"slices"
"syscall"
core "dappco.re/go/core"
)
// Example: medium, _ := local.New("/srv/app")
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
type Medium struct {
filesystemRoot string
}
var unrestrictedFileSystem = (&core.Fs{}).NewUnrestricted()
// Example: medium, _ := local.New("/srv/app")
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
func New(root string) (*Medium, error) {
absoluteRoot := absolutePath(root)
if resolvedRoot, err := resolveSymlinksPath(absoluteRoot); err == nil {
absoluteRoot = resolvedRoot
}
return &Medium{filesystemRoot: absoluteRoot}, nil
}
func dirSeparator() string {
if separator := core.Env("CORE_PATH_SEPARATOR"); separator != "" {
return separator
}
if separator := core.Env("DS"); separator != "" {
return separator
}
return "/"
}
func normalisePath(path string) string {
separator := dirSeparator()
if separator == "/" {
return core.Replace(path, "\\", separator)
}
return core.Replace(path, "/", separator)
}
func currentWorkingDir() string {
if workingDirectory := core.Env("CORE_WORKING_DIRECTORY"); workingDirectory != "" {
return workingDirectory
}
if workingDirectory := core.Env("DIR_CWD"); workingDirectory != "" {
return workingDirectory
}
return "."
}
func absolutePath(path string) string {
path = normalisePath(path)
if core.PathIsAbs(path) {
return core.Path(path)
}
return core.Path(currentWorkingDir(), path)
}
func cleanSandboxPath(path string) string {
return core.Path(dirSeparator() + normalisePath(path))
}
func splitPathParts(path string) []string {
trimmed := core.TrimPrefix(path, dirSeparator())
if trimmed == "" {
return nil
}
var parts []string
for _, part := range core.Split(trimmed, dirSeparator()) {
if part == "" {
continue
}
parts = append(parts, part)
}
return parts
}
func resolveSymlinksPath(path string) (string, error) {
return resolveSymlinksRecursive(absolutePath(path), map[string]struct{}{})
}
func resolveSymlinksRecursive(path string, seen map[string]struct{}) (string, error) {
path = core.Path(path)
if path == dirSeparator() {
return path, nil
}
current := dirSeparator()
for _, part := range splitPathParts(path) {
next := core.Path(current, part)
info, err := lstat(next)
if err != nil {
if core.Is(err, syscall.ENOENT) {
current = next
continue
}
return "", err
}
if !isSymlink(info.Mode) {
current = next
continue
}
target, err := readlink(next)
if err != nil {
return "", err
}
target = normalisePath(target)
if !core.PathIsAbs(target) {
target = core.Path(current, target)
} else {
target = core.Path(target)
}
if _, ok := seen[target]; ok {
return "", core.E("local.resolveSymlinksPath", core.Concat("symlink cycle: ", target), fs.ErrInvalid)
}
seen[target] = struct{}{}
resolved, err := resolveSymlinksRecursive(target, seen)
delete(seen, target)
if err != nil {
return "", err
}
current = resolved
}
return current, nil
}
func isWithinRoot(root, target string) bool {
root = core.Path(root)
target = core.Path(target)
if root == dirSeparator() {
return true
}
return target == root || core.HasPrefix(target, root+dirSeparator())
}
func canonicalPath(path string) string {
if path == "" {
return ""
}
if resolved, err := resolveSymlinksPath(path); err == nil {
return resolved
}
return absolutePath(path)
}
func isProtectedPath(fullPath string) bool {
fullPath = canonicalPath(fullPath)
protected := map[string]struct{}{
canonicalPath(dirSeparator()): {},
}
for _, home := range []string{core.Env("HOME"), core.Env("DIR_HOME")} {
if home == "" {
continue
}
protected[canonicalPath(home)] = struct{}{}
}
_, ok := protected[fullPath]
return ok
}
func logSandboxEscape(root, path, attempted string) {
username := core.Env("USER")
if username == "" {
username = "unknown"
}
core.Security("sandbox escape detected", "root", root, "path", path, "attempted", attempted, "user", username)
}
func (medium *Medium) sandboxedPath(path string) string {
if path == "" {
return medium.filesystemRoot
}
if medium.filesystemRoot == dirSeparator() && !core.PathIsAbs(normalisePath(path)) {
return core.Path(currentWorkingDir(), normalisePath(path))
}
clean := cleanSandboxPath(path)
if medium.filesystemRoot == dirSeparator() {
return clean
}
return core.Path(medium.filesystemRoot, core.TrimPrefix(clean, dirSeparator()))
}
func (medium *Medium) validatePath(path string) (string, error) {
if medium.filesystemRoot == dirSeparator() {
return medium.sandboxedPath(path), nil
}
parts := splitPathParts(cleanSandboxPath(path))
current := medium.filesystemRoot
for _, part := range parts {
next := core.Path(current, part)
realNext, err := resolveSymlinksPath(next)
if err != nil {
if core.Is(err, syscall.ENOENT) {
current = next
continue
}
return "", err
}
if !isWithinRoot(medium.filesystemRoot, realNext) {
logSandboxEscape(medium.filesystemRoot, path, realNext)
return "", fs.ErrPermission
}
current = realNext
}
return current, nil
}
func (medium *Medium) Read(path string) (string, error) {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return "", err
}
return resultString("local.Read", core.Concat("read failed: ", path), unrestrictedFileSystem.Read(resolvedPath))
}
func (medium *Medium) Write(path, content string) error {
return medium.WriteMode(path, content, 0644)
}
func (medium *Medium) WriteMode(path, content string, mode fs.FileMode) error {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return err
}
return resultError("local.WriteMode", core.Concat("write failed: ", path), unrestrictedFileSystem.WriteMode(resolvedPath, content, mode))
}
// Example: _ = medium.EnsureDir("config/app")
func (medium *Medium) EnsureDir(path string) error {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return err
}
return resultError("local.EnsureDir", core.Concat("ensure dir failed: ", path), unrestrictedFileSystem.EnsureDir(resolvedPath))
}
// Example: isDirectory := medium.IsDir("config")
func (medium *Medium) IsDir(path string) bool {
if path == "" {
return false
}
resolvedPath, err := medium.validatePath(path)
if err != nil {
return false
}
return unrestrictedFileSystem.IsDir(resolvedPath)
}
// Example: isFile := medium.IsFile("config/app.yaml")
func (medium *Medium) IsFile(path string) bool {
if path == "" {
return false
}
resolvedPath, err := medium.validatePath(path)
if err != nil {
return false
}
return unrestrictedFileSystem.IsFile(resolvedPath)
}
// Example: exists := medium.Exists("config/app.yaml")
func (medium *Medium) Exists(path string) bool {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return false
}
return unrestrictedFileSystem.Exists(resolvedPath)
}
// Example: entries, _ := medium.List("config")
func (medium *Medium) List(path string) ([]fs.DirEntry, error) {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return nil, err
}
entries, err := resultDirEntries("local.List", core.Concat("list failed: ", path), unrestrictedFileSystem.List(resolvedPath))
if err != nil {
return nil, err
}
slices.SortFunc(entries, func(a, b fs.DirEntry) int {
return cmp.Compare(a.Name(), b.Name())
})
return entries, nil
}
// Example: info, _ := medium.Stat("config/app.yaml")
func (medium *Medium) Stat(path string) (fs.FileInfo, error) {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return nil, err
}
return resultFileInfo("local.Stat", core.Concat("stat failed: ", path), unrestrictedFileSystem.Stat(resolvedPath))
}
// Example: file, _ := medium.Open("config/app.yaml")
func (medium *Medium) Open(path string) (fs.File, error) {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return nil, err
}
return resultFile("local.Open", core.Concat("open failed: ", path), unrestrictedFileSystem.Open(resolvedPath))
}
// Example: writer, _ := medium.Create("logs/app.log")
func (medium *Medium) Create(path string) (goio.WriteCloser, error) {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return nil, err
}
return resultWriteCloser("local.Create", core.Concat("create failed: ", path), unrestrictedFileSystem.Create(resolvedPath))
}
// Example: writer, _ := medium.Append("logs/app.log")
func (medium *Medium) Append(path string) (goio.WriteCloser, error) {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return nil, err
}
return resultWriteCloser("local.Append", core.Concat("append failed: ", path), unrestrictedFileSystem.Append(resolvedPath))
}
// Example: reader, _ := medium.ReadStream("logs/app.log")
func (medium *Medium) ReadStream(path string) (goio.ReadCloser, error) {
return medium.Open(path)
}
// Example: writer, _ := medium.WriteStream("logs/app.log")
func (medium *Medium) WriteStream(path string) (goio.WriteCloser, error) {
return medium.Create(path)
}
// Example: _ = medium.Delete("config/app.yaml")
func (medium *Medium) Delete(path string) error {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return err
}
if isProtectedPath(resolvedPath) {
return core.E("local.Delete", core.Concat("refusing to delete protected path: ", resolvedPath), nil)
}
return resultError("local.Delete", core.Concat("delete failed: ", path), unrestrictedFileSystem.Delete(resolvedPath))
}
// Example: _ = medium.DeleteAll("logs/archive")
func (medium *Medium) DeleteAll(path string) error {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return err
}
if isProtectedPath(resolvedPath) {
return core.E("local.DeleteAll", core.Concat("refusing to delete protected path: ", resolvedPath), nil)
}
return resultError("local.DeleteAll", core.Concat("delete all failed: ", path), unrestrictedFileSystem.DeleteAll(resolvedPath))
}
// Example: _ = medium.Rename("drafts/todo.txt", "archive/todo.txt")
func (medium *Medium) Rename(oldPath, newPath string) error {
oldResolvedPath, err := medium.validatePath(oldPath)
if err != nil {
return err
}
newResolvedPath, err := medium.validatePath(newPath)
if err != nil {
return err
}
return resultError("local.Rename", core.Concat("rename failed: ", oldPath), unrestrictedFileSystem.Rename(oldResolvedPath, newResolvedPath))
}
func lstat(path string) (*syscall.Stat_t, error) {
info := &syscall.Stat_t{}
if err := syscall.Lstat(path, info); err != nil {
return nil, err
}
return info, nil
}
func isSymlink(mode uint32) bool {
return mode&syscall.S_IFMT == syscall.S_IFLNK
}
func readlink(path string) (string, error) {
size := 256
for {
linkBuffer := make([]byte, size)
bytesRead, err := syscall.Readlink(path, linkBuffer)
if err != nil {
return "", err
}
if bytesRead < len(linkBuffer) {
return string(linkBuffer[:bytesRead]), nil
}
size *= 2
}
}
func resultError(operation, message string, result core.Result) error {
if result.OK {
return nil
}
if err, ok := result.Value.(error); ok {
return core.E(operation, message, err)
}
return core.E(operation, message, nil)
}
func resultString(operation, message string, result core.Result) (string, error) {
if !result.OK {
return "", resultError(operation, message, result)
}
value, ok := result.Value.(string)
if !ok {
return "", core.E(operation, "unexpected result type", nil)
}
return value, nil
}
func resultDirEntries(operation, message string, result core.Result) ([]fs.DirEntry, error) {
if !result.OK {
return nil, resultError(operation, message, result)
}
entries, ok := result.Value.([]fs.DirEntry)
if !ok {
return nil, core.E(operation, "unexpected result type", nil)
}
return entries, nil
}
func resultFileInfo(operation, message string, result core.Result) (fs.FileInfo, error) {
if !result.OK {
return nil, resultError(operation, message, result)
}
fileInfo, ok := result.Value.(fs.FileInfo)
if !ok {
return nil, core.E(operation, "unexpected result type", nil)
}
return fileInfo, nil
}
func resultFile(operation, message string, result core.Result) (fs.File, error) {
if !result.OK {
return nil, resultError(operation, message, result)
}
file, ok := result.Value.(fs.File)
if !ok {
return nil, core.E(operation, "unexpected result type", nil)
}
return file, nil
}
func resultWriteCloser(operation, message string, result core.Result) (goio.WriteCloser, error) {
if !result.OK {
return nil, resultError(operation, message, result)
}
writer, ok := result.Value.(goio.WriteCloser)
if !ok {
return nil, core.E(operation, "unexpected result type", nil)
}
return writer, nil
}
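The two lexical defences in `medium.go` are `sandboxedPath` (re-rooting the request at `/` so `..` cannot climb above the sandbox) and `isWithinRoot` (a separator-aware prefix check so `/sandboxed` is not mistaken for a child of `/sandbox`). A stdlib-only sketch of both, not the package's actual code, which goes through `core` helpers:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// isWithinRoot reports whether target sits inside root: it must equal
// the root or extend it across a separator boundary, so a sibling like
// "/sandboxed" never matches root "/sandbox".
func isWithinRoot(root, target string) bool {
	root = filepath.Clean(root)
	target = filepath.Clean(target)
	if root == "/" {
		return true
	}
	return target == root || strings.HasPrefix(target, root+"/")
}

// sandboxedPath roots the requested path at "/" before cleaning, so
// any number of ".." components collapses against the sandbox root
// instead of escaping it.
func sandboxedPath(root, path string) string {
	clean := filepath.Clean("/" + path)
	return filepath.Join(root, clean)
}

func main() {
	fmt.Println(sandboxedPath("/sandbox", "../../etc/passwd")) // /sandbox/etc/passwd
	fmt.Println(isWithinRoot("/sandbox", "/sandbox/file.txt")) // true
	fmt.Println(isWithinRoot("/sandbox", "/sandboxed/file"))   // false
}
```

The `+"/"` suffix in the prefix check is the important detail: a bare `HasPrefix(target, root)` would wrongly accept `/sandboxed/file`.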

local/medium_test.go Normal file

@@ -0,0 +1,473 @@
package local
import (
"io"
"io/fs"
"syscall"
"testing"
core "dappco.re/go/core"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestLocal_New_ResolvesRoot_Good(t *testing.T) {
root := t.TempDir()
localMedium, err := New(root)
assert.NoError(t, err)
resolved, err := resolveSymlinksPath(root)
require.NoError(t, err)
assert.Equal(t, resolved, localMedium.filesystemRoot)
}
func TestLocal_Path_Sandboxed_Good(t *testing.T) {
localMedium := &Medium{filesystemRoot: "/home/user"}
assert.Equal(t, "/home/user/file.txt", localMedium.sandboxedPath("file.txt"))
assert.Equal(t, "/home/user/dir/file.txt", localMedium.sandboxedPath("dir/file.txt"))
assert.Equal(t, "/home/user", localMedium.sandboxedPath(""))
assert.Equal(t, "/home/user/file.txt", localMedium.sandboxedPath("../file.txt"))
assert.Equal(t, "/home/user/file.txt", localMedium.sandboxedPath("dir/../file.txt"))
assert.Equal(t, "/home/user/etc/passwd", localMedium.sandboxedPath("/etc/passwd"))
}
func TestLocal_Path_RootFilesystem_Good(t *testing.T) {
localMedium := &Medium{filesystemRoot: "/"}
assert.Equal(t, "/etc/passwd", localMedium.sandboxedPath("/etc/passwd"))
assert.Equal(t, "/home/user/file.txt", localMedium.sandboxedPath("/home/user/file.txt"))
workingDirectory := currentWorkingDir()
assert.Equal(t, core.Path(workingDirectory, "file.txt"), localMedium.sandboxedPath("file.txt"))
}
func TestLocal_ReadWrite_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
err := localMedium.Write("test.txt", "hello")
assert.NoError(t, err)
content, err := localMedium.Read("test.txt")
assert.NoError(t, err)
assert.Equal(t, "hello", content)
err = localMedium.Write("a/b/c.txt", "nested")
assert.NoError(t, err)
content, err = localMedium.Read("a/b/c.txt")
assert.NoError(t, err)
assert.Equal(t, "nested", content)
_, err = localMedium.Read("nope.txt")
assert.Error(t, err)
}
func TestLocal_EnsureDir_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
err := localMedium.EnsureDir("one/two/three")
assert.NoError(t, err)
info, err := localMedium.Stat("one/two/three")
assert.NoError(t, err)
assert.True(t, info.IsDir())
}
func TestLocal_IsDir_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
_ = localMedium.EnsureDir("mydir")
_ = localMedium.Write("myfile", "x")
assert.True(t, localMedium.IsDir("mydir"))
assert.False(t, localMedium.IsDir("myfile"))
assert.False(t, localMedium.IsDir("nope"))
assert.False(t, localMedium.IsDir(""))
}
func TestLocal_IsFile_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
_ = localMedium.EnsureDir("mydir")
_ = localMedium.Write("myfile", "x")
assert.True(t, localMedium.IsFile("myfile"))
assert.False(t, localMedium.IsFile("mydir"))
assert.False(t, localMedium.IsFile("nope"))
assert.False(t, localMedium.IsFile(""))
}
func TestLocal_Exists_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
_ = localMedium.Write("exists", "x")
assert.True(t, localMedium.Exists("exists"))
assert.False(t, localMedium.Exists("nope"))
}
func TestLocal_List_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
_ = localMedium.Write("a.txt", "a")
_ = localMedium.Write("b.txt", "b")
_ = localMedium.EnsureDir("subdir")
entries, err := localMedium.List("")
assert.NoError(t, err)
assert.Len(t, entries, 3)
}
func TestLocal_Stat_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
_ = localMedium.Write("file", "content")
info, err := localMedium.Stat("file")
assert.NoError(t, err)
assert.Equal(t, int64(7), info.Size())
}
func TestLocal_Delete_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
_ = localMedium.Write("todelete", "x")
assert.True(t, localMedium.Exists("todelete"))
err := localMedium.Delete("todelete")
assert.NoError(t, err)
assert.False(t, localMedium.Exists("todelete"))
}
func TestLocal_DeleteAll_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
_ = localMedium.Write("dir/sub/file", "x")
err := localMedium.DeleteAll("dir")
assert.NoError(t, err)
assert.False(t, localMedium.Exists("dir"))
}
func TestLocal_Delete_ProtectedHomeViaSymlinkEnv_Bad(t *testing.T) {
realHome := t.TempDir()
linkParent := t.TempDir()
homeLink := core.Path(linkParent, "home-link")
require.NoError(t, syscall.Symlink(realHome, homeLink))
t.Setenv("HOME", homeLink)
localMedium, err := New("/")
require.NoError(t, err)
err = localMedium.Delete(realHome)
require.Error(t, err)
assert.DirExists(t, realHome)
}
func TestLocal_DeleteAll_ProtectedHomeViaEnv_Bad(t *testing.T) {
tempHome := t.TempDir()
t.Setenv("HOME", tempHome)
localMedium, err := New("/")
require.NoError(t, err)
err = localMedium.DeleteAll(tempHome)
require.Error(t, err)
assert.DirExists(t, tempHome)
}
func TestLocal_Rename_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
_ = localMedium.Write("old", "x")
err := localMedium.Rename("old", "new")
assert.NoError(t, err)
assert.False(t, localMedium.Exists("old"))
assert.True(t, localMedium.Exists("new"))
}
func TestLocal_Delete_Good(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
err = localMedium.Write("file.txt", "content")
assert.NoError(t, err)
assert.True(t, localMedium.IsFile("file.txt"))
err = localMedium.Delete("file.txt")
assert.NoError(t, err)
assert.False(t, localMedium.IsFile("file.txt"))
err = localMedium.EnsureDir("emptydir")
assert.NoError(t, err)
err = localMedium.Delete("emptydir")
assert.NoError(t, err)
assert.False(t, localMedium.IsDir("emptydir"))
}
func TestLocal_Delete_NotEmpty_Bad(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
err = localMedium.Write("mydir/file.txt", "content")
assert.NoError(t, err)
err = localMedium.Delete("mydir")
assert.Error(t, err)
}
func TestLocal_DeleteAll_Good(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
err = localMedium.Write("mydir/file1.txt", "content1")
assert.NoError(t, err)
err = localMedium.Write("mydir/subdir/file2.txt", "content2")
assert.NoError(t, err)
err = localMedium.DeleteAll("mydir")
assert.NoError(t, err)
assert.False(t, localMedium.Exists("mydir"))
assert.False(t, localMedium.Exists("mydir/file1.txt"))
assert.False(t, localMedium.Exists("mydir/subdir/file2.txt"))
}
func TestLocal_Rename_Good(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
err = localMedium.Write("old.txt", "content")
assert.NoError(t, err)
err = localMedium.Rename("old.txt", "new.txt")
assert.NoError(t, err)
assert.False(t, localMedium.IsFile("old.txt"))
assert.True(t, localMedium.IsFile("new.txt"))
content, err := localMedium.Read("new.txt")
assert.NoError(t, err)
assert.Equal(t, "content", content)
}
func TestLocal_Rename_TraversalSanitised_Good(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
err = localMedium.Write("file.txt", "content")
assert.NoError(t, err)
err = localMedium.Rename("file.txt", "../escaped.txt")
assert.NoError(t, err)
assert.False(t, localMedium.Exists("file.txt"))
assert.True(t, localMedium.Exists("escaped.txt"))
}
func TestLocal_List_Good(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
err = localMedium.Write("file1.txt", "content1")
assert.NoError(t, err)
err = localMedium.Write("file2.txt", "content2")
assert.NoError(t, err)
err = localMedium.EnsureDir("subdir")
assert.NoError(t, err)
entries, err := localMedium.List(".")
assert.NoError(t, err)
assert.Len(t, entries, 3)
names := make(map[string]bool)
for _, entry := range entries {
names[entry.Name()] = true
}
assert.True(t, names["file1.txt"])
assert.True(t, names["file2.txt"])
assert.True(t, names["subdir"])
}
func TestLocal_Stat_Good(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
err = localMedium.Write("file.txt", "hello world")
assert.NoError(t, err)
info, err := localMedium.Stat("file.txt")
assert.NoError(t, err)
assert.Equal(t, "file.txt", info.Name())
assert.Equal(t, int64(11), info.Size())
assert.False(t, info.IsDir())
err = localMedium.EnsureDir("mydir")
assert.NoError(t, err)
info, err = localMedium.Stat("mydir")
assert.NoError(t, err)
assert.Equal(t, "mydir", info.Name())
assert.True(t, info.IsDir())
}
func TestLocal_Exists_Good(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
assert.False(t, localMedium.Exists("nonexistent"))
err = localMedium.Write("file.txt", "content")
assert.NoError(t, err)
assert.True(t, localMedium.Exists("file.txt"))
err = localMedium.EnsureDir("mydir")
assert.NoError(t, err)
assert.True(t, localMedium.Exists("mydir"))
}
func TestLocal_IsDir_Good(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
err = localMedium.Write("file.txt", "content")
assert.NoError(t, err)
assert.False(t, localMedium.IsDir("file.txt"))
err = localMedium.EnsureDir("mydir")
assert.NoError(t, err)
assert.True(t, localMedium.IsDir("mydir"))
assert.False(t, localMedium.IsDir("nonexistent"))
}
func TestLocal_ReadStream_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
content := "streaming content"
err := localMedium.Write("stream.txt", content)
assert.NoError(t, err)
reader, err := localMedium.ReadStream("stream.txt")
assert.NoError(t, err)
defer reader.Close()
limitReader := io.LimitReader(reader, 9)
data, err := io.ReadAll(limitReader)
assert.NoError(t, err)
assert.Equal(t, "streaming", string(data))
}
func TestLocal_WriteStream_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
writer, err := localMedium.WriteStream("output.txt")
assert.NoError(t, err)
_, err = io.Copy(writer, core.NewReader("piped data"))
assert.NoError(t, err)
err = writer.Close()
assert.NoError(t, err)
content, err := localMedium.Read("output.txt")
assert.NoError(t, err)
assert.Equal(t, "piped data", content)
}
func TestLocal_Path_TraversalSandbox_Good(t *testing.T) {
localMedium := &Medium{filesystemRoot: "/sandbox"}
assert.Equal(t, "/sandbox/file.txt", localMedium.sandboxedPath("../../../file.txt"))
assert.Equal(t, "/sandbox/target", localMedium.sandboxedPath("dir/../../target"))
assert.Equal(t, "/sandbox/.ssh/id_rsa", localMedium.sandboxedPath(".ssh/id_rsa"))
assert.Equal(t, "/sandbox/id_rsa", localMedium.sandboxedPath(".ssh/../id_rsa"))
assert.Equal(t, "/sandbox/file\x00.txt", localMedium.sandboxedPath("file\x00.txt"))
}
func TestLocal_ValidatePath_SymlinkEscape_Bad(t *testing.T) {
root := t.TempDir()
localMedium, err := New(root)
assert.NoError(t, err)
outside := t.TempDir()
outsideFile := core.Path(outside, "secret.txt")
outsideMedium, err := New("/")
require.NoError(t, err)
err = outsideMedium.Write(outsideFile, "secret")
assert.NoError(t, err)
_, err = localMedium.validatePath("../outside.txt")
assert.NoError(t, err)
linkPath := core.Path(root, "evil_link")
err = syscall.Symlink(outside, linkPath)
assert.NoError(t, err)
_, err = localMedium.validatePath("evil_link/secret.txt")
assert.Error(t, err)
assert.ErrorIs(t, err, fs.ErrPermission)
err = localMedium.EnsureDir("inner")
assert.NoError(t, err)
innerDir := core.Path(root, "inner")
nestedLink := core.Path(innerDir, "nested_evil")
err = syscall.Symlink(outside, nestedLink)
assert.NoError(t, err)
_, err = localMedium.validatePath("inner/nested_evil/secret.txt")
assert.Error(t, err)
assert.ErrorIs(t, err, fs.ErrPermission)
}
func TestLocal_EmptyPaths_Good(t *testing.T) {
root := t.TempDir()
localMedium, err := New(root)
assert.NoError(t, err)
_, err = localMedium.Read("")
assert.Error(t, err)
err = localMedium.Write("", "content")
assert.Error(t, err)
err = localMedium.EnsureDir("")
assert.NoError(t, err)
assert.False(t, localMedium.IsDir(""))
assert.True(t, localMedium.Exists(""))
entries, err := localMedium.List("")
assert.NoError(t, err)
assert.NotNil(t, entries)
}

medium_test.go Normal file

@@ -0,0 +1,432 @@
package io
import (
goio "io"
"io/fs"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestMemoryMedium_NewMemoryMedium_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
assert.NotNil(t, memoryMedium)
assert.NotNil(t, memoryMedium.fileContents)
assert.NotNil(t, memoryMedium.directories)
assert.Empty(t, memoryMedium.fileContents)
assert.Empty(t, memoryMedium.directories)
}
func TestMemoryMedium_NewFileInfo_Good(t *testing.T) {
info := NewFileInfo("app.yaml", 8, 0644, time.Unix(0, 0), false)
assert.Equal(t, "app.yaml", info.Name())
assert.Equal(t, int64(8), info.Size())
assert.Equal(t, fs.FileMode(0644), info.Mode())
assert.True(t, info.ModTime().Equal(time.Unix(0, 0)))
assert.False(t, info.IsDir())
assert.Nil(t, info.Sys())
}
func TestMemoryMedium_NewDirEntry_Good(t *testing.T) {
info := NewFileInfo("app.yaml", 8, 0644, time.Unix(0, 0), false)
entry := NewDirEntry("app.yaml", false, 0644, info)
assert.Equal(t, "app.yaml", entry.Name())
assert.False(t, entry.IsDir())
assert.Equal(t, fs.FileMode(0), entry.Type())
entryInfo, err := entry.Info()
require.NoError(t, err)
assert.Equal(t, "app.yaml", entryInfo.Name())
assert.Equal(t, int64(8), entryInfo.Size())
}
func TestMemoryMedium_Read_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["test.txt"] = "hello world"
content, err := memoryMedium.Read("test.txt")
assert.NoError(t, err)
assert.Equal(t, "hello world", content)
}
func TestMemoryMedium_Read_Bad(t *testing.T) {
memoryMedium := NewMemoryMedium()
_, err := memoryMedium.Read("nonexistent.txt")
assert.Error(t, err)
}
func TestMemoryMedium_Write_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
err := memoryMedium.Write("test.txt", "content")
assert.NoError(t, err)
assert.Equal(t, "content", memoryMedium.fileContents["test.txt"])
err = memoryMedium.Write("test.txt", "new content")
assert.NoError(t, err)
assert.Equal(t, "new content", memoryMedium.fileContents["test.txt"])
}
func TestMemoryMedium_WriteMode_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
err := memoryMedium.WriteMode("secure.txt", "secret", 0600)
require.NoError(t, err)
content, err := memoryMedium.Read("secure.txt")
require.NoError(t, err)
assert.Equal(t, "secret", content)
info, err := memoryMedium.Stat("secure.txt")
require.NoError(t, err)
assert.Equal(t, fs.FileMode(0600), info.Mode())
file, err := memoryMedium.Open("secure.txt")
require.NoError(t, err)
fileInfo, err := file.Stat()
require.NoError(t, err)
assert.Equal(t, fs.FileMode(0600), fileInfo.Mode())
}
func TestMemoryMedium_EnsureDir_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
err := memoryMedium.EnsureDir("/path/to/dir")
assert.NoError(t, err)
assert.True(t, memoryMedium.directories["/path/to/dir"])
}
func TestMemoryMedium_EnsureDir_CreatesParents_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
require.NoError(t, memoryMedium.EnsureDir("alpha/beta/gamma"))
assert.True(t, memoryMedium.IsDir("alpha"))
assert.True(t, memoryMedium.IsDir("alpha/beta"))
assert.True(t, memoryMedium.IsDir("alpha/beta/gamma"))
}
func TestMemoryMedium_IsFile_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["exists.txt"] = "content"
assert.True(t, memoryMedium.IsFile("exists.txt"))
assert.False(t, memoryMedium.IsFile("nonexistent.txt"))
}
func TestMemoryMedium_Write_CreatesParentDirectories_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
require.NoError(t, memoryMedium.Write("nested/path/file.txt", "content"))
assert.True(t, memoryMedium.Exists("nested"))
assert.True(t, memoryMedium.IsDir("nested"))
assert.True(t, memoryMedium.Exists("nested/path"))
assert.True(t, memoryMedium.IsDir("nested/path"))
}
func TestMemoryMedium_Delete_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["test.txt"] = "content"
err := memoryMedium.Delete("test.txt")
assert.NoError(t, err)
assert.False(t, memoryMedium.IsFile("test.txt"))
}
func TestMemoryMedium_Delete_NotFound_Bad(t *testing.T) {
memoryMedium := NewMemoryMedium()
err := memoryMedium.Delete("nonexistent.txt")
assert.Error(t, err)
}
func TestMemoryMedium_Delete_DirNotEmpty_Bad(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.directories["mydir"] = true
memoryMedium.fileContents["mydir/file.txt"] = "content"
err := memoryMedium.Delete("mydir")
assert.Error(t, err)
}
func TestMemoryMedium_Delete_InferredDirNotEmpty_Bad(t *testing.T) {
memoryMedium := NewMemoryMedium()
require.NoError(t, memoryMedium.Write("mydir/file.txt", "content"))
err := memoryMedium.Delete("mydir")
assert.Error(t, err)
}
func TestMemoryMedium_DeleteAll_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.directories["mydir"] = true
memoryMedium.directories["mydir/subdir"] = true
memoryMedium.fileContents["mydir/file.txt"] = "content"
memoryMedium.fileContents["mydir/subdir/nested.txt"] = "nested"
err := memoryMedium.DeleteAll("mydir")
assert.NoError(t, err)
assert.Empty(t, memoryMedium.directories)
assert.Empty(t, memoryMedium.fileContents)
}
func TestMemoryMedium_Rename_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["old.txt"] = "content"
err := memoryMedium.Rename("old.txt", "new.txt")
assert.NoError(t, err)
assert.False(t, memoryMedium.IsFile("old.txt"))
assert.True(t, memoryMedium.IsFile("new.txt"))
assert.Equal(t, "content", memoryMedium.fileContents["new.txt"])
}
func TestMemoryMedium_Rename_Dir_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.directories["olddir"] = true
memoryMedium.fileContents["olddir/file.txt"] = "content"
err := memoryMedium.Rename("olddir", "newdir")
assert.NoError(t, err)
assert.False(t, memoryMedium.directories["olddir"])
assert.True(t, memoryMedium.directories["newdir"])
assert.Equal(t, "content", memoryMedium.fileContents["newdir/file.txt"])
}
func TestMemoryMedium_Rename_InferredDir_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
require.NoError(t, memoryMedium.Write("olddir/file.txt", "content"))
require.NoError(t, memoryMedium.Rename("olddir", "newdir"))
assert.False(t, memoryMedium.Exists("olddir"))
assert.True(t, memoryMedium.Exists("newdir"))
assert.True(t, memoryMedium.IsDir("newdir"))
assert.Equal(t, "content", memoryMedium.fileContents["newdir/file.txt"])
}
func TestMemoryMedium_List_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.directories["mydir"] = true
memoryMedium.fileContents["mydir/file1.txt"] = "content1"
memoryMedium.fileContents["mydir/file2.txt"] = "content2"
memoryMedium.directories["mydir/subdir"] = true
entries, err := memoryMedium.List("mydir")
assert.NoError(t, err)
assert.Len(t, entries, 3)
// Listings are sorted deterministically, so positional assertions are stable.
assert.Equal(t, "file1.txt", entries[0].Name())
assert.Equal(t, "file2.txt", entries[1].Name())
assert.Equal(t, "subdir", entries[2].Name())
names := make(map[string]bool)
for _, entry := range entries {
names[entry.Name()] = true
}
assert.True(t, names["file1.txt"])
assert.True(t, names["file2.txt"])
assert.True(t, names["subdir"])
}
func TestMemoryMedium_Stat_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["test.txt"] = "hello world"
info, err := memoryMedium.Stat("test.txt")
assert.NoError(t, err)
assert.Equal(t, "test.txt", info.Name())
assert.Equal(t, int64(11), info.Size())
assert.False(t, info.IsDir())
}
func TestMemoryMedium_Stat_Dir_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.directories["mydir"] = true
info, err := memoryMedium.Stat("mydir")
assert.NoError(t, err)
assert.Equal(t, "mydir", info.Name())
assert.True(t, info.IsDir())
}
func TestMemoryMedium_Exists_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["file.txt"] = "content"
memoryMedium.directories["mydir"] = true
assert.True(t, memoryMedium.Exists("file.txt"))
assert.True(t, memoryMedium.Exists("mydir"))
assert.False(t, memoryMedium.Exists("nonexistent"))
}
func TestMemoryMedium_IsDir_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["file.txt"] = "content"
memoryMedium.directories["mydir"] = true
assert.False(t, memoryMedium.IsDir("file.txt"))
assert.True(t, memoryMedium.IsDir("mydir"))
assert.False(t, memoryMedium.IsDir("nonexistent"))
}
func TestMemoryMedium_StreamAndFSHelpers_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
require.NoError(t, memoryMedium.EnsureDir("dir"))
require.NoError(t, memoryMedium.Write("dir/file.txt", "alpha"))
statInfo, err := memoryMedium.Stat("dir/file.txt")
require.NoError(t, err)
file, err := memoryMedium.Open("dir/file.txt")
require.NoError(t, err)
info, err := file.Stat()
require.NoError(t, err)
assert.Equal(t, "file.txt", info.Name())
assert.Equal(t, int64(5), info.Size())
assert.Equal(t, fs.FileMode(0644), info.Mode())
assert.Equal(t, statInfo.ModTime(), info.ModTime())
assert.False(t, info.IsDir())
assert.Nil(t, info.Sys())
data, err := goio.ReadAll(file)
require.NoError(t, err)
assert.Equal(t, "alpha", string(data))
require.NoError(t, file.Close())
entries, err := memoryMedium.List("dir")
require.NoError(t, err)
require.Len(t, entries, 1)
assert.Equal(t, "file.txt", entries[0].Name())
assert.False(t, entries[0].IsDir())
assert.Equal(t, fs.FileMode(0), entries[0].Type())
entryInfo, err := entries[0].Info()
require.NoError(t, err)
assert.Equal(t, "file.txt", entryInfo.Name())
assert.Equal(t, int64(5), entryInfo.Size())
assert.Equal(t, fs.FileMode(0644), entryInfo.Mode())
assert.Equal(t, statInfo.ModTime(), entryInfo.ModTime())
writer, err := memoryMedium.Create("created.txt")
require.NoError(t, err)
_, err = writer.Write([]byte("created"))
require.NoError(t, err)
require.NoError(t, writer.Close())
appendWriter, err := memoryMedium.Append("created.txt")
require.NoError(t, err)
_, err = appendWriter.Write([]byte(" later"))
require.NoError(t, err)
require.NoError(t, appendWriter.Close())
reader, err := memoryMedium.ReadStream("created.txt")
require.NoError(t, err)
streamed, err := goio.ReadAll(reader)
require.NoError(t, err)
assert.Equal(t, "created later", string(streamed))
require.NoError(t, reader.Close())
writeStream, err := memoryMedium.WriteStream("streamed.txt")
require.NoError(t, err)
_, err = writeStream.Write([]byte("stream output"))
require.NoError(t, err)
require.NoError(t, writeStream.Close())
assert.Equal(t, "stream output", memoryMedium.fileContents["streamed.txt"])
statInfo, err = memoryMedium.Stat("streamed.txt")
require.NoError(t, err)
assert.Equal(t, fs.FileMode(0644), statInfo.Mode())
assert.False(t, statInfo.ModTime().IsZero())
}
func TestIO_Read_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["test.txt"] = "hello"
content, err := Read(memoryMedium, "test.txt")
assert.NoError(t, err)
assert.Equal(t, "hello", content)
}
func TestIO_Write_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
err := Write(memoryMedium, "test.txt", "hello")
assert.NoError(t, err)
assert.Equal(t, "hello", memoryMedium.fileContents["test.txt"])
}
func TestIO_EnsureDir_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
err := EnsureDir(memoryMedium, "/my/dir")
assert.NoError(t, err)
assert.True(t, memoryMedium.directories["/my/dir"])
}
func TestIO_IsFile_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["exists.txt"] = "content"
assert.True(t, IsFile(memoryMedium, "exists.txt"))
assert.False(t, IsFile(memoryMedium, "nonexistent.txt"))
}
func TestIO_NewSandboxed_Good(t *testing.T) {
root := t.TempDir()
memoryMedium, err := NewSandboxed(root)
require.NoError(t, err)
require.NoError(t, memoryMedium.Write("config/app.yaml", "port: 8080"))
content, err := memoryMedium.Read("config/app.yaml")
require.NoError(t, err)
assert.Equal(t, "port: 8080", content)
assert.True(t, memoryMedium.IsDir("config"))
}
func TestIO_ReadWriteStream_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
writer, err := WriteStream(memoryMedium, "logs/run.txt")
require.NoError(t, err)
_, err = writer.Write([]byte("started"))
require.NoError(t, err)
require.NoError(t, writer.Close())
reader, err := ReadStream(memoryMedium, "logs/run.txt")
require.NoError(t, err)
data, err := goio.ReadAll(reader)
require.NoError(t, err)
assert.Equal(t, "started", string(data))
require.NoError(t, reader.Close())
}
func TestIO_Copy_Good(t *testing.T) {
source := NewMemoryMedium()
dest := NewMemoryMedium()
source.fileContents["test.txt"] = "hello"
err := Copy(source, "test.txt", dest, "test.txt")
assert.NoError(t, err)
assert.Equal(t, "hello", dest.fileContents["test.txt"])
source.fileContents["original.txt"] = "content"
err = Copy(source, "original.txt", dest, "copied.txt")
assert.NoError(t, err)
assert.Equal(t, "content", dest.fileContents["copied.txt"])
}
func TestIO_Copy_Bad(t *testing.T) {
source := NewMemoryMedium()
dest := NewMemoryMedium()
err := Copy(source, "nonexistent.txt", dest, "dest.txt")
assert.Error(t, err)
}
func TestIO_LocalGlobal_Good(t *testing.T) {
assert.NotNil(t, Local, "io.Local should be initialised")
var localMedium Medium = Local
assert.NotNil(t, localMedium)
}


@@ -1,6 +1,7 @@
// Package node provides an in-memory filesystem implementation of io.Medium
// ported from Borg's DataNode. It stores files in memory with implicit
// directory structure and supports tar serialisation.
// Example: nodeTree := node.New()
// Example: nodeTree.AddData("config/app.yaml", []byte("port: 8080"))
// Example: snapshot, _ := nodeTree.ToTar()
// Example: restored, _ := node.FromTar(snapshot)
package node
import (
@@ -9,93 +10,90 @@ import (
"cmp"
goio "io"
"io/fs"
"os"
"path"
"slices"
"strings"
"time"
core "dappco.re/go/core"
coreio "dappco.re/go/core/io"
)
// Node is an in-memory filesystem that implements coreio.Node (and therefore
// coreio.Medium). Directories are implicit -- they exist whenever a file path
// contains a "/".
// Example: nodeTree := node.New()
// Example: nodeTree.AddData("config/app.yaml", []byte("port: 8080"))
// Example: snapshot, _ := nodeTree.ToTar()
// Example: restored, _ := node.FromTar(snapshot)
type Node struct {
files map[string]*dataFile
}
// compile-time interface checks
var _ coreio.Medium = (*Node)(nil)
var _ fs.ReadFileFS = (*Node)(nil)
// New creates a new, empty Node.
// Example: nodeTree := node.New()
// Example: _ = nodeTree.Write("config/app.yaml", "port: 8080")
func New() *Node {
return &Node{files: make(map[string]*dataFile)}
}
// ---------- Node-specific methods ----------
// AddData stages content in the in-memory filesystem.
func (n *Node) AddData(name string, content []byte) {
name = strings.TrimPrefix(name, "/")
// Example: nodeTree.AddData("config/app.yaml", []byte("port: 8080"))
func (node *Node) AddData(name string, content []byte) {
name = core.TrimPrefix(name, "/")
if name == "" {
return
}
// Directories are implicit, so we don't store them.
if strings.HasSuffix(name, "/") {
if core.HasSuffix(name, "/") {
return
}
n.files[name] = &dataFile{
node.files[name] = &dataFile{
name: name,
content: content,
modTime: time.Now(),
}
}
// ToTar serialises the entire in-memory tree to a tar archive.
func (n *Node) ToTar() ([]byte, error) {
buf := new(bytes.Buffer)
tw := tar.NewWriter(buf)
// Example: snapshot, _ := nodeTree.ToTar()
func (node *Node) ToTar() ([]byte, error) {
buffer := new(bytes.Buffer)
tarWriter := tar.NewWriter(buffer)
for _, file := range n.files {
for _, file := range node.files {
hdr := &tar.Header{
Name: file.name,
Mode: 0600,
Size: int64(len(file.content)),
ModTime: file.modTime,
}
if err := tw.WriteHeader(hdr); err != nil {
if err := tarWriter.WriteHeader(hdr); err != nil {
return nil, err
}
if _, err := tw.Write(file.content); err != nil {
if _, err := tarWriter.Write(file.content); err != nil {
return nil, err
}
}
if err := tw.Close(); err != nil {
if err := tarWriter.Close(); err != nil {
return nil, err
}
return buf.Bytes(), nil
return buffer.Bytes(), nil
}
// FromTar creates a new Node from a tar archive.
// Example: restored, _ := node.FromTar(snapshot)
func FromTar(data []byte) (*Node, error) {
n := New()
if err := n.LoadTar(data); err != nil {
restoredNode := New()
if err := restoredNode.LoadTar(data); err != nil {
return nil, err
}
return n, nil
return restoredNode, nil
}
// LoadTar replaces the in-memory tree with the contents of a tar archive.
func (n *Node) LoadTar(data []byte) error {
// Example: _ = nodeTree.LoadTar(snapshot)
func (node *Node) LoadTar(data []byte) error {
newFiles := make(map[string]*dataFile)
tr := tar.NewReader(bytes.NewReader(data))
tarReader := tar.NewReader(bytes.NewReader(data))
for {
header, err := tr.Next()
header, err := tarReader.Next()
if err == goio.EOF {
break
}
@@ -104,12 +102,12 @@ func (n *Node) LoadTar(data []byte) error {
}
if header.Typeflag == tar.TypeReg {
content, err := goio.ReadAll(tr)
content, err := goio.ReadAll(tarReader)
if err != nil {
return err
return core.E("node.LoadTar", "read tar entry", err)
}
name := strings.TrimPrefix(header.Name, "/")
if name == "" || strings.HasSuffix(name, "/") {
name := core.TrimPrefix(header.Name, "/")
if name == "" || core.HasSuffix(name, "/") {
continue
}
newFiles[name] = &dataFile{
@@ -120,188 +118,164 @@ func (n *Node) LoadTar(data []byte) error {
}
}
n.files = newFiles
node.files = newFiles
return nil
}
// WalkNode walks the in-memory tree, calling fn for each entry.
func (n *Node) WalkNode(root string, fn fs.WalkDirFunc) error {
return fs.WalkDir(n, root, fn)
}
// WalkOptions configures the behaviour of Walk.
// Example: options := node.WalkOptions{MaxDepth: 1, SkipErrors: true}
type WalkOptions struct {
// MaxDepth limits how many directory levels to descend. 0 means unlimited.
MaxDepth int
// Filter, if set, is called for each entry. Return true to include the
// entry (and descend into it if it is a directory).
Filter func(path string, d fs.DirEntry) bool
// SkipErrors suppresses errors (e.g. nonexistent root) instead of
// propagating them through the callback.
MaxDepth int
Filter func(entryPath string, entry fs.DirEntry) bool
SkipErrors bool
}
// Walk walks the in-memory tree with optional WalkOptions.
func (n *Node) Walk(root string, fn fs.WalkDirFunc, opts ...WalkOptions) error {
var opt WalkOptions
if len(opts) > 0 {
opt = opts[0]
}
if opt.SkipErrors {
// If root doesn't exist, silently return nil.
if _, err := n.Stat(root); err != nil {
// Example: _ = nodeTree.Walk(".", func(_ string, _ fs.DirEntry, _ error) error { return nil }, node.WalkOptions{MaxDepth: 1, SkipErrors: true})
func (node *Node) Walk(root string, walkFunc fs.WalkDirFunc, options WalkOptions) error {
if options.SkipErrors {
if _, err := node.Stat(root); err != nil {
return nil
}
}
return fs.WalkDir(n, root, func(p string, d fs.DirEntry, err error) error {
if opt.Filter != nil && err == nil {
if !opt.Filter(p, d) {
if d != nil && d.IsDir() {
return fs.WalkDir(node, root, func(entryPath string, entry fs.DirEntry, err error) error {
if options.Filter != nil && err == nil {
if !options.Filter(entryPath, entry) {
if entry != nil && entry.IsDir() {
return fs.SkipDir
}
return nil
}
}
// Call the user's function first so the entry is visited.
result := fn(p, d, err)
walkResult := walkFunc(entryPath, entry, err)
// After visiting a directory at MaxDepth, prevent descending further.
if result == nil && opt.MaxDepth > 0 && d != nil && d.IsDir() && p != root {
rel := strings.TrimPrefix(p, root)
rel = strings.TrimPrefix(rel, "/")
depth := strings.Count(rel, "/") + 1
if depth >= opt.MaxDepth {
if walkResult == nil && options.MaxDepth > 0 && entry != nil && entry.IsDir() && entryPath != root {
relativePath := core.TrimPrefix(entryPath, root)
relativePath = core.TrimPrefix(relativePath, "/")
depth := len(core.Split(relativePath, "/"))
if depth >= options.MaxDepth {
return fs.SkipDir
}
}
return result
return walkResult
})
}
// ReadFile returns the content of the named file as a byte slice.
// Implements fs.ReadFileFS.
func (n *Node) ReadFile(name string) ([]byte, error) {
name = strings.TrimPrefix(name, "/")
f, ok := n.files[name]
// Example: content, _ := nodeTree.ReadFile("config/app.yaml")
func (node *Node) ReadFile(name string) ([]byte, error) {
name = core.TrimPrefix(name, "/")
file, ok := node.files[name]
if !ok {
return nil, &fs.PathError{Op: "read", Path: name, Err: fs.ErrNotExist}
return nil, core.E("node.ReadFile", core.Concat("path not found: ", name), fs.ErrNotExist)
}
// Return a copy to prevent callers from mutating internal state.
result := make([]byte, len(f.content))
copy(result, f.content)
result := make([]byte, len(file.content))
copy(result, file.content)
return result, nil
}
// CopyFile copies a file from the in-memory tree to the local filesystem.
func (n *Node) CopyFile(src, dst string, perm fs.FileMode) error {
src = strings.TrimPrefix(src, "/")
f, ok := n.files[src]
// Example: _ = nodeTree.CopyFile("config/app.yaml", "backup/app.yaml", 0644)
func (node *Node) CopyFile(sourcePath, destinationPath string, permissions fs.FileMode) error {
sourcePath = core.TrimPrefix(sourcePath, "/")
file, ok := node.files[sourcePath]
if !ok {
// Check if it's a directory — can't copy directories this way.
info, err := n.Stat(src)
info, err := node.Stat(sourcePath)
if err != nil {
return &fs.PathError{Op: "copyfile", Path: src, Err: fs.ErrNotExist}
return core.E("node.CopyFile", core.Concat("source not found: ", sourcePath), fs.ErrNotExist)
}
if info.IsDir() {
return &fs.PathError{Op: "copyfile", Path: src, Err: fs.ErrInvalid}
return core.E("node.CopyFile", core.Concat("source is a directory: ", sourcePath), fs.ErrInvalid)
}
return &fs.PathError{Op: "copyfile", Path: src, Err: fs.ErrNotExist}
return core.E("node.CopyFile", core.Concat("source not found: ", sourcePath), fs.ErrNotExist)
}
return os.WriteFile(dst, f.content, perm)
parent := core.PathDir(destinationPath)
if parent != "." && parent != "" && parent != destinationPath && !coreio.Local.IsDir(parent) {
return &fs.PathError{Op: "copyfile", Path: destinationPath, Err: fs.ErrNotExist}
}
return coreio.Local.WriteMode(destinationPath, string(file.content), permissions)
}
// CopyTo copies a file (or directory tree) from the node to any Medium.
func (n *Node) CopyTo(target coreio.Medium, sourcePath, destPath string) error {
sourcePath = strings.TrimPrefix(sourcePath, "/")
info, err := n.Stat(sourcePath)
// Example: _ = nodeTree.CopyTo(io.NewMemoryMedium(), "config", "backup/config")
func (node *Node) CopyTo(target coreio.Medium, sourcePath, destinationPath string) error {
sourcePath = core.TrimPrefix(sourcePath, "/")
info, err := node.Stat(sourcePath)
if err != nil {
return err
}
if !info.IsDir() {
// Single file copy
f, ok := n.files[sourcePath]
file, ok := node.files[sourcePath]
if !ok {
return fs.ErrNotExist
return core.E("node.CopyTo", core.Concat("path not found: ", sourcePath), fs.ErrNotExist)
}
return target.Write(destPath, string(f.content))
return target.Write(destinationPath, string(file.content))
}
// Directory: walk and copy all files underneath
prefix := sourcePath
if prefix != "" && !strings.HasSuffix(prefix, "/") {
if prefix != "" && !core.HasSuffix(prefix, "/") {
prefix += "/"
}
for p, f := range n.files {
if !strings.HasPrefix(p, prefix) && p != sourcePath {
for filePath, file := range node.files {
if !core.HasPrefix(filePath, prefix) && filePath != sourcePath {
continue
}
rel := strings.TrimPrefix(p, prefix)
dest := destPath
if rel != "" {
dest = destPath + "/" + rel
relativePath := core.TrimPrefix(filePath, prefix)
copyDestinationPath := destinationPath
if relativePath != "" {
copyDestinationPath = core.Concat(destinationPath, "/", relativePath)
}
if err := target.Write(dest, string(f.content)); err != nil {
if err := target.Write(copyDestinationPath, string(file.content)); err != nil {
return err
}
}
return nil
}
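CopyTo's directory branch is a prefix walk over the flat file map: every stored path under `sourcePath + "/"` is re-rooted beneath the destination. A self-contained sketch of that walk using plain maps (`copyTree` and its names are illustrative, not this package's API):

```go
package main

import (
	"fmt"
	"strings"
)

// copyTree copies every entry under sourcePath from src into dst,
// mirroring the prefix walk used for directory copies.
func copyTree(src, dst map[string]string, sourcePath, destPath string) {
	prefix := sourcePath
	if prefix != "" && !strings.HasSuffix(prefix, "/") {
		prefix += "/"
	}
	for p, content := range src {
		if !strings.HasPrefix(p, prefix) && p != sourcePath {
			continue
		}
		rel := strings.TrimPrefix(p, prefix)
		dest := destPath
		if rel != "" {
			dest = destPath + "/" + rel
		}
		dst[dest] = content
	}
}

func main() {
	src := map[string]string{
		"config/app.yaml": "port: 8080",
		"config/db.yaml":  "host: db",
	}
	dst := map[string]string{}
	copyTree(src, dst, "config", "backup/config")
	fmt.Println(len(dst), dst["backup/config/app.yaml"]) // 2 port: 8080
}
```

Because directories are implicit, no directory entries need copying; re-rooting the file paths is sufficient.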
// ---------- Medium interface: fs.FS methods ----------
// Open opens a file from the Node. Implements fs.FS.
func (n *Node) Open(name string) (fs.File, error) {
name = strings.TrimPrefix(name, "/")
if file, ok := n.files[name]; ok {
return &dataFileReader{file: file}, nil
// Example: file, _ := nodeTree.Open("config/app.yaml")
func (node *Node) Open(name string) (fs.File, error) {
name = core.TrimPrefix(name, "/")
if dataFile, ok := node.files[name]; ok {
return &dataFileReader{file: dataFile}, nil
}
// Check if it's a directory
prefix := name + "/"
if name == "." || name == "" {
prefix = ""
}
for p := range n.files {
if strings.HasPrefix(p, prefix) {
for filePath := range node.files {
if core.HasPrefix(filePath, prefix) {
return &dirFile{path: name, modTime: time.Now()}, nil
}
}
return nil, fs.ErrNotExist
return nil, core.E("node.Open", core.Concat("path not found: ", name), fs.ErrNotExist)
}
// Stat returns file information for the given path.
func (n *Node) Stat(name string) (fs.FileInfo, error) {
name = strings.TrimPrefix(name, "/")
if file, ok := n.files[name]; ok {
return file.Stat()
// Example: info, _ := nodeTree.Stat("config/app.yaml")
func (node *Node) Stat(name string) (fs.FileInfo, error) {
name = core.TrimPrefix(name, "/")
if dataFile, ok := node.files[name]; ok {
return dataFile.Stat()
}
// Check if it's a directory
prefix := name + "/"
if name == "." || name == "" {
prefix = ""
}
for p := range n.files {
if strings.HasPrefix(p, prefix) {
for filePath := range node.files {
if core.HasPrefix(filePath, prefix) {
return &dirInfo{name: path.Base(name), modTime: time.Now()}, nil
}
}
return nil, fs.ErrNotExist
return nil, core.E("node.Stat", core.Concat("path not found: ", name), fs.ErrNotExist)
}
// ReadDir reads and returns all directory entries for the named directory.
func (n *Node) ReadDir(name string) ([]fs.DirEntry, error) {
name = strings.TrimPrefix(name, "/")
// Example: entries, _ := nodeTree.ReadDir("config")
func (node *Node) ReadDir(name string) ([]fs.DirEntry, error) {
name = core.TrimPrefix(name, "/")
if name == "." {
name = ""
}
// Disallow reading a file as a directory.
if info, err := n.Stat(name); err == nil && !info.IsDir() {
if info, err := node.Stat(name); err == nil && !info.IsDir() {
return nil, &fs.PathError{Op: "readdir", Path: name, Err: fs.ErrInvalid}
}
@@ -313,24 +287,24 @@ func (n *Node) ReadDir(name string) ([]fs.DirEntry, error) {
prefix = name + "/"
}
for p := range n.files {
if !strings.HasPrefix(p, prefix) {
for filePath := range node.files {
if !core.HasPrefix(filePath, prefix) {
continue
}
relPath := strings.TrimPrefix(p, prefix)
firstComponent := strings.Split(relPath, "/")[0]
relPath := core.TrimPrefix(filePath, prefix)
firstComponent := core.SplitN(relPath, "/", 2)[0]
if seen[firstComponent] {
continue
}
seen[firstComponent] = true
if strings.Contains(relPath, "/") {
dir := &dirInfo{name: firstComponent, modTime: time.Now()}
entries = append(entries, fs.FileInfoToDirEntry(dir))
if core.Contains(relPath, "/") {
directoryInfo := &dirInfo{name: firstComponent, modTime: time.Now()}
entries = append(entries, fs.FileInfoToDirEntry(directoryInfo))
} else {
file := n.files[p]
file := node.files[filePath]
info, _ := file.Stat()
entries = append(entries, fs.FileInfoToDirEntry(info))
}
@@ -343,272 +317,245 @@ func (n *Node) ReadDir(name string) ([]fs.DirEntry, error) {
return entries, nil
}
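ReadDir synthesizes directory listings from flat paths: it takes the first path component of each entry under the prefix and deduplicates. A stdlib-only sketch of that derivation (`listDir` is an illustrative helper, not this package's API):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// listDir derives the immediate children of dir from a flat set of
// file paths, treating directories as implicit path prefixes.
func listDir(paths []string, dir string) []string {
	prefix := ""
	if dir != "" && dir != "." {
		prefix = dir + "/"
	}
	seen := map[string]bool{}
	var names []string
	for _, p := range paths {
		if !strings.HasPrefix(p, prefix) {
			continue
		}
		// SplitN with a limit of 2 avoids splitting the whole remainder.
		first := strings.SplitN(strings.TrimPrefix(p, prefix), "/", 2)[0]
		if !seen[first] {
			seen[first] = true
			names = append(names, first)
		}
	}
	sort.Strings(names)
	return names
}

func main() {
	paths := []string{"config/app.yaml", "config/env/dev.yaml", "readme.md"}
	fmt.Println(listDir(paths, "config")) // [app.yaml env]
}
```

Here `env` appears once even though it only exists as a prefix of a deeper file, which is exactly the implicit-directory behaviour the tests assert.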
// ---------- Medium interface: read/write ----------
// Read retrieves the content of a file as a string.
func (n *Node) Read(p string) (string, error) {
p = strings.TrimPrefix(p, "/")
f, ok := n.files[p]
// Example: content, _ := nodeTree.Read("config/app.yaml")
func (node *Node) Read(filePath string) (string, error) {
filePath = core.TrimPrefix(filePath, "/")
file, ok := node.files[filePath]
if !ok {
return "", fs.ErrNotExist
return "", core.E("node.Read", core.Concat("path not found: ", filePath), fs.ErrNotExist)
}
return string(f.content), nil
return string(file.content), nil
}
// Write saves the given content to a file, overwriting it if it exists.
func (n *Node) Write(p, content string) error {
n.AddData(p, []byte(content))
// Example: _ = nodeTree.Write("config/app.yaml", "port: 8080")
func (node *Node) Write(filePath, content string) error {
node.AddData(filePath, []byte(content))
return nil
}
// WriteMode saves content with explicit permissions (no-op for in-memory node).
func (n *Node) WriteMode(p, content string, mode os.FileMode) error {
return n.Write(p, content)
// Example: _ = nodeTree.WriteMode("keys/private.key", key, 0600)
func (node *Node) WriteMode(filePath, content string, mode fs.FileMode) error {
return node.Write(filePath, content)
}
// FileGet is an alias for Read.
func (n *Node) FileGet(p string) (string, error) {
return n.Read(p)
}
// FileSet is an alias for Write.
func (n *Node) FileSet(p, content string) error {
return n.Write(p, content)
}
// EnsureDir is a no-op because directories are implicit in Node.
func (n *Node) EnsureDir(_ string) error {
// Example: _ = nodeTree.EnsureDir("config")
func (node *Node) EnsureDir(directoryPath string) error {
return nil
}
// ---------- Medium interface: existence checks ----------
// Exists checks if a path exists (file or directory).
func (n *Node) Exists(p string) bool {
_, err := n.Stat(p)
// Example: exists := nodeTree.Exists("config/app.yaml")
func (node *Node) Exists(filePath string) bool {
_, err := node.Stat(filePath)
return err == nil
}
// IsFile checks if a path exists and is a regular file.
func (n *Node) IsFile(p string) bool {
p = strings.TrimPrefix(p, "/")
_, ok := n.files[p]
// Example: isFile := nodeTree.IsFile("config/app.yaml")
func (node *Node) IsFile(filePath string) bool {
filePath = core.TrimPrefix(filePath, "/")
_, ok := node.files[filePath]
return ok
}
// IsDir checks if a path exists and is a directory.
func (n *Node) IsDir(p string) bool {
info, err := n.Stat(p)
// Example: isDirectory := nodeTree.IsDir("config")
func (node *Node) IsDir(filePath string) bool {
info, err := node.Stat(filePath)
if err != nil {
return false
}
return info.IsDir()
}
// ---------- Medium interface: mutations ----------
// Delete removes a single file.
func (n *Node) Delete(p string) error {
p = strings.TrimPrefix(p, "/")
if _, ok := n.files[p]; ok {
delete(n.files, p)
// Example: _ = nodeTree.Delete("config/app.yaml")
func (node *Node) Delete(filePath string) error {
filePath = core.TrimPrefix(filePath, "/")
if _, ok := node.files[filePath]; ok {
delete(node.files, filePath)
return nil
}
return fs.ErrNotExist
return core.E("node.Delete", core.Concat("path not found: ", filePath), fs.ErrNotExist)
}
// DeleteAll removes a file or directory and all children.
func (n *Node) DeleteAll(p string) error {
p = strings.TrimPrefix(p, "/")
// Example: _ = nodeTree.DeleteAll("logs/archive")
func (node *Node) DeleteAll(filePath string) error {
filePath = core.TrimPrefix(filePath, "/")
found := false
if _, ok := n.files[p]; ok {
delete(n.files, p)
if _, ok := node.files[filePath]; ok {
delete(node.files, filePath)
found = true
}
prefix := p + "/"
for k := range n.files {
if strings.HasPrefix(k, prefix) {
delete(n.files, k)
prefix := filePath + "/"
for entryPath := range node.files {
if core.HasPrefix(entryPath, prefix) {
delete(node.files, entryPath)
found = true
}
}
if !found {
return fs.ErrNotExist
return core.E("node.DeleteAll", core.Concat("path not found: ", filePath), fs.ErrNotExist)
}
return nil
}
// Rename moves a file from oldPath to newPath.
func (n *Node) Rename(oldPath, newPath string) error {
oldPath = strings.TrimPrefix(oldPath, "/")
newPath = strings.TrimPrefix(newPath, "/")
// Example: _ = nodeTree.Rename("drafts/todo.txt", "archive/todo.txt")
func (node *Node) Rename(oldPath, newPath string) error {
oldPath = core.TrimPrefix(oldPath, "/")
newPath = core.TrimPrefix(newPath, "/")
f, ok := n.files[oldPath]
file, ok := node.files[oldPath]
if !ok {
return fs.ErrNotExist
return core.E("node.Rename", core.Concat("path not found: ", oldPath), fs.ErrNotExist)
}
f.name = newPath
n.files[newPath] = f
delete(n.files, oldPath)
file.name = newPath
node.files[newPath] = file
delete(node.files, oldPath)
return nil
}
// List returns directory entries for the given path.
func (n *Node) List(p string) ([]fs.DirEntry, error) {
p = strings.TrimPrefix(p, "/")
if p == "" || p == "." {
return n.ReadDir(".")
// Example: entries, _ := nodeTree.List("config")
func (node *Node) List(filePath string) ([]fs.DirEntry, error) {
filePath = core.TrimPrefix(filePath, "/")
if filePath == "" || filePath == "." {
return node.ReadDir(".")
}
return n.ReadDir(p)
return node.ReadDir(filePath)
}
// ---------- Medium interface: streams ----------
// Create creates or truncates the named file, returning a WriteCloser.
// Content is committed to the Node on Close.
func (n *Node) Create(p string) (goio.WriteCloser, error) {
p = strings.TrimPrefix(p, "/")
return &nodeWriter{node: n, path: p}, nil
// Example: writer, _ := nodeTree.Create("logs/app.log")
func (node *Node) Create(filePath string) (goio.WriteCloser, error) {
filePath = core.TrimPrefix(filePath, "/")
return &nodeWriter{node: node, path: filePath}, nil
}
// Append opens the named file for appending, creating it if needed.
// Content is committed to the Node on Close.
func (n *Node) Append(p string) (goio.WriteCloser, error) {
p = strings.TrimPrefix(p, "/")
// Example: writer, _ := nodeTree.Append("logs/app.log")
func (node *Node) Append(filePath string) (goio.WriteCloser, error) {
filePath = core.TrimPrefix(filePath, "/")
var existing []byte
if f, ok := n.files[p]; ok {
existing = make([]byte, len(f.content))
copy(existing, f.content)
if file, ok := node.files[filePath]; ok {
existing = make([]byte, len(file.content))
copy(existing, file.content)
}
return &nodeWriter{node: n, path: p, buf: existing}, nil
return &nodeWriter{node: node, path: filePath, buffer: existing}, nil
}
// ReadStream returns a ReadCloser for the file content.
func (n *Node) ReadStream(p string) (goio.ReadCloser, error) {
f, err := n.Open(p)
func (node *Node) ReadStream(filePath string) (goio.ReadCloser, error) {
file, err := node.Open(filePath)
if err != nil {
return nil, err
}
return goio.NopCloser(f), nil
return goio.NopCloser(file), nil
}
// WriteStream returns a WriteCloser for the file content.
func (n *Node) WriteStream(p string) (goio.WriteCloser, error) {
return n.Create(p)
func (node *Node) WriteStream(filePath string) (goio.WriteCloser, error) {
return node.Create(filePath)
}
// ---------- Internal types ----------
// nodeWriter buffers writes and commits them to the Node on Close.
type nodeWriter struct {
node *Node
path string
buf []byte
node *Node
path string
buffer []byte
}
func (w *nodeWriter) Write(p []byte) (int, error) {
w.buf = append(w.buf, p...)
return len(p), nil
func (writer *nodeWriter) Write(data []byte) (int, error) {
writer.buffer = append(writer.buffer, data...)
return len(data), nil
}
func (w *nodeWriter) Close() error {
w.node.files[w.path] = &dataFile{
name: w.path,
content: w.buf,
func (writer *nodeWriter) Close() error {
writer.node.files[writer.path] = &dataFile{
name: writer.path,
content: writer.buffer,
modTime: time.Now(),
}
return nil
}
// dataFile represents a file in the Node.
type dataFile struct {
name string
content []byte
modTime time.Time
}
func (d *dataFile) Stat() (fs.FileInfo, error) { return &dataFileInfo{file: d}, nil }
func (d *dataFile) Read(_ []byte) (int, error) { return 0, goio.EOF }
func (d *dataFile) Close() error { return nil }
func (file *dataFile) Stat() (fs.FileInfo, error) { return &dataFileInfo{file: file}, nil }
func (file *dataFile) Read(buffer []byte) (int, error) { return 0, goio.EOF }
func (file *dataFile) Close() error { return nil }
// dataFileInfo implements fs.FileInfo for a dataFile.
type dataFileInfo struct{ file *dataFile }
func (d *dataFileInfo) Name() string { return path.Base(d.file.name) }
func (d *dataFileInfo) Size() int64 { return int64(len(d.file.content)) }
func (d *dataFileInfo) Mode() fs.FileMode { return 0444 }
func (d *dataFileInfo) ModTime() time.Time { return d.file.modTime }
func (d *dataFileInfo) IsDir() bool { return false }
func (d *dataFileInfo) Sys() any { return nil }
func (info *dataFileInfo) Name() string { return path.Base(info.file.name) }
func (info *dataFileInfo) Size() int64 { return int64(len(info.file.content)) }
func (info *dataFileInfo) Mode() fs.FileMode { return 0444 }
func (info *dataFileInfo) ModTime() time.Time { return info.file.modTime }
func (info *dataFileInfo) IsDir() bool { return false }
func (info *dataFileInfo) Sys() any { return nil }
// dataFileReader implements fs.File for reading a dataFile.
type dataFileReader struct {
file *dataFile
reader *bytes.Reader
}
func (d *dataFileReader) Stat() (fs.FileInfo, error) { return d.file.Stat() }
func (d *dataFileReader) Read(p []byte) (int, error) {
if d.reader == nil {
d.reader = bytes.NewReader(d.file.content)
}
return d.reader.Read(p)
}
func (d *dataFileReader) Close() error { return nil }
func (reader *dataFileReader) Stat() (fs.FileInfo, error) { return reader.file.Stat() }
func (reader *dataFileReader) Read(buffer []byte) (int, error) {
if reader.reader == nil {
reader.reader = bytes.NewReader(reader.file.content)
}
return reader.reader.Read(buffer)
}
func (reader *dataFileReader) Close() error { return nil }
// dirInfo implements fs.FileInfo for an implicit directory.
type dirInfo struct {
name string
modTime time.Time
}
func (d *dirInfo) Name() string { return d.name }
func (d *dirInfo) Size() int64 { return 0 }
func (d *dirInfo) Mode() fs.FileMode { return fs.ModeDir | 0555 }
func (d *dirInfo) ModTime() time.Time { return d.modTime }
func (d *dirInfo) IsDir() bool { return true }
func (d *dirInfo) Sys() any { return nil }
func (info *dirInfo) Name() string { return info.name }
func (info *dirInfo) Size() int64 { return 0 }
func (info *dirInfo) Mode() fs.FileMode { return fs.ModeDir | 0555 }
func (info *dirInfo) ModTime() time.Time { return info.modTime }
func (info *dirInfo) IsDir() bool { return true }
func (info *dirInfo) Sys() any { return nil }
// dirFile implements fs.File for a directory.
type dirFile struct {
path string
modTime time.Time
}
func (d *dirFile) Stat() (fs.FileInfo, error) {
return &dirInfo{name: path.Base(d.path), modTime: d.modTime}, nil
func (directory *dirFile) Stat() (fs.FileInfo, error) {
return &dirInfo{name: path.Base(directory.path), modTime: directory.modTime}, nil
}
func (d *dirFile) Read([]byte) (int, error) {
return 0, &fs.PathError{Op: "read", Path: d.path, Err: fs.ErrInvalid}
}
func (d *dirFile) Close() error { return nil }
// Ensure Node implements fs.FS so WalkDir works.
func (directory *dirFile) Read([]byte) (int, error) {
return 0, core.E("node.dirFile.Read", core.Concat("cannot read directory: ", directory.path), &fs.PathError{Op: "read", Path: directory.path, Err: fs.ErrInvalid})
}
func (directory *dirFile) Close() error { return nil }
var _ fs.FS = (*Node)(nil)
// Ensure Node also satisfies fs.StatFS and fs.ReadDirFS for WalkDir.
var _ fs.StatFS = (*Node)(nil)
var _ fs.ReadDirFS = (*Node)(nil)
// Unexported helper: ensure ReadStream result also satisfies fs.File
// (for cases where callers do a type assertion).
var _ goio.ReadCloser = goio.NopCloser(nil)
// Ensure nodeWriter satisfies goio.WriteCloser.
var _ goio.WriteCloser = (*nodeWriter)(nil)
// Ensure dirFile satisfies fs.File.
var _ fs.File = (*dirFile)(nil)
// Ensure dataFileReader satisfies fs.File.
var _ fs.File = (*dataFileReader)(nil)
// ReadDirFile is not needed since fs.WalkDir works via ReadDirFS on the FS itself,
// but we need the Node to satisfy fs.ReadDirFS.


@ -3,38 +3,28 @@ package node
import (
"archive/tar"
"bytes"
"io"
"io/fs"
"sort"
"testing"
core "dappco.re/go/core"
coreio "dappco.re/go/core/io"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// ---------------------------------------------------------------------------
// New
// ---------------------------------------------------------------------------
func TestNode_New_Good(t *testing.T) {
nodeTree := New()
require.NotNil(t, nodeTree, "New() must not return nil")
assert.NotNil(t, nodeTree.files, "New() must initialise the files map")
}
// ---------------------------------------------------------------------------
// AddData
// ---------------------------------------------------------------------------
func TestNode_AddData_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("foo.txt", []byte("foo"))
file, ok := nodeTree.files["foo.txt"]
require.True(t, ok, "file foo.txt should be present")
assert.Equal(t, []byte("foo"), file.content)
@ -43,287 +33,251 @@ func TestAddData_Good(t *testing.T) {
assert.Equal(t, "foo.txt", info.Name())
}
func TestNode_AddData_Bad(t *testing.T) {
nodeTree := New()
// Empty name is silently ignored.
nodeTree.AddData("", []byte("data"))
assert.Empty(t, nodeTree.files, "empty name must not be stored")
// Directory entry (trailing slash) is silently ignored.
nodeTree.AddData("dir/", nil)
assert.Empty(t, nodeTree.files, "directory entry must not be stored")
}
func TestNode_AddData_EdgeCases_Good(t *testing.T) {
t.Run("Overwrite", func(t *testing.T) {
nodeTree := New()
nodeTree.AddData("foo.txt", []byte("foo"))
nodeTree.AddData("foo.txt", []byte("bar"))
file := nodeTree.files["foo.txt"]
assert.Equal(t, []byte("bar"), file.content, "second AddData should overwrite")
})
t.Run("LeadingSlash", func(t *testing.T) {
nodeTree := New()
nodeTree.AddData("/hello.txt", []byte("hi"))
_, ok := nodeTree.files["hello.txt"]
assert.True(t, ok, "leading slash should be trimmed")
})
}
// ---------------------------------------------------------------------------
// Open
// ---------------------------------------------------------------------------
func TestNode_Open_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("foo.txt", []byte("foo"))
file, err := nodeTree.Open("foo.txt")
require.NoError(t, err)
defer file.Close()
readBuffer := make([]byte, 10)
nr, err := file.Read(readBuffer)
require.True(t, nr > 0 || err == io.EOF)
assert.Equal(t, "foo", string(readBuffer[:nr]))
}
func TestNode_Open_Bad(t *testing.T) {
nodeTree := New()
_, err := nodeTree.Open("nonexistent.txt")
require.Error(t, err)
assert.ErrorIs(t, err, fs.ErrNotExist)
}
func TestNode_Open_Directory_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("bar/baz.txt", []byte("baz"))
// Opening a directory should succeed.
file, err := nodeTree.Open("bar")
require.NoError(t, err)
defer file.Close()
// Reading from a directory should fail.
_, err = file.Read(make([]byte, 1))
require.Error(t, err)
var pathError *fs.PathError
require.True(t, core.As(err, &pathError))
assert.Equal(t, fs.ErrInvalid, pathError.Err)
}
// ---------------------------------------------------------------------------
// Stat
// ---------------------------------------------------------------------------
func TestNode_Stat_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("foo.txt", []byte("foo"))
nodeTree.AddData("bar/baz.txt", []byte("baz"))
// File stat.
info, err := nodeTree.Stat("bar/baz.txt")
require.NoError(t, err)
assert.Equal(t, "baz.txt", info.Name())
assert.Equal(t, int64(3), info.Size())
assert.False(t, info.IsDir())
// Directory stat.
dirInfo, err := nodeTree.Stat("bar")
require.NoError(t, err)
assert.True(t, dirInfo.IsDir())
assert.Equal(t, "bar", dirInfo.Name())
}
func TestNode_Stat_Bad(t *testing.T) {
nodeTree := New()
_, err := nodeTree.Stat("nonexistent")
require.Error(t, err)
assert.ErrorIs(t, err, fs.ErrNotExist)
}
func TestNode_Stat_RootDirectory_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("foo.txt", []byte("foo"))
// Root directory.
info, err := nodeTree.Stat(".")
require.NoError(t, err)
assert.True(t, info.IsDir())
assert.Equal(t, ".", info.Name())
}
// ---------------------------------------------------------------------------
// ReadFile
// ---------------------------------------------------------------------------
func TestNode_ReadFile_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("hello.txt", []byte("hello world"))
data, err := nodeTree.ReadFile("hello.txt")
require.NoError(t, err)
assert.Equal(t, []byte("hello world"), data)
}
func TestNode_ReadFile_Bad(t *testing.T) {
nodeTree := New()
_, err := nodeTree.ReadFile("missing.txt")
require.Error(t, err)
assert.ErrorIs(t, err, fs.ErrNotExist)
}
func TestNode_ReadFile_ReturnsCopy_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("data.bin", []byte("original"))
// Returned slice must be a copy — mutating it must not affect internal state.
data, err := nodeTree.ReadFile("data.bin")
require.NoError(t, err)
data[0] = 'X'
data2, err := nodeTree.ReadFile("data.bin")
require.NoError(t, err)
assert.Equal(t, []byte("original"), data2, "ReadFile must return an independent copy")
}
// ---------------------------------------------------------------------------
// ReadDir
// ---------------------------------------------------------------------------
func TestNode_ReadDir_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("foo.txt", []byte("foo"))
nodeTree.AddData("bar/baz.txt", []byte("baz"))
nodeTree.AddData("bar/qux.txt", []byte("qux"))
// Root.
entries, err := nodeTree.ReadDir(".")
require.NoError(t, err)
assert.Equal(t, []string{"bar", "foo.txt"}, sortedNames(entries))
// Subdirectory.
barEntries, err := nodeTree.ReadDir("bar")
require.NoError(t, err)
assert.Equal(t, []string{"baz.txt", "qux.txt"}, sortedNames(barEntries))
}
func TestNode_ReadDir_Bad(t *testing.T) {
nodeTree := New()
nodeTree.AddData("foo.txt", []byte("foo"))
// Reading a file as a directory should fail.
_, err := nodeTree.ReadDir("foo.txt")
require.Error(t, err)
var pathError *fs.PathError
require.True(t, core.As(err, &pathError))
assert.Equal(t, fs.ErrInvalid, pathError.Err)
}
func TestNode_ReadDir_IgnoresEmptyEntry_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("bar/baz.txt", []byte("baz"))
nodeTree.AddData("empty_dir/", nil)
entries, err := nodeTree.ReadDir(".")
require.NoError(t, err)
assert.Equal(t, []string{"bar"}, sortedNames(entries))
}
// ---------------------------------------------------------------------------
// Exists
// ---------------------------------------------------------------------------
func TestNode_Exists_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("foo.txt", []byte("foo"))
nodeTree.AddData("bar/baz.txt", []byte("baz"))
assert.True(t, nodeTree.Exists("foo.txt"))
assert.True(t, nodeTree.Exists("bar"))
}
func TestNode_Exists_Bad(t *testing.T) {
nodeTree := New()
assert.False(t, nodeTree.Exists("nonexistent"))
}
func TestNode_Exists_RootAndEmptyPath_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("dummy.txt", []byte("dummy"))
assert.True(t, nodeTree.Exists("."), "root '.' must exist")
assert.True(t, nodeTree.Exists(""), "empty path (root) must exist")
}
// ---------------------------------------------------------------------------
// Walk
// ---------------------------------------------------------------------------
func TestNode_Walk_Default_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("foo.txt", []byte("foo"))
nodeTree.AddData("bar/baz.txt", []byte("baz"))
nodeTree.AddData("bar/qux.txt", []byte("qux"))
var paths []string
err := nodeTree.Walk(".", func(p string, d fs.DirEntry, err error) error {
paths = append(paths, p)
return nil
}, WalkOptions{})
require.NoError(t, err)
sort.Strings(paths)
assert.Equal(t, []string{".", "bar", "bar/baz.txt", "bar/qux.txt", "foo.txt"}, paths)
}
func TestNode_Walk_Default_Bad(t *testing.T) {
nodeTree := New()
var called bool
err := nodeTree.Walk("nonexistent", func(p string, d fs.DirEntry, err error) error {
called = true
assert.Error(t, err)
assert.ErrorIs(t, err, fs.ErrNotExist)
return err
}, WalkOptions{})
assert.True(t, called, "walk function must be called for nonexistent root")
assert.ErrorIs(t, err, fs.ErrNotExist)
}
func TestNode_Walk_CallbackError_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("a/b.txt", []byte("b"))
nodeTree.AddData("a/c.txt", []byte("c"))
// Stop walk early with a custom error.
walkErr := core.NewError("stop walking")
var paths []string
err := nodeTree.Walk(".", func(p string, d fs.DirEntry, err error) error {
if p == "a/b.txt" {
return walkErr
}
paths = append(paths, p)
return nil
}, WalkOptions{})
assert.Equal(t, walkErr, err, "Walk must propagate the callback error")
}
func TestNode_Walk_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("root.txt", []byte("root"))
nodeTree.AddData("a/a1.txt", []byte("a1"))
nodeTree.AddData("a/b/b1.txt", []byte("b1"))
nodeTree.AddData("c/c1.txt", []byte("c1"))
t.Run("MaxDepth", func(t *testing.T) {
var paths []string
err := nodeTree.Walk(".", func(p string, d fs.DirEntry, err error) error {
paths = append(paths, p)
return nil
}, WalkOptions{MaxDepth: 1})
@ -335,11 +289,11 @@ func TestWalk_Options(t *testing.T) {
t.Run("Filter", func(t *testing.T) {
var paths []string
err := nodeTree.Walk(".", func(p string, d fs.DirEntry, err error) error {
paths = append(paths, p)
return nil
}, WalkOptions{Filter: func(p string, d fs.DirEntry) bool {
return !core.HasPrefix(p, "a")
}})
require.NoError(t, err)
@ -349,7 +303,7 @@ func TestWalk_Options(t *testing.T) {
t.Run("SkipErrors", func(t *testing.T) {
var called bool
err := nodeTree.Walk("nonexistent", func(p string, d fs.DirEntry, err error) error {
called = true
return err
}, WalkOptions{SkipErrors: true})
@ -359,70 +313,165 @@ func TestWalk_Options(t *testing.T) {
})
}
// ---------------------------------------------------------------------------
// CopyFile
// ---------------------------------------------------------------------------
func TestNode_CopyFile_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("foo.txt", []byte("foo"))
destinationPath := core.Path(t.TempDir(), "test.txt")
err := nodeTree.CopyFile("foo.txt", destinationPath, 0644)
require.NoError(t, err)
content, err := coreio.Local.Read(destinationPath)
require.NoError(t, err)
assert.Equal(t, "foo", content)
}
func TestNode_CopyFile_Bad(t *testing.T) {
nodeTree := New()
destinationPath := core.Path(t.TempDir(), "test.txt")
// Source does not exist.
err := nodeTree.CopyFile("nonexistent.txt", destinationPath, 0644)
assert.Error(t, err)
// Destination not writable.
nodeTree.AddData("foo.txt", []byte("foo"))
err = nodeTree.CopyFile("foo.txt", "/nonexistent_dir/test.txt", 0644)
assert.Error(t, err)
}
func TestNode_CopyFile_DirectorySource_Bad(t *testing.T) {
nodeTree := New()
nodeTree.AddData("bar/baz.txt", []byte("baz"))
destinationPath := core.Path(t.TempDir(), "test.txt")
// Attempting to copy a directory should fail.
err := nodeTree.CopyFile("bar", destinationPath, 0644)
assert.Error(t, err)
}
// ---------------------------------------------------------------------------
// ToTar / FromTar
// ---------------------------------------------------------------------------
func TestNode_CopyTo_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("config/app.yaml", []byte("port: 8080"))
nodeTree.AddData("config/env/app.env", []byte("MODE=test"))
fileTarget := coreio.NewMemoryMedium()
err := nodeTree.CopyTo(fileTarget, "config/app.yaml", "backup/app.yaml")
require.NoError(t, err)
content, err := fileTarget.Read("backup/app.yaml")
require.NoError(t, err)
assert.Equal(t, "port: 8080", content)
dirTarget := coreio.NewMemoryMedium()
err = nodeTree.CopyTo(dirTarget, "config", "backup/config")
require.NoError(t, err)
content, err = dirTarget.Read("backup/config/app.yaml")
require.NoError(t, err)
assert.Equal(t, "port: 8080", content)
content, err = dirTarget.Read("backup/config/env/app.env")
require.NoError(t, err)
assert.Equal(t, "MODE=test", content)
}
func TestNode_CopyTo_Bad(t *testing.T) {
nodeTree := New()
err := nodeTree.CopyTo(coreio.NewMemoryMedium(), "missing", "backup/missing")
assert.Error(t, err)
}
func TestNode_MediumFacade_Good(t *testing.T) {
nodeTree := New()
require.NoError(t, nodeTree.Write("docs/readme.txt", "hello"))
require.NoError(t, nodeTree.WriteMode("docs/mode.txt", "mode", 0600))
require.NoError(t, nodeTree.Write("docs/guide.txt", "guide"))
require.NoError(t, nodeTree.EnsureDir("ignored"))
value, err := nodeTree.Read("docs/readme.txt")
require.NoError(t, err)
assert.Equal(t, "hello", value)
value, err = nodeTree.Read("docs/guide.txt")
require.NoError(t, err)
assert.Equal(t, "guide", value)
assert.True(t, nodeTree.IsFile("docs/readme.txt"))
assert.True(t, nodeTree.IsDir("docs"))
entries, err := nodeTree.List("docs")
require.NoError(t, err)
assert.Equal(t, []string{"guide.txt", "mode.txt", "readme.txt"}, sortedNames(entries))
file, err := nodeTree.Open("docs/readme.txt")
require.NoError(t, err)
info, err := file.Stat()
require.NoError(t, err)
assert.Equal(t, "readme.txt", info.Name())
assert.Equal(t, fs.FileMode(0444), info.Mode())
assert.False(t, info.IsDir())
assert.Nil(t, info.Sys())
require.NoError(t, file.Close())
dir, err := nodeTree.Open("docs")
require.NoError(t, err)
dirInfo, err := dir.Stat()
require.NoError(t, err)
assert.Equal(t, "docs", dirInfo.Name())
assert.True(t, dirInfo.IsDir())
assert.Equal(t, fs.ModeDir|0555, dirInfo.Mode())
assert.Nil(t, dirInfo.Sys())
require.NoError(t, dir.Close())
createWriter, err := nodeTree.Create("docs/generated.txt")
require.NoError(t, err)
_, err = createWriter.Write([]byte("generated"))
require.NoError(t, err)
require.NoError(t, createWriter.Close())
appendWriter, err := nodeTree.Append("docs/generated.txt")
require.NoError(t, err)
_, err = appendWriter.Write([]byte(" content"))
require.NoError(t, err)
require.NoError(t, appendWriter.Close())
streamReader, err := nodeTree.ReadStream("docs/generated.txt")
require.NoError(t, err)
streamData, err := io.ReadAll(streamReader)
require.NoError(t, err)
assert.Equal(t, "generated content", string(streamData))
require.NoError(t, streamReader.Close())
writeStream, err := nodeTree.WriteStream("docs/stream.txt")
require.NoError(t, err)
_, err = writeStream.Write([]byte("stream"))
require.NoError(t, err)
require.NoError(t, writeStream.Close())
require.NoError(t, nodeTree.Rename("docs/stream.txt", "docs/stream-renamed.txt"))
assert.True(t, nodeTree.Exists("docs/stream-renamed.txt"))
require.NoError(t, nodeTree.Delete("docs/stream-renamed.txt"))
assert.False(t, nodeTree.Exists("docs/stream-renamed.txt"))
require.NoError(t, nodeTree.DeleteAll("docs"))
assert.False(t, nodeTree.Exists("docs"))
}
func TestNode_ToTar_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("foo.txt", []byte("foo"))
nodeTree.AddData("bar/baz.txt", []byte("baz"))
tarball, err := nodeTree.ToTar()
require.NoError(t, err)
require.NotEmpty(t, tarball)
// Verify tar content.
tarReader := tar.NewReader(bytes.NewReader(tarball))
files := make(map[string]string)
for {
header, err := tarReader.Next()
if err == io.EOF {
break
}
require.NoError(t, err)
content, err := io.ReadAll(tarReader)
require.NoError(t, err)
files[header.Name] = string(content)
}
@ -431,97 +480,84 @@ func TestToTar_Good(t *testing.T) {
assert.Equal(t, "baz", files["bar/baz.txt"])
}
func TestNode_FromTar_Good(t *testing.T) {
buffer := new(bytes.Buffer)
tarWriter := tar.NewWriter(buffer)
for _, file := range []struct{ Name, Body string }{
{"foo.txt", "foo"},
{"bar/baz.txt", "baz"},
} {
hdr := &tar.Header{
Name: file.Name,
Mode: 0600,
Size: int64(len(file.Body)),
Typeflag: tar.TypeReg,
}
require.NoError(t, tarWriter.WriteHeader(hdr))
_, err := tarWriter.Write([]byte(file.Body))
require.NoError(t, err)
}
require.NoError(t, tarWriter.Close())
nodeTree, err := FromTar(buffer.Bytes())
require.NoError(t, err)
assert.True(t, nodeTree.Exists("foo.txt"), "foo.txt should exist")
assert.True(t, nodeTree.Exists("bar/baz.txt"), "bar/baz.txt should exist")
}
func TestNode_FromTar_Bad(t *testing.T) {
truncated := make([]byte, 100)
_, err := FromTar(truncated)
assert.Error(t, err, "truncated data should produce an error")
}
func TestNode_TarRoundTrip_Good(t *testing.T) {
nodeTree1 := New()
nodeTree1.AddData("a.txt", []byte("alpha"))
nodeTree1.AddData("b/c.txt", []byte("charlie"))
tarball, err := nodeTree1.ToTar()
require.NoError(t, err)
nodeTree2, err := FromTar(tarball)
require.NoError(t, err)
data, err := nodeTree2.ReadFile("a.txt")
require.NoError(t, err)
assert.Equal(t, []byte("alpha"), data)
data, err = nodeTree2.ReadFile("b/c.txt")
require.NoError(t, err)
assert.Equal(t, []byte("charlie"), data)
}
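The tar round-trip exercised by the tests above can be sketched with only the standard library; `writeTar` and `readTar` are illustrative helpers, not the package's `ToTar`/`FromTar` API:

```go
package main

import (
	"archive/tar"
	"bytes"
	"io"
)

// writeTar packs name→content pairs into an in-memory tarball.
func writeTar(files map[string]string) ([]byte, error) {
	buf := new(bytes.Buffer)
	tw := tar.NewWriter(buf)
	for name, body := range files {
		hdr := &tar.Header{Name: name, Mode: 0600, Size: int64(len(body)), Typeflag: tar.TypeReg}
		if err := tw.WriteHeader(hdr); err != nil {
			return nil, err
		}
		if _, err := tw.Write([]byte(body)); err != nil {
			return nil, err
		}
	}
	// Close flushes the trailing zero blocks; skipping it truncates the archive.
	if err := tw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// readTar unpacks a tarball back into a map.
func readTar(data []byte) (map[string]string, error) {
	out := map[string]string{}
	tr := tar.NewReader(bytes.NewReader(data))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		body, err := io.ReadAll(tr)
		if err != nil {
			return nil, err
		}
		out[hdr.Name] = string(body)
	}
	return out, nil
}

func main() {}
```

Note that `Header.Size` must match the bytes subsequently written, which is why the tests compute `int64(len(file.Body))` before writing.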
// ---------------------------------------------------------------------------
// fs.FS interface compliance
// ---------------------------------------------------------------------------
func TestNode_FSInterface_Good(t *testing.T) {
nodeTree := New()
nodeTree.AddData("hello.txt", []byte("world"))
// fs.FS
var fsys fs.FS = nodeTree
file, err := fsys.Open("hello.txt")
require.NoError(t, err)
defer file.Close()
// fs.StatFS
var statFS fs.StatFS = nodeTree
info, err := statFS.Stat("hello.txt")
require.NoError(t, err)
assert.Equal(t, "hello.txt", info.Name())
assert.Equal(t, int64(5), info.Size())
// fs.ReadFileFS
var readFS fs.ReadFileFS = nodeTree
data, err := readFS.ReadFile("hello.txt")
require.NoError(t, err)
assert.Equal(t, []byte("world"), data)
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
func sortedNames(entries []fs.DirEntry) []string {
var names []string
for _, entry := range entries {
names = append(names, entry.Name())
}
sort.Strings(names)
return names
}

s3/s3.go

@ -1,4 +1,6 @@
// Package s3 provides an S3-backed implementation of the io.Medium interface.
// Example: client := awss3.NewFromConfig(aws.Config{Region: "us-east-1"})
// Example: medium, _ := s3.New(s3.Options{Bucket: "backups", Client: client, Prefix: "daily/"})
// Example: _ = medium.Write("reports/daily.txt", "done")
package s3
import (
@ -6,303 +8,318 @@ import (
"context"
goio "io"
"io/fs"
"path"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
awss3 "github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
core "dappco.re/go/core"
coreio "dappco.re/go/core/io"
)
// Example: client := awss3.NewFromConfig(aws.Config{Region: "us-east-1"})
// Example: medium, _ := s3.New(s3.Options{Bucket: "backups", Client: client, Prefix: "daily/"})
type Client interface {
GetObject(ctx context.Context, params *awss3.GetObjectInput, optFns ...func(*awss3.Options)) (*awss3.GetObjectOutput, error)
PutObject(ctx context.Context, params *awss3.PutObjectInput, optFns ...func(*awss3.Options)) (*awss3.PutObjectOutput, error)
DeleteObject(ctx context.Context, params *awss3.DeleteObjectInput, optFns ...func(*awss3.Options)) (*awss3.DeleteObjectOutput, error)
DeleteObjects(ctx context.Context, params *awss3.DeleteObjectsInput, optFns ...func(*awss3.Options)) (*awss3.DeleteObjectsOutput, error)
HeadObject(ctx context.Context, params *awss3.HeadObjectInput, optFns ...func(*awss3.Options)) (*awss3.HeadObjectOutput, error)
ListObjectsV2(ctx context.Context, params *awss3.ListObjectsV2Input, optFns ...func(*awss3.Options)) (*awss3.ListObjectsV2Output, error)
CopyObject(ctx context.Context, params *awss3.CopyObjectInput, optFns ...func(*awss3.Options)) (*awss3.CopyObjectOutput, error)
}
// Medium is an S3-backed storage backend implementing the io.Medium interface.
// Example: medium, _ := s3.New(s3.Options{Bucket: "backups", Client: client, Prefix: "daily/"})
// Example: _ = medium.Write("reports/daily.txt", "done")
type Medium struct {
client Client
bucket string
prefix string
}
var _ coreio.Medium = (*Medium)(nil)
// Example: medium, _ := s3.New(s3.Options{Bucket: "backups", Client: client, Prefix: "daily/"})
type Options struct {
Bucket string
Client Client
Prefix string
}
func deleteObjectsError(prefix string, errs []types.Error) error {
if len(errs) == 0 {
return nil
}
details := make([]string, 0, len(errs))
for _, errorItem := range errs {
key := aws.ToString(errorItem.Key)
code := aws.ToString(errorItem.Code)
message := aws.ToString(errorItem.Message)
switch {
case code != "" && message != "":
details = append(details, core.Concat(key, ": ", code, " ", message))
case code != "":
details = append(details, core.Concat(key, ": ", code))
case message != "":
details = append(details, core.Concat(key, ": ", message))
default:
details = append(details, key)
}
}
return core.E("s3.DeleteAll", core.Concat("partial delete failed under ", prefix, ": ", core.Join("; ", details...)), nil)
}
func normalisePrefix(prefix string) string {
if prefix == "" {
return ""
}
clean := path.Clean("/" + prefix)
if clean == "/" {
return ""
}
clean = core.TrimPrefix(clean, "/")
if clean != "" && !core.HasSuffix(clean, "/") {
clean += "/"
}
return clean
}
// Example: medium, _ := s3.New(s3.Options{Bucket: "backups", Client: client, Prefix: "daily/"})
// Example: _ = medium.Write("reports/daily.txt", "done")
func New(options Options) (*Medium, error) {
if options.Bucket == "" {
return nil, core.E("s3.New", "bucket name is required", fs.ErrInvalid)
}
if options.Client == nil {
return nil, core.E("s3.New", "client is required", fs.ErrInvalid)
}
medium := &Medium{
client: options.Client,
bucket: options.Bucket,
prefix: normalisePrefix(options.Prefix),
}
return medium, nil
}
func (medium *Medium) objectKey(filePath string) string {
clean := path.Clean("/" + filePath)
if clean == "/" {
clean = ""
}
clean = core.TrimPrefix(clean, "/")
if medium.prefix == "" {
return clean
}
if clean == "" {
return medium.prefix
}
return medium.prefix + clean
}
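objectKey relies on a standard sandboxing trick: cleaning the user path with a synthetic leading "/" pins ".." segments at the root before the slash is stripped, so traversal can never escape the prefix. A stdlib-only sketch of the same idea (`sandboxKey` is an illustrative name, not this package's API):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// sandboxKey maps an arbitrary user path to an object key that cannot
// climb out of the (optional) prefix via "." or ".." segments.
func sandboxKey(prefix, p string) string {
	clean := path.Clean("/" + p) // ".." cannot rise above the synthetic root "/"
	clean = strings.TrimPrefix(clean, "/")
	return prefix + clean
}

func main() {
	fmt.Println(sandboxKey("daily/", "../../etc/passwd")) // prints daily/etc/passwd
}
```

Because `path.Clean` resolves ".." against the leading "/", a hostile input like "../../etc/passwd" collapses to "etc/passwd" under the prefix instead of addressing a key outside it.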
// Example: content, _ := medium.Read("reports/daily.txt")
func (medium *Medium) Read(filePath string) (string, error) {
key := medium.objectKey(filePath)
if key == "" {
return "", core.E("s3.Read", "path is required", fs.ErrInvalid)
}
out, err := medium.client.GetObject(context.Background(), &awss3.GetObjectInput{
Bucket: aws.String(medium.bucket),
Key: aws.String(key),
})
if err != nil {
return "", core.E("s3.Read", core.Concat("failed to get object: ", key), err)
}
defer out.Body.Close()
data, err := goio.ReadAll(out.Body)
if err != nil {
return "", core.E("s3.Read", core.Concat("failed to read body: ", key), err)
}
return string(data), nil
}
// Write saves the given content to a file, overwriting it if it exists.
// Example: _ = medium.Write("reports/daily.txt", "done")
func (medium *Medium) Write(filePath, content string) error {
key := medium.objectKey(filePath)
if key == "" {
return core.E("s3.Write", "path is required", fs.ErrInvalid)
}
_, err := medium.client.PutObject(context.Background(), &awss3.PutObjectInput{
Bucket: aws.String(medium.bucket),
Key: aws.String(key),
Body: core.NewReader(content),
})
if err != nil {
return core.E("s3.Write", core.Concat("failed to put object: ", key), err)
}
return nil
}
// WriteMode saves the given content to a file. S3 objects have no file
// modes, so the mode is ignored and this is equivalent to Write.
// Example: _ = medium.WriteMode("keys/private.key", key, 0600)
func (medium *Medium) WriteMode(filePath, content string, mode fs.FileMode) error {
return medium.Write(filePath, content)
}
// EnsureDir is a no-op for S3 (S3 has no real directories).
// Example: _ = medium.EnsureDir("reports/2026")
func (medium *Medium) EnsureDir(directoryPath string) error {
return nil
}
// IsFile checks if a path exists and is a regular file (not a "directory" prefix).
// Example: isFile := medium.IsFile("reports/daily.txt")
func (medium *Medium) IsFile(filePath string) bool {
key := medium.objectKey(filePath)
if key == "" {
return false
}
// A "file" in S3 is an object whose key does not end with "/"
if core.HasSuffix(key, "/") {
return false
}
_, err := medium.client.HeadObject(context.Background(), &awss3.HeadObjectInput{
Bucket: aws.String(medium.bucket),
Key: aws.String(key),
})
return err == nil
}
// FileGet is a convenience function that reads a file from the medium.
func (medium *Medium) FileGet(filePath string) (string, error) {
return medium.Read(filePath)
}
// FileSet is a convenience function that writes a file to the medium.
func (medium *Medium) FileSet(filePath, content string) error {
return medium.Write(filePath, content)
}
// Delete removes a single object.
// Example: _ = medium.Delete("reports/daily.txt")
func (medium *Medium) Delete(filePath string) error {
key := medium.objectKey(filePath)
if key == "" {
return core.E("s3.Delete", "path is required", fs.ErrInvalid)
}
_, err := medium.client.DeleteObject(context.Background(), &awss3.DeleteObjectInput{
Bucket: aws.String(medium.bucket),
Key: aws.String(key),
})
if err != nil {
return core.E("s3.Delete", core.Concat("failed to delete object: ", key), err)
}
return nil
}
// DeleteAll removes all objects under the given prefix.
// Example: _ = medium.DeleteAll("reports/2026")
func (medium *Medium) DeleteAll(filePath string) error {
key := medium.objectKey(filePath)
if key == "" {
return core.E("s3.DeleteAll", "path is required", fs.ErrInvalid)
}
// First, try deleting the exact key
_, err := medium.client.DeleteObject(context.Background(), &awss3.DeleteObjectInput{
Bucket: aws.String(medium.bucket),
Key: aws.String(key),
})
if err != nil {
return core.E("s3.DeleteAll", core.Concat("failed to delete object: ", key), err)
}
// Then delete all objects under the prefix
prefix := key
if !core.HasSuffix(prefix, "/") {
prefix += "/"
}
continueListing := true
var continuationToken *string
for continueListing {
listOutput, err := medium.client.ListObjectsV2(context.Background(), &awss3.ListObjectsV2Input{
Bucket: aws.String(medium.bucket),
Prefix: aws.String(prefix),
ContinuationToken: continuationToken,
})
if err != nil {
return core.E("s3.DeleteAll", core.Concat("failed to list objects: ", prefix), err)
}
if len(listOutput.Contents) == 0 {
break
}
objects := make([]types.ObjectIdentifier, len(listOutput.Contents))
for i, object := range listOutput.Contents {
objects[i] = types.ObjectIdentifier{Key: object.Key}
}
deleteOut, err := medium.client.DeleteObjects(context.Background(), &awss3.DeleteObjectsInput{
Bucket: aws.String(medium.bucket),
Delete: &types.Delete{Objects: objects, Quiet: aws.Bool(true)},
})
if err != nil {
return core.E("s3.DeleteAll", "failed to delete objects", err)
}
if err := deleteObjectsError(prefix, deleteOut.Errors); err != nil {
return err
}
if listOutput.IsTruncated != nil && *listOutput.IsTruncated {
continuationToken = listOutput.NextContinuationToken
} else {
continueListing = false
}
}
return nil
}
// Rename moves an object by copying then deleting the original.
// Example: _ = medium.Rename("drafts/todo.txt", "archive/todo.txt")
func (medium *Medium) Rename(oldPath, newPath string) error {
oldKey := medium.objectKey(oldPath)
newKey := medium.objectKey(newPath)
if oldKey == "" || newKey == "" {
return core.E("s3.Rename", "both old and new paths are required", fs.ErrInvalid)
}
copySource := medium.bucket + "/" + oldKey
_, err := medium.client.CopyObject(context.Background(), &awss3.CopyObjectInput{
Bucket: aws.String(medium.bucket),
CopySource: aws.String(copySource),
Key: aws.String(newKey),
})
if err != nil {
return core.E("s3.Rename", core.Concat("failed to copy object: ", oldKey, " -> ", newKey), err)
}
_, err = medium.client.DeleteObject(context.Background(), &awss3.DeleteObjectInput{
Bucket: aws.String(medium.bucket),
Key: aws.String(oldKey),
})
if err != nil {
return core.E("s3.Rename", core.Concat("failed to delete source object: ", oldKey), err)
}
return nil
}
// List returns directory entries for the given path using ListObjectsV2 with delimiter.
// Example: entries, _ := medium.List("reports")
func (medium *Medium) List(filePath string) ([]fs.DirEntry, error) {
prefix := medium.objectKey(filePath)
if prefix != "" && !core.HasSuffix(prefix, "/") {
prefix += "/"
}
var entries []fs.DirEntry
listOutput, err := medium.client.ListObjectsV2(context.Background(), &awss3.ListObjectsV2Input{
Bucket: aws.String(medium.bucket),
Prefix: aws.String(prefix),
Delimiter: aws.String("/"),
})
if err != nil {
return nil, core.E("s3.List", core.Concat("failed to list objects: ", prefix), err)
}
// Common prefixes are "directories"
for _, commonPrefix := range listOutput.CommonPrefixes {
if commonPrefix.Prefix == nil {
continue
}
name := core.TrimPrefix(*commonPrefix.Prefix, prefix)
name = core.TrimSuffix(name, "/")
if name == "" {
continue
}
@@ -318,22 +335,21 @@ func (m *Medium) List(p string) ([]fs.DirEntry, error) {
})
}
// Contents are "files" (excluding the prefix itself)
for _, object := range listOutput.Contents {
if object.Key == nil {
continue
}
name := core.TrimPrefix(*object.Key, prefix)
if name == "" || core.Contains(name, "/") {
continue
}
var size int64
if object.Size != nil {
size = *object.Size
}
var modTime time.Time
if object.LastModified != nil {
modTime = *object.LastModified
}
entries = append(entries, &dirEntry{
name: name,
@@ -351,19 +367,19 @@ func (m *Medium) List(p string) ([]fs.DirEntry, error) {
return entries, nil
}
// Stat returns file information for the given path using HeadObject.
// Example: info, _ := medium.Stat("reports/daily.txt")
func (medium *Medium) Stat(filePath string) (fs.FileInfo, error) {
key := medium.objectKey(filePath)
if key == "" {
return nil, core.E("s3.Stat", "path is required", fs.ErrInvalid)
}
out, err := medium.client.HeadObject(context.Background(), &awss3.HeadObjectInput{
Bucket: aws.String(medium.bucket),
Key: aws.String(key),
})
if err != nil {
return nil, core.E("s3.Stat", core.Concat("failed to head object: ", key), err)
}
var size int64
@@ -384,25 +400,24 @@ func (m *Medium) Stat(p string) (fs.FileInfo, error) {
}, nil
}
// Open opens the named file for reading.
func (medium *Medium) Open(filePath string) (fs.File, error) {
key := medium.objectKey(filePath)
if key == "" {
return nil, core.E("s3.Open", "path is required", fs.ErrInvalid)
}
out, err := medium.client.GetObject(context.Background(), &awss3.GetObjectInput{
Bucket: aws.String(medium.bucket),
Key: aws.String(key),
})
if err != nil {
return nil, core.E("s3.Open", core.Concat("failed to get object: ", key), err)
}
data, err := goio.ReadAll(out.Body)
out.Body.Close()
if err != nil {
return nil, core.E("s3.Open", core.Concat("failed to read body: ", key), err)
}
var size int64
@@ -422,30 +437,28 @@ func (m *Medium) Open(p string) (fs.File, error) {
}, nil
}
// Create creates or truncates the named file. Returns a writer that
// uploads the content on Close.
// Example: writer, _ := medium.Create("reports/daily.txt")
func (medium *Medium) Create(filePath string) (goio.WriteCloser, error) {
key := medium.objectKey(filePath)
if key == "" {
return nil, core.E("s3.Create", "path is required", fs.ErrInvalid)
}
return &s3WriteCloser{
medium: medium,
key: key,
}, nil
}
// Append opens the named file for appending. It downloads the existing
// content (if any) and re-uploads the combined content on Close.
// Example: writer, _ := medium.Append("reports/daily.txt")
func (medium *Medium) Append(filePath string) (goio.WriteCloser, error) {
key := medium.objectKey(filePath)
if key == "" {
return nil, core.E("s3.Append", "path is required", fs.ErrInvalid)
}
var existing []byte
out, err := medium.client.GetObject(context.Background(), &awss3.GetObjectInput{
Bucket: aws.String(medium.bucket),
Key: aws.String(key),
})
if err == nil {
@@ -454,92 +467,87 @@ func (m *Medium) Append(p string) (goio.WriteCloser, error) {
}
return &s3WriteCloser{
medium: medium,
key: key,
data: existing,
}, nil
}
// ReadStream returns a reader for the file content.
// Example: reader, _ := medium.ReadStream("reports/daily.txt")
func (medium *Medium) ReadStream(filePath string) (goio.ReadCloser, error) {
key := medium.objectKey(filePath)
if key == "" {
return nil, core.E("s3.ReadStream", "path is required", fs.ErrInvalid)
}
out, err := medium.client.GetObject(context.Background(), &awss3.GetObjectInput{
Bucket: aws.String(medium.bucket),
Key: aws.String(key),
})
if err != nil {
return nil, core.E("s3.ReadStream", core.Concat("failed to get object: ", key), err)
}
return out.Body, nil
}
// WriteStream returns a writer for the file content. Content is uploaded on Close.
// Example: writer, _ := medium.WriteStream("reports/daily.txt")
func (medium *Medium) WriteStream(filePath string) (goio.WriteCloser, error) {
return medium.Create(filePath)
}
// Exists checks if a path exists (file or directory prefix).
// Example: exists := medium.Exists("reports/daily.txt")
func (medium *Medium) Exists(filePath string) bool {
key := medium.objectKey(filePath)
if key == "" {
return false
}
// Check as an exact object
_, err := medium.client.HeadObject(context.Background(), &awss3.HeadObjectInput{
Bucket: aws.String(medium.bucket),
Key: aws.String(key),
})
if err == nil {
return true
}
// Check as a "directory" prefix
prefix := key
if !core.HasSuffix(prefix, "/") {
prefix += "/"
}
listOutput, err := medium.client.ListObjectsV2(context.Background(), &awss3.ListObjectsV2Input{
Bucket: aws.String(medium.bucket),
Prefix: aws.String(prefix),
MaxKeys: aws.Int32(1),
})
if err != nil {
return false
}
return len(listOutput.Contents) > 0 || len(listOutput.CommonPrefixes) > 0
}
// IsDir checks if a path exists and is a directory (has objects under it as a prefix).
// Example: isDirectory := medium.IsDir("reports")
func (medium *Medium) IsDir(filePath string) bool {
key := medium.objectKey(filePath)
if key == "" {
return false
}
prefix := key
if !core.HasSuffix(prefix, "/") {
prefix += "/"
}
listOutput, err := medium.client.ListObjectsV2(context.Background(), &awss3.ListObjectsV2Input{
Bucket: aws.String(medium.bucket),
Prefix: aws.String(prefix),
MaxKeys: aws.Int32(1),
})
if err != nil {
return false
}
return len(listOutput.Contents) > 0 || len(listOutput.CommonPrefixes) > 0
}
// --- Internal types ---
// fileInfo implements fs.FileInfo for S3 objects.
type fileInfo struct {
name string
size int64
@@ -548,14 +556,18 @@ type fileInfo struct {
isDir bool
}
func (info *fileInfo) Name() string { return info.name }
func (info *fileInfo) Size() int64 { return info.size }
func (info *fileInfo) Mode() fs.FileMode { return info.mode }
func (info *fileInfo) ModTime() time.Time { return info.modTime }
func (info *fileInfo) IsDir() bool { return info.isDir }
func (info *fileInfo) Sys() any { return nil }
// dirEntry implements fs.DirEntry for S3 listings.
type dirEntry struct {
name string
isDir bool
@@ -563,12 +575,14 @@ type dirEntry struct {
info fs.FileInfo
}
func (entry *dirEntry) Name() string { return entry.name }
func (entry *dirEntry) IsDir() bool { return entry.isDir }
func (entry *dirEntry) Type() fs.FileMode { return entry.mode.Type() }
func (entry *dirEntry) Info() (fs.FileInfo, error) { return entry.info, nil }
// s3File implements fs.File for S3 objects.
type s3File struct {
name string
content []byte
@@ -577,48 +591,47 @@ type s3File struct {
modTime time.Time
}
func (file *s3File) Stat() (fs.FileInfo, error) {
return &fileInfo{
name: file.name,
size: int64(len(file.content)),
mode: 0644,
modTime: file.modTime,
}, nil
}
func (file *s3File) Read(buffer []byte) (int, error) {
if file.offset >= int64(len(file.content)) {
return 0, goio.EOF
}
bytesRead := copy(buffer, file.content[file.offset:])
file.offset += int64(bytesRead)
return bytesRead, nil
}
func (file *s3File) Close() error {
return nil
}
// s3WriteCloser buffers writes and uploads to S3 on Close.
type s3WriteCloser struct {
medium *Medium
key string
data []byte
}
func (writer *s3WriteCloser) Write(data []byte) (int, error) {
writer.data = append(writer.data, data...)
return len(data), nil
}
func (writer *s3WriteCloser) Close() error {
_, err := writer.medium.client.PutObject(context.Background(), &awss3.PutObjectInput{
Bucket: aws.String(writer.medium.bucket),
Key: aws.String(writer.key),
Body: bytes.NewReader(writer.data),
})
if err != nil {
return core.E("s3.writeCloser.Close", "failed to upload on close", err)
}
return nil
}


@@ -3,108 +3,118 @@ package s3
import (
"bytes"
"context"
goio "io"
"io/fs"
"sort"
"sync"
"testing"
"time"
core "dappco.re/go/core"
"github.com/aws/aws-sdk-go-v2/aws"
awss3 "github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// testS3Client is an in-memory mock implementing the s3API interface.
type testS3Client struct {
mu sync.RWMutex
objects map[string][]byte
mtimes map[string]time.Time
deleteObjectErrors map[string]error
deleteObjectsErrs map[string]types.Error
}
func newTestS3Client() *testS3Client {
return &testS3Client{
objects: make(map[string][]byte),
mtimes: make(map[string]time.Time),
deleteObjectErrors: make(map[string]error),
deleteObjectsErrs: make(map[string]types.Error),
}
}
func (client *testS3Client) GetObject(operationContext context.Context, params *awss3.GetObjectInput, optionFns ...func(*awss3.Options)) (*awss3.GetObjectOutput, error) {
client.mu.RLock()
defer client.mu.RUnlock()
key := aws.ToString(params.Key)
data, ok := client.objects[key]
if !ok {
return nil, core.E("s3test.testS3Client.GetObject", core.Sprintf("NoSuchKey: key %q not found", key), fs.ErrNotExist)
}
mtime := client.mtimes[key]
return &awss3.GetObjectOutput{
Body: goio.NopCloser(bytes.NewReader(data)),
ContentLength: aws.Int64(int64(len(data))),
LastModified: &mtime,
}, nil
}
func (client *testS3Client) PutObject(operationContext context.Context, params *awss3.PutObjectInput, optionFns ...func(*awss3.Options)) (*awss3.PutObjectOutput, error) {
client.mu.Lock()
defer client.mu.Unlock()
key := aws.ToString(params.Key)
data, err := goio.ReadAll(params.Body)
if err != nil {
return nil, err
}
client.objects[key] = data
client.mtimes[key] = time.Now()
return &awss3.PutObjectOutput{}, nil
}
func (client *testS3Client) DeleteObject(operationContext context.Context, params *awss3.DeleteObjectInput, optionFns ...func(*awss3.Options)) (*awss3.DeleteObjectOutput, error) {
client.mu.Lock()
defer client.mu.Unlock()
key := aws.ToString(params.Key)
if err, ok := client.deleteObjectErrors[key]; ok {
return nil, err
}
delete(client.objects, key)
delete(client.mtimes, key)
return &awss3.DeleteObjectOutput{}, nil
}
func (client *testS3Client) DeleteObjects(operationContext context.Context, params *awss3.DeleteObjectsInput, optionFns ...func(*awss3.Options)) (*awss3.DeleteObjectsOutput, error) {
client.mu.Lock()
defer client.mu.Unlock()
var outErrs []types.Error
for _, obj := range params.Delete.Objects {
key := aws.ToString(obj.Key)
if errInfo, ok := client.deleteObjectsErrs[key]; ok {
outErrs = append(outErrs, errInfo)
continue
}
delete(client.objects, key)
delete(client.mtimes, key)
}
return &awss3.DeleteObjectsOutput{Errors: outErrs}, nil
}
func (client *testS3Client) HeadObject(operationContext context.Context, params *awss3.HeadObjectInput, optionFns ...func(*awss3.Options)) (*awss3.HeadObjectOutput, error) {
client.mu.RLock()
defer client.mu.RUnlock()
key := aws.ToString(params.Key)
data, ok := client.objects[key]
if !ok {
return nil, core.E("s3test.testS3Client.HeadObject", core.Sprintf("NotFound: key %q not found", key), fs.ErrNotExist)
}
mtime := client.mtimes[key]
return &awss3.HeadObjectOutput{
ContentLength: aws.Int64(int64(len(data))),
LastModified: &mtime,
}, nil
}
func (client *testS3Client) ListObjectsV2(operationContext context.Context, params *awss3.ListObjectsV2Input, optionFns ...func(*awss3.Options)) (*awss3.ListObjectsV2Output, error) {
client.mu.RLock()
defer client.mu.RUnlock()
prefix := aws.ToString(params.Prefix)
delimiter := aws.ToString(params.Delimiter)
@@ -113,10 +123,9 @@ func (m *mockS3) ListObjectsV2(_ context.Context, params *s3.ListObjectsV2Input,
maxKeys = *params.MaxKeys
}
// Collect all matching keys sorted
var allKeys []string
for k := range client.objects {
if core.HasPrefix(k, prefix) {
allKeys = append(allKeys, k)
}
}
@@ -126,12 +135,12 @@ func (m *mockS3) ListObjectsV2(_ context.Context, params *s3.ListObjectsV2Input,
commonPrefixes := make(map[string]bool)
for _, k := range allKeys {
rest := core.TrimPrefix(k, prefix)
if delimiter != "" {
// This key has a delimiter after the prefix -> common prefix
parts := core.SplitN(rest, delimiter, 2)
if len(parts) == 2 {
cp := core.Concat(prefix, parts[0], delimiter)
commonPrefixes[cp] = true
continue
}
@@ -141,8 +150,8 @@ func (m *mockS3) ListObjectsV2(_ context.Context, params *s3.ListObjectsV2Input,
break
}
data := client.objects[k]
mtime := client.mtimes[k]
contents = append(contents, types.Object{
Key: aws.String(k),
Size: aws.Int64(int64(len(data))),
@@ -151,7 +160,6 @@ }
}
var cpSlice []types.CommonPrefix
// Sort common prefixes for deterministic output
var cpKeys []string
for cp := range commonPrefixes {
cpKeys = append(cpKeys, cp)
@@ -161,240 +169,248 @@ func (m *mockS3) ListObjectsV2(_ context.Context, params *s3.ListObjectsV2Input,
cpSlice = append(cpSlice, types.CommonPrefix{Prefix: aws.String(cp)})
}
return &awss3.ListObjectsV2Output{
Contents: contents,
CommonPrefixes: cpSlice,
IsTruncated: aws.Bool(false),
}, nil
}
func (client *testS3Client) CopyObject(operationContext context.Context, params *awss3.CopyObjectInput, optionFns ...func(*awss3.Options)) (*awss3.CopyObjectOutput, error) {
client.mu.Lock()
defer client.mu.Unlock()
// CopySource is "bucket/key"
source := aws.ToString(params.CopySource)
parts := core.SplitN(source, "/", 2)
if len(parts) != 2 {
return nil, core.E("s3test.testS3Client.CopyObject", core.Sprintf("invalid CopySource: %s", source), fs.ErrInvalid)
}
srcKey := parts[1]
data, ok := client.objects[srcKey]
if !ok {
return nil, core.E("s3test.testS3Client.CopyObject", core.Sprintf("NoSuchKey: source key %q not found", srcKey), fs.ErrNotExist)
}
destKey := aws.ToString(params.Key)
client.objects[destKey] = append([]byte{}, data...)
client.mtimes[destKey] = time.Now()
return &awss3.CopyObjectOutput{}, nil
}
// --- Helper ---
func newS3Medium(t *testing.T) (*Medium, *testS3Client) {
t.Helper()
testS3Client := newTestS3Client()
s3Medium, err := New(Options{Bucket: "test-bucket", Client: testS3Client})
require.NoError(t, err)
return s3Medium, testS3Client
}
// --- Tests ---
func TestS3_New_Good(t *testing.T) {
testS3Client := newTestS3Client()
s3Medium, err := New(Options{Bucket: "my-bucket", Client: testS3Client})
require.NoError(t, err)
assert.Equal(t, "my-bucket", s3Medium.bucket)
assert.Equal(t, "", s3Medium.prefix)
}
func TestS3_New_NoBucket_Bad(t *testing.T) {
_, err := New(Options{Client: newTestS3Client()})
assert.Error(t, err)
assert.Contains(t, err.Error(), "bucket name is required")
}
func TestS3_New_NoClient_Bad(t *testing.T) {
_, err := New(Options{Bucket: "bucket"})
assert.Error(t, err)
assert.Contains(t, err.Error(), "client is required")
}
func TestS3_New_Options_Good(t *testing.T) {
testS3Client := newTestS3Client()
s3Medium, err := New(Options{Bucket: "bucket", Client: testS3Client, Prefix: "data/"})
require.NoError(t, err)
assert.Equal(t, "data/", s3Medium.prefix)
// Prefix without trailing slash gets one added
prefixedS3Medium, err := New(Options{Bucket: "bucket", Client: testS3Client, Prefix: "data"})
require.NoError(t, err)
assert.Equal(t, "data/", prefixedS3Medium.prefix)
}
func TestS3_ReadWrite_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
err := s3Medium.Write("hello.txt", "world")
require.NoError(t, err)
content, err := s3Medium.Read("hello.txt")
require.NoError(t, err)
assert.Equal(t, "world", content)
}
func TestS3_ReadWrite_NotFound_Bad(t *testing.T) {
s3Medium, _ := newS3Medium(t)
_, err := s3Medium.Read("nonexistent.txt")
assert.Error(t, err)
}
func TestS3_ReadWrite_EmptyPath_Bad(t *testing.T) {
s3Medium, _ := newS3Medium(t)
_, err := s3Medium.Read("")
assert.Error(t, err)
err = s3Medium.Write("", "content")
assert.Error(t, err)
}
func TestReadWrite_Good_WithPrefix(t *testing.T) {
mock := newMockS3()
m, err := New("bucket", withAPI(mock), WithPrefix("pfx"))
func TestS3_ReadWrite_Prefix_Good(t *testing.T) {
testS3Client := newTestS3Client()
s3Medium, err := New(Options{Bucket: "bucket", Client: testS3Client, Prefix: "pfx"})
require.NoError(t, err)
err = s3Medium.Write("file.txt", "data")
require.NoError(t, err)
// Verify the key has the prefix
_, ok := testS3Client.objects["pfx/file.txt"]
assert.True(t, ok, "object should be stored with prefix")
content, err := s3Medium.Read("file.txt")
require.NoError(t, err)
assert.Equal(t, "data", content)
}
func TestS3_EnsureDir_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
// EnsureDir is a no-op for S3 object storage
err := s3Medium.EnsureDir("any/path")
assert.NoError(t, err)
}
func TestS3_IsFile_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
err := s3Medium.Write("file.txt", "content")
require.NoError(t, err)
assert.True(t, s3Medium.IsFile("file.txt"))
assert.False(t, s3Medium.IsFile("nonexistent.txt"))
assert.False(t, s3Medium.IsFile(""))
}
func TestS3_Delete_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
err := s3Medium.Write("to-delete.txt", "content")
require.NoError(t, err)
assert.True(t, s3Medium.Exists("to-delete.txt"))
err = s3Medium.Delete("to-delete.txt")
require.NoError(t, err)
assert.False(t, s3Medium.IsFile("to-delete.txt"))
}
func TestS3_Delete_EmptyPath_Bad(t *testing.T) {
s3Medium, _ := newS3Medium(t)
err := s3Medium.Delete("")
assert.Error(t, err)
}
func TestS3_DeleteAll_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
// Create nested structure
require.NoError(t, s3Medium.Write("dir/file1.txt", "a"))
require.NoError(t, s3Medium.Write("dir/sub/file2.txt", "b"))
require.NoError(t, s3Medium.Write("other.txt", "c"))
err := s3Medium.DeleteAll("dir")
require.NoError(t, err)
assert.False(t, s3Medium.IsFile("dir/file1.txt"))
assert.False(t, s3Medium.IsFile("dir/sub/file2.txt"))
assert.True(t, s3Medium.IsFile("other.txt"))
}
func TestS3_DeleteAll_EmptyPath_Bad(t *testing.T) {
s3Medium, _ := newS3Medium(t)
err := s3Medium.DeleteAll("")
assert.Error(t, err)
}
func TestS3_DeleteAll_DeleteObjectError_Bad(t *testing.T) {
s3Medium, testS3Client := newS3Medium(t)
testS3Client.deleteObjectErrors["dir"] = core.NewError("boom")
err := s3Medium.DeleteAll("dir")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to delete object: dir")
}
func TestS3_DeleteAll_PartialDelete_Bad(t *testing.T) {
s3Medium, testS3Client := newS3Medium(t)
require.NoError(t, s3Medium.Write("dir/file1.txt", "a"))
require.NoError(t, s3Medium.Write("dir/file2.txt", "b"))
testS3Client.deleteObjectsErrs["dir/file2.txt"] = types.Error{
Key: aws.String("dir/file2.txt"),
Code: aws.String("AccessDenied"),
Message: aws.String("blocked"),
}
err := s3Medium.DeleteAll("dir")
require.Error(t, err)
assert.Contains(t, err.Error(), "partial delete failed")
assert.Contains(t, err.Error(), "dir/file2.txt")
assert.True(t, s3Medium.IsFile("dir/file2.txt"))
assert.False(t, s3Medium.IsFile("dir/file1.txt"))
}
func TestS3_Rename_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
require.NoError(t, s3Medium.Write("old.txt", "content"))
assert.True(t, s3Medium.IsFile("old.txt"))
err := s3Medium.Rename("old.txt", "new.txt")
require.NoError(t, err)
assert.False(t, s3Medium.IsFile("old.txt"))
assert.True(t, s3Medium.IsFile("new.txt"))
content, err := s3Medium.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "content", content)
}
func TestS3_Rename_EmptyPath_Bad(t *testing.T) {
s3Medium, _ := newS3Medium(t)
err := s3Medium.Rename("", "new.txt")
assert.Error(t, err)
err = s3Medium.Rename("old.txt", "")
assert.Error(t, err)
}
func TestS3_Rename_SourceNotFound_Bad(t *testing.T) {
s3Medium, _ := newS3Medium(t)
err := s3Medium.Rename("nonexistent.txt", "new.txt")
assert.Error(t, err)
}
func TestS3_List_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
require.NoError(t, s3Medium.Write("dir/file1.txt", "a"))
require.NoError(t, s3Medium.Write("dir/file2.txt", "b"))
require.NoError(t, s3Medium.Write("dir/sub/file3.txt", "c"))
entries, err := s3Medium.List("dir")
require.NoError(t, err)
names := make(map[string]bool)
for _, entry := range entries {
names[entry.Name()] = true
}
assert.True(t, names["file1.txt"], "should list file1.txt")
assert.True(t, names["sub"], "should list sub directory")
assert.Len(t, entries, 3)
// Check that sub is a directory
for _, entry := range entries {
if entry.Name() == "sub" {
assert.True(t, entry.IsDir())
info, err := entry.Info()
require.NoError(t, err)
assert.True(t, info.IsDir())
}
}
}
func TestS3_List_Root_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
require.NoError(t, s3Medium.Write("root.txt", "content"))
require.NoError(t, s3Medium.Write("dir/nested.txt", "nested"))
entries, err := s3Medium.List("")
require.NoError(t, err)
names := make(map[string]bool)
for _, entry := range entries {
names[entry.Name()] = true
}
assert.True(t, names["root.txt"])
assert.True(t, names["dir"])
}
func TestS3_Stat_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
require.NoError(t, s3Medium.Write("file.txt", "hello world"))
info, err := s3Medium.Stat("file.txt")
require.NoError(t, err)
assert.Equal(t, "file.txt", info.Name())
assert.Equal(t, int64(11), info.Size())
assert.False(t, info.IsDir())
}
func TestS3_Stat_NotFound_Bad(t *testing.T) {
s3Medium, _ := newS3Medium(t)
_, err := s3Medium.Stat("nonexistent.txt")
assert.Error(t, err)
}
func TestS3_Stat_EmptyPath_Bad(t *testing.T) {
s3Medium, _ := newS3Medium(t)
_, err := s3Medium.Stat("")
assert.Error(t, err)
}
func TestS3_Open_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
require.NoError(t, s3Medium.Write("file.txt", "open me"))
file, err := s3Medium.Open("file.txt")
require.NoError(t, err)
defer file.Close()
data, err := goio.ReadAll(file.(goio.Reader))
require.NoError(t, err)
assert.Equal(t, "open me", string(data))
stat, err := file.Stat()
require.NoError(t, err)
assert.Equal(t, "file.txt", stat.Name())
}
func TestS3_Open_NotFound_Bad(t *testing.T) {
s3Medium, _ := newS3Medium(t)
_, err := s3Medium.Open("nonexistent.txt")
assert.Error(t, err)
}
func TestS3_Create_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
writer, err := s3Medium.Create("new.txt")
require.NoError(t, err)
bytesWritten, err := writer.Write([]byte("created"))
require.NoError(t, err)
assert.Equal(t, 7, bytesWritten)
err = writer.Close()
require.NoError(t, err)
content, err := s3Medium.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "created", content)
}
func TestS3_Append_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
require.NoError(t, s3Medium.Write("append.txt", "hello"))
writer, err := s3Medium.Append("append.txt")
require.NoError(t, err)
_, err = writer.Write([]byte(" world"))
require.NoError(t, err)
err = writer.Close()
require.NoError(t, err)
content, err := s3Medium.Read("append.txt")
require.NoError(t, err)
assert.Equal(t, "hello world", content)
}
func TestS3_Append_NewFile_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
writer, err := s3Medium.Append("new.txt")
require.NoError(t, err)
_, err = writer.Write([]byte("fresh"))
require.NoError(t, err)
err = writer.Close()
require.NoError(t, err)
content, err := s3Medium.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "fresh", content)
}
func TestS3_ReadStream_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
require.NoError(t, s3Medium.Write("stream.txt", "streaming content"))
reader, err := s3Medium.ReadStream("stream.txt")
require.NoError(t, err)
defer reader.Close()
assert.Equal(t, "streaming content", string(data))
}
func TestS3_ReadStream_NotFound_Bad(t *testing.T) {
s3Medium, _ := newS3Medium(t)
_, err := s3Medium.ReadStream("nonexistent.txt")
assert.Error(t, err)
}
func TestS3_WriteStream_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
writer, err := s3Medium.WriteStream("output.txt")
require.NoError(t, err)
_, err = goio.Copy(writer, core.NewReader("piped data"))
require.NoError(t, err)
err = writer.Close()
require.NoError(t, err)
content, err := s3Medium.Read("output.txt")
require.NoError(t, err)
assert.Equal(t, "piped data", content)
}
func TestS3_Exists_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
assert.False(t, s3Medium.Exists("nonexistent.txt"))
require.NoError(t, s3Medium.Write("file.txt", "content"))
assert.True(t, s3Medium.Exists("file.txt"))
}
func TestS3_Exists_DirectoryPrefix_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
require.NoError(t, s3Medium.Write("dir/file.txt", "content"))
assert.True(t, s3Medium.Exists("dir"))
}
func TestS3_IsDir_Good(t *testing.T) {
s3Medium, _ := newS3Medium(t)
require.NoError(t, s3Medium.Write("dir/file.txt", "content"))
assert.True(t, s3Medium.IsDir("dir"))
assert.False(t, s3Medium.IsDir("dir/file.txt"))
assert.False(t, s3Medium.IsDir("nonexistent"))
assert.False(t, s3Medium.IsDir(""))
}
func TestS3_ObjectKey_Good(t *testing.T) {
testS3Client := newTestS3Client()
// No prefix
s3Medium, _ := New(Options{Bucket: "bucket", Client: testS3Client})
assert.Equal(t, "file.txt", s3Medium.objectKey("file.txt"))
assert.Equal(t, "dir/file.txt", s3Medium.objectKey("dir/file.txt"))
assert.Equal(t, "", s3Medium.objectKey(""))
assert.Equal(t, "file.txt", s3Medium.objectKey("/file.txt"))
assert.Equal(t, "file.txt", s3Medium.objectKey("../file.txt"))
// With prefix
prefixedS3Medium, _ := New(Options{Bucket: "bucket", Client: testS3Client, Prefix: "pfx"})
assert.Equal(t, "pfx/file.txt", prefixedS3Medium.objectKey("file.txt"))
assert.Equal(t, "pfx/dir/file.txt", prefixedS3Medium.objectKey("dir/file.txt"))
assert.Equal(t, "pfx/", prefixedS3Medium.objectKey(""))
}
func TestS3_InterfaceCompliance_Good(t *testing.T) {
testS3Client := newTestS3Client()
s3Medium, err := New(Options{Bucket: "bucket", Client: testS3Client})
require.NoError(t, err)
// Verify all methods exist via a compile-time type
// assertion that the medium satisfies the interface.
var _ interface {
Read(string) (string, error)
Write(string, string) error
EnsureDir(string) error
IsFile(string) bool
FileGet(string) (string, error)
FileSet(string, string) error
Delete(string) error
DeleteAll(string) error
Rename(string, string) error
WriteStream(string) (goio.WriteCloser, error)
Exists(string) bool
IsDir(string) bool
} = s3Medium
}

// This file implements the Pre-Obfuscation Layer Protocol with
// XChaCha20-Poly1305 encryption. The protocol applies a reversible transformation
// to plaintext BEFORE it reaches CPU encryption routines, providing defense-in-depth
// against side-channel attacks.
//
// The encryption flow is:
//
// plaintext -> obfuscate(nonce) -> encrypt -> [nonce || ciphertext || tag]
//
// The decryption flow is:
//
// [nonce || ciphertext || tag] -> decrypt -> deobfuscate(nonce) -> plaintext
// Example: cipherSigil, _ := sigil.NewChaChaPolySigil([]byte("0123456789abcdef0123456789abcdef"), nil)
// Example: ciphertext, _ := cipherSigil.In([]byte("payload"))
// Example: plaintext, _ := cipherSigil.Out(ciphertext)
package sigil
import (
"crypto/rand"
"crypto/sha256"
"encoding/binary"
goio "io"
core "dappco.re/go/core"
"golang.org/x/crypto/chacha20poly1305"
)
var (
// Example: errors.Is(err, sigil.InvalidKeyError)
InvalidKeyError = core.E("sigil.InvalidKeyError", "invalid key size, must be 32 bytes", nil)
// Example: errors.Is(err, sigil.CiphertextTooShortError)
CiphertextTooShortError = core.E("sigil.CiphertextTooShortError", "ciphertext too short", nil)
// Example: errors.Is(err, sigil.DecryptionFailedError)
DecryptionFailedError = core.E("sigil.DecryptionFailedError", "decryption failed", nil)
// Example: errors.Is(err, sigil.NoKeyConfiguredError)
NoKeyConfiguredError = core.E("sigil.NoKeyConfiguredError", "no encryption key configured", nil)
)
// PreObfuscator applies a reversible transformation to data before encryption.
// This ensures that raw plaintext patterns are never sent directly to CPU
// encryption routines, providing defense against side-channel attacks.
//
// Implementations must be deterministic: given the same entropy, the transformation
// must be perfectly reversible: Deobfuscate(Obfuscate(x, e), e) == x
// Example: obfuscator := &sigil.XORObfuscator{}
type PreObfuscator interface {
// Obfuscate transforms plaintext before encryption using the provided entropy.
// The entropy is typically the encryption nonce, ensuring the transformation
// is unique per-encryption without additional random generation.
Obfuscate(data []byte, entropy []byte) []byte
// Deobfuscate reverses the transformation after decryption.
// Must be called with the same entropy used during Obfuscate.
Deobfuscate(data []byte, entropy []byte) []byte
}
// XORObfuscator performs XOR-based obfuscation using an entropy-derived key stream.
//
// The key stream is generated using SHA-256 in counter mode:
//
// keyStream[i*32:(i+1)*32] = SHA256(entropy || BigEndian64(i))
//
// This provides a cryptographically uniform key stream that decorrelates
// plaintext patterns from the data seen by the encryption routine.
// XOR is symmetric, so obfuscation and deobfuscation use the same operation.
// Example: obfuscator := &sigil.XORObfuscator{}
type XORObfuscator struct{}
// Obfuscate XORs the data with a key stream derived from the entropy.
func (obfuscator *XORObfuscator) Obfuscate(data []byte, entropy []byte) []byte {
if len(data) == 0 {
return data
}
return obfuscator.transform(data, entropy)
}
// Deobfuscate reverses the XOR transformation (XOR is symmetric).
func (obfuscator *XORObfuscator) Deobfuscate(data []byte, entropy []byte) []byte {
if len(data) == 0 {
return data
}
return obfuscator.transform(data, entropy)
}
// transform applies XOR with an entropy-derived key stream.
func (obfuscator *XORObfuscator) transform(data []byte, entropy []byte) []byte {
result := make([]byte, len(data))
keyStream := obfuscator.deriveKeyStream(entropy, len(data))
for i := range data {
result[i] = data[i] ^ keyStream[i]
}
return result
}
// deriveKeyStream creates a deterministic key stream from entropy.
func (obfuscator *XORObfuscator) deriveKeyStream(entropy []byte, length int) []byte {
stream := make([]byte, length)
hashFunction := sha256.New()
// Generate key stream in 32-byte blocks
blockNum := uint64(0)
offset := 0
for offset < length {
hashFunction.Reset()
hashFunction.Write(entropy)
var blockBytes [8]byte
binary.BigEndian.PutUint64(blockBytes[:], blockNum)
hashFunction.Write(blockBytes[:])
block := hashFunction.Sum(nil)
copyLen := min(len(block), length-offset)
copy(stream[offset:], block[:copyLen])
return stream
}
// ShuffleMaskObfuscator provides stronger obfuscation through byte shuffling and masking.
//
// The obfuscation process:
// 1. Generate a mask from entropy using SHA-256 in counter mode
// 2. XOR the data with the mask
// 3. Generate a deterministic permutation using Fisher-Yates shuffle
// 4. Reorder bytes according to the permutation
//
// This provides both value transformation (XOR mask) and position transformation
// (shuffle), making pattern analysis more difficult than XOR alone.
// Example: obfuscator := &sigil.ShuffleMaskObfuscator{}
type ShuffleMaskObfuscator struct{}
// Obfuscate shuffles bytes and applies a mask derived from entropy.
func (obfuscator *ShuffleMaskObfuscator) Obfuscate(data []byte, entropy []byte) []byte {
if len(data) == 0 {
return data
}
result := make([]byte, len(data))
copy(result, data)
// Generate permutation and mask from entropy
permutation := obfuscator.generatePermutation(entropy, len(data))
mask := obfuscator.deriveMask(entropy, len(data))
// Apply mask first, then shuffle
for i := range result {
result[i] ^= mask[i]
}
// Shuffle using Fisher-Yates with deterministic seed
shuffled := make([]byte, len(data))
for destinationIndex, sourceIndex := range permutation {
shuffled[destinationIndex] = result[sourceIndex]
}
return shuffled
}
// Deobfuscate reverses the shuffle and mask operations.
func (obfuscator *ShuffleMaskObfuscator) Deobfuscate(data []byte, entropy []byte) []byte {
if len(data) == 0 {
return data
}
result := make([]byte, len(data))
// Generate permutation and mask from entropy
permutation := obfuscator.generatePermutation(entropy, len(data))
mask := obfuscator.deriveMask(entropy, len(data))
// Unshuffle first
for destinationIndex, sourceIndex := range permutation {
result[sourceIndex] = data[destinationIndex]
}
// Remove mask
for i := range result {
result[i] ^= mask[i]
}
return result
}
// generatePermutation creates a deterministic permutation from entropy.
func (obfuscator *ShuffleMaskObfuscator) generatePermutation(entropy []byte, length int) []int {
permutation := make([]int, length)
for i := range permutation {
permutation[i] = i
}
// Use entropy to seed a deterministic shuffle
hashFunction := sha256.New()
hashFunction.Write(entropy)
hashFunction.Write([]byte("permutation"))
seed := hashFunction.Sum(nil)
// Fisher-Yates shuffle with deterministic randomness
for i := length - 1; i > 0; i-- {
hashFunction.Reset()
hashFunction.Write(seed)
var iBytes [8]byte
binary.BigEndian.PutUint64(iBytes[:], uint64(i))
hashFunction.Write(iBytes[:])
jBytes := hashFunction.Sum(nil)
j := int(binary.BigEndian.Uint64(jBytes[:8]) % uint64(i+1))
permutation[i], permutation[j] = permutation[j], permutation[i]
}
return permutation
}
// deriveMask creates a mask byte array from entropy.
func (obfuscator *ShuffleMaskObfuscator) deriveMask(entropy []byte, length int) []byte {
mask := make([]byte, length)
hashFunction := sha256.New()
blockNum := uint64(0)
offset := 0
for offset < length {
hashFunction.Reset()
hashFunction.Write(entropy)
hashFunction.Write([]byte("mask"))
var blockBytes [8]byte
binary.BigEndian.PutUint64(blockBytes[:], blockNum)
hashFunction.Write(blockBytes[:])
block := hashFunction.Sum(nil)
copyLen := min(len(block), length-offset)
copy(mask[offset:], block[:copyLen])
return mask
}
// ChaChaPolySigil is a Sigil that encrypts/decrypts data using ChaCha20-Poly1305.
// It applies pre-obfuscation before encryption to ensure raw plaintext never
// goes directly to CPU encryption routines.
//
// The output format is:
// [24-byte nonce][encrypted(obfuscated(plaintext))]
//
// Unlike demo implementations, the nonce is ONLY embedded in the ciphertext,
// not exposed separately in headers.
// Example: cipherSigil, _ := sigil.NewChaChaPolySigil(
// Example: []byte("0123456789abcdef0123456789abcdef"),
// Example: &sigil.ShuffleMaskObfuscator{},
// Example: )
type ChaChaPolySigil struct {
Key []byte
Obfuscator PreObfuscator
randomReader goio.Reader
}
// NewChaChaPolySigil creates a new encryption sigil with the given key.
// The key must be exactly 32 bytes; a nil obfuscator defaults to XORObfuscator.
// Example: cipherSigil, _ := sigil.NewChaChaPolySigil([]byte("0123456789abcdef0123456789abcdef"), nil)
// Example: ciphertext, _ := cipherSigil.In([]byte("payload"))
// Example: plaintext, _ := cipherSigil.Out(ciphertext)
func NewChaChaPolySigil(key []byte, obfuscator PreObfuscator) (*ChaChaPolySigil, error) {
if len(key) != 32 {
return nil, InvalidKeyError
}
keyCopy := make([]byte, 32)
copy(keyCopy, key)
if obfuscator == nil {
obfuscator = &XORObfuscator{}
}
return &ChaChaPolySigil{
Key: keyCopy,
Obfuscator: obfuscator,
randomReader: rand.Reader,
}, nil
}
// In encrypts the data with pre-obfuscation.
// The flow is: plaintext -> obfuscate -> encrypt
func (sigil *ChaChaPolySigil) In(data []byte) ([]byte, error) {
if sigil.Key == nil {
return nil, NoKeyConfiguredError
}
if data == nil {
return nil, nil
}
aead, err := chacha20poly1305.NewX(sigil.Key)
if err != nil {
return nil, core.E("sigil.ChaChaPolySigil.In", "create cipher", err)
}
// Generate nonce
nonce := make([]byte, aead.NonceSize())
reader := sigil.randomReader
if reader == nil {
reader = rand.Reader
}
if _, err := goio.ReadFull(reader, nonce); err != nil {
return nil, core.E("sigil.ChaChaPolySigil.In", "read nonce", err)
}
// Pre-obfuscate the plaintext using nonce as entropy
// This ensures CPU encryption routines never see raw plaintext
obfuscated := data
if sigil.Obfuscator != nil {
obfuscated = sigil.Obfuscator.Obfuscate(data, nonce)
}
// Encrypt the obfuscated data
// Output: [nonce | ciphertext | auth tag]
ciphertext := aead.Seal(nonce, nonce, obfuscated, nil)
return ciphertext, nil
}
// Out decrypts the data and reverses obfuscation.
// The flow is: decrypt -> deobfuscate -> plaintext
func (sigil *ChaChaPolySigil) Out(data []byte) ([]byte, error) {
if sigil.Key == nil {
return nil, NoKeyConfiguredError
}
if data == nil {
return nil, nil
}
aead, err := chacha20poly1305.NewX(sigil.Key)
if err != nil {
return nil, core.E("sigil.ChaChaPolySigil.Out", "create cipher", err)
}
minLen := aead.NonceSize() + aead.Overhead()
if len(data) < minLen {
return nil, CiphertextTooShortError
}
// Extract nonce from ciphertext
nonce := data[:aead.NonceSize()]
ciphertext := data[aead.NonceSize():]
// Decrypt
obfuscated, err := aead.Open(nil, nonce, ciphertext, nil)
if err != nil {
return nil, core.E("sigil.ChaChaPolySigil.Out", "decrypt ciphertext", DecryptionFailedError)
}
// Deobfuscate using the same nonce as entropy
plaintext := obfuscated
if sigil.Obfuscator != nil {
plaintext = sigil.Obfuscator.Deobfuscate(obfuscated, nonce)
}
if len(plaintext) == 0 {
return plaintext, nil
}
// NonceFromCiphertext extracts the nonce from encrypted output.
// This is provided for debugging/logging purposes only.
// The nonce should NOT be stored separately in headers.
// Example: nonce, _ := sigil.NonceFromCiphertext(ciphertext)
func NonceFromCiphertext(ciphertext []byte) ([]byte, error) {
nonceSize := chacha20poly1305.NonceSizeX
if len(ciphertext) < nonceSize {
return nil, CiphertextTooShortError
}
nonceCopy := make([]byte, nonceSize)
copy(nonceCopy, ciphertext[:nonceSize])

import (
"bytes"
"crypto/rand"
goio "io"
"testing"
core "dappco.re/go/core"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// ── XORObfuscator ──────────────────────────────────────────────────
func TestCryptoSigil_XORObfuscator_RoundTrip_Good(t *testing.T) {
ob := &XORObfuscator{}
data := []byte("the axioms are in the weights")
entropy := []byte("deterministic-nonce-24bytes!")
assert.Equal(t, data, restored)
}
func TestCryptoSigil_XORObfuscator_DifferentEntropyDifferentOutput_Good(t *testing.T) {
ob := &XORObfuscator{}
data := []byte("same plaintext")
assert.NotEqual(t, out1, out2)
}
func TestCryptoSigil_XORObfuscator_Deterministic_Good(t *testing.T) {
ob := &XORObfuscator{}
data := []byte("reproducible")
entropy := []byte("fixed-seed")
@ -45,9 +43,8 @@ func TestXORObfuscator_Good_Deterministic(t *testing.T) {
assert.Equal(t, out1, out2)
}
func TestCryptoSigil_XORObfuscator_LargeData_Good(t *testing.T) {
ob := &XORObfuscator{}
// Larger than one SHA-256 block (32 bytes) to test multi-block key stream.
data := make([]byte, 256)
for i := range data {
data[i] = byte(i)
@ -59,7 +56,7 @@ func TestXORObfuscator_Good_LargeData(t *testing.T) {
assert.Equal(t, data, restored)
}
func TestCryptoSigil_XORObfuscator_EmptyData_Good(t *testing.T) {
ob := &XORObfuscator{}
result := ob.Obfuscate([]byte{}, []byte("entropy"))
assert.Equal(t, []byte{}, result)
@ -68,19 +65,16 @@ func TestXORObfuscator_Good_EmptyData(t *testing.T) {
assert.Equal(t, []byte{}, result)
}
func TestCryptoSigil_XORObfuscator_SymmetricProperty_Good(t *testing.T) {
ob := &XORObfuscator{}
data := []byte("XOR is its own inverse")
entropy := []byte("nonce")
// XOR is symmetric: Obfuscate(Obfuscate(x)) == x
double := ob.Obfuscate(ob.Obfuscate(data, entropy), entropy)
assert.Equal(t, data, double)
}
// ── ShuffleMaskObfuscator ──────────────────────────────────────────
func TestCryptoSigil_ShuffleMaskObfuscator_RoundTrip_Good(t *testing.T) {
ob := &ShuffleMaskObfuscator{}
data := []byte("shuffle and mask protect patterns")
entropy := []byte("deterministic-entropy")
@ -93,7 +87,7 @@ func TestShuffleMaskObfuscator_Good_RoundTrip(t *testing.T) {
assert.Equal(t, data, restored)
}
func TestCryptoSigil_ShuffleMaskObfuscator_DifferentEntropy_Good(t *testing.T) {
ob := &ShuffleMaskObfuscator{}
data := []byte("same data")
@ -102,7 +96,7 @@ func TestShuffleMaskObfuscator_Good_DifferentEntropy(t *testing.T) {
assert.NotEqual(t, out1, out2)
}
func TestCryptoSigil_ShuffleMaskObfuscator_Deterministic_Good(t *testing.T) {
ob := &ShuffleMaskObfuscator{}
data := []byte("reproducible shuffle")
entropy := []byte("fixed")
@ -112,7 +106,7 @@ func TestShuffleMaskObfuscator_Good_Deterministic(t *testing.T) {
assert.Equal(t, out1, out2)
}
func TestCryptoSigil_ShuffleMaskObfuscator_LargeData_Good(t *testing.T) {
ob := &ShuffleMaskObfuscator{}
data := make([]byte, 512)
for i := range data {
@ -125,7 +119,7 @@ func TestShuffleMaskObfuscator_Good_LargeData(t *testing.T) {
assert.Equal(t, data, restored)
}
func TestCryptoSigil_ShuffleMaskObfuscator_EmptyData_Good(t *testing.T) {
ob := &ShuffleMaskObfuscator{}
result := ob.Obfuscate([]byte{}, []byte("entropy"))
assert.Equal(t, []byte{}, result)
@ -134,7 +128,7 @@ func TestShuffleMaskObfuscator_Good_EmptyData(t *testing.T) {
assert.Equal(t, []byte{}, result)
}
func TestCryptoSigil_ShuffleMaskObfuscator_SingleByte_Good(t *testing.T) {
ob := &ShuffleMaskObfuscator{}
data := []byte{0x42}
entropy := []byte("single")
@ -144,302 +138,282 @@ func TestShuffleMaskObfuscator_Good_SingleByte(t *testing.T) {
assert.Equal(t, data, restored)
}
// ── NewChaChaPolySigil ─────────────────────────────────────────────
func TestCryptoSigil_NewChaChaPolySigil_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
cipherSigil, err := NewChaChaPolySigil(key, nil)
require.NoError(t, err)
assert.NotNil(t, cipherSigil)
assert.Equal(t, key, cipherSigil.Key)
assert.NotNil(t, cipherSigil.Obfuscator)
}
func TestCryptoSigil_NewChaChaPolySigil_KeyIsCopied_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
original := make([]byte, 32)
copy(original, key)
cipherSigil, err := NewChaChaPolySigil(key, nil)
require.NoError(t, err)
// Mutating the original key should not affect the sigil.
key[0] ^= 0xFF
assert.Equal(t, original, cipherSigil.Key)
}
func TestCryptoSigil_NewChaChaPolySigil_ShortKey_Bad(t *testing.T) {
_, err := NewChaChaPolySigil([]byte("too short"), nil)
assert.ErrorIs(t, err, InvalidKeyError)
}
func TestCryptoSigil_NewChaChaPolySigil_LongKey_Bad(t *testing.T) {
_, err := NewChaChaPolySigil(make([]byte, 64), nil)
assert.ErrorIs(t, err, InvalidKeyError)
}
func TestCryptoSigil_NewChaChaPolySigil_EmptyKey_Bad(t *testing.T) {
_, err := NewChaChaPolySigil(nil, nil)
assert.ErrorIs(t, err, InvalidKeyError)
}
// ── NewChaChaPolySigilWithObfuscator ───────────────────────────────
func TestCryptoSigil_NewChaChaPolySigil_CustomObfuscator_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
ob := &ShuffleMaskObfuscator{}
cipherSigil, err := NewChaChaPolySigil(key, ob)
require.NoError(t, err)
assert.Equal(t, ob, cipherSigil.Obfuscator)
}
func TestCryptoSigil_NewChaChaPolySigil_CustomObfuscatorNil_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
cipherSigil, err := NewChaChaPolySigil(key, nil)
require.NoError(t, err)
// Falls back to default XORObfuscator.
assert.IsType(t, &XORObfuscator{}, cipherSigil.Obfuscator)
}
func TestCryptoSigil_NewChaChaPolySigil_CustomObfuscator_InvalidKey_Bad(t *testing.T) {
_, err := NewChaChaPolySigil([]byte("bad"), &XORObfuscator{})
assert.ErrorIs(t, err, InvalidKeyError)
}
// ── ChaChaPolySigil In/Out (encrypt/decrypt) ───────────────────────
func TestCryptoSigil_ChaChaPolySigil_RoundTrip_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
cipherSigil, err := NewChaChaPolySigil(key, nil)
require.NoError(t, err)
plaintext := []byte("consciousness does not merely avoid causing harm")
ciphertext, err := cipherSigil.In(plaintext)
require.NoError(t, err)
assert.NotEqual(t, plaintext, ciphertext)
assert.Greater(t, len(ciphertext), len(plaintext)) // nonce + tag overhead
decrypted, err := cipherSigil.Out(ciphertext)
require.NoError(t, err)
assert.Equal(t, plaintext, decrypted)
}
func TestCryptoSigil_ChaChaPolySigil_CustomShuffleMask_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
cipherSigil, err := NewChaChaPolySigil(key, &ShuffleMaskObfuscator{})
require.NoError(t, err)
plaintext := []byte("shuffle mask pre-obfuscation layer")
ciphertext, err := cipherSigil.In(plaintext)
require.NoError(t, err)
decrypted, err := cipherSigil.Out(ciphertext)
require.NoError(t, err)
assert.Equal(t, plaintext, decrypted)
}
func TestCryptoSigil_ChaChaPolySigil_NilData_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
cipherSigil, err := NewChaChaPolySigil(key, nil)
require.NoError(t, err)
enc, err := cipherSigil.In(nil)
require.NoError(t, err)
assert.Nil(t, enc)
dec, err := cipherSigil.Out(nil)
require.NoError(t, err)
assert.Nil(t, dec)
}
func TestCryptoSigil_ChaChaPolySigil_EmptyPlaintext_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
cipherSigil, err := NewChaChaPolySigil(key, nil)
require.NoError(t, err)
ciphertext, err := cipherSigil.In([]byte{})
require.NoError(t, err)
assert.NotEmpty(t, ciphertext) // Has nonce + tag even for empty plaintext.
decrypted, err := cipherSigil.Out(ciphertext)
require.NoError(t, err)
assert.Equal(t, []byte{}, decrypted)
}
func TestCryptoSigil_ChaChaPolySigil_DifferentCiphertextsPerCall_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
cipherSigil, err := NewChaChaPolySigil(key, nil)
require.NoError(t, err)
plaintext := []byte("same input")
ct1, _ := cipherSigil.In(plaintext)
ct2, _ := cipherSigil.In(plaintext)
// Different nonces → different ciphertexts.
assert.NotEqual(t, ct1, ct2)
}
func TestCryptoSigil_ChaChaPolySigil_NoKey_Bad(t *testing.T) {
cipherSigil := &ChaChaPolySigil{}
_, err := cipherSigil.In([]byte("data"))
assert.ErrorIs(t, err, NoKeyConfiguredError)
_, err = cipherSigil.Out([]byte("data"))
assert.ErrorIs(t, err, NoKeyConfiguredError)
}
func TestCryptoSigil_ChaChaPolySigil_WrongKey_Bad(t *testing.T) {
key1 := make([]byte, 32)
key2 := make([]byte, 32)
_, _ = rand.Read(key1)
_, _ = rand.Read(key2)
cipherSigilOne, _ := NewChaChaPolySigil(key1, nil)
cipherSigilTwo, _ := NewChaChaPolySigil(key2, nil)
ciphertext, err := cipherSigilOne.In([]byte("secret"))
require.NoError(t, err)
_, err = cipherSigilTwo.Out(ciphertext)
assert.ErrorIs(t, err, DecryptionFailedError)
}
func TestCryptoSigil_ChaChaPolySigil_TruncatedCiphertext_Bad(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
cipherSigil, _ := NewChaChaPolySigil(key, nil)
_, err := cipherSigil.Out([]byte("too short"))
assert.ErrorIs(t, err, CiphertextTooShortError)
}
func TestCryptoSigil_ChaChaPolySigil_TamperedCiphertext_Bad(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
cipherSigil, _ := NewChaChaPolySigil(key, nil)
ciphertext, _ := cipherSigil.In([]byte("authentic data"))
// Flip a bit in the ciphertext body (after nonce).
ciphertext[30] ^= 0xFF
_, err := cipherSigil.Out(ciphertext)
assert.ErrorIs(t, err, DecryptionFailedError)
}
// failReader returns an error on read — for testing nonce generation failure.
type failReader struct{}
func (reader *failReader) Read([]byte) (int, error) {
return 0, core.NewError("entropy source failed")
}
func TestCryptoSigil_ChaChaPolySigil_RandomReaderFailure_Bad(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
cipherSigil, _ := NewChaChaPolySigil(key, nil)
cipherSigil.randomReader = &failReader{}
_, err := cipherSigil.In([]byte("data"))
assert.Error(t, err)
}
// ── ChaChaPolySigil without obfuscator ─────────────────────────────
func TestCryptoSigil_ChaChaPolySigil_NoObfuscator_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
cipherSigil, _ := NewChaChaPolySigil(key, nil)
cipherSigil.Obfuscator = nil // Disable pre-obfuscation.
plaintext := []byte("raw encryption without pre-obfuscation")
ciphertext, err := s.In(plaintext)
ciphertext, err := cipherSigil.In(plaintext)
require.NoError(t, err)
decrypted, err := s.Out(ciphertext)
decrypted, err := cipherSigil.Out(ciphertext)
require.NoError(t, err)
assert.Equal(t, plaintext, decrypted)
}
// ── GetNonceFromCiphertext ─────────────────────────────────────────
func TestCryptoSigil_NonceFromCiphertext_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
cipherSigil, _ := NewChaChaPolySigil(key, nil)
ciphertext, _ := cipherSigil.In([]byte("nonce extraction test"))
nonce, err := NonceFromCiphertext(ciphertext)
require.NoError(t, err)
assert.Len(t, nonce, 24) // XChaCha20 nonce is 24 bytes.
// Nonce should match the prefix of the ciphertext.
assert.Equal(t, ciphertext[:24], nonce)
}
func TestCryptoSigil_NonceFromCiphertext_NonceCopied_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
cipherSigil, _ := NewChaChaPolySigil(key, nil)
ciphertext, _ := cipherSigil.In([]byte("data"))
nonce, _ := NonceFromCiphertext(ciphertext)
original := make([]byte, len(nonce))
copy(original, nonce)
// Mutating the nonce should not affect the ciphertext.
nonce[0] ^= 0xFF
assert.Equal(t, original, ciphertext[:24])
}
func TestCryptoSigil_NonceFromCiphertext_TooShort_Bad(t *testing.T) {
_, err := NonceFromCiphertext([]byte("short"))
assert.ErrorIs(t, err, CiphertextTooShortError)
}
func TestCryptoSigil_NonceFromCiphertext_Empty_Bad(t *testing.T) {
_, err := NonceFromCiphertext(nil)
assert.ErrorIs(t, err, CiphertextTooShortError)
}
// ── ChaChaPolySigil in Transmute pipeline ──────────────────────────
func TestCryptoSigil_ChaChaPolySigil_InTransmutePipeline_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
cipherSigil, _ := NewChaChaPolySigil(key, nil)
hexSigil, _ := NewSigil("hex")
chain := []Sigil{cipherSigil, hexSigil}
plaintext := []byte("encrypt then hex encode")
encoded, err := Transmute(plaintext, chain)
require.NoError(t, err)
// Result should be hex-encoded ciphertext.
assert.True(t, isHex(encoded))
decoded, err := Untransmute(encoded, chain)
@ -456,43 +430,35 @@ func isHex(data []byte) bool {
return len(data) > 0
}
// ── Transmute error propagation ────────────────────────────────────
type failSigil struct{}
func (sigil *failSigil) In([]byte) ([]byte, error) { return nil, core.NewError("fail in") }
func (sigil *failSigil) Out([]byte) ([]byte, error) { return nil, core.NewError("fail out") }
func TestCryptoSigil_Transmute_ErrorPropagation_Bad(t *testing.T) {
_, err := Transmute([]byte("data"), []Sigil{&failSigil{}})
assert.Error(t, err)
assert.Contains(t, err.Error(), "fail in")
}
func TestCryptoSigil_Untransmute_ErrorPropagation_Bad(t *testing.T) {
_, err := Untransmute([]byte("data"), []Sigil{&failSigil{}})
assert.Error(t, err)
assert.Contains(t, err.Error(), "fail out")
}
// ── GzipSigil with custom writer (edge case) ──────────────────────
func TestCryptoSigil_GzipSigil_CustomOutputWriter_Good(t *testing.T) {
var outputBuffer bytes.Buffer
gzipSigil := &GzipSigil{outputWriter: &outputBuffer}
// With a custom writer, compressed data goes to outputBuffer; the returned
// bytes are empty because the internal buffer is unused when outputWriter is set.
_, err := gzipSigil.In([]byte("test data"))
require.NoError(t, err)
assert.Greater(t, outputBuffer.Len(), 0)
}
// ── deriveKeyStream edge: exactly 32 bytes ─────────────────────────
func TestCryptoSigil_DeriveKeyStream_ExactBlockSize_Good(t *testing.T) {
ob := &XORObfuscator{}
data := make([]byte, 32) // Exactly one SHA-256 block.
for i := range data {
data[i] = byte(i)
}
@ -503,24 +469,21 @@ func TestDeriveKeyStream_Good_ExactBlockSize(t *testing.T) {
assert.Equal(t, data, restored)
}
// ── io.Reader fallback in In ───────────────────────────────────────
func TestCryptoSigil_ChaChaPolySigil_NilRandomReader_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
cipherSigil, _ := NewChaChaPolySigil(key, nil)
cipherSigil.randomReader = nil // Should fall back to crypto/rand.Reader.
ciphertext, err := cipherSigil.In([]byte("fallback reader"))
require.NoError(t, err)
decrypted, err := cipherSigil.Out(ciphertext)
require.NoError(t, err)
assert.Equal(t, []byte("fallback reader"), decrypted)
}
// limitReader returns exactly N bytes then EOF — for deterministic tests.
type limitReader struct {
data []byte
pos int
@ -528,9 +491,9 @@ type limitReader struct {
func (l *limitReader) Read(p []byte) (int, error) {
if l.pos >= len(l.data) {
return 0, goio.EOF
}
bytesCopied := copy(p, l.data[l.pos:])
l.pos += bytesCopied
return bytesCopied, nil
}


@ -1,70 +1,39 @@
// Package sigil provides the Sigil transformation framework for composable,
// reversible data transformations.
//
// Sigils are the core abstraction - each sigil implements a specific transformation
// (encoding, compression, hashing, encryption) with a uniform interface. Sigils can
// be chained together to create transformation pipelines.
// Example: hexSigil, _ := sigil.NewSigil("hex")
// Example: gzipSigil, _ := sigil.NewSigil("gzip")
// Example: encoded, _ := sigil.Transmute([]byte("payload"), []sigil.Sigil{hexSigil, gzipSigil})
// Example: decoded, _ := sigil.Untransmute(encoded, []sigil.Sigil{hexSigil, gzipSigil})
package sigil
import core "dappco.re/go/core"

// Sigil defines the interface for a data transformer.
//
// Sigils may be reversible (encoding, compression, encryption) or
// irreversible (hashing). For reversible sigils, Out(In(x)) == x for all
// valid x; irreversible sigils return their input unchanged from Out.
// Implementations must handle nil input by returning nil without error,
// and empty input by returning an empty slice without error.
// Example: var transform sigil.Sigil = &sigil.HexSigil{}
type Sigil interface {
// In applies the forward transformation to the data.
// For encoding sigils, this encodes the data.
// For compression sigils, this compresses the data.
// For hash sigils, this computes the digest.
// Example: encoded, _ := hexSigil.In([]byte("payload"))
In(data []byte) ([]byte, error)
// Out applies the reverse transformation to the data.
// For reversible sigils, this recovers the original data.
// For irreversible sigils (e.g., hashing), this returns the input unchanged.
// Example: decoded, _ := hexSigil.Out(encoded)
Out(data []byte) ([]byte, error)
}
// Transmute applies a series of sigils to data in sequence.
//
// Each sigil's In method is called in order, with the output of one sigil
// becoming the input of the next. If any sigil returns an error, Transmute
// stops immediately and returns nil with that error.
//
// To reverse a transmutation, call each sigil's Out method in reverse order.
// Example: encoded, _ := sigil.Transmute([]byte("payload"), []sigil.Sigil{hexSigil, gzipSigil})
func Transmute(data []byte, sigils []Sigil) ([]byte, error) {
var err error
for _, sigilValue := range sigils {
data, err = sigilValue.In(data)
if err != nil {
return nil, core.E("sigil.Transmute", "sigil in failed", err)
}
}
return data, nil
}
// Untransmute reverses a transmutation by applying Out in reverse order.
//
// Each sigil's Out method is called in reverse order, with the output of one sigil
// becoming the input of the next. If any sigil returns an error, Untransmute
// stops immediately and returns nil with that error.
// Example: decoded, _ := sigil.Untransmute(encoded, []sigil.Sigil{hexSigil, gzipSigil})
func Untransmute(data []byte, sigils []Sigil) ([]byte, error) {
var err error
for i := len(sigils) - 1; i >= 0; i-- {
data, err = sigils[i].Out(data)
if err != nil {
return nil, core.E("sigil.Untransmute", "sigil out failed", err)
}
}
return data, nil


@ -13,229 +13,193 @@ import (
"github.com/stretchr/testify/require"
)
// ---------------------------------------------------------------------------
// ReverseSigil
// ---------------------------------------------------------------------------
func TestSigil_ReverseSigil_Good(t *testing.T) {
reverseSigil := &ReverseSigil{}
out, err := reverseSigil.In([]byte("hello"))
require.NoError(t, err)
assert.Equal(t, []byte("olleh"), out)
// Symmetric: Out does the same thing.
restored, err := reverseSigil.Out(out)
require.NoError(t, err)
assert.Equal(t, []byte("hello"), restored)
}
func TestSigil_ReverseSigil_Bad(t *testing.T) {
reverseSigil := &ReverseSigil{}
// Empty input returns empty.
out, err := reverseSigil.In([]byte{})
require.NoError(t, err)
assert.Equal(t, []byte{}, out)
}
func TestSigil_ReverseSigil_NilInput_Good(t *testing.T) {
reverseSigil := &ReverseSigil{}
// Nil input returns nil.
out, err := reverseSigil.In(nil)
require.NoError(t, err)
assert.Nil(t, out)
out, err = reverseSigil.Out(nil)
require.NoError(t, err)
assert.Nil(t, out)
}
// ---------------------------------------------------------------------------
// HexSigil
// ---------------------------------------------------------------------------
func TestSigil_HexSigil_Good(t *testing.T) {
hexSigil := &HexSigil{}
data := []byte("hello world")
encoded, err := hexSigil.In(data)
require.NoError(t, err)
assert.Equal(t, []byte(hex.EncodeToString(data)), encoded)
decoded, err := hexSigil.Out(encoded)
require.NoError(t, err)
assert.Equal(t, data, decoded)
}
func TestSigil_HexSigil_Bad(t *testing.T) {
hexSigil := &HexSigil{}
// Invalid hex input.
_, err := hexSigil.Out([]byte("zzzz"))
assert.Error(t, err)
// Empty input.
out, err := hexSigil.In([]byte{})
require.NoError(t, err)
assert.Equal(t, []byte{}, out)
}
func TestSigil_HexSigil_NilInput_Good(t *testing.T) {
hexSigil := &HexSigil{}
out, err := hexSigil.In(nil)
require.NoError(t, err)
assert.Nil(t, out)
out, err = hexSigil.Out(nil)
require.NoError(t, err)
assert.Nil(t, out)
}
// ---------------------------------------------------------------------------
// Base64Sigil
// ---------------------------------------------------------------------------
func TestSigil_Base64Sigil_Good(t *testing.T) {
base64Sigil := &Base64Sigil{}
data := []byte("composable transforms")
encoded, err := base64Sigil.In(data)
require.NoError(t, err)
assert.Equal(t, []byte(base64.StdEncoding.EncodeToString(data)), encoded)
decoded, err := base64Sigil.Out(encoded)
require.NoError(t, err)
assert.Equal(t, data, decoded)
}
func TestSigil_Base64Sigil_Bad(t *testing.T) {
base64Sigil := &Base64Sigil{}
// Invalid base64 (wrong padding).
_, err := base64Sigil.Out([]byte("!!!"))
assert.Error(t, err)
// Empty input.
out, err := base64Sigil.In([]byte{})
require.NoError(t, err)
assert.Equal(t, []byte{}, out)
}
func TestSigil_Base64Sigil_NilInput_Good(t *testing.T) {
base64Sigil := &Base64Sigil{}
out, err := base64Sigil.In(nil)
require.NoError(t, err)
assert.Nil(t, out)
out, err = base64Sigil.Out(nil)
require.NoError(t, err)
assert.Nil(t, out)
}
// ---------------------------------------------------------------------------
// GzipSigil
// ---------------------------------------------------------------------------
func TestSigil_GzipSigil_Good(t *testing.T) {
gzipSigil := &GzipSigil{}
data := []byte("the quick brown fox jumps over the lazy dog")
compressed, err := gzipSigil.In(data)
require.NoError(t, err)
assert.NotEqual(t, data, compressed)
decompressed, err := gzipSigil.Out(compressed)
require.NoError(t, err)
assert.Equal(t, data, decompressed)
}
func TestSigil_GzipSigil_Bad(t *testing.T) {
gzipSigil := &GzipSigil{}
// Invalid gzip data.
_, err := gzipSigil.Out([]byte("not gzip"))
assert.Error(t, err)
// Empty input compresses to a valid gzip stream.
compressed, err := gzipSigil.In([]byte{})
require.NoError(t, err)
assert.NotEmpty(t, compressed) // gzip header is always present
decompressed, err := gzipSigil.Out(compressed)
require.NoError(t, err)
assert.Equal(t, []byte{}, decompressed)
}
func TestSigil_GzipSigil_NilInput_Good(t *testing.T) {
gzipSigil := &GzipSigil{}
out, err := gzipSigil.In(nil)
require.NoError(t, err)
assert.Nil(t, out)
out, err = gzipSigil.Out(nil)
require.NoError(t, err)
assert.Nil(t, out)
}
// ---------------------------------------------------------------------------
// JSONSigil
// ---------------------------------------------------------------------------
func TestSigil_JSONSigil_Good(t *testing.T) {
jsonSigil := &JSONSigil{Indent: false}
data := []byte(`{ "key" : "value" }`)
compacted, err := jsonSigil.In(data)
require.NoError(t, err)
assert.Equal(t, []byte(`{"key":"value"}`), compacted)
// Out is passthrough.
passthrough, err := jsonSigil.Out(compacted)
require.NoError(t, err)
assert.Equal(t, compacted, passthrough)
}
func TestSigil_JSONSigil_Indent_Good(t *testing.T) {
jsonSigil := &JSONSigil{Indent: true}
data := []byte(`{"key":"value"}`)
indented, err := jsonSigil.In(data)
require.NoError(t, err)
assert.Contains(t, string(indented), "\n")
assert.Contains(t, string(indented), " ")
}
func TestSigil_JSONSigil_Bad(t *testing.T) {
jsonSigil := &JSONSigil{Indent: false}
// Invalid JSON.
_, err := jsonSigil.In([]byte("not json"))
assert.Error(t, err)
}
func TestSigil_JSONSigil_NilInput_Good(t *testing.T) {
jsonSigil := &JSONSigil{Indent: false}
out, err := jsonSigil.In(nil)
require.NoError(t, err)
assert.Nil(t, out)
// Out with nil is passthrough.
out, err = jsonSigil.Out(nil)
require.NoError(t, err)
assert.Nil(t, out)
}
// ---------------------------------------------------------------------------
// HashSigil
// ---------------------------------------------------------------------------
func TestSigil_HashSigil_Good(t *testing.T) {
data := []byte("hash me")
tests := []struct {
@ -265,44 +229,37 @@ func TestHashSigil_Good(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
sigilValue, err := NewSigil(tt.sigilName)
require.NoError(t, err)
hashed, err := sigilValue.In(data)
require.NoError(t, err)
assert.Len(t, hashed, tt.size)
// Out is passthrough.
passthrough, err := sigilValue.Out(hashed)
require.NoError(t, err)
assert.Equal(t, hashed, passthrough)
})
}
}
func TestSigil_HashSigil_Bad(t *testing.T) {
// Unsupported hash constant.
hashSigil := &HashSigil{Hash: 0}
_, err := hashSigil.In([]byte("data"))
assert.Error(t, err)
assert.Contains(t, err.Error(), "not available")
}
func TestSigil_HashSigil_EmptyInput_Good(t *testing.T) {
// Hashing empty data should still produce a valid digest.
sigilValue, err := NewSigil("sha256")
require.NoError(t, err)
hashed, err := sigilValue.In([]byte{})
require.NoError(t, err)
assert.Len(t, hashed, sha256.Size)
}
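The hash tests above assert digest length rather than digest value, which holds even for empty input. A minimal stdlib sketch of that pattern (the `digest` helper is illustrative, not the package's API):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// digest hashes data with SHA-256 and returns the raw digest bytes,
// mirroring the In-then-check-length pattern used in the tests above.
func digest(data []byte) []byte {
	sum := sha256.Sum256(data)
	return sum[:]
}

func main() {
	if len(digest([]byte("hash me"))) != sha256.Size {
		panic("unexpected digest length")
	}
	// Hashing empty input still yields a full-size digest.
	if len(digest(nil)) != sha256.Size {
		panic("empty input digest length")
	}
	fmt.Println(sha256.Size) // 32
}
```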
// ---------------------------------------------------------------------------
// NewSigil factory
// ---------------------------------------------------------------------------
func TestNewSigil_Good(t *testing.T) {
func TestSigil_NewSigil_Good(t *testing.T) {
names := []string{
"reverse", "hex", "base64", "gzip", "json", "json-indent",
"md4", "md5", "sha1", "sha224", "sha256", "sha384", "sha512",
@@ -314,29 +271,25 @@ func TestNewSigil_Good(t *testing.T) {
for _, name := range names {
t.Run(name, func(t *testing.T) {
s, err := NewSigil(name)
sigilValue, err := NewSigil(name)
require.NoError(t, err)
assert.NotNil(t, s)
assert.NotNil(t, sigilValue)
})
}
}
func TestNewSigil_Bad(t *testing.T) {
func TestSigil_NewSigil_Bad(t *testing.T) {
_, err := NewSigil("nonexistent")
assert.Error(t, err)
assert.Contains(t, err.Error(), "unknown sigil name")
}
func TestNewSigil_Ugly(t *testing.T) {
func TestSigil_NewSigil_EmptyName_Bad(t *testing.T) {
_, err := NewSigil("")
assert.Error(t, err)
}
// ---------------------------------------------------------------------------
// Transmute / Untransmute
// ---------------------------------------------------------------------------
func TestTransmute_Good(t *testing.T) {
func TestSigil_Transmute_Good(t *testing.T) {
data := []byte("round trip")
hexSigil, err := NewSigil("hex")
@@ -355,7 +308,7 @@ func TestTransmute_Good(t *testing.T) {
assert.Equal(t, data, decoded)
}
func TestTransmute_Good_MultiSigil(t *testing.T) {
func TestSigil_Transmute_MultiSigil_Good(t *testing.T) {
data := []byte("multi sigil pipeline test data")
reverseSigil, err := NewSigil("reverse")
@@ -375,7 +328,7 @@ func TestTransmute_Good_MultiSigil(t *testing.T) {
assert.Equal(t, data, decoded)
}
func TestTransmute_Good_GzipRoundTrip(t *testing.T) {
func TestSigil_Transmute_GzipRoundTrip_Good(t *testing.T) {
data := []byte("compress then encode then decode then decompress")
gzipSigil, err := NewSigil("gzip")
@@ -393,17 +346,14 @@ func TestTransmute_Good_GzipRoundTrip(t *testing.T) {
assert.Equal(t, data, decoded)
}
func TestTransmute_Bad(t *testing.T) {
// Transmute with a sigil that will fail: hex decode on non-hex input.
func TestSigil_Transmute_Bad(t *testing.T) {
hexSigil := &HexSigil{}
// Calling Out (decode) with invalid input via manual chain.
_, err := Untransmute([]byte("not-hex!!"), []Sigil{hexSigil})
assert.Error(t, err)
}
func TestTransmute_Ugly(t *testing.T) {
// Empty sigil chain is a no-op.
func TestSigil_Transmute_NilAndEmptyInput_Good(t *testing.T) {
data := []byte("unchanged")
result, err := Transmute(data, nil)
@@ -414,7 +364,6 @@ func TestTransmute_Ugly(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, data, result)
// Nil data through a chain.
hexSigil, _ := NewSigil("hex")
result, err = Transmute(nil, []Sigil{hexSigil})
require.NoError(t, err)
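The round-trip tests rely on applying sigils in order on the way in and in reverse order on the way out. A stdlib-only sketch of that Transmute/Untransmute shape, chaining a byte-reverse step with hex (helper names here are illustrative, not the package's API):

```go
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
)

// reverseBytes stands in for the reverse sigil.
func reverseBytes(b []byte) []byte {
	out := make([]byte, len(b))
	for i, c := range b {
		out[len(b)-1-i] = c
	}
	return out
}

// encode applies the chain forward: reverse, then hex (Transmute order).
func encode(data []byte) []byte {
	return []byte(hex.EncodeToString(reverseBytes(data)))
}

// decode applies the inverses in reverse order (Untransmute order).
func decode(data []byte) ([]byte, error) {
	raw, err := hex.DecodeString(string(data))
	if err != nil {
		return nil, err
	}
	return reverseBytes(raw), nil
}

func main() {
	data := []byte("round trip")
	restored, err := decode(encode(data))
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(restored, data) {
		panic("round trip failed")
	}
	fmt.Println(string(restored))
}
```

Feeding non-hex input to the decode side fails, which is what `TestSigil_Transmute_Bad` checks with `Untransmute`.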


@@ -10,10 +10,10 @@ import (
"crypto/sha512"
"encoding/base64"
"encoding/hex"
"encoding/json"
"io"
goio "io"
"io/fs"
coreerr "forge.lthn.ai/core/go-log"
core "dappco.re/go/core"
"golang.org/x/crypto/blake2b"
"golang.org/x/crypto/blake2s"
"golang.org/x/crypto/md4"
@@ -21,12 +21,10 @@ import (
"golang.org/x/crypto/sha3"
)
// ReverseSigil is a Sigil that reverses the bytes of the payload.
// It is a symmetrical Sigil, meaning that the In and Out methods perform the same operation.
// Example: reverseSigil, _ := sigil.NewSigil("reverse")
type ReverseSigil struct{}
// In reverses the bytes of the data.
func (s *ReverseSigil) In(data []byte) ([]byte, error) {
func (sigil *ReverseSigil) In(data []byte) ([]byte, error) {
if data == nil {
return nil, nil
}
@@ -37,189 +35,187 @@ func (s *ReverseSigil) In(data []byte) ([]byte, error) {
return reversed, nil
}
// Out reverses the bytes of the data.
func (s *ReverseSigil) Out(data []byte) ([]byte, error) {
return s.In(data)
func (sigil *ReverseSigil) Out(data []byte) ([]byte, error) {
return sigil.In(data)
}
// HexSigil is a Sigil that encodes/decodes data to/from hexadecimal.
// The In method encodes the data, and the Out method decodes it.
// Example: hexSigil, _ := sigil.NewSigil("hex")
type HexSigil struct{}
// In encodes the data to hexadecimal.
func (s *HexSigil) In(data []byte) ([]byte, error) {
func (sigil *HexSigil) In(data []byte) ([]byte, error) {
if data == nil {
return nil, nil
}
dst := make([]byte, hex.EncodedLen(len(data)))
hex.Encode(dst, data)
return dst, nil
encodedBytes := make([]byte, hex.EncodedLen(len(data)))
hex.Encode(encodedBytes, data)
return encodedBytes, nil
}
// Out decodes the data from hexadecimal.
func (s *HexSigil) Out(data []byte) ([]byte, error) {
func (sigil *HexSigil) Out(data []byte) ([]byte, error) {
if data == nil {
return nil, nil
}
dst := make([]byte, hex.DecodedLen(len(data)))
_, err := hex.Decode(dst, data)
return dst, err
decodedBytes := make([]byte, hex.DecodedLen(len(data)))
_, err := hex.Decode(decodedBytes, data)
return decodedBytes, err
}
// Base64Sigil is a Sigil that encodes/decodes data to/from base64.
// The In method encodes the data, and the Out method decodes it.
// Example: base64Sigil, _ := sigil.NewSigil("base64")
type Base64Sigil struct{}
// In encodes the data to base64.
func (s *Base64Sigil) In(data []byte) ([]byte, error) {
func (sigil *Base64Sigil) In(data []byte) ([]byte, error) {
if data == nil {
return nil, nil
}
dst := make([]byte, base64.StdEncoding.EncodedLen(len(data)))
base64.StdEncoding.Encode(dst, data)
return dst, nil
encodedBytes := make([]byte, base64.StdEncoding.EncodedLen(len(data)))
base64.StdEncoding.Encode(encodedBytes, data)
return encodedBytes, nil
}
// Out decodes the data from base64.
func (s *Base64Sigil) Out(data []byte) ([]byte, error) {
func (sigil *Base64Sigil) Out(data []byte) ([]byte, error) {
if data == nil {
return nil, nil
}
dst := make([]byte, base64.StdEncoding.DecodedLen(len(data)))
n, err := base64.StdEncoding.Decode(dst, data)
return dst[:n], err
decodedBytes := make([]byte, base64.StdEncoding.DecodedLen(len(data)))
decodedCount, err := base64.StdEncoding.Decode(decodedBytes, data)
return decodedBytes[:decodedCount], err
}
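The `decodedBytes[:decodedCount]` slice in `Base64Sigil.Out` matters because `DecodedLen` reports a maximum (padding characters count toward it) while `Decode` returns how many bytes it actually wrote. A small stdlib sketch of that detail:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodeStd mirrors the Out pattern above: allocate DecodedLen bytes,
// then slice to the count Decode actually wrote.
func decodeStd(src []byte) ([]byte, error) {
	buf := make([]byte, base64.StdEncoding.DecodedLen(len(src)))
	n, err := base64.StdEncoding.Decode(buf, src)
	if err != nil {
		return nil, err
	}
	return buf[:n], nil
}

func main() {
	src := []byte("aGk=") // "hi" plus one padding byte
	out, err := decodeStd(src)
	if err != nil {
		panic(err)
	}
	// DecodedLen(4) is 3, but only 2 bytes were written.
	fmt.Println(base64.StdEncoding.DecodedLen(len(src)), len(out), string(out))
}
```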
// GzipSigil is a Sigil that compresses/decompresses data using gzip.
// The In method compresses the data, and the Out method decompresses it.
// Example: gzipSigil, _ := sigil.NewSigil("gzip")
type GzipSigil struct {
writer io.Writer
outputWriter goio.Writer
}
// In compresses the data using gzip.
func (s *GzipSigil) In(data []byte) ([]byte, error) {
func (sigil *GzipSigil) In(data []byte) ([]byte, error) {
if data == nil {
return nil, nil
}
var b bytes.Buffer
w := s.writer
if w == nil {
w = &b
var buffer bytes.Buffer
outputWriter := sigil.outputWriter
if outputWriter == nil {
outputWriter = &buffer
}
gz := gzip.NewWriter(w)
if _, err := gz.Write(data); err != nil {
return nil, err
gzipWriter := gzip.NewWriter(outputWriter)
if _, err := gzipWriter.Write(data); err != nil {
return nil, core.E("sigil.GzipSigil.In", "write gzip payload", err)
}
if err := gz.Close(); err != nil {
return nil, err
if err := gzipWriter.Close(); err != nil {
return nil, core.E("sigil.GzipSigil.In", "close gzip writer", err)
}
return b.Bytes(), nil
return buffer.Bytes(), nil
}
// Out decompresses the data using gzip.
func (s *GzipSigil) Out(data []byte) ([]byte, error) {
func (sigil *GzipSigil) Out(data []byte) ([]byte, error) {
if data == nil {
return nil, nil
}
r, err := gzip.NewReader(bytes.NewReader(data))
gzipReader, err := gzip.NewReader(bytes.NewReader(data))
if err != nil {
return nil, err
return nil, core.E("sigil.GzipSigil.Out", "open gzip reader", err)
}
defer r.Close()
return io.ReadAll(r)
defer gzipReader.Close()
out, err := goio.ReadAll(gzipReader)
if err != nil {
return nil, core.E("sigil.GzipSigil.Out", "read gzip payload", err)
}
return out, nil
}
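The gzip pair above hinges on closing the writer before reading the buffer, since `Close` flushes the final block. A stdlib-only sketch of the same round trip (the project's error-wrapping helper is omitted here):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// gzipIn compresses data into an in-memory buffer.
func gzipIn(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	w := gzip.NewWriter(&buf)
	if _, err := w.Write(data); err != nil {
		return nil, err
	}
	// Close flushes the final block; skipping it corrupts the stream.
	if err := w.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// gzipOut decompresses a gzip stream back to the original bytes.
func gzipOut(data []byte) ([]byte, error) {
	r, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer r.Close()
	return io.ReadAll(r)
}

func main() {
	data := []byte("compress then decompress")
	packed, err := gzipIn(data)
	if err != nil {
		panic(err)
	}
	unpacked, err := gzipOut(packed)
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(unpacked, data) {
		panic("round trip failed")
	}
	fmt.Println(string(unpacked))
}
```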
// JSONSigil is a Sigil that compacts or indents JSON data.
// The Out method is a no-op.
// Example: jsonSigil := &sigil.JSONSigil{Indent: true}
type JSONSigil struct{ Indent bool }
// In compacts or indents the JSON data.
func (s *JSONSigil) In(data []byte) ([]byte, error) {
if s.Indent {
var out bytes.Buffer
err := json.Indent(&out, data, "", " ")
return out.Bytes(), err
func (sigil *JSONSigil) In(data []byte) ([]byte, error) {
if data == nil {
return nil, nil
}
var out bytes.Buffer
err := json.Compact(&out, data)
return out.Bytes(), err
var decoded any
result := core.JSONUnmarshal(data, &decoded)
if !result.OK {
if err, ok := result.Value.(error); ok {
return nil, core.E("sigil.JSONSigil.In", "decode json", err)
}
return nil, core.E("sigil.JSONSigil.In", "decode json", fs.ErrInvalid)
}
compact := core.JSONMarshalString(decoded)
if sigil.Indent {
return []byte(indentJSON(compact)), nil
}
return []byte(compact), nil
}
// Out is a no-op for JSONSigil.
func (s *JSONSigil) Out(data []byte) ([]byte, error) {
// For simplicity, Out is a no-op. The primary use is formatting.
func (sigil *JSONSigil) Out(data []byte) ([]byte, error) {
return data, nil
}
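The diff swaps `json.Compact`/`json.Indent` for project-local helpers, but the two `JSONSigil.In` modes correspond to those stdlib calls. A minimal sketch of both (helper names here are stand-ins, not the package's API):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// compactJSON strips insignificant whitespace, as In does when Indent is false.
func compactJSON(raw []byte) (string, error) {
	var out bytes.Buffer
	err := json.Compact(&out, raw)
	return out.String(), err
}

// indentWithStdlib re-expands with two-space nesting, as Indent: true does.
func indentWithStdlib(raw []byte) (string, error) {
	var out bytes.Buffer
	err := json.Indent(&out, raw, "", "  ")
	return out.String(), err
}

func main() {
	raw := []byte(`{ "port": 8080,  "debug": false }`)
	compact, err := compactJSON(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(compact) // {"port":8080,"debug":false}
	indented, err := indentWithStdlib([]byte(compact))
	if err != nil {
		panic(err)
	}
	fmt.Println(indented)
}
```

Both calls operate on the JSON text directly, so key order is preserved; invalid input produces an error, matching the bad-input test above.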
// HashSigil is a Sigil that hashes the data using a specified algorithm.
// The In method hashes the data, and the Out method is a no-op.
// Example: hashSigil := sigil.NewHashSigil(crypto.SHA256)
type HashSigil struct {
Hash crypto.Hash
}
// NewHashSigil creates a new HashSigil.
func NewHashSigil(h crypto.Hash) *HashSigil {
return &HashSigil{Hash: h}
// Example: hashSigil := sigil.NewHashSigil(crypto.SHA256)
// Example: digest, _ := hashSigil.In([]byte("payload"))
func NewHashSigil(hashAlgorithm crypto.Hash) *HashSigil {
return &HashSigil{Hash: hashAlgorithm}
}
// In hashes the data.
func (s *HashSigil) In(data []byte) ([]byte, error) {
var h io.Writer
switch s.Hash {
func (sigil *HashSigil) In(data []byte) ([]byte, error) {
var hasher goio.Writer
switch sigil.Hash {
case crypto.MD4:
h = md4.New()
hasher = md4.New()
case crypto.MD5:
h = md5.New()
hasher = md5.New()
case crypto.SHA1:
h = sha1.New()
hasher = sha1.New()
case crypto.SHA224:
h = sha256.New224()
hasher = sha256.New224()
case crypto.SHA256:
h = sha256.New()
hasher = sha256.New()
case crypto.SHA384:
h = sha512.New384()
hasher = sha512.New384()
case crypto.SHA512:
h = sha512.New()
hasher = sha512.New()
case crypto.RIPEMD160:
h = ripemd160.New()
hasher = ripemd160.New()
case crypto.SHA3_224:
h = sha3.New224()
hasher = sha3.New224()
case crypto.SHA3_256:
h = sha3.New256()
hasher = sha3.New256()
case crypto.SHA3_384:
h = sha3.New384()
hasher = sha3.New384()
case crypto.SHA3_512:
h = sha3.New512()
hasher = sha3.New512()
case crypto.SHA512_224:
h = sha512.New512_224()
hasher = sha512.New512_224()
case crypto.SHA512_256:
h = sha512.New512_256()
hasher = sha512.New512_256()
case crypto.BLAKE2s_256:
h, _ = blake2s.New256(nil)
hasher, _ = blake2s.New256(nil)
case crypto.BLAKE2b_256:
h, _ = blake2b.New256(nil)
hasher, _ = blake2b.New256(nil)
case crypto.BLAKE2b_384:
h, _ = blake2b.New384(nil)
hasher, _ = blake2b.New384(nil)
case crypto.BLAKE2b_512:
h, _ = blake2b.New512(nil)
hasher, _ = blake2b.New512(nil)
default:
// MD5SHA1 is not supported as a direct hash
return nil, coreerr.E("sigil.HashSigil.In", "hash algorithm not available", nil)
return nil, core.E("sigil.HashSigil.In", "hash algorithm not available", fs.ErrInvalid)
}
h.Write(data)
return h.(interface{ Sum([]byte) []byte }).Sum(nil), nil
hasher.Write(data)
return hasher.(interface{ Sum([]byte) []byte }).Sum(nil), nil
}
// Out is a no-op for HashSigil.
func (s *HashSigil) Out(data []byte) ([]byte, error) {
func (sigil *HashSigil) Out(data []byte) ([]byte, error) {
return data, nil
}
// NewSigil is a factory function that returns a Sigil based on a string name.
// It is the primary way to create Sigil instances.
func NewSigil(name string) (Sigil, error) {
switch name {
// Example: hexSigil, _ := sigil.NewSigil("hex")
// Example: gzipSigil, _ := sigil.NewSigil("gzip")
// Example: transformed, _ := sigil.Transmute([]byte("payload"), []sigil.Sigil{hexSigil, gzipSigil})
func NewSigil(sigilName string) (Sigil, error) {
switch sigilName {
case "reverse":
return &ReverseSigil{}, nil
case "hex":
@@ -269,6 +265,72 @@ func NewSigil(name string) (Sigil, error) {
case "blake2b-512":
return NewHashSigil(crypto.BLAKE2b_512), nil
default:
return nil, coreerr.E("sigil.NewSigil", "unknown sigil name: "+name, nil)
return nil, core.E("sigil.NewSigil", core.Concat("unknown sigil name: ", sigilName), fs.ErrInvalid)
}
}
func indentJSON(compact string) string {
if compact == "" {
return ""
}
builder := core.NewBuilder()
indent := 0
inString := false
escaped := false
writeIndent := func(level int) {
for i := 0; i < level; i++ {
builder.WriteString(" ")
}
}
for i := 0; i < len(compact); i++ {
ch := compact[i]
if inString {
builder.WriteByte(ch)
if escaped {
escaped = false
continue
}
if ch == '\\' {
escaped = true
continue
}
if ch == '"' {
inString = false
}
continue
}
switch ch {
case '"':
inString = true
builder.WriteByte(ch)
case '{', '[':
builder.WriteByte(ch)
if i+1 < len(compact) && compact[i+1] != '}' && compact[i+1] != ']' {
indent++
builder.WriteByte('\n')
writeIndent(indent)
}
case '}', ']':
if i > 0 && compact[i-1] != '{' && compact[i-1] != '[' {
indent--
builder.WriteByte('\n')
writeIndent(indent)
}
builder.WriteByte(ch)
case ',':
builder.WriteByte(ch)
builder.WriteByte('\n')
writeIndent(indent)
case ':':
builder.WriteString(": ")
default:
builder.WriteByte(ch)
}
}
return builder.String()
}


@@ -1,4 +1,5 @@
// Package sqlite provides a SQLite-backed implementation of the io.Medium interface.
// Example: medium, _ := sqlite.New(sqlite.Options{Path: ":memory:"})
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
package sqlite
import (
@@ -6,161 +7,163 @@ import (
"database/sql"
goio "io"
"io/fs"
"os"
"path"
"strings"
"time"
coreerr "forge.lthn.ai/core/go-log"
core "dappco.re/go/core"
coreio "dappco.re/go/core/io"
_ "modernc.org/sqlite" // Pure Go SQLite driver
_ "modernc.org/sqlite"
)
// Medium is a SQLite-backed storage backend implementing the io.Medium interface.
// Example: medium, _ := sqlite.New(sqlite.Options{Path: ":memory:"})
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
type Medium struct {
db *sql.DB
table string
database *sql.DB
table string
}
// Option configures a Medium.
type Option func(*Medium)
var _ coreio.Medium = (*Medium)(nil)
// WithTable sets the table name (default: "files").
func WithTable(table string) Option {
return func(m *Medium) {
m.table = table
}
// Example: medium, _ := sqlite.New(sqlite.Options{Path: ":memory:", Table: "files"})
type Options struct {
Path string
Table string
}
// New creates a new SQLite Medium at the given database path.
// Use ":memory:" for an in-memory database.
func New(dbPath string, opts ...Option) (*Medium, error) {
if dbPath == "" {
return nil, coreerr.E("sqlite.New", "database path is required", nil)
func normaliseTableName(table string) string {
if table == "" {
return "files"
}
return table
}
// Example: medium, _ := sqlite.New(sqlite.Options{Path: ":memory:", Table: "files"})
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
func New(options Options) (*Medium, error) {
if options.Path == "" {
return nil, core.E("sqlite.New", "database path is required", fs.ErrInvalid)
}
m := &Medium{table: "files"}
for _, opt := range opts {
opt(m)
}
medium := &Medium{table: normaliseTableName(options.Table)}
db, err := sql.Open("sqlite", dbPath)
database, err := sql.Open("sqlite", options.Path)
if err != nil {
return nil, coreerr.E("sqlite.New", "failed to open database", err)
return nil, core.E("sqlite.New", "failed to open database", err)
}
// Enable WAL mode for better concurrency
if _, err := db.Exec("PRAGMA journal_mode=WAL"); err != nil {
db.Close()
return nil, coreerr.E("sqlite.New", "failed to set WAL mode", err)
if _, err := database.Exec("PRAGMA journal_mode=WAL"); err != nil {
database.Close()
return nil, core.E("sqlite.New", "failed to set WAL mode", err)
}
// Create the schema
createSQL := `CREATE TABLE IF NOT EXISTS ` + m.table + ` (
createSQL := `CREATE TABLE IF NOT EXISTS ` + medium.table + ` (
path TEXT PRIMARY KEY,
content BLOB NOT NULL,
mode INTEGER DEFAULT 420,
is_dir BOOLEAN DEFAULT FALSE,
mtime DATETIME DEFAULT CURRENT_TIMESTAMP
)`
if _, err := db.Exec(createSQL); err != nil {
db.Close()
return nil, coreerr.E("sqlite.New", "failed to create table", err)
if _, err := database.Exec(createSQL); err != nil {
database.Close()
return nil, core.E("sqlite.New", "failed to create table", err)
}
m.db = db
return m, nil
medium.database = database
return medium, nil
}
// Close closes the underlying database connection.
func (m *Medium) Close() error {
if m.db != nil {
return m.db.Close()
// Example: _ = medium.Close()
func (medium *Medium) Close() error {
if medium.database != nil {
return medium.database.Close()
}
return nil
}
// cleanPath normalises a path for consistent storage.
// Uses a leading "/" before Clean to sandbox traversal attempts.
func cleanPath(p string) string {
clean := path.Clean("/" + p)
func normaliseEntryPath(filePath string) string {
clean := path.Clean("/" + filePath)
if clean == "/" {
return ""
}
return strings.TrimPrefix(clean, "/")
return core.TrimPrefix(clean, "/")
}
// Read retrieves the content of a file as a string.
func (m *Medium) Read(p string) (string, error) {
key := cleanPath(p)
// Example: content, _ := medium.Read("config/app.yaml")
func (medium *Medium) Read(filePath string) (string, error) {
key := normaliseEntryPath(filePath)
if key == "" {
return "", coreerr.E("sqlite.Read", "path is required", os.ErrInvalid)
return "", core.E("sqlite.Read", "path is required", fs.ErrInvalid)
}
var content []byte
var isDir bool
err := m.db.QueryRow(
`SELECT content, is_dir FROM `+m.table+` WHERE path = ?`, key,
err := medium.database.QueryRow(
`SELECT content, is_dir FROM `+medium.table+` WHERE path = ?`, key,
).Scan(&content, &isDir)
if err == sql.ErrNoRows {
return "", coreerr.E("sqlite.Read", "file not found: "+key, os.ErrNotExist)
return "", core.E("sqlite.Read", core.Concat("file not found: ", key), fs.ErrNotExist)
}
if err != nil {
return "", coreerr.E("sqlite.Read", "query failed: "+key, err)
return "", core.E("sqlite.Read", core.Concat("query failed: ", key), err)
}
if isDir {
return "", coreerr.E("sqlite.Read", "path is a directory: "+key, os.ErrInvalid)
return "", core.E("sqlite.Read", core.Concat("path is a directory: ", key), fs.ErrInvalid)
}
return string(content), nil
}
// Write saves the given content to a file, overwriting it if it exists.
func (m *Medium) Write(p, content string) error {
key := cleanPath(p)
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
func (medium *Medium) Write(filePath, content string) error {
return medium.WriteMode(filePath, content, 0644)
}
// Example: _ = medium.WriteMode("keys/private.key", key, 0600)
func (medium *Medium) WriteMode(filePath, content string, mode fs.FileMode) error {
key := normaliseEntryPath(filePath)
if key == "" {
return coreerr.E("sqlite.Write", "path is required", os.ErrInvalid)
return core.E("sqlite.WriteMode", "path is required", fs.ErrInvalid)
}
_, err := m.db.Exec(
`INSERT INTO `+m.table+` (path, content, mode, is_dir, mtime) VALUES (?, ?, 420, FALSE, ?)
ON CONFLICT(path) DO UPDATE SET content = excluded.content, is_dir = FALSE, mtime = excluded.mtime`,
key, []byte(content), time.Now().UTC(),
_, err := medium.database.Exec(
`INSERT INTO `+medium.table+` (path, content, mode, is_dir, mtime) VALUES (?, ?, ?, FALSE, ?)
ON CONFLICT(path) DO UPDATE SET content = excluded.content, mode = excluded.mode, is_dir = FALSE, mtime = excluded.mtime`,
key, []byte(content), int(mode), time.Now().UTC(),
)
if err != nil {
return coreerr.E("sqlite.Write", "insert failed: "+key, err)
return core.E("sqlite.WriteMode", core.Concat("insert failed: ", key), err)
}
return nil
}
// EnsureDir makes sure a directory exists, creating it if necessary.
func (m *Medium) EnsureDir(p string) error {
key := cleanPath(p)
// Example: _ = medium.EnsureDir("config")
func (medium *Medium) EnsureDir(filePath string) error {
key := normaliseEntryPath(filePath)
if key == "" {
// Root always "exists"
return nil
}
_, err := m.db.Exec(
`INSERT INTO `+m.table+` (path, content, mode, is_dir, mtime) VALUES (?, '', 493, TRUE, ?)
_, err := medium.database.Exec(
`INSERT INTO `+medium.table+` (path, content, mode, is_dir, mtime) VALUES (?, '', 493, TRUE, ?)
ON CONFLICT(path) DO NOTHING`,
key, time.Now().UTC(),
)
if err != nil {
return coreerr.E("sqlite.EnsureDir", "insert failed: "+key, err)
return core.E("sqlite.EnsureDir", core.Concat("insert failed: ", key), err)
}
return nil
}
// IsFile checks if a path exists and is a regular file.
func (m *Medium) IsFile(p string) bool {
key := cleanPath(p)
// Example: isFile := medium.IsFile("config/app.yaml")
func (medium *Medium) IsFile(filePath string) bool {
key := normaliseEntryPath(filePath)
if key == "" {
return false
}
var isDir bool
err := m.db.QueryRow(
`SELECT is_dir FROM `+m.table+` WHERE path = ?`, key,
err := medium.database.QueryRow(
`SELECT is_dir FROM `+medium.table+` WHERE path = ?`, key,
).Scan(&isDir)
if err != nil {
return false
@@ -168,141 +171,124 @@ func (m *Medium) IsFile(p string) bool {
return !isDir
}
// FileGet is a convenience function that reads a file from the medium.
func (m *Medium) FileGet(p string) (string, error) {
return m.Read(p)
}
// FileSet is a convenience function that writes a file to the medium.
func (m *Medium) FileSet(p, content string) error {
return m.Write(p, content)
}
// Delete removes a file or empty directory.
func (m *Medium) Delete(p string) error {
key := cleanPath(p)
// Example: _ = medium.Delete("config/app.yaml")
func (medium *Medium) Delete(filePath string) error {
key := normaliseEntryPath(filePath)
if key == "" {
return coreerr.E("sqlite.Delete", "path is required", os.ErrInvalid)
return core.E("sqlite.Delete", "path is required", fs.ErrInvalid)
}
// Check if it's a directory with children
var isDir bool
err := m.db.QueryRow(
`SELECT is_dir FROM `+m.table+` WHERE path = ?`, key,
err := medium.database.QueryRow(
`SELECT is_dir FROM `+medium.table+` WHERE path = ?`, key,
).Scan(&isDir)
if err == sql.ErrNoRows {
return coreerr.E("sqlite.Delete", "path not found: "+key, os.ErrNotExist)
return core.E("sqlite.Delete", core.Concat("path not found: ", key), fs.ErrNotExist)
}
if err != nil {
return coreerr.E("sqlite.Delete", "query failed: "+key, err)
return core.E("sqlite.Delete", core.Concat("query failed: ", key), err)
}
if isDir {
// Check for children
prefix := key + "/"
var count int
err := m.db.QueryRow(
`SELECT COUNT(*) FROM `+m.table+` WHERE path LIKE ? AND path != ?`, prefix+"%", key,
err := medium.database.QueryRow(
`SELECT COUNT(*) FROM `+medium.table+` WHERE path LIKE ? AND path != ?`, prefix+"%", key,
).Scan(&count)
if err != nil {
return coreerr.E("sqlite.Delete", "count failed: "+key, err)
return core.E("sqlite.Delete", core.Concat("count failed: ", key), err)
}
if count > 0 {
return coreerr.E("sqlite.Delete", "directory not empty: "+key, os.ErrExist)
return core.E("sqlite.Delete", core.Concat("directory not empty: ", key), fs.ErrExist)
}
}
res, err := m.db.Exec(`DELETE FROM `+m.table+` WHERE path = ?`, key)
execResult, err := medium.database.Exec(`DELETE FROM `+medium.table+` WHERE path = ?`, key)
if err != nil {
return coreerr.E("sqlite.Delete", "delete failed: "+key, err)
return core.E("sqlite.Delete", core.Concat("delete failed: ", key), err)
}
n, _ := res.RowsAffected()
if n == 0 {
return coreerr.E("sqlite.Delete", "path not found: "+key, os.ErrNotExist)
rowsAffected, _ := execResult.RowsAffected()
if rowsAffected == 0 {
return core.E("sqlite.Delete", core.Concat("path not found: ", key), fs.ErrNotExist)
}
return nil
}
// DeleteAll removes a file or directory and all its contents recursively.
func (m *Medium) DeleteAll(p string) error {
key := cleanPath(p)
// Example: _ = medium.DeleteAll("config")
func (medium *Medium) DeleteAll(filePath string) error {
key := normaliseEntryPath(filePath)
if key == "" {
return coreerr.E("sqlite.DeleteAll", "path is required", os.ErrInvalid)
return core.E("sqlite.DeleteAll", "path is required", fs.ErrInvalid)
}
prefix := key + "/"
// Delete the exact path and all children
res, err := m.db.Exec(
`DELETE FROM `+m.table+` WHERE path = ? OR path LIKE ?`,
execResult, err := medium.database.Exec(
`DELETE FROM `+medium.table+` WHERE path = ? OR path LIKE ?`,
key, prefix+"%",
)
if err != nil {
return coreerr.E("sqlite.DeleteAll", "delete failed: "+key, err)
return core.E("sqlite.DeleteAll", core.Concat("delete failed: ", key), err)
}
n, _ := res.RowsAffected()
if n == 0 {
return coreerr.E("sqlite.DeleteAll", "path not found: "+key, os.ErrNotExist)
rowsAffected, _ := execResult.RowsAffected()
if rowsAffected == 0 {
return core.E("sqlite.DeleteAll", core.Concat("path not found: ", key), fs.ErrNotExist)
}
return nil
}
// Rename moves a file or directory from oldPath to newPath.
func (m *Medium) Rename(oldPath, newPath string) error {
oldKey := cleanPath(oldPath)
newKey := cleanPath(newPath)
// Example: _ = medium.Rename("drafts/todo.txt", "archive/todo.txt")
func (medium *Medium) Rename(oldPath, newPath string) error {
oldKey := normaliseEntryPath(oldPath)
newKey := normaliseEntryPath(newPath)
if oldKey == "" || newKey == "" {
return coreerr.E("sqlite.Rename", "both old and new paths are required", os.ErrInvalid)
return core.E("sqlite.Rename", "both old and new paths are required", fs.ErrInvalid)
}
tx, err := m.db.Begin()
tx, err := medium.database.Begin()
if err != nil {
return coreerr.E("sqlite.Rename", "begin tx failed", err)
return core.E("sqlite.Rename", "begin tx failed", err)
}
defer tx.Rollback()
// Check if source exists
var content []byte
var mode int
var isDir bool
var mtime time.Time
err = tx.QueryRow(
`SELECT content, mode, is_dir, mtime FROM `+m.table+` WHERE path = ?`, oldKey,
`SELECT content, mode, is_dir, mtime FROM `+medium.table+` WHERE path = ?`, oldKey,
).Scan(&content, &mode, &isDir, &mtime)
if err == sql.ErrNoRows {
return coreerr.E("sqlite.Rename", "source not found: "+oldKey, os.ErrNotExist)
return core.E("sqlite.Rename", core.Concat("source not found: ", oldKey), fs.ErrNotExist)
}
if err != nil {
return coreerr.E("sqlite.Rename", "query failed: "+oldKey, err)
return core.E("sqlite.Rename", core.Concat("query failed: ", oldKey), err)
}
// Insert or replace at new path
_, err = tx.Exec(
`INSERT INTO `+m.table+` (path, content, mode, is_dir, mtime) VALUES (?, ?, ?, ?, ?)
`INSERT INTO `+medium.table+` (path, content, mode, is_dir, mtime) VALUES (?, ?, ?, ?, ?)
ON CONFLICT(path) DO UPDATE SET content = excluded.content, mode = excluded.mode, is_dir = excluded.is_dir, mtime = excluded.mtime`,
newKey, content, mode, isDir, mtime,
)
if err != nil {
return coreerr.E("sqlite.Rename", "insert at new path failed: "+newKey, err)
return core.E("sqlite.Rename", core.Concat("insert at new path failed: ", newKey), err)
}
// Delete old path
_, err = tx.Exec(`DELETE FROM `+m.table+` WHERE path = ?`, oldKey)
_, err = tx.Exec(`DELETE FROM `+medium.table+` WHERE path = ?`, oldKey)
if err != nil {
return coreerr.E("sqlite.Rename", "delete old path failed: "+oldKey, err)
return core.E("sqlite.Rename", core.Concat("delete old path failed: ", oldKey), err)
}
// If it's a directory, move all children
if isDir {
oldPrefix := oldKey + "/"
newPrefix := newKey + "/"
rows, err := tx.Query(
`SELECT path, content, mode, is_dir, mtime FROM `+m.table+` WHERE path LIKE ?`,
childRows, err := tx.Query(
`SELECT path, content, mode, is_dir, mtime FROM `+medium.table+` WHERE path LIKE ?`,
oldPrefix+"%",
)
if err != nil {
return coreerr.E("sqlite.Rename", "query children failed", err)
return core.E("sqlite.Rename", "query children failed", err)
}
type child struct {
@@ -313,52 +299,50 @@ func (m *Medium) Rename(oldPath, newPath string) error {
mtime time.Time
}
var children []child
for rows.Next() {
var c child
if err := rows.Scan(&c.path, &c.content, &c.mode, &c.isDir, &c.mtime); err != nil {
rows.Close()
return coreerr.E("sqlite.Rename", "scan child failed", err)
for childRows.Next() {
var childEntry child
if err := childRows.Scan(&childEntry.path, &childEntry.content, &childEntry.mode, &childEntry.isDir, &childEntry.mtime); err != nil {
childRows.Close()
return core.E("sqlite.Rename", "scan child failed", err)
}
children = append(children, c)
children = append(children, childEntry)
}
rows.Close()
childRows.Close()
for _, c := range children {
newChildPath := newPrefix + strings.TrimPrefix(c.path, oldPrefix)
for _, childEntry := range children {
newChildPath := core.Concat(newPrefix, core.TrimPrefix(childEntry.path, oldPrefix))
_, err = tx.Exec(
`INSERT INTO `+m.table+` (path, content, mode, is_dir, mtime) VALUES (?, ?, ?, ?, ?)
`INSERT INTO `+medium.table+` (path, content, mode, is_dir, mtime) VALUES (?, ?, ?, ?, ?)
ON CONFLICT(path) DO UPDATE SET content = excluded.content, mode = excluded.mode, is_dir = excluded.is_dir, mtime = excluded.mtime`,
newChildPath, c.content, c.mode, c.isDir, c.mtime,
newChildPath, childEntry.content, childEntry.mode, childEntry.isDir, childEntry.mtime,
)
if err != nil {
return coreerr.E("sqlite.Rename", "insert child failed", err)
return core.E("sqlite.Rename", "insert child failed", err)
}
}
// Delete old children
_, err = tx.Exec(`DELETE FROM `+m.table+` WHERE path LIKE ?`, oldPrefix+"%")
_, err = tx.Exec(`DELETE FROM `+medium.table+` WHERE path LIKE ?`, oldPrefix+"%")
if err != nil {
return coreerr.E("sqlite.Rename", "delete old children failed", err)
return core.E("sqlite.Rename", "delete old children failed", err)
}
}
return tx.Commit()
}
// List returns the directory entries for the given path.
func (m *Medium) List(p string) ([]fs.DirEntry, error) {
prefix := cleanPath(p)
// Example: entries, _ := medium.List("config")
func (medium *Medium) List(filePath string) ([]fs.DirEntry, error) {
prefix := normaliseEntryPath(filePath)
if prefix != "" {
prefix += "/"
}
// Query all paths under the prefix
rows, err := m.db.Query(
`SELECT path, content, mode, is_dir, mtime FROM `+m.table+` WHERE path LIKE ? OR path LIKE ?`,
rows, err := medium.database.Query(
`SELECT path, content, mode, is_dir, mtime FROM `+medium.table+` WHERE path LIKE ? OR path LIKE ?`,
prefix+"%", prefix+"%",
)
if err != nil {
return nil, coreerr.E("sqlite.List", "query failed", err)
return nil, core.E("sqlite.List", "query failed", err)
}
defer rows.Close()
@@ -372,18 +356,17 @@ func (m *Medium) List(p string) ([]fs.DirEntry, error) {
var isDir bool
var mtime time.Time
if err := rows.Scan(&rowPath, &content, &mode, &isDir, &mtime); err != nil {
return nil, coreerr.E("sqlite.List", "scan failed", err)
return nil, core.E("sqlite.List", "scan failed", err)
}
rest := strings.TrimPrefix(rowPath, prefix)
rest := core.TrimPrefix(rowPath, prefix)
if rest == "" {
continue
}
// Check if this is a direct child or nested
if idx := strings.Index(rest, "/"); idx >= 0 {
// Nested - register as a directory
dirName := rest[:idx]
parts := core.SplitN(rest, "/", 2)
if len(parts) == 2 {
dirName := parts[0]
if !seen[dirName] {
seen[dirName] = true
entries = append(entries, &dirEntry{
@@ -398,7 +381,6 @@ func (m *Medium) List(p string) ([]fs.DirEntry, error) {
})
}
} else {
// Direct child
if !seen[rest] {
seen[rest] = true
entries = append(entries, &dirEntry{
@@ -417,28 +399,31 @@ func (m *Medium) List(p string) ([]fs.DirEntry, error) {
}
}
return entries, rows.Err()
if err := rows.Err(); err != nil {
return nil, core.E("sqlite.List", "rows", err)
}
return entries, nil
}
// Stat returns file information for the given path.
func (m *Medium) Stat(p string) (fs.FileInfo, error) {
key := cleanPath(p)
// Example: info, _ := medium.Stat("config/app.yaml")
func (medium *Medium) Stat(filePath string) (fs.FileInfo, error) {
key := normaliseEntryPath(filePath)
if key == "" {
return nil, coreerr.E("sqlite.Stat", "path is required", os.ErrInvalid)
return nil, core.E("sqlite.Stat", "path is required", fs.ErrInvalid)
}
var content []byte
var mode int
var isDir bool
var mtime time.Time
err := medium.database.QueryRow(
`SELECT content, mode, is_dir, mtime FROM `+medium.table+` WHERE path = ?`, key,
).Scan(&content, &mode, &isDir, &mtime)
if err == sql.ErrNoRows {
return nil, core.E("sqlite.Stat", core.Concat("path not found: ", key), fs.ErrNotExist)
}
if err != nil {
return nil, core.E("sqlite.Stat", core.Concat("query failed: ", key), err)
}
name := path.Base(key)
@@ -451,28 +436,28 @@ func (m *Medium) Stat(p string) (fs.FileInfo, error) {
}, nil
}
// Open opens the named file for reading.
// Example: file, _ := medium.Open("config/app.yaml")
func (medium *Medium) Open(filePath string) (fs.File, error) {
key := normaliseEntryPath(filePath)
if key == "" {
return nil, core.E("sqlite.Open", "path is required", fs.ErrInvalid)
}
var content []byte
var mode int
var isDir bool
var mtime time.Time
err := medium.database.QueryRow(
`SELECT content, mode, is_dir, mtime FROM `+medium.table+` WHERE path = ?`, key,
).Scan(&content, &mode, &isDir, &mtime)
if err == sql.ErrNoRows {
return nil, core.E("sqlite.Open", core.Concat("file not found: ", key), fs.ErrNotExist)
}
if err != nil {
return nil, core.E("sqlite.Open", core.Concat("query failed: ", key), err)
}
if isDir {
return nil, core.E("sqlite.Open", core.Concat("path is a directory: ", key), fs.ErrInvalid)
}
return &sqliteFile{
@@ -483,81 +468,80 @@ func (m *Medium) Open(p string) (fs.File, error) {
}, nil
}
// Create creates or truncates the named file.
// Example: writer, _ := medium.Create("logs/app.log")
func (medium *Medium) Create(filePath string) (goio.WriteCloser, error) {
key := normaliseEntryPath(filePath)
if key == "" {
return nil, core.E("sqlite.Create", "path is required", fs.ErrInvalid)
}
return &sqliteWriteCloser{
medium: medium,
path: key,
}, nil
}
// Append opens the named file for appending, creating it if it doesn't exist.
// Example: writer, _ := medium.Append("logs/app.log")
func (medium *Medium) Append(filePath string) (goio.WriteCloser, error) {
key := normaliseEntryPath(filePath)
if key == "" {
return nil, core.E("sqlite.Append", "path is required", fs.ErrInvalid)
}
var existing []byte
err := medium.database.QueryRow(
`SELECT content FROM `+medium.table+` WHERE path = ? AND is_dir = FALSE`, key,
).Scan(&existing)
if err != nil && err != sql.ErrNoRows {
return nil, core.E("sqlite.Append", core.Concat("query failed: ", key), err)
}
return &sqliteWriteCloser{
medium: medium,
path: key,
data: existing,
}, nil
}
// ReadStream returns a reader for the file content.
// Example: reader, _ := medium.ReadStream("logs/app.log")
func (medium *Medium) ReadStream(filePath string) (goio.ReadCloser, error) {
key := normaliseEntryPath(filePath)
if key == "" {
return nil, core.E("sqlite.ReadStream", "path is required", fs.ErrInvalid)
}
var content []byte
var isDir bool
err := medium.database.QueryRow(
`SELECT content, is_dir FROM `+medium.table+` WHERE path = ?`, key,
).Scan(&content, &isDir)
if err == sql.ErrNoRows {
return nil, core.E("sqlite.ReadStream", core.Concat("file not found: ", key), fs.ErrNotExist)
}
if err != nil {
return nil, core.E("sqlite.ReadStream", core.Concat("query failed: ", key), err)
}
if isDir {
return nil, core.E("sqlite.ReadStream", core.Concat("path is a directory: ", key), fs.ErrInvalid)
}
return goio.NopCloser(bytes.NewReader(content)), nil
}
// WriteStream returns a writer for the file content. Content is stored on Close.
// Example: writer, _ := medium.WriteStream("logs/app.log")
func (medium *Medium) WriteStream(filePath string) (goio.WriteCloser, error) {
return medium.Create(filePath)
}
// Exists checks if a path exists (file or directory).
// Example: exists := medium.Exists("config/app.yaml")
func (medium *Medium) Exists(filePath string) bool {
key := normaliseEntryPath(filePath)
if key == "" {
// Root always exists
return true
}
var count int
err := medium.database.QueryRow(
`SELECT COUNT(*) FROM `+medium.table+` WHERE path = ?`, key,
).Scan(&count)
if err != nil {
return false
@@ -565,16 +549,16 @@ func (m *Medium) Exists(p string) bool {
return count > 0
}
// IsDir checks if a path exists and is a directory.
// Example: isDirectory := medium.IsDir("config")
func (medium *Medium) IsDir(filePath string) bool {
key := normaliseEntryPath(filePath)
if key == "" {
return false
}
var isDir bool
err := medium.database.QueryRow(
`SELECT is_dir FROM `+medium.table+` WHERE path = ?`, key,
).Scan(&isDir)
if err != nil {
return false
@@ -582,9 +566,6 @@ func (m *Medium) IsDir(p string) bool {
return isDir
}
// --- Internal types ---
// fileInfo implements fs.FileInfo for SQLite entries.
type fileInfo struct {
name string
size int64
@@ -593,14 +574,18 @@ type fileInfo struct {
isDir bool
}
func (info *fileInfo) Name() string { return info.name }
func (info *fileInfo) Size() int64 { return info.size }
func (info *fileInfo) Mode() fs.FileMode { return info.mode }
func (info *fileInfo) ModTime() time.Time { return info.modTime }
func (info *fileInfo) IsDir() bool { return info.isDir }
func (info *fileInfo) Sys() any { return nil }
// dirEntry implements fs.DirEntry for SQLite listings.
type dirEntry struct {
name string
isDir bool
@@ -608,12 +593,14 @@ type dirEntry struct {
info fs.FileInfo
}
func (entry *dirEntry) Name() string { return entry.name }
func (entry *dirEntry) IsDir() bool { return entry.isDir }
func (entry *dirEntry) Type() fs.FileMode { return entry.mode.Type() }
func (entry *dirEntry) Info() (fs.FileInfo, error) { return entry.info, nil }
// sqliteFile implements fs.File for SQLite entries.
type sqliteFile struct {
name string
content []byte
@@ -622,48 +609,47 @@ type sqliteFile struct {
modTime time.Time
}
func (file *sqliteFile) Stat() (fs.FileInfo, error) {
return &fileInfo{
name: file.name,
size: int64(len(file.content)),
mode: file.mode,
modTime: file.modTime,
}, nil
}
func (file *sqliteFile) Read(buffer []byte) (int, error) {
if file.offset >= int64(len(file.content)) {
return 0, goio.EOF
}
bytesRead := copy(buffer, file.content[file.offset:])
file.offset += int64(bytesRead)
return bytesRead, nil
}
func (file *sqliteFile) Close() error {
return nil
}
// sqliteWriteCloser buffers writes and stores to SQLite on Close.
type sqliteWriteCloser struct {
medium *Medium
path string
data []byte
}
func (writer *sqliteWriteCloser) Write(data []byte) (int, error) {
writer.data = append(writer.data, data...)
return len(data), nil
}
func (writer *sqliteWriteCloser) Close() error {
_, err := writer.medium.database.Exec(
`INSERT INTO `+writer.medium.table+` (path, content, mode, is_dir, mtime) VALUES (?, ?, 420, FALSE, ?)
ON CONFLICT(path) DO UPDATE SET content = excluded.content, is_dir = FALSE, mtime = excluded.mtime`,
writer.path, writer.data, time.Now().UTC(),
)
if err != nil {
return core.E("sqlite.WriteCloser.Close", core.Concat("store failed: ", writer.path), err)
}
return nil
}


@@ -3,317 +3,287 @@ package sqlite
import (
goio "io"
"io/fs"
"testing"
core "dappco.re/go/core"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func newSqliteMedium(t *testing.T) *Medium {
t.Helper()
sqliteMedium, err := New(Options{Path: ":memory:"})
require.NoError(t, err)
t.Cleanup(func() { sqliteMedium.Close() })
return sqliteMedium
}
// --- Constructor Tests ---
func TestSqlite_New_Good(t *testing.T) {
sqliteMedium, err := New(Options{Path: ":memory:"})
require.NoError(t, err)
defer sqliteMedium.Close()
assert.Equal(t, "files", sqliteMedium.table)
}
func TestSqlite_New_Options_Good(t *testing.T) {
sqliteMedium, err := New(Options{Path: ":memory:", Table: "custom"})
require.NoError(t, err)
defer sqliteMedium.Close()
assert.Equal(t, "custom", sqliteMedium.table)
}
func TestSqlite_New_EmptyPath_Bad(t *testing.T) {
_, err := New(Options{})
assert.Error(t, err)
assert.Contains(t, err.Error(), "database path is required")
}
// --- Read/Write Tests ---
func TestSqlite_ReadWrite_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
err := sqliteMedium.Write("hello.txt", "world")
require.NoError(t, err)
content, err := sqliteMedium.Read("hello.txt")
require.NoError(t, err)
assert.Equal(t, "world", content)
}
func TestSqlite_ReadWrite_Overwrite_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.Write("file.txt", "first"))
require.NoError(t, sqliteMedium.Write("file.txt", "second"))
content, err := sqliteMedium.Read("file.txt")
require.NoError(t, err)
assert.Equal(t, "second", content)
}
func TestSqlite_ReadWrite_NestedPath_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
err := sqliteMedium.Write("a/b/c.txt", "nested")
require.NoError(t, err)
content, err := sqliteMedium.Read("a/b/c.txt")
require.NoError(t, err)
assert.Equal(t, "nested", content)
}
func TestSqlite_Read_NotFound_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
_, err := sqliteMedium.Read("nonexistent.txt")
assert.Error(t, err)
}
func TestSqlite_Read_EmptyPath_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
_, err := sqliteMedium.Read("")
assert.Error(t, err)
}
func TestSqlite_Write_EmptyPath_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
err := sqliteMedium.Write("", "content")
assert.Error(t, err)
}
func TestSqlite_Read_IsDirectory_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.EnsureDir("mydir"))
_, err := sqliteMedium.Read("mydir")
assert.Error(t, err)
}
// --- EnsureDir Tests ---
func TestSqlite_EnsureDir_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
err := sqliteMedium.EnsureDir("mydir")
require.NoError(t, err)
assert.True(t, sqliteMedium.IsDir("mydir"))
}
func TestSqlite_EnsureDir_EmptyPath_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
err := sqliteMedium.EnsureDir("")
assert.NoError(t, err)
}
func TestSqlite_EnsureDir_Idempotent_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.EnsureDir("mydir"))
require.NoError(t, sqliteMedium.EnsureDir("mydir"))
assert.True(t, sqliteMedium.IsDir("mydir"))
}
// --- IsFile Tests ---
func TestSqlite_IsFile_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.Write("file.txt", "content"))
require.NoError(t, sqliteMedium.EnsureDir("mydir"))
assert.True(t, sqliteMedium.IsFile("file.txt"))
assert.False(t, sqliteMedium.IsFile("mydir"))
assert.False(t, sqliteMedium.IsFile("nonexistent"))
assert.False(t, sqliteMedium.IsFile(""))
}
// --- Delete Tests ---
func TestSqlite_Delete_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.Write("to-delete.txt", "content"))
assert.True(t, sqliteMedium.Exists("to-delete.txt"))
err := sqliteMedium.Delete("to-delete.txt")
require.NoError(t, err)
assert.False(t, sqliteMedium.Exists("to-delete.txt"))
}
func TestSqlite_Delete_EmptyDir_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.EnsureDir("emptydir"))
assert.True(t, sqliteMedium.IsDir("emptydir"))
err := sqliteMedium.Delete("emptydir")
require.NoError(t, err)
assert.False(t, sqliteMedium.IsDir("emptydir"))
}
func TestSqlite_Delete_NotFound_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
err := sqliteMedium.Delete("nonexistent")
assert.Error(t, err)
}
func TestSqlite_Delete_EmptyPath_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
err := sqliteMedium.Delete("")
assert.Error(t, err)
}
func TestSqlite_Delete_NotEmpty_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.EnsureDir("mydir"))
require.NoError(t, sqliteMedium.Write("mydir/file.txt", "content"))
err := sqliteMedium.Delete("mydir")
assert.Error(t, err)
}
// --- DeleteAll Tests ---
func TestSqlite_DeleteAll_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.Write("dir/file1.txt", "a"))
require.NoError(t, sqliteMedium.Write("dir/sub/file2.txt", "b"))
require.NoError(t, sqliteMedium.Write("other.txt", "c"))
err := sqliteMedium.DeleteAll("dir")
require.NoError(t, err)
assert.False(t, sqliteMedium.Exists("dir/file1.txt"))
assert.False(t, sqliteMedium.Exists("dir/sub/file2.txt"))
assert.True(t, sqliteMedium.Exists("other.txt"))
}
func TestSqlite_DeleteAll_SingleFile_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.Write("file.txt", "content"))
err := sqliteMedium.DeleteAll("file.txt")
require.NoError(t, err)
assert.False(t, sqliteMedium.Exists("file.txt"))
}
func TestSqlite_DeleteAll_NotFound_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
err := sqliteMedium.DeleteAll("nonexistent")
assert.Error(t, err)
}
func TestSqlite_DeleteAll_EmptyPath_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
err := sqliteMedium.DeleteAll("")
assert.Error(t, err)
}
// --- Rename Tests ---
func TestSqlite_Rename_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.Write("old.txt", "content"))
err := sqliteMedium.Rename("old.txt", "new.txt")
require.NoError(t, err)
assert.False(t, sqliteMedium.Exists("old.txt"))
assert.True(t, sqliteMedium.IsFile("new.txt"))
content, err := sqliteMedium.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "content", content)
}
func TestSqlite_Rename_Directory_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.EnsureDir("olddir"))
require.NoError(t, sqliteMedium.Write("olddir/file.txt", "content"))
err := sqliteMedium.Rename("olddir", "newdir")
require.NoError(t, err)
assert.False(t, sqliteMedium.Exists("olddir"))
assert.False(t, sqliteMedium.Exists("olddir/file.txt"))
assert.True(t, sqliteMedium.IsDir("newdir"))
assert.True(t, sqliteMedium.IsFile("newdir/file.txt"))
content, err := sqliteMedium.Read("newdir/file.txt")
require.NoError(t, err)
assert.Equal(t, "content", content)
}
func TestSqlite_Rename_SourceNotFound_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
err := sqliteMedium.Rename("nonexistent", "new")
assert.Error(t, err)
}
func TestSqlite_Rename_EmptyPath_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
err := sqliteMedium.Rename("", "new")
assert.Error(t, err)
err = sqliteMedium.Rename("old", "")
assert.Error(t, err)
}
// --- List Tests ---
func TestSqlite_List_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.Write("dir/file1.txt", "a"))
require.NoError(t, sqliteMedium.Write("dir/file2.txt", "b"))
require.NoError(t, sqliteMedium.Write("dir/sub/file3.txt", "c"))
entries, err := sqliteMedium.List("dir")
require.NoError(t, err)
names := make(map[string]bool)
for _, entry := range entries {
names[entry.Name()] = true
}
assert.True(t, names["file1.txt"])
@@ -322,30 +292,30 @@ func TestList_Good(t *testing.T) {
assert.Len(t, entries, 3)
}
func TestSqlite_List_Root_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.Write("root.txt", "content"))
require.NoError(t, sqliteMedium.Write("dir/nested.txt", "nested"))
entries, err := sqliteMedium.List("")
require.NoError(t, err)
names := make(map[string]bool)
for _, entry := range entries {
names[entry.Name()] = true
}
assert.True(t, names["root.txt"])
assert.True(t, names["dir"])
}
func TestSqlite_List_DirectoryEntry_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.Write("dir/sub/file.txt", "content"))
entries, err := sqliteMedium.List("dir")
require.NoError(t, err)
require.Len(t, entries, 1)
@@ -357,172 +327,162 @@ func TestList_Good_DirectoryEntry(t *testing.T) {
assert.True(t, info.IsDir())
}
// --- Stat Tests ---
func TestSqlite_Stat_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.Write("file.txt", "hello world"))
info, err := sqliteMedium.Stat("file.txt")
require.NoError(t, err)
assert.Equal(t, "file.txt", info.Name())
assert.Equal(t, int64(11), info.Size())
assert.False(t, info.IsDir())
}
func TestSqlite_Stat_Directory_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.EnsureDir("mydir"))
info, err := sqliteMedium.Stat("mydir")
require.NoError(t, err)
assert.Equal(t, "mydir", info.Name())
assert.True(t, info.IsDir())
}
func TestSqlite_Stat_NotFound_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
_, err := sqliteMedium.Stat("nonexistent")
assert.Error(t, err)
}
func TestSqlite_Stat_EmptyPath_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
_, err := sqliteMedium.Stat("")
assert.Error(t, err)
}
// --- Open Tests ---
func TestSqlite_Open_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.Write("file.txt", "open me"))
file, err := sqliteMedium.Open("file.txt")
require.NoError(t, err)
defer file.Close()
data, err := goio.ReadAll(file.(goio.Reader))
require.NoError(t, err)
assert.Equal(t, "open me", string(data))
stat, err := file.Stat()
require.NoError(t, err)
assert.Equal(t, "file.txt", stat.Name())
}
func TestSqlite_Open_NotFound_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
_, err := sqliteMedium.Open("nonexistent.txt")
assert.Error(t, err)
}
func TestSqlite_Open_IsDirectory_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.EnsureDir("mydir"))
_, err := sqliteMedium.Open("mydir")
assert.Error(t, err)
}
// --- Create Tests ---
func TestSqlite_Create_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
writer, err := sqliteMedium.Create("new.txt")
require.NoError(t, err)
bytesWritten, err := writer.Write([]byte("created"))
require.NoError(t, err)
assert.Equal(t, 7, bytesWritten)
err = writer.Close()
require.NoError(t, err)
content, err := sqliteMedium.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "created", content)
}
func TestSqlite_Create_Overwrite_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.Write("file.txt", "old content"))
writer, err := sqliteMedium.Create("file.txt")
require.NoError(t, err)
_, err = writer.Write([]byte("new"))
require.NoError(t, err)
require.NoError(t, writer.Close())
content, err := sqliteMedium.Read("file.txt")
require.NoError(t, err)
assert.Equal(t, "new", content)
}
func TestSqlite_Create_EmptyPath_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
_, err := sqliteMedium.Create("")
assert.Error(t, err)
}
// --- Append Tests ---
func TestSqlite_Append_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.Write("append.txt", "hello"))
writer, err := sqliteMedium.Append("append.txt")
require.NoError(t, err)
_, err = writer.Write([]byte(" world"))
require.NoError(t, err)
require.NoError(t, writer.Close())
content, err := sqliteMedium.Read("append.txt")
require.NoError(t, err)
assert.Equal(t, "hello world", content)
}
func TestSqlite_Append_NewFile_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
writer, err := sqliteMedium.Append("new.txt")
require.NoError(t, err)
_, err = writer.Write([]byte("fresh"))
require.NoError(t, err)
require.NoError(t, writer.Close())
content, err := sqliteMedium.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "fresh", content)
}
func TestSqlite_Append_EmptyPath_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
_, err := sqliteMedium.Append("")
assert.Error(t, err)
}
// --- ReadStream Tests ---
func TestSqlite_ReadStream_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.Write("stream.txt", "streaming content"))
reader, err := sqliteMedium.ReadStream("stream.txt")
require.NoError(t, err)
defer reader.Close()
@@ -531,98 +491,84 @@ func TestReadStream_Good(t *testing.T) {
assert.Equal(t, "streaming content", string(data))
}
func TestSqlite_ReadStream_NotFound_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
_, err := sqliteMedium.ReadStream("nonexistent.txt")
assert.Error(t, err)
}
func TestSqlite_ReadStream_IsDirectory_Bad(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.EnsureDir("mydir"))
_, err := sqliteMedium.ReadStream("mydir")
assert.Error(t, err)
}
// --- WriteStream Tests ---
func TestSqlite_WriteStream_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
writer, err := sqliteMedium.WriteStream("output.txt")
require.NoError(t, err)
_, err = goio.Copy(writer, core.NewReader("piped data"))
require.NoError(t, err)
require.NoError(t, writer.Close())
content, err := sqliteMedium.Read("output.txt")
require.NoError(t, err)
assert.Equal(t, "piped data", content)
}
// --- Exists Tests ---
func TestSqlite_Exists_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
assert.False(t, sqliteMedium.Exists("nonexistent"))
require.NoError(t, sqliteMedium.Write("file.txt", "content"))
assert.True(t, sqliteMedium.Exists("file.txt"))
require.NoError(t, sqliteMedium.EnsureDir("mydir"))
assert.True(t, sqliteMedium.Exists("mydir"))
}
func TestSqlite_Exists_EmptyPath_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
assert.True(t, sqliteMedium.Exists(""))
}
// --- IsDir Tests ---
func TestSqlite_IsDir_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
require.NoError(t, sqliteMedium.Write("file.txt", "content"))
require.NoError(t, sqliteMedium.EnsureDir("mydir"))
assert.True(t, sqliteMedium.IsDir("mydir"))
assert.False(t, sqliteMedium.IsDir("file.txt"))
assert.False(t, sqliteMedium.IsDir("nonexistent"))
assert.False(t, sqliteMedium.IsDir(""))
}
// --- normaliseEntryPath Tests ---
func TestSqlite_NormaliseEntryPath_Good(t *testing.T) {
assert.Equal(t, "file.txt", normaliseEntryPath("file.txt"))
assert.Equal(t, "dir/file.txt", normaliseEntryPath("dir/file.txt"))
assert.Equal(t, "file.txt", normaliseEntryPath("/file.txt"))
assert.Equal(t, "file.txt", normaliseEntryPath("../file.txt"))
assert.Equal(t, "file.txt", normaliseEntryPath("dir/../file.txt"))
assert.Equal(t, "", normaliseEntryPath(""))
assert.Equal(t, "", normaliseEntryPath("."))
assert.Equal(t, "", normaliseEntryPath("/"))
}
// --- Interface Compliance ---
func TestSqlite_InterfaceCompliance_Good(t *testing.T) {
sqliteMedium := newSqliteMedium(t)
// Verify all methods exist by asserting the interface shape.
var _ interface {
Read(string) (string, error)
Write(string, string) error
EnsureDir(string) error
IsFile(string) bool
FileGet(string) (string, error)
FileSet(string, string) error
Delete(string) error
DeleteAll(string) error
Rename(string, string) error
@@ -635,19 +581,17 @@ func TestInterfaceCompliance_Ugly(t *testing.T) {
WriteStream(string) (goio.WriteCloser, error)
Exists(string) bool
IsDir(string) bool
} = sqliteMedium
}
// --- Custom Table ---
func TestSqlite_CustomTable_Good(t *testing.T) {
sqliteMedium, err := New(Options{Path: ":memory:", Table: "my_files"})
require.NoError(t, err)
defer sqliteMedium.Close()
require.NoError(t, sqliteMedium.Write("file.txt", "content"))
content, err := sqliteMedium.Read("file.txt")
require.NoError(t, err)
assert.Equal(t, "content", content)
}

store/doc.go Normal file

@@ -0,0 +1,5 @@
// Example: keyValueStore, _ := store.New(store.Options{Path: ":memory:"})
// Example: _ = keyValueStore.Set("app", "theme", "midnight")
// Example: medium := keyValueStore.AsMedium()
// Example: _ = medium.Write("app/theme", "midnight")
package store
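The Medium adapter maps `group/key` paths onto the store: the first path segment is the group, the remainder is the key. A minimal standalone sketch of that split, using only the standard library (`strings.TrimPrefix`/`strings.SplitN` stand in for the `core` helpers used in the actual code):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// splitGroupKey mirrors the group/key mapping: clean the path,
// drop a leading slash, then split on the first separator.
func splitGroupKey(entryPath string) (group, key string) {
	clean := strings.TrimPrefix(path.Clean(entryPath), "/")
	if clean == "" || clean == "." {
		return "", ""
	}
	parts := strings.SplitN(clean, "/", 2)
	if len(parts) == 1 {
		return parts[0], "" // group only, no key
	}
	return parts[0], parts[1]
}

func main() {
	fmt.Println(splitGroupKey("app/theme")) // app theme
	fmt.Println(splitGroupKey("/app"))
	fmt.Println(splitGroupKey("app/ui/colour"))
}
```

Note that only the first segment is peeled off, so `app/ui/colour` stores key `ui/colour` inside group `app`.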


@@ -3,348 +3,348 @@ package store
import (
goio "io"
"io/fs"
"path"
"time"
core "dappco.re/go/core"
coreio "dappco.re/go/core/io"
)
// Medium wraps a Store to satisfy the io.Medium interface.
// Paths are mapped as group/key: the first segment is the group,
// the rest is the key. List("") returns groups as directories,
// List("group") returns keys as files.
// Example: medium, _ := store.NewMedium(store.Options{Path: "config.db"})
// Example: _ = medium.Write("app/theme", "midnight")
// Example: entries, _ := medium.List("")
// Example: entries, _ := medium.List("app")
type Medium struct {
keyValueStore *KeyValueStore
}
var _ coreio.Medium = (*Medium)(nil)
// Example: medium, _ := store.NewMedium(store.Options{Path: "config.db"})
// Example: _ = medium.Write("app/theme", "midnight")
func NewMedium(options Options) (*Medium, error) {
keyValueStore, err := New(options)
if err != nil {
return nil, err
}
return &Medium{keyValueStore: keyValueStore}, nil
}
// Example: medium := keyValueStore.AsMedium()
func (keyValueStore *KeyValueStore) AsMedium() *Medium {
return &Medium{keyValueStore: keyValueStore}
}
// Example: keyValueStore := medium.KeyValueStore()
func (medium *Medium) KeyValueStore() *KeyValueStore {
return medium.keyValueStore
}
// Example: _ = medium.Close()
func (medium *Medium) Close() error {
return medium.keyValueStore.Close()
}
func splitGroupKeyPath(entryPath string) (group, key string) {
clean := path.Clean(entryPath)
clean = core.TrimPrefix(clean, "/")
if clean == "" || clean == "." {
return "", ""
}
parts := core.SplitN(clean, "/", 2)
if len(parts) == 1 {
return parts[0], ""
}
return parts[0], parts[1]
}
func (medium *Medium) Read(entryPath string) (string, error) {
group, key := splitGroupKeyPath(entryPath)
if key == "" {
return "", core.E("store.Read", "path must include group/key", fs.ErrInvalid)
}
return medium.keyValueStore.Get(group, key)
}
func (medium *Medium) Write(entryPath, content string) error {
group, key := splitGroupKeyPath(entryPath)
if key == "" {
return core.E("store.Write", "path must include group/key", fs.ErrInvalid)
}
return medium.keyValueStore.Set(group, key, content)
}
// Example: _ = medium.WriteMode("app/theme", "midnight", 0600)
func (medium *Medium) WriteMode(entryPath, content string, mode fs.FileMode) error {
return medium.Write(entryPath, content)
}
// Example: _ = medium.EnsureDir("app")
func (medium *Medium) EnsureDir(entryPath string) error {
return nil
}
func (medium *Medium) IsFile(entryPath string) bool {
group, key := splitGroupKeyPath(entryPath)
if key == "" {
return false
}
_, err := medium.keyValueStore.Get(group, key)
return err == nil
}
// FileGet is an alias for Read.
func (medium *Medium) FileGet(entryPath string) (string, error) {
return medium.Read(entryPath)
}
// FileSet is an alias for Write.
func (medium *Medium) FileSet(entryPath, content string) error {
return medium.Write(entryPath, content)
}
func (medium *Medium) Delete(entryPath string) error {
group, key := splitGroupKeyPath(entryPath)
if group == "" {
return core.E("store.Delete", "path is required", fs.ErrInvalid)
}
if key == "" {
entryCount, err := medium.keyValueStore.Count(group)
if err != nil {
return err
}
if entryCount > 0 {
return core.E("store.Delete", core.Concat("group not empty: ", group), fs.ErrExist)
}
return nil
}
return medium.keyValueStore.Delete(group, key)
}
func (medium *Medium) DeleteAll(entryPath string) error {
group, key := splitGroupKeyPath(entryPath)
if group == "" {
return core.E("store.DeleteAll", "path is required", fs.ErrInvalid)
}
if key == "" {
return medium.keyValueStore.DeleteGroup(group)
}
return medium.keyValueStore.Delete(group, key)
}
func (medium *Medium) Rename(oldPath, newPath string) error {
oldGroup, oldKey := splitGroupKeyPath(oldPath)
newGroup, newKey := splitGroupKeyPath(newPath)
if oldKey == "" || newKey == "" {
return core.E("store.Rename", "both paths must include group/key", fs.ErrInvalid)
}
value, err := medium.keyValueStore.Get(oldGroup, oldKey)
if err != nil {
return err
}
if err := medium.keyValueStore.Set(newGroup, newKey, value); err != nil {
return err
}
return medium.keyValueStore.Delete(oldGroup, oldKey)
}
// Example: entries, _ := medium.List("app")
func (medium *Medium) List(entryPath string) ([]fs.DirEntry, error) {
group, key := splitGroupKeyPath(entryPath)
if group == "" {
rows, err := medium.keyValueStore.database.Query("SELECT DISTINCT group_name FROM entries ORDER BY group_name")
if err != nil {
return nil, core.E("store.List", "query groups", err)
}
defer rows.Close()
var entries []fs.DirEntry
for rows.Next() {
var groupName string
if err := rows.Scan(&groupName); err != nil {
return nil, core.E("store.List", "scan", err)
}
entries = append(entries, &keyValueDirEntry{name: groupName, isDir: true})
}
if err := rows.Err(); err != nil {
return nil, core.E("store.List", "rows", err)
}
return entries, nil
}
if key != "" {
return nil, nil
}
all, err := medium.keyValueStore.GetAll(group)
if err != nil {
return nil, err
}
var entries []fs.DirEntry
for key, value := range all {
entries = append(entries, &keyValueDirEntry{name: key, size: int64(len(value))})
}
return entries, nil
}
// Example: info, _ := medium.Stat("app/theme")
func (medium *Medium) Stat(entryPath string) (fs.FileInfo, error) {
group, key := splitGroupKeyPath(entryPath)
if group == "" {
return nil, core.E("store.Stat", "path is required", fs.ErrInvalid)
}
if key == "" {
entryCount, err := medium.keyValueStore.Count(group)
if err != nil {
return nil, err
}
if entryCount == 0 {
return nil, core.E("store.Stat", core.Concat("group not found: ", group), fs.ErrNotExist)
}
return &keyValueFileInfo{name: group, isDir: true}, nil
}
value, err := medium.keyValueStore.Get(group, key)
if err != nil {
return nil, err
}
return &keyValueFileInfo{name: key, size: int64(len(value))}, nil
}
func (medium *Medium) Open(entryPath string) (fs.File, error) {
group, key := splitGroupKeyPath(entryPath)
if key == "" {
return nil, core.E("store.Open", "path must include group/key", fs.ErrInvalid)
}
value, err := medium.keyValueStore.Get(group, key)
if err != nil {
return nil, err
}
return &keyValueFile{name: key, content: []byte(value)}, nil
}
func (medium *Medium) Create(entryPath string) (goio.WriteCloser, error) {
group, key := splitGroupKeyPath(entryPath)
if key == "" {
return nil, core.E("store.Create", "path must include group/key", fs.ErrInvalid)
}
return &keyValueWriteCloser{keyValueStore: medium.keyValueStore, group: group, key: key}, nil
}
func (medium *Medium) Append(entryPath string) (goio.WriteCloser, error) {
group, key := splitGroupKeyPath(entryPath)
if key == "" {
return nil, core.E("store.Append", "path must include group/key", fs.ErrInvalid)
}
existingValue, _ := medium.keyValueStore.Get(group, key)
return &keyValueWriteCloser{keyValueStore: medium.keyValueStore, group: group, key: key, data: []byte(existingValue)}, nil
}
func (medium *Medium) ReadStream(entryPath string) (goio.ReadCloser, error) {
group, key := splitGroupKeyPath(entryPath)
if key == "" {
return nil, core.E("store.ReadStream", "path must include group/key", fs.ErrInvalid)
}
value, err := medium.keyValueStore.Get(group, key)
if err != nil {
return nil, err
}
return goio.NopCloser(core.NewReader(value)), nil
}
func (medium *Medium) WriteStream(entryPath string) (goio.WriteCloser, error) {
return medium.Create(entryPath)
}
func (medium *Medium) Exists(entryPath string) bool {
group, key := splitGroupKeyPath(entryPath)
if group == "" {
return false
}
if key == "" {
entryCount, err := medium.keyValueStore.Count(group)
return err == nil && entryCount > 0
}
_, err := medium.keyValueStore.Get(group, key)
return err == nil
}
func (medium *Medium) IsDir(entryPath string) bool {
group, key := splitGroupKeyPath(entryPath)
if key != "" || group == "" {
return false
}
entryCount, err := medium.keyValueStore.Count(group)
return err == nil && entryCount > 0
}
// --- fs helper types ---
type keyValueFileInfo struct {
name string
size int64
isDir bool
}
func (fileInfo *keyValueFileInfo) Name() string { return fileInfo.name }
func (fileInfo *keyValueFileInfo) Size() int64 { return fileInfo.size }
func (fileInfo *keyValueFileInfo) Mode() fs.FileMode {
if fileInfo.isDir {
return fs.ModeDir | 0755
}
return 0644
}
func (fileInfo *keyValueFileInfo) ModTime() time.Time { return time.Time{} }
func (fileInfo *keyValueFileInfo) IsDir() bool { return fileInfo.isDir }
func (fileInfo *keyValueFileInfo) Sys() any { return nil }
type keyValueDirEntry struct {
name string
isDir bool
size int64
}
func (entry *keyValueDirEntry) Name() string { return entry.name }
func (entry *keyValueDirEntry) IsDir() bool { return entry.isDir }
func (entry *keyValueDirEntry) Type() fs.FileMode {
if entry.isDir {
return fs.ModeDir
}
return 0
}
func (entry *keyValueDirEntry) Info() (fs.FileInfo, error) {
return &keyValueFileInfo{name: entry.name, size: entry.size, isDir: entry.isDir}, nil
}
type keyValueFile struct {
name string
content []byte
offset int64
}
func (file *keyValueFile) Stat() (fs.FileInfo, error) {
return &keyValueFileInfo{name: file.name, size: int64(len(file.content))}, nil
}
func (file *keyValueFile) Read(buffer []byte) (int, error) {
if file.offset >= int64(len(file.content)) {
return 0, goio.EOF
}
readCount := copy(buffer, file.content[file.offset:])
file.offset += int64(readCount)
return readCount, nil
}
func (file *keyValueFile) Close() error { return nil }
type keyValueWriteCloser struct {
keyValueStore *KeyValueStore
group string
key string
data []byte
}
func (writer *keyValueWriteCloser) Write(data []byte) (int, error) {
writer.data = append(writer.data, data...)
return len(data), nil
}
func (writer *keyValueWriteCloser) Close() error {
return writer.keyValueStore.Set(writer.group, writer.key, string(writer.data))
}


@@ -2,201 +2,256 @@ package store
import (
"io"
"io/fs"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func newKeyValueMedium(t *testing.T) *Medium {
t.Helper()
keyValueMedium, err := NewMedium(Options{Path: ":memory:"})
require.NoError(t, err)
t.Cleanup(func() { keyValueMedium.Close() })
return keyValueMedium
}
func TestKeyValueMedium_ReadWrite_Good(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
err := keyValueMedium.Write("config/theme", "dark")
require.NoError(t, err)
value, err := keyValueMedium.Read("config/theme")
require.NoError(t, err)
assert.Equal(t, "dark", value)
}
func TestKeyValueMedium_Read_NoKey_Bad(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
_, err := keyValueMedium.Read("config")
assert.Error(t, err)
}
func TestKeyValueMedium_Read_NotFound_Bad(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
_, err := keyValueMedium.Read("config/missing")
assert.Error(t, err)
}
func TestKeyValueMedium_IsFile_Good(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
_ = keyValueMedium.Write("group/key", "val")
assert.True(t, keyValueMedium.IsFile("group/key"))
assert.False(t, keyValueMedium.IsFile("group/nope"))
assert.False(t, keyValueMedium.IsFile("group"))
}
func TestKeyValueMedium_Delete_Good(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
_ = keyValueMedium.Write("group/key", "val")
err := keyValueMedium.Delete("group/key")
require.NoError(t, err)
assert.False(t, keyValueMedium.IsFile("group/key"))
}
func TestKeyValueMedium_Delete_NonEmptyGroup_Bad(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
_ = keyValueMedium.Write("group/key", "val")
err := keyValueMedium.Delete("group")
assert.Error(t, err)
}
func TestKeyValueMedium_DeleteAll_Good(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
_ = keyValueMedium.Write("group/a", "1")
_ = keyValueMedium.Write("group/b", "2")
err := keyValueMedium.DeleteAll("group")
require.NoError(t, err)
assert.False(t, keyValueMedium.Exists("group"))
}
func TestKeyValueMedium_Rename_Good(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
_ = keyValueMedium.Write("old/key", "val")
err := keyValueMedium.Rename("old/key", "new/key")
require.NoError(t, err)
value, err := keyValueMedium.Read("new/key")
require.NoError(t, err)
assert.Equal(t, "val", value)
assert.False(t, keyValueMedium.IsFile("old/key"))
}
func TestKeyValueMedium_List_Groups_Good(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
_ = keyValueMedium.Write("alpha/a", "1")
_ = keyValueMedium.Write("beta/b", "2")
entries, err := keyValueMedium.List("")
require.NoError(t, err)
assert.Len(t, entries, 2)
names := make(map[string]bool)
for _, entry := range entries {
names[entry.Name()] = true
assert.True(t, entry.IsDir())
}
assert.True(t, names["alpha"])
assert.True(t, names["beta"])
}
func TestKeyValueMedium_List_Keys_Good(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
_ = keyValueMedium.Write("group/a", "1")
_ = keyValueMedium.Write("group/b", "22")
entries, err := keyValueMedium.List("group")
require.NoError(t, err)
assert.Len(t, entries, 2)
}
func TestKeyValueMedium_Stat_Good(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
_ = keyValueMedium.Write("group/key", "hello")
// Stat group
info, err := keyValueMedium.Stat("group")
require.NoError(t, err)
assert.True(t, info.IsDir())
// Stat key
info, err = keyValueMedium.Stat("group/key")
require.NoError(t, err)
assert.Equal(t, int64(5), info.Size())
assert.False(t, info.IsDir())
}
func TestKeyValueMedium_Exists_IsDir_Good(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
_ = keyValueMedium.Write("group/key", "val")
assert.True(t, keyValueMedium.Exists("group"))
assert.True(t, keyValueMedium.Exists("group/key"))
assert.True(t, keyValueMedium.IsDir("group"))
assert.False(t, keyValueMedium.IsDir("group/key"))
assert.False(t, keyValueMedium.Exists("nope"))
}
func TestKeyValueMedium_Open_Read_Good(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
_ = keyValueMedium.Write("group/key", "hello world")
file, err := keyValueMedium.Open("group/key")
require.NoError(t, err)
defer file.Close()
data, err := io.ReadAll(file)
require.NoError(t, err)
assert.Equal(t, "hello world", string(data))
}
func TestKeyValueMedium_CreateClose_Good(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
writer, err := keyValueMedium.Create("group/key")
require.NoError(t, err)
_, _ = writer.Write([]byte("streamed"))
require.NoError(t, writer.Close())
value, err := keyValueMedium.Read("group/key")
require.NoError(t, err)
assert.Equal(t, "streamed", value)
}
func TestKeyValueMedium_Append_Good(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
_ = keyValueMedium.Write("group/key", "hello")
writer, err := keyValueMedium.Append("group/key")
require.NoError(t, err)
_, _ = writer.Write([]byte(" world"))
require.NoError(t, writer.Close())
value, err := keyValueMedium.Read("group/key")
require.NoError(t, err)
assert.Equal(t, "hello world", value)
}
func TestKeyValueMedium_AsMedium_Good(t *testing.T) {
keyValueStore := newKeyValueStore(t)
keyValueMedium := keyValueStore.AsMedium()
require.NoError(t, keyValueMedium.Write("group/key", "val"))
// Accessible through both APIs
value, err := keyValueStore.Get("group", "key")
require.NoError(t, err)
assert.Equal(t, "val", value)
value, err = keyValueMedium.Read("group/key")
require.NoError(t, err)
assert.Equal(t, "val", value)
}
func TestKeyValueMedium_KeyValueStore_Good(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
assert.NotNil(t, keyValueMedium.KeyValueStore())
assert.Same(t, keyValueMedium.KeyValueStore(), keyValueMedium.KeyValueStore())
}
func TestKeyValueMedium_EnsureDir_ReadWrite_Good(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
require.NoError(t, keyValueMedium.EnsureDir("ignored"))
require.NoError(t, keyValueMedium.Write("group/key", "value"))
value, err := keyValueMedium.Read("group/key")
require.NoError(t, err)
assert.Equal(t, "value", value)
}
func TestKeyValueMedium_StreamHelpers_Good(t *testing.T) {
keyValueMedium := newKeyValueMedium(t)
writer, err := keyValueMedium.WriteStream("group/key")
require.NoError(t, err)
_, err = writer.Write([]byte("streamed"))
require.NoError(t, err)
require.NoError(t, writer.Close())
reader, err := keyValueMedium.ReadStream("group/key")
require.NoError(t, err)
data, err := io.ReadAll(reader)
require.NoError(t, err)
assert.Equal(t, "streamed", string(data))
require.NoError(t, reader.Close())
file, err := keyValueMedium.Open("group/key")
require.NoError(t, err)
info, err := file.Stat()
require.NoError(t, err)
assert.Equal(t, "key", info.Name())
assert.Equal(t, int64(8), info.Size())
assert.Equal(t, fs.FileMode(0644), info.Mode())
assert.True(t, info.ModTime().IsZero())
assert.False(t, info.IsDir())
assert.Nil(t, info.Sys())
require.NoError(t, file.Close())
entries, err := keyValueMedium.List("group")
require.NoError(t, err)
require.Len(t, entries, 1)
assert.Equal(t, "key", entries[0].Name())
assert.False(t, entries[0].IsDir())
assert.Equal(t, fs.FileMode(0), entries[0].Type())
entryInfo, err := entries[0].Info()
require.NoError(t, err)
assert.Equal(t, "key", entryInfo.Name())
assert.Equal(t, int64(8), entryInfo.Size())
}


@@ -3,151 +3,163 @@ package store
import (
"database/sql"
"errors"
"io/fs"
"text/template"
core "dappco.re/go/core"
_ "modernc.org/sqlite"
)
// Example: _, err := keyValueStore.Get("app", "theme")
var NotFoundError = errors.New("key not found")
// Example: keyValueStore, _ := store.New(store.Options{Path: ":memory:"})
type KeyValueStore struct {
database *sql.DB
}
// Example: keyValueStore, _ := store.New(store.Options{Path: ":memory:"})
type Options struct {
Path string
}
// Example: keyValueStore, _ := store.New(store.Options{Path: ":memory:"})
// Example: _ = keyValueStore.Set("app", "theme", "midnight")
func New(options Options) (*KeyValueStore, error) {
if options.Path == "" {
return nil, core.E("store.New", "database path is required", fs.ErrInvalid)
}
database, err := sql.Open("sqlite", options.Path)
if err != nil {
return nil, core.E("store.New", "open db", err)
}
if _, err := database.Exec("PRAGMA journal_mode=WAL"); err != nil {
database.Close()
return nil, core.E("store.New", "WAL mode", err)
}
if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS kv (
grp TEXT NOT NULL,
key TEXT NOT NULL,
value TEXT NOT NULL,
PRIMARY KEY (grp, key)
if _, err := database.Exec(`CREATE TABLE IF NOT EXISTS entries (
group_name TEXT NOT NULL,
entry_key TEXT NOT NULL,
entry_value TEXT NOT NULL,
PRIMARY KEY (group_name, entry_key)
)`); err != nil {
db.Close()
return nil, coreerr.E("store.New", "create schema", err)
database.Close()
return nil, core.E("store.New", "create schema", err)
}
return &Store{db: db}, nil
return &KeyValueStore{database: database}, nil
}
// Close closes the underlying database.
func (s *Store) Close() error {
return s.db.Close()
// Example: _ = keyValueStore.Close()
func (keyValueStore *KeyValueStore) Close() error {
return keyValueStore.database.Close()
}
// Get retrieves a value by group and key.
func (s *Store) Get(group, key string) (string, error) {
var val string
err := s.db.QueryRow("SELECT value FROM kv WHERE grp = ? AND key = ?", group, key).Scan(&val)
// Example: theme, _ := keyValueStore.Get("app", "theme")
func (keyValueStore *KeyValueStore) Get(group, key string) (string, error) {
var value string
err := keyValueStore.database.QueryRow("SELECT entry_value FROM entries WHERE group_name = ? AND entry_key = ?", group, key).Scan(&value)
if err == sql.ErrNoRows {
return "", coreerr.E("store.Get", "not found: "+group+"/"+key, ErrNotFound)
return "", core.E("store.Get", core.Concat("not found: ", group, "/", key), NotFoundError)
}
if err != nil {
return "", coreerr.E("store.Get", "query", err)
return "", core.E("store.Get", "query", err)
}
return val, nil
return value, nil
}
// Set stores a value by group and key, overwriting if exists.
func (s *Store) Set(group, key, value string) error {
_, err := s.db.Exec(
`INSERT INTO kv (grp, key, value) VALUES (?, ?, ?)
ON CONFLICT(grp, key) DO UPDATE SET value = excluded.value`,
// Example: _ = keyValueStore.Set("app", "theme", "midnight")
func (keyValueStore *KeyValueStore) Set(group, key, value string) error {
_, err := keyValueStore.database.Exec(
`INSERT INTO entries (group_name, entry_key, entry_value) VALUES (?, ?, ?)
ON CONFLICT(group_name, entry_key) DO UPDATE SET entry_value = excluded.entry_value`,
group, key, value,
)
if err != nil {
return coreerr.E("store.Set", "exec", err)
return core.E("store.Set", "exec", err)
}
return nil
}
// Delete removes a single key from a group.
func (s *Store) Delete(group, key string) error {
_, err := s.db.Exec("DELETE FROM kv WHERE grp = ? AND key = ?", group, key)
// Example: _ = keyValueStore.Delete("app", "theme")
func (keyValueStore *KeyValueStore) Delete(group, key string) error {
_, err := keyValueStore.database.Exec("DELETE FROM entries WHERE group_name = ? AND entry_key = ?", group, key)
if err != nil {
return coreerr.E("store.Delete", "exec", err)
return core.E("store.Delete", "exec", err)
}
return nil
}
// Count returns the number of keys in a group.
func (s *Store) Count(group string) (int, error) {
var n int
err := s.db.QueryRow("SELECT COUNT(*) FROM kv WHERE grp = ?", group).Scan(&n)
// Example: count, _ := keyValueStore.Count("app")
func (keyValueStore *KeyValueStore) Count(group string) (int, error) {
var count int
err := keyValueStore.database.QueryRow("SELECT COUNT(*) FROM entries WHERE group_name = ?", group).Scan(&count)
if err != nil {
return 0, coreerr.E("store.Count", "query", err)
return 0, core.E("store.Count", "query", err)
}
return n, nil
return count, nil
}
// DeleteGroup removes all keys in a group.
func (s *Store) DeleteGroup(group string) error {
_, err := s.db.Exec("DELETE FROM kv WHERE grp = ?", group)
// Example: _ = keyValueStore.DeleteGroup("app")
func (keyValueStore *KeyValueStore) DeleteGroup(group string) error {
_, err := keyValueStore.database.Exec("DELETE FROM entries WHERE group_name = ?", group)
if err != nil {
return coreerr.E("store.DeleteGroup", "exec", err)
return core.E("store.DeleteGroup", "exec", err)
}
return nil
}
// GetAll returns all key-value pairs in a group.
func (s *Store) GetAll(group string) (map[string]string, error) {
rows, err := s.db.Query("SELECT key, value FROM kv WHERE grp = ?", group)
// Example: values, _ := keyValueStore.GetAll("app")
func (keyValueStore *KeyValueStore) GetAll(group string) (map[string]string, error) {
rows, err := keyValueStore.database.Query("SELECT entry_key, entry_value FROM entries WHERE group_name = ?", group)
if err != nil {
return nil, coreerr.E("store.GetAll", "query", err)
return nil, core.E("store.GetAll", "query", err)
}
defer rows.Close()
result := make(map[string]string)
for rows.Next() {
var k, v string
if err := rows.Scan(&k, &v); err != nil {
return nil, coreerr.E("store.GetAll", "scan", err)
var key, value string
if err := rows.Scan(&key, &value); err != nil {
return nil, core.E("store.GetAll", "scan", err)
}
result[k] = v
result[key] = value
}
if err := rows.Err(); err != nil {
return nil, coreerr.E("store.GetAll", "rows", err)
return nil, core.E("store.GetAll", "rows", err)
}
return result, nil
}
// Render loads all key-value pairs from a group and renders a Go template.
func (s *Store) Render(tmplStr, group string) (string, error) {
rows, err := s.db.Query("SELECT key, value FROM kv WHERE grp = ?", group)
// Example: keyValueStore, _ := store.New(store.Options{Path: ":memory:"})
// Example: _ = keyValueStore.Set("user", "name", "alice")
// Example: renderedText, _ := keyValueStore.Render("hello {{ .name }}", "user")
func (keyValueStore *KeyValueStore) Render(templateText, group string) (string, error) {
rows, err := keyValueStore.database.Query("SELECT entry_key, entry_value FROM entries WHERE group_name = ?", group)
if err != nil {
return "", coreerr.E("store.Render", "query", err)
return "", core.E("store.Render", "query", err)
}
defer rows.Close()
vars := make(map[string]string)
templateValues := make(map[string]string)
for rows.Next() {
var k, v string
if err := rows.Scan(&k, &v); err != nil {
return "", coreerr.E("store.Render", "scan", err)
var key, value string
if err := rows.Scan(&key, &value); err != nil {
return "", core.E("store.Render", "scan", err)
}
vars[k] = v
templateValues[key] = value
}
if err := rows.Err(); err != nil {
return "", coreerr.E("store.Render", "rows", err)
return "", core.E("store.Render", "rows", err)
}
tmpl, err := template.New("render").Parse(tmplStr)
renderTemplate, err := template.New("render").Parse(templateText)
if err != nil {
return "", coreerr.E("store.Render", "parse template", err)
return "", core.E("store.Render", "parse template", err)
}
var b strings.Builder
if err := tmpl.Execute(&b, vars); err != nil {
return "", coreerr.E("store.Render", "execute template", err)
builder := core.NewBuilder()
if err := renderTemplate.Execute(builder, templateValues); err != nil {
return "", core.E("store.Render", "execute template", err)
}
return b.String(), nil
return builder.String(), nil
}
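Render's flow (collect rows into a map, parse the template, execute into a builder) can be reproduced with the standard library alone. The helper name below is illustrative, not the store's API:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// renderFromMap parses templateText and executes it against key/value
// pairs, the same shape store.Render builds from its SQLite rows.
func renderFromMap(templateText string, values map[string]string) (string, error) {
	parsed, err := template.New("render").Parse(templateText)
	if err != nil {
		return "", err
	}
	var builder strings.Builder
	if err := parsed.Execute(&builder, values); err != nil {
		return "", err
	}
	return builder.String(), nil
}

func main() {
	out, err := renderFromMap("hello {{ .name }}", map[string]string{"name": "alice"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // hello alice
}
```

`{{ .name }}` resolves by key when the data is a `map[string]string`, which is why the store can feed rows straight into the template without defining a struct.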


@@ -7,97 +7,109 @@ import (
"github.com/stretchr/testify/require"
)
func TestSetGet_Good(t *testing.T) {
s, err := New(":memory:")
require.NoError(t, err)
defer s.Close()
func newKeyValueStore(t *testing.T) *KeyValueStore {
t.Helper()
err = s.Set("config", "theme", "dark")
keyValueStore, err := New(Options{Path: ":memory:"})
require.NoError(t, err)
val, err := s.Get("config", "theme")
require.NoError(t, err)
assert.Equal(t, "dark", val)
t.Cleanup(func() {
require.NoError(t, keyValueStore.Close())
})
return keyValueStore
}
func TestGet_Bad_NotFound(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
func TestKeyValueStore_New_Options_Good(t *testing.T) {
keyValueStore := newKeyValueStore(t)
assert.NotNil(t, keyValueStore)
}
_, err := s.Get("config", "missing")
func TestKeyValueStore_New_Options_Bad(t *testing.T) {
_, err := New(Options{})
assert.Error(t, err)
}
func TestDelete_Good(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
func TestKeyValueStore_SetGet_Good(t *testing.T) {
keyValueStore := newKeyValueStore(t)
_ = s.Set("config", "key", "val")
err := s.Delete("config", "key")
err := keyValueStore.Set("config", "theme", "dark")
require.NoError(t, err)
_, err = s.Get("config", "key")
assert.Error(t, err)
}
func TestCount_Good(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
_ = s.Set("grp", "a", "1")
_ = s.Set("grp", "b", "2")
_ = s.Set("other", "c", "3")
n, err := s.Count("grp")
value, err := keyValueStore.Get("config", "theme")
require.NoError(t, err)
assert.Equal(t, 2, n)
assert.Equal(t, "dark", value)
}
func TestDeleteGroup_Good(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
func TestKeyValueStore_Get_NotFound_Bad(t *testing.T) {
keyValueStore := newKeyValueStore(t)
_ = s.Set("grp", "a", "1")
_ = s.Set("grp", "b", "2")
err := s.DeleteGroup("grp")
_, err := keyValueStore.Get("config", "missing")
assert.ErrorIs(t, err, NotFoundError)
}
func TestKeyValueStore_Delete_Good(t *testing.T) {
keyValueStore := newKeyValueStore(t)
_ = keyValueStore.Set("config", "key", "val")
err := keyValueStore.Delete("config", "key")
require.NoError(t, err)
n, _ := s.Count("grp")
assert.Equal(t, 0, n)
_, err = keyValueStore.Get("config", "key")
assert.ErrorIs(t, err, NotFoundError)
}
func TestGetAll_Good(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
func TestKeyValueStore_Count_Good(t *testing.T) {
keyValueStore := newKeyValueStore(t)
_ = s.Set("grp", "a", "1")
_ = s.Set("grp", "b", "2")
_ = s.Set("other", "c", "3")
_ = keyValueStore.Set("group", "a", "1")
_ = keyValueStore.Set("group", "b", "2")
_ = keyValueStore.Set("other", "c", "3")
all, err := s.GetAll("grp")
count, err := keyValueStore.Count("group")
require.NoError(t, err)
assert.Equal(t, 2, count)
}
func TestKeyValueStore_DeleteGroup_Good(t *testing.T) {
keyValueStore := newKeyValueStore(t)
_ = keyValueStore.Set("group", "a", "1")
_ = keyValueStore.Set("group", "b", "2")
err := keyValueStore.DeleteGroup("group")
require.NoError(t, err)
count, _ := keyValueStore.Count("group")
assert.Equal(t, 0, count)
}
func TestKeyValueStore_GetAll_Good(t *testing.T) {
keyValueStore := newKeyValueStore(t)
_ = keyValueStore.Set("group", "a", "1")
_ = keyValueStore.Set("group", "b", "2")
_ = keyValueStore.Set("other", "c", "3")
all, err := keyValueStore.GetAll("group")
require.NoError(t, err)
assert.Equal(t, map[string]string{"a": "1", "b": "2"}, all)
}
func TestGetAll_Good_Empty(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
func TestKeyValueStore_GetAll_Empty_Good(t *testing.T) {
keyValueStore := newKeyValueStore(t)
all, err := s.GetAll("empty")
all, err := keyValueStore.GetAll("empty")
require.NoError(t, err)
assert.Empty(t, all)
}
func TestRender_Good(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
func TestKeyValueStore_Render_Good(t *testing.T) {
keyValueStore := newKeyValueStore(t)
_ = s.Set("user", "pool", "pool.lthn.io:3333")
_ = s.Set("user", "wallet", "iz...")
_ = keyValueStore.Set("user", "pool", "pool.lthn.io:3333")
_ = keyValueStore.Set("user", "wallet", "iz...")
tmpl := `{"pool":"{{ .pool }}","wallet":"{{ .wallet }}"}`
out, err := s.Render(tmpl, "user")
templateText := `{"pool":"{{ .pool }}","wallet":"{{ .wallet }}"}`
renderedText, err := keyValueStore.Render(templateText, "user")
require.NoError(t, err)
assert.Contains(t, out, "pool.lthn.io:3333")
assert.Contains(t, out, "iz...")
assert.Contains(t, renderedText, "pool.lthn.io:3333")
assert.Contains(t, renderedText, "iz...")
}

workspace/doc.go Normal file

@@ -0,0 +1,9 @@
// Example: service, _ := workspace.New(workspace.Options{
// Example: KeyPairProvider: keyPairProvider,
// Example: RootPath: "/srv/workspaces",
// Example: Medium: io.NewMemoryMedium(),
// Example: })
// Example: workspaceID, _ := service.CreateWorkspace("alice", "pass123")
// Example: _ = service.SwitchWorkspace(workspaceID)
// Example: _ = service.WriteWorkspaceFile("notes/todo.txt", "ship it")
package workspace


@@ -3,173 +3,306 @@ package workspace
import (
"crypto/sha256"
"encoding/hex"
"os"
"path/filepath"
"io/fs"
"sync"
core "dappco.re/go/core"
coreerr "forge.lthn.ai/core/go-log"
"dappco.re/go/core/io"
"dappco.re/go/core/io/sigil"
)
// Workspace provides management for encrypted user workspaces.
// Example: service, _ := workspace.New(workspace.Options{KeyPairProvider: keyPairProvider})
type Workspace interface {
CreateWorkspace(identifier, password string) (string, error)
SwitchWorkspace(name string) error
WorkspaceFileGet(filename string) (string, error)
WorkspaceFileSet(filename, content string) error
CreateWorkspace(identifier, passphrase string) (string, error)
SwitchWorkspace(workspaceID string) error
ReadWorkspaceFile(workspaceFilePath string) (string, error)
WriteWorkspaceFile(workspaceFilePath, content string) error
}
// cryptProvider is the interface for PGP key generation.
type cryptProvider interface {
CreateKeyPair(name, passphrase string) (string, error)
// Example: key, _ := keyPairProvider.CreateKeyPair("alice", "pass123")
type KeyPairProvider interface {
CreateKeyPair(identifier, passphrase string) (string, error)
}
// Service implements the Workspace interface.
const (
WorkspaceCreateAction = "workspace.create"
WorkspaceSwitchAction = "workspace.switch"
)
// Example: command := WorkspaceCommand{Action: WorkspaceCreateAction, Identifier: "alice", Password: "pass123"}
type WorkspaceCommand struct {
Action string
Identifier string
Password string
WorkspaceID string
}
// Example: service, _ := workspace.New(workspace.Options{
// Example: KeyPairProvider: keyPairProvider,
// Example: RootPath: "/srv/workspaces",
// Example: Medium: io.NewMemoryMedium(),
// Example: Core: c,
// Example: })
type Options struct {
KeyPairProvider KeyPairProvider
RootPath string
Medium io.Medium
// Example: service, _ := workspace.New(workspace.Options{Core: core.New()})
Core *core.Core
}
// Example: service, _ := workspace.New(workspace.Options{KeyPairProvider: keyPairProvider})
type Service struct {
core *core.Core
crypt cryptProvider
activeWorkspace string
rootPath string
medium io.Medium
mu sync.RWMutex
keyPairProvider KeyPairProvider
activeWorkspaceID string
rootPath string
medium io.Medium
stateLock sync.RWMutex
}
// New creates a new Workspace service instance.
// An optional cryptProvider can be passed to supply PGP key generation.
func New(c *core.Core, crypt ...cryptProvider) (any, error) {
home, err := os.UserHomeDir()
if err != nil {
return nil, coreerr.E("workspace.New", "failed to determine home directory", err)
}
rootPath := filepath.Join(home, ".core", "workspaces")
var _ Workspace = (*Service)(nil)
s := &Service{
core: c,
rootPath: rootPath,
medium: io.Local,
// Example: service, _ := workspace.New(workspace.Options{
// Example: KeyPairProvider: keyPairProvider,
// Example: RootPath: "/srv/workspaces",
// Example: Medium: io.NewMemoryMedium(),
// Example: })
// Example: workspaceID, _ := service.CreateWorkspace("alice", "pass123")
func New(options Options) (*Service, error) {
rootPath := options.RootPath
if rootPath == "" {
home := resolveWorkspaceHomeDirectory()
if home == "" {
return nil, core.E("workspace.New", "failed to determine home directory", fs.ErrNotExist)
}
rootPath = core.Path(home, ".core", "workspaces")
}
if len(crypt) > 0 && crypt[0] != nil {
s.crypt = crypt[0]
if options.KeyPairProvider == nil {
return nil, core.E("workspace.New", "key pair provider is required", fs.ErrInvalid)
}
if err := s.medium.EnsureDir(rootPath); err != nil {
return nil, coreerr.E("workspace.New", "failed to ensure root directory", err)
medium := options.Medium
if medium == nil {
medium = io.Local
}
if medium == nil {
return nil, core.E("workspace.New", "storage medium is required", fs.ErrInvalid)
}
return s, nil
service := &Service{
keyPairProvider: options.KeyPairProvider,
rootPath: rootPath,
medium: medium,
}
if err := service.medium.EnsureDir(rootPath); err != nil {
return nil, core.E("workspace.New", "failed to ensure root directory", err)
}
if options.Core != nil {
options.Core.RegisterAction(service.HandleWorkspaceMessage)
}
return service, nil
}
// CreateWorkspace creates a new encrypted workspace.
// Identifier is hashed (SHA-256) to create the directory name.
// A PGP keypair is generated using the password.
func (s *Service) CreateWorkspace(identifier, password string) (string, error) {
s.mu.Lock()
defer s.mu.Unlock()
// Example: workspaceID, _ := service.CreateWorkspace("alice", "pass123")
func (service *Service) CreateWorkspace(identifier, passphrase string) (string, error) {
service.stateLock.Lock()
defer service.stateLock.Unlock()
if s.crypt == nil {
return "", coreerr.E("workspace.CreateWorkspace", "crypt service not available", nil)
if service.keyPairProvider == nil {
return "", core.E("workspace.CreateWorkspace", "key pair provider not available", fs.ErrInvalid)
}
hash := sha256.Sum256([]byte(identifier))
wsID := hex.EncodeToString(hash[:])
wsPath := filepath.Join(s.rootPath, wsID)
if s.medium.Exists(wsPath) {
return "", coreerr.E("workspace.CreateWorkspace", "workspace already exists", nil)
}
for _, d := range []string{"config", "log", "data", "files", "keys"} {
if err := s.medium.EnsureDir(filepath.Join(wsPath, d)); err != nil {
return "", coreerr.E("workspace.CreateWorkspace", "failed to create directory: "+d, err)
}
}
privKey, err := s.crypt.CreateKeyPair(identifier, password)
if err != nil {
return "", coreerr.E("workspace.CreateWorkspace", "failed to generate keys", err)
}
if err := s.medium.WriteMode(filepath.Join(wsPath, "keys", "private.key"), privKey, 0600); err != nil {
return "", coreerr.E("workspace.CreateWorkspace", "failed to save private key", err)
}
return wsID, nil
}
// SwitchWorkspace changes the active workspace.
func (s *Service) SwitchWorkspace(name string) error {
s.mu.Lock()
defer s.mu.Unlock()
wsPath := filepath.Join(s.rootPath, name)
if !s.medium.IsDir(wsPath) {
return coreerr.E("workspace.SwitchWorkspace", "workspace not found: "+name, nil)
}
s.activeWorkspace = name
return nil
}
// activeFilePath returns the full path to a file in the active workspace,
// or an error if no workspace is active.
func (s *Service) activeFilePath(op, filename string) (string, error) {
if s.activeWorkspace == "" {
return "", coreerr.E(op, "no active workspace", nil)
}
return filepath.Join(s.rootPath, s.activeWorkspace, "files", filename), nil
}
// WorkspaceFileGet retrieves the content of a file from the active workspace.
func (s *Service) WorkspaceFileGet(filename string) (string, error) {
s.mu.RLock()
defer s.mu.RUnlock()
path, err := s.activeFilePath("workspace.WorkspaceFileGet", filename)
workspaceID := hex.EncodeToString(hash[:])
workspaceDirectory, err := service.resolveWorkspaceDirectory("workspace.CreateWorkspace", workspaceID)
if err != nil {
return "", err
}
return s.medium.Read(path)
if service.medium.Exists(workspaceDirectory) {
return "", core.E("workspace.CreateWorkspace", "workspace already exists", fs.ErrExist)
}
for _, directoryName := range []string{"config", "log", "data", "files", "keys"} {
if err := service.medium.EnsureDir(core.Path(workspaceDirectory, directoryName)); err != nil {
return "", core.E("workspace.CreateWorkspace", core.Concat("failed to create directory: ", directoryName), err)
}
}
privateKey, err := service.keyPairProvider.CreateKeyPair(identifier, passphrase)
if err != nil {
return "", core.E("workspace.CreateWorkspace", "failed to generate keys", err)
}
if err := service.medium.WriteMode(core.Path(workspaceDirectory, "keys", "private.key"), privateKey, 0600); err != nil {
return "", core.E("workspace.CreateWorkspace", "failed to save private key", err)
}
return workspaceID, nil
}
// WorkspaceFileSet saves content to a file in the active workspace.
func (s *Service) WorkspaceFileSet(filename, content string) error {
s.mu.Lock()
defer s.mu.Unlock()
// Example: _ = service.SwitchWorkspace(workspaceID)
func (service *Service) SwitchWorkspace(workspaceID string) error {
service.stateLock.Lock()
defer service.stateLock.Unlock()
path, err := s.activeFilePath("workspace.WorkspaceFileSet", filename)
workspaceDirectory, err := service.resolveWorkspaceDirectory("workspace.SwitchWorkspace", workspaceID)
if err != nil {
return err
}
return s.medium.Write(path, content)
}
// HandleIPCEvents handles workspace-related IPC messages.
func (s *Service) HandleIPCEvents(c *core.Core, msg core.Message) core.Result {
switch m := msg.(type) {
case map[string]any:
action, _ := m["action"].(string)
switch action {
case "workspace.create":
id, _ := m["identifier"].(string)
pass, _ := m["password"].(string)
wsID, err := s.CreateWorkspace(id, pass)
if err != nil {
return core.Result{}
}
return core.Result{Value: wsID, OK: true}
case "workspace.switch":
name, _ := m["name"].(string)
if err := s.SwitchWorkspace(name); err != nil {
return core.Result{}
}
return core.Result{OK: true}
}
if !service.medium.IsDir(workspaceDirectory) {
return core.E("workspace.SwitchWorkspace", core.Concat("workspace not found: ", workspaceID), fs.ErrNotExist)
}
return core.Result{OK: true}
service.activeWorkspaceID = core.PathBase(workspaceDirectory)
return nil
}
// Ensure Service implements Workspace.
var _ Workspace = (*Service)(nil)
func (service *Service) resolveActiveWorkspaceFilePath(operation, workspaceFilePath string) (string, error) {
if service.activeWorkspaceID == "" {
return "", core.E(operation, "no active workspace", fs.ErrNotExist)
}
filesRoot := core.Path(service.rootPath, service.activeWorkspaceID, "files")
filePath, err := joinPathWithinRoot(filesRoot, workspaceFilePath)
if err != nil {
return "", core.E(operation, "file path escapes workspace files", fs.ErrPermission)
}
if filePath == filesRoot {
return "", core.E(operation, "workspace file path is required", fs.ErrInvalid)
}
return filePath, nil
}
// Example: cipherSigil, _ := service.workspaceCipherSigil("workspace.ReadWorkspaceFile")
func (service *Service) workspaceCipherSigil(operation string) (*sigil.ChaChaPolySigil, error) {
if service.activeWorkspaceID == "" {
return nil, core.E(operation, "no active workspace", fs.ErrNotExist)
}
keyPath := core.Path(service.rootPath, service.activeWorkspaceID, "keys", "private.key")
rawKey, err := service.medium.Read(keyPath)
if err != nil {
return nil, core.E(operation, "failed to read workspace key", err)
}
derived := sha256.Sum256([]byte(rawKey))
cipherSigil, err := sigil.NewChaChaPolySigil(derived[:], nil)
if err != nil {
return nil, core.E(operation, "failed to create cipher sigil", err)
}
return cipherSigil, nil
}
// Example: content, _ := service.ReadWorkspaceFile("notes/todo.txt")
func (service *Service) ReadWorkspaceFile(workspaceFilePath string) (string, error) {
service.stateLock.RLock()
defer service.stateLock.RUnlock()
filePath, err := service.resolveActiveWorkspaceFilePath("workspace.ReadWorkspaceFile", workspaceFilePath)
if err != nil {
return "", err
}
cipherSigil, err := service.workspaceCipherSigil("workspace.ReadWorkspaceFile")
if err != nil {
return "", err
}
encoded, err := service.medium.Read(filePath)
if err != nil {
return "", err
}
plaintext, err := sigil.Untransmute([]byte(encoded), []sigil.Sigil{cipherSigil})
if err != nil {
return "", core.E("workspace.ReadWorkspaceFile", "failed to decrypt file content", err)
}
return string(plaintext), nil
}
// Example: _ = service.WriteWorkspaceFile("notes/todo.txt", "ship it")
func (service *Service) WriteWorkspaceFile(workspaceFilePath, content string) error {
service.stateLock.Lock()
defer service.stateLock.Unlock()
filePath, err := service.resolveActiveWorkspaceFilePath("workspace.WriteWorkspaceFile", workspaceFilePath)
if err != nil {
return err
}
cipherSigil, err := service.workspaceCipherSigil("workspace.WriteWorkspaceFile")
if err != nil {
return err
}
ciphertext, err := sigil.Transmute([]byte(content), []sigil.Sigil{cipherSigil})
if err != nil {
return core.E("workspace.WriteWorkspaceFile", "failed to encrypt file content", err)
}
return service.medium.Write(filePath, string(ciphertext))
}
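The read/write pair above derives a symmetric key from SHA-256 of the stored private key and runs content through an AEAD via `sigil.Transmute`/`Untransmute`. That wrapper is project-specific; as a stdlib analogue, the same seal-before-write round trip can be sketched with AES-GCM (standing in for the sigil's ChaCha20-Poly1305 — an assumption, not the project's cipher):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// sealContent encrypts plaintext with an AEAD keyed from SHA-256 of
// rawKey, prepending the random nonce to the ciphertext.
func sealContent(rawKey, plaintext []byte) ([]byte, error) {
	derived := sha256.Sum256(rawKey)
	block, err := aes.NewCipher(derived[:])
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return aead.Seal(nonce, nonce, plaintext, nil), nil
}

// openContent reverses sealContent: split off the nonce, then decrypt
// and authenticate in one step.
func openContent(rawKey, sealed []byte) ([]byte, error) {
	derived := sha256.Sum256(rawKey)
	block, err := aes.NewCipher(derived[:])
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ciphertext := sealed[:aead.NonceSize()], sealed[aead.NonceSize():]
	return aead.Open(nil, nonce, ciphertext, nil)
}

func main() {
	sealed, err := sealContent([]byte("private-key"), []byte("ship it"))
	if err != nil {
		panic(err)
	}
	opened, err := openContent([]byte("private-key"), sealed)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(opened)) // ship it
}
```

Tampering with any byte of `sealed` makes `Open` return an error, which is the property that lets the service store ciphertext on any medium without a separate integrity check.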
// Example: commandResult := service.HandleWorkspaceCommand(WorkspaceCommand{Action: WorkspaceCreateAction, Identifier: "alice", Password: "pass123"})
func (service *Service) HandleWorkspaceCommand(command WorkspaceCommand) core.Result {
switch command.Action {
case WorkspaceCreateAction:
passphrase := command.Password
workspaceID, err := service.CreateWorkspace(command.Identifier, passphrase)
if err != nil {
return core.Result{}.New(err)
}
return core.Result{Value: workspaceID, OK: true}
case WorkspaceSwitchAction:
if err := service.SwitchWorkspace(command.WorkspaceID); err != nil {
return core.Result{}.New(err)
}
return core.Result{OK: true}
}
return core.Result{}.New(core.E("workspace.HandleWorkspaceCommand", core.Concat("unsupported action: ", command.Action), fs.ErrInvalid))
}
// Example: result := service.HandleWorkspaceMessage(core.New(), WorkspaceCommand{Action: WorkspaceSwitchAction, WorkspaceID: "f3f0d7"})
func (service *Service) HandleWorkspaceMessage(_ *core.Core, message core.Message) core.Result {
switch command := message.(type) {
case WorkspaceCommand:
return service.HandleWorkspaceCommand(command)
}
return core.Result{}.New(core.E("workspace.HandleWorkspaceMessage", "unsupported message type", fs.ErrInvalid))
}
func resolveWorkspaceHomeDirectory() string {
if home := core.Env("CORE_HOME"); home != "" {
return home
}
if home := core.Env("HOME"); home != "" {
return home
}
return core.Env("DIR_HOME")
}
func joinPathWithinRoot(root string, parts ...string) (string, error) {
candidate := core.Path(append([]string{root}, parts...)...)
separator := core.Env("CORE_PATH_SEPARATOR")
if separator == "" {
separator = core.Env("DS")
}
if separator == "" {
separator = "/"
}
if candidate == root || core.HasPrefix(candidate, root+separator) {
return candidate, nil
}
return "", fs.ErrPermission
}
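joinPathWithinRoot depends on the project's `core.Path` and separator lookups; the same containment guard can be written with `path/filepath` alone. Names below are hypothetical, not the repository's implementation:

```go
package main

import (
	"errors"
	"fmt"
	"path/filepath"
	"strings"
)

// joinWithinRoot joins parts under root and rejects any result that
// escapes root after lexical cleaning (e.g. via ".." segments).
func joinWithinRoot(root string, parts ...string) (string, error) {
	candidate := filepath.Join(append([]string{root}, parts...)...)
	if candidate == root || strings.HasPrefix(candidate, root+string(filepath.Separator)) {
		return candidate, nil
	}
	return "", errors.New("path escapes root")
}

func main() {
	good, _ := joinWithinRoot("/srv/workspaces", "abc", "files", "todo.txt")
	fmt.Println(good)
	if _, err := joinWithinRoot("/srv/workspaces", "../escaped"); err != nil {
		fmt.Println("blocked:", err)
	}
}
```

`filepath.Join` cleans the result, so `"../escaped"` collapses to a sibling of root and fails the prefix check — the behavior the traversal tests below rely on.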
func (service *Service) resolveWorkspaceDirectory(operation, workspaceID string) (string, error) {
if workspaceID == "" {
return "", core.E(operation, "workspace id is required", fs.ErrInvalid)
}
workspaceDirectory, err := joinPathWithinRoot(service.rootPath, workspaceID)
if err != nil {
return "", core.E(operation, "workspace path escapes root", err)
}
if core.PathDir(workspaceDirectory) != service.rootPath {
return "", core.E(operation, core.Concat("invalid workspace id: ", workspaceID), fs.ErrPermission)
}
return workspaceDirectory, nil
}


@@ -1,48 +1,214 @@
package workspace
import (
"path/filepath"
"io/fs"
"testing"
core "dappco.re/go/core"
"forge.lthn.ai/core/go-crypt/crypt/openpgp"
coreio "dappco.re/go/core/io"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestWorkspace(t *testing.T) {
c := core.New()
pgpSvc, err := openpgp.New(nil)
assert.NoError(t, err)
type testKeyPairProvider struct {
privateKey string
err error
}
func (provider testKeyPairProvider) CreateKeyPair(identifier, passphrase string) (string, error) {
if provider.err != nil {
return "", provider.err
}
return provider.privateKey, nil
}
func newWorkspaceService(t *testing.T) (*Service, string) {
t.Helper()
tempHome := t.TempDir()
t.Setenv("HOME", tempHome)
svc, err := New(c, pgpSvc.(cryptProvider))
assert.NoError(t, err)
s := svc.(*Service)
// Test CreateWorkspace
id, err := s.CreateWorkspace("test-user", "pass123")
assert.NoError(t, err)
assert.NotEmpty(t, id)
wsPath := filepath.Join(tempHome, ".core", "workspaces", id)
assert.DirExists(t, wsPath)
assert.DirExists(t, filepath.Join(wsPath, "keys"))
assert.FileExists(t, filepath.Join(wsPath, "keys", "private.key"))
// Test SwitchWorkspace
err = s.SwitchWorkspace(id)
assert.NoError(t, err)
assert.Equal(t, id, s.activeWorkspace)
// Test File operations
filename := "secret.txt"
content := "top secret info"
err = s.WorkspaceFileSet(filename, content)
assert.NoError(t, err)
got, err := s.WorkspaceFileGet(filename)
assert.NoError(t, err)
assert.Equal(t, content, got)
service, err := New(Options{KeyPairProvider: testKeyPairProvider{privateKey: "private-key"}})
require.NoError(t, err)
return service, tempHome
}
func TestService_New_MissingKeyPairProvider_Bad(t *testing.T) {
_, err := New(Options{})
require.Error(t, err)
}
func TestService_New_CustomRootPathAndMedium_Good(t *testing.T) {
medium := coreio.NewMemoryMedium()
rootPath := core.Path(t.TempDir(), "custom", "workspaces")
service, err := New(Options{
KeyPairProvider: testKeyPairProvider{privateKey: "private-key"},
RootPath: rootPath,
Medium: medium,
})
require.NoError(t, err)
assert.Equal(t, rootPath, service.rootPath)
assert.Same(t, medium, service.medium)
workspaceID, err := service.CreateWorkspace("custom-user", "pass123")
require.NoError(t, err)
assert.NotEmpty(t, workspaceID)
expectedWorkspacePath := core.Path(rootPath, workspaceID)
assert.True(t, medium.IsDir(rootPath))
assert.True(t, medium.IsDir(core.Path(expectedWorkspacePath, "keys")))
assert.True(t, medium.Exists(core.Path(expectedWorkspacePath, "keys", "private.key")))
}
func TestService_WorkspaceFileRoundTrip_Good(t *testing.T) {
service, tempHome := newWorkspaceService(t)
workspaceID, err := service.CreateWorkspace("test-user", "pass123")
require.NoError(t, err)
assert.NotEmpty(t, workspaceID)
workspacePath := core.Path(tempHome, ".core", "workspaces", workspaceID)
assert.DirExists(t, workspacePath)
assert.DirExists(t, core.Path(workspacePath, "keys"))
assert.FileExists(t, core.Path(workspacePath, "keys", "private.key"))
err = service.SwitchWorkspace(workspaceID)
require.NoError(t, err)
assert.Equal(t, workspaceID, service.activeWorkspaceID)
err = service.WriteWorkspaceFile("secret.txt", "top secret info")
require.NoError(t, err)
got, err := service.ReadWorkspaceFile("secret.txt")
require.NoError(t, err)
assert.Equal(t, "top secret info", got)
}
func TestService_SwitchWorkspace_TraversalBlocked_Bad(t *testing.T) {
service, tempHome := newWorkspaceService(t)
outside := core.Path(tempHome, ".core", "escaped")
require.NoError(t, service.medium.EnsureDir(outside))
err := service.SwitchWorkspace("../escaped")
require.Error(t, err)
assert.Empty(t, service.activeWorkspaceID)
}
func TestService_WriteWorkspaceFile_TraversalBlocked_Bad(t *testing.T) {
service, tempHome := newWorkspaceService(t)
workspaceID, err := service.CreateWorkspace("test-user", "pass123")
require.NoError(t, err)
require.NoError(t, service.SwitchWorkspace(workspaceID))
keyPath := core.Path(tempHome, ".core", "workspaces", workspaceID, "keys", "private.key")
before, err := service.medium.Read(keyPath)
require.NoError(t, err)
err = service.WriteWorkspaceFile("../keys/private.key", "hijack")
require.Error(t, err)
after, err := service.medium.Read(keyPath)
require.NoError(t, err)
assert.Equal(t, before, after)
_, err = service.ReadWorkspaceFile("../keys/private.key")
require.Error(t, err)
}
func TestService_JoinPathWithinRoot_DefaultSeparator_Good(t *testing.T) {
t.Setenv("CORE_PATH_SEPARATOR", "")
path, err := joinPathWithinRoot("/tmp/workspaces", "../workspaces2")
require.Error(t, err)
assert.ErrorIs(t, err, fs.ErrPermission)
assert.Empty(t, path)
}
func TestService_New_IPCAutoRegistration_Good(t *testing.T) {
tempHome := t.TempDir()
t.Setenv("HOME", tempHome)
c := core.New()
service, err := New(Options{
KeyPairProvider: testKeyPairProvider{privateKey: "private-key"},
Core: c,
})
require.NoError(t, err)
// Create a workspace directly, then switch via the Core IPC bus.
workspaceID, err := service.CreateWorkspace("ipc-bus-user", "pass789")
require.NoError(t, err)
// Dispatching workspace.switch via ACTION must reach the auto-registered handler.
c.ACTION(WorkspaceCommand{
Action: WorkspaceSwitchAction,
WorkspaceID: workspaceID,
})
assert.Equal(t, workspaceID, service.activeWorkspaceID)
}
func TestService_New_IPCCreate_Good(t *testing.T) {
tempHome := t.TempDir()
t.Setenv("HOME", tempHome)
c := core.New()
service, err := New(Options{
KeyPairProvider: testKeyPairProvider{privateKey: "private-key"},
Core: c,
})
require.NoError(t, err)
// workspace.create dispatched via the bus must create the workspace on the medium.
c.ACTION(WorkspaceCommand{
Action: WorkspaceCreateAction,
Identifier: "ipc-create-user",
Password: "pass123",
})
// A duplicate create must fail — proves the first create succeeded.
_, err = service.CreateWorkspace("ipc-create-user", "pass123")
require.Error(t, err)
}
func TestService_New_NoCoreOption_NoRegistration_Good(t *testing.T) {
tempHome := t.TempDir()
t.Setenv("HOME", tempHome)
// Without Core in Options, New must succeed and no IPC handler is registered.
service, err := New(Options{
KeyPairProvider: testKeyPairProvider{privateKey: "private-key"},
})
require.NoError(t, err)
assert.NotNil(t, service)
}
func TestService_HandleWorkspaceMessage_Command_Good(t *testing.T) {
service, _ := newWorkspaceService(t)
create := service.HandleWorkspaceMessage(core.New(), WorkspaceCommand{
Action: WorkspaceCreateAction,
Identifier: "ipc-user",
Password: "pass123",
})
assert.True(t, create.OK)
workspaceID, ok := create.Value.(string)
require.True(t, ok)
require.NotEmpty(t, workspaceID)
switchResult := service.HandleWorkspaceMessage(core.New(), WorkspaceCommand{
Action: WorkspaceSwitchAction,
WorkspaceID: workspaceID,
})
assert.True(t, switchResult.OK)
assert.Equal(t, workspaceID, service.activeWorkspaceID)
unknownAction := service.HandleWorkspaceCommand(WorkspaceCommand{Action: "noop"})
assert.False(t, unknownAction.OK)
unknown := service.HandleWorkspaceMessage(core.New(), "noop")
assert.False(t, unknown.OK)
}