Compare commits


13 commits
dev ... main

Author SHA1 Message Date
b2fc6fe0e8 Merge pull request 'dev' (#14) from dev into main
Some checks failed: CI test, auto-fix, auto-merge (push)
Reviewed-on: #14
2026-03-24 10:08:42 +00:00
Snider
9daab749ec merge: resolve main→dev conflicts — migrate coreerr imports to dappco.re
Some checks failed: CI test, auto-fix, auto-merge (push and pull_request)
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-23 20:49:08 +00:00
7dbb8fcafa Merge pull request '[agent/codex] API contract extraction. Read CLAUDE.md. For every exported ...' (#13) from agent/api-contract-extraction--read-claude-md into dev
Some checks failed: CI test, auto-fix, auto-merge (push and pull_request)
2026-03-23 14:54:23 +00:00
Virgil
e208589493 docs: add API contract report 2026-03-23 14:53:51 +00:00
17e0624daf Merge pull request '[agent/codex] Convention drift check. Read CLAUDE.md. Find: missing SPDX h...' (#12) from agent/convention-drift-check--read-claude-md into dev
Some checks failed: CI test, auto-fix, auto-merge (push)
2026-03-23 14:48:38 +00:00
Virgil
39d5ca8480 docs: add convention drift audit 2026-03-23 14:48:12 +00:00
0ed97567fc Merge pull request '[agent/codex] Security attack vector mapping. Read CLAUDE.md. Map every ex...' (#9) from agent/security-attack-vector-mapping--read-cla into dev
Some checks failed: CI test, auto-fix, auto-merge (push)
2026-03-23 13:21:04 +00:00
Virgil
19c4339229 docs: add security attack vector mapping 2026-03-23 13:20:41 +00:00
f3272f2f2d Merge pull request '[agent/codex] Fix ALL findings from issue #4. Read CLAUDE.md first. Path t...' (#6) from agent/deep-audit-per-issue--4--read-claude-md into dev
Some checks failed: CI test, auto-fix, auto-merge (push)
2026-03-23 07:26:48 +00:00
Virgil
2acfc3d548 fix(io): address audit issue 4 findings
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-23 07:26:09 +00:00
dfea9a6808 Merge pull request '[agent/codex] Full audit per issue #4. Read CLAUDE.md. Report ALL findings...' (#5) from agent/deep-audit-per-issue--4--read-claude-md into dev
Some checks failed: CI test, auto-fix, auto-merge (push)
2026-03-22 18:12:39 +00:00
3194c8e1ed Merge pull request '[agent/claude] Update go.mod require lines from forge.lthn.ai to dappco.re ...' (#3) from agent/update-go-mod-require-lines-from-forge-l into main
Some checks failed: CI test, auto-fix, auto-merge (push)
2026-03-22 01:29:00 +00:00
Snider
e9aebf757b chore(deps): migrate go-log import to dappco.re/go/core/log v0.1.0
Some checks failed: CI test, auto-fix, auto-merge (pull_request)
Update go.mod require lines from forge.lthn.ai to dappco.re paths where
vanity redirects exist. Bump core to v0.5.0 and log to v0.1.0. Borg and
go-crypt remain at forge.lthn.ai until their vanity paths are published.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-22 01:28:41 +00:00
44 changed files with 6231 additions and 8766 deletions

CLAUDE.md

@@ -4,7 +4,7 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
## Project Overview
-`forge.lthn.ai/core/go-io` is the **mandatory I/O abstraction layer** for the CoreGO ecosystem. All data access — files, configs, journals, state — MUST go through the `io.Medium` interface. Never use raw `os`, `filepath`, or `ioutil` calls.
+`dappco.re/go/core/io` is the **mandatory I/O abstraction layer** for the CoreGO ecosystem. All data access — files, configs, journals, state — MUST go through the `io.Medium` interface. Never use raw `os`, `filepath`, or `ioutil` calls.
### The Premise
@@ -34,7 +34,7 @@ GOWORK=off go test -cover ./...
### Core Interface
-`io.Medium` — 17 methods: Read, Write, WriteMode, EnsureDir, IsFile, Delete, DeleteAll, Rename, List, Stat, Open, Create, Append, ReadStream, WriteStream, Exists, IsDir.
+`io.Medium` — 18 methods: Read, Write, EnsureDir, IsFile, FileGet, FileSet, Delete, DeleteAll, Rename, List, Stat, Open, Create, Append, ReadStream, WriteStream, Exists, IsDir.
```go
// Sandboxed to a project directory
medium := io.NewSandboxed("/path/to/project")
```
@@ -60,7 +60,7 @@ io.Copy(s3Medium, "backup.tar", localMedium, "restore/backup.tar")
| `datanode` | Borg DataNode | Thread-safe (RWMutex) in-memory, snapshot/restore via tar |
| `store` | SQLite KV store | Group-namespaced key-value with Go template rendering |
| `workspace` | Core service | Encrypted workspaces, SHA-256 IDs, PGP keypairs |
-| `MemoryMedium` | In-memory map | Testing — no filesystem needed |
+| `MockMedium` | In-memory map | Testing — no filesystem needed |
`store.Medium` maps filesystem paths as `group/key` — first path segment is the group, remainder is the key. `List("")` returns groups as directories.
@@ -103,13 +103,13 @@ Sigils can be created by name via `sigil.NewSigil("hex")`, `sigil.NewSigil("sha2
Standard `io` is always aliased to avoid collision with this package:
```go
goio "io"
-coreerr "forge.lthn.ai/core/go-log"
-coreio "forge.lthn.ai/core/go-io" // when imported from subpackages
+coreerr "dappco.re/go/core/log"
+coreio "dappco.re/go/core/io" // when imported from subpackages
```
### Error Handling
-All errors use `coreerr.E("pkg.Method", "description", wrappedErr)` from `forge.lthn.ai/core/go-log`. Follow this pattern in new code.
+All errors use `coreerr.E("pkg.Method", "description", wrappedErr)` from `dappco.re/go/core/log`. Follow this pattern in new code.
### Compile-Time Interface Checks
@@ -117,10 +117,10 @@ Backend packages use `var _ io.Medium = (*Medium)(nil)` to verify interface comp
## Dependencies
-- `forge.lthn.ai/Snider/Borg` — DataNode container
-- `forge.lthn.ai/core/go-log` — error handling (`coreerr.E()`)
-- `forge.lthn.ai/core/go` — Core DI (workspace service only)
-- `forge.lthn.ai/core/go-crypt` — PGP key generation (workspace service only)
+- `forge.lthn.ai/Snider/Borg` — DataNode container (pending dappco.re migration)
+- `dappco.re/go/core/log` — error handling (`coreerr.E()`)
+- `dappco.re/go/core` — Core DI (workspace service only)
+- `forge.lthn.ai/core/go-crypt` — PGP key generation (workspace service only, pending dappco.re migration)
- `aws-sdk-go-v2` — S3 backend
- `golang.org/x/crypto` — XChaCha20-Poly1305, BLAKE2, SHA-3 (sigil package)
- `modernc.org/sqlite` — SQLite backends (pure Go, no CGO)
@@ -128,8 +128,8 @@ Backend packages use `var _ io.Medium = (*Medium)(nil)` to verify interface comp
### Sentinel Errors
-Sentinel errors (`var NotFoundError`, `var InvalidKeyError`, etc.) use standard `errors.New()` — this is correct Go convention. Only inline error returns in functions should use `coreerr.E()`.
+Sentinel errors (`var ErrNotFound`, `var ErrInvalidKey`, etc.) use standard `errors.New()` — this is correct Go convention. Only inline error returns in functions should use `coreerr.E()`.
## Testing
-Use `io.NewMemoryMedium()` or `io.NewSandboxed(t.TempDir())` in tests — never hit real S3/SQLite unless integration testing. S3 tests use an interface-based mock (`s3.Client`).
+Use `io.MockMedium` or `io.NewSandboxed(t.TempDir())` in tests — never hit real S3/SQLite unless integration testing. S3 tests use an interface-based mock (`s3API`).

CONSUMERS.md (new file, 34 lines)

@@ -0,0 +1,34 @@
# Consumers of go-io
These modules import `dappco.re/go/core/io`:
- agent
- core
- config
- go-ai
- go-ansible
- go-blockchain
- go-build
- go-cache
- go-container
- go-crypt
- go-forge
- go-html
- go-infra
- go-ml
- go-mlx
- go-netops
- go-p2p
- go-process
- go-rag
- go-ratelimit
- go-scm
- gui
- ide
- lint
- mcp
- php
- ts
- LEM
**Breaking change risk: 28 consumers.**


@@ -4,31 +4,31 @@
import (
	"testing"
)
-func BenchmarkMemoryMedium_Write(b *testing.B) {
-	medium := NewMemoryMedium()
+func BenchmarkMockMedium_Write(b *testing.B) {
+	m := NewMockMedium()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
-		_ = medium.Write("test.txt", "some content")
+		_ = m.Write("test.txt", "some content")
	}
}
-func BenchmarkMemoryMedium_Read(b *testing.B) {
-	medium := NewMemoryMedium()
-	_ = medium.Write("test.txt", "some content")
+func BenchmarkMockMedium_Read(b *testing.B) {
+	m := NewMockMedium()
+	_ = m.Write("test.txt", "some content")
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
-		_, _ = medium.Read("test.txt")
+		_, _ = m.Read("test.txt")
	}
}
-func BenchmarkMemoryMedium_List(b *testing.B) {
-	medium := NewMemoryMedium()
-	_ = medium.EnsureDir("dir")
+func BenchmarkMockMedium_List(b *testing.B) {
+	m := NewMockMedium()
+	_ = m.EnsureDir("dir")
	for i := 0; i < 100; i++ {
-		_ = medium.Write("dir/file"+string(rune(i))+".txt", "content")
+		_ = m.Write("dir/file"+string(rune(i))+".txt", "content")
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
-		_, _ = medium.List("dir")
+		_, _ = m.List("dir")
	}
}

client_test.go (new file, 260 lines)

@@ -0,0 +1,260 @@
package io
import (
"testing"
"github.com/stretchr/testify/assert"
)
// --- MockMedium Tests ---
func TestNewMockMedium_Good(t *testing.T) {
m := NewMockMedium()
assert.NotNil(t, m)
assert.NotNil(t, m.Files)
assert.NotNil(t, m.Dirs)
assert.Empty(t, m.Files)
assert.Empty(t, m.Dirs)
}
func TestMockMedium_Read_Good(t *testing.T) {
m := NewMockMedium()
m.Files["test.txt"] = "hello world"
content, err := m.Read("test.txt")
assert.NoError(t, err)
assert.Equal(t, "hello world", content)
}
func TestMockMedium_Read_Bad(t *testing.T) {
m := NewMockMedium()
_, err := m.Read("nonexistent.txt")
assert.Error(t, err)
}
func TestMockMedium_Write_Good(t *testing.T) {
m := NewMockMedium()
err := m.Write("test.txt", "content")
assert.NoError(t, err)
assert.Equal(t, "content", m.Files["test.txt"])
// Overwrite existing file
err = m.Write("test.txt", "new content")
assert.NoError(t, err)
assert.Equal(t, "new content", m.Files["test.txt"])
}
func TestMockMedium_EnsureDir_Good(t *testing.T) {
m := NewMockMedium()
err := m.EnsureDir("/path/to/dir")
assert.NoError(t, err)
assert.True(t, m.Dirs["/path/to/dir"])
}
func TestMockMedium_IsFile_Good(t *testing.T) {
m := NewMockMedium()
m.Files["exists.txt"] = "content"
assert.True(t, m.IsFile("exists.txt"))
assert.False(t, m.IsFile("nonexistent.txt"))
}
func TestMockMedium_FileGet_Good(t *testing.T) {
m := NewMockMedium()
m.Files["test.txt"] = "content"
content, err := m.FileGet("test.txt")
assert.NoError(t, err)
assert.Equal(t, "content", content)
}
func TestMockMedium_FileSet_Good(t *testing.T) {
m := NewMockMedium()
err := m.FileSet("test.txt", "content")
assert.NoError(t, err)
assert.Equal(t, "content", m.Files["test.txt"])
}
func TestMockMedium_Delete_Good(t *testing.T) {
m := NewMockMedium()
m.Files["test.txt"] = "content"
err := m.Delete("test.txt")
assert.NoError(t, err)
assert.False(t, m.IsFile("test.txt"))
}
func TestMockMedium_Delete_Bad_NotFound(t *testing.T) {
m := NewMockMedium()
err := m.Delete("nonexistent.txt")
assert.Error(t, err)
}
func TestMockMedium_Delete_Bad_DirNotEmpty(t *testing.T) {
m := NewMockMedium()
m.Dirs["mydir"] = true
m.Files["mydir/file.txt"] = "content"
err := m.Delete("mydir")
assert.Error(t, err)
}
func TestMockMedium_DeleteAll_Good(t *testing.T) {
m := NewMockMedium()
m.Dirs["mydir"] = true
m.Dirs["mydir/subdir"] = true
m.Files["mydir/file.txt"] = "content"
m.Files["mydir/subdir/nested.txt"] = "nested"
err := m.DeleteAll("mydir")
assert.NoError(t, err)
assert.Empty(t, m.Dirs)
assert.Empty(t, m.Files)
}
func TestMockMedium_Rename_Good(t *testing.T) {
m := NewMockMedium()
m.Files["old.txt"] = "content"
err := m.Rename("old.txt", "new.txt")
assert.NoError(t, err)
assert.False(t, m.IsFile("old.txt"))
assert.True(t, m.IsFile("new.txt"))
assert.Equal(t, "content", m.Files["new.txt"])
}
func TestMockMedium_Rename_Good_Dir(t *testing.T) {
m := NewMockMedium()
m.Dirs["olddir"] = true
m.Files["olddir/file.txt"] = "content"
err := m.Rename("olddir", "newdir")
assert.NoError(t, err)
assert.False(t, m.Dirs["olddir"])
assert.True(t, m.Dirs["newdir"])
assert.Equal(t, "content", m.Files["newdir/file.txt"])
}
func TestMockMedium_List_Good(t *testing.T) {
m := NewMockMedium()
m.Dirs["mydir"] = true
m.Files["mydir/file1.txt"] = "content1"
m.Files["mydir/file2.txt"] = "content2"
m.Dirs["mydir/subdir"] = true
entries, err := m.List("mydir")
assert.NoError(t, err)
assert.Len(t, entries, 3)
names := make(map[string]bool)
for _, e := range entries {
names[e.Name()] = true
}
assert.True(t, names["file1.txt"])
assert.True(t, names["file2.txt"])
assert.True(t, names["subdir"])
}
func TestMockMedium_Stat_Good(t *testing.T) {
m := NewMockMedium()
m.Files["test.txt"] = "hello world"
info, err := m.Stat("test.txt")
assert.NoError(t, err)
assert.Equal(t, "test.txt", info.Name())
assert.Equal(t, int64(11), info.Size())
assert.False(t, info.IsDir())
}
func TestMockMedium_Stat_Good_Dir(t *testing.T) {
m := NewMockMedium()
m.Dirs["mydir"] = true
info, err := m.Stat("mydir")
assert.NoError(t, err)
assert.Equal(t, "mydir", info.Name())
assert.True(t, info.IsDir())
}
func TestMockMedium_Exists_Good(t *testing.T) {
m := NewMockMedium()
m.Files["file.txt"] = "content"
m.Dirs["mydir"] = true
assert.True(t, m.Exists("file.txt"))
assert.True(t, m.Exists("mydir"))
assert.False(t, m.Exists("nonexistent"))
}
func TestMockMedium_IsDir_Good(t *testing.T) {
m := NewMockMedium()
m.Files["file.txt"] = "content"
m.Dirs["mydir"] = true
assert.False(t, m.IsDir("file.txt"))
assert.True(t, m.IsDir("mydir"))
assert.False(t, m.IsDir("nonexistent"))
}
// --- Wrapper Function Tests ---
func TestRead_Good(t *testing.T) {
m := NewMockMedium()
m.Files["test.txt"] = "hello"
content, err := Read(m, "test.txt")
assert.NoError(t, err)
assert.Equal(t, "hello", content)
}
func TestWrite_Good(t *testing.T) {
m := NewMockMedium()
err := Write(m, "test.txt", "hello")
assert.NoError(t, err)
assert.Equal(t, "hello", m.Files["test.txt"])
}
func TestEnsureDir_Good(t *testing.T) {
m := NewMockMedium()
err := EnsureDir(m, "/my/dir")
assert.NoError(t, err)
assert.True(t, m.Dirs["/my/dir"])
}
func TestIsFile_Good(t *testing.T) {
m := NewMockMedium()
m.Files["exists.txt"] = "content"
assert.True(t, IsFile(m, "exists.txt"))
assert.False(t, IsFile(m, "nonexistent.txt"))
}
func TestCopy_Good(t *testing.T) {
source := NewMockMedium()
dest := NewMockMedium()
source.Files["test.txt"] = "hello"
err := Copy(source, "test.txt", dest, "test.txt")
assert.NoError(t, err)
assert.Equal(t, "hello", dest.Files["test.txt"])
// Copy to different path
source.Files["original.txt"] = "content"
err = Copy(source, "original.txt", dest, "copied.txt")
assert.NoError(t, err)
assert.Equal(t, "content", dest.Files["copied.txt"])
}
func TestCopy_Bad(t *testing.T) {
source := NewMockMedium()
dest := NewMockMedium()
err := Copy(source, "nonexistent.txt", dest, "dest.txt")
assert.Error(t, err)
}
// --- Local Global Tests ---
func TestLocalGlobal_Good(t *testing.T) {
// io.Local should be initialised by init()
assert.NotNil(t, Local, "io.Local should be initialised")
// Should be able to use it as a Medium
var m = Local
assert.NotNil(t, m)
}

datanode/client.go (new file, 630 lines)

@@ -0,0 +1,630 @@
// Package datanode provides an in-memory io.Medium backed by Borg's DataNode.
//
// DataNode is an in-memory fs.FS that serializes to tar. Wrapping it as a
// Medium lets any code that works with io.Medium transparently operate on
// an in-memory filesystem that can be snapshotted, shipped as a crash report,
// or wrapped in a TIM container for runc execution.
package datanode
import (
"cmp"
goio "io"
"io/fs"
"os"
"path"
"slices"
"strings"
"sync"
"time"
coreerr "dappco.re/go/core/log"
borgdatanode "forge.lthn.ai/Snider/Borg/pkg/datanode"
)
var (
dataNodeWalkDir = func(fsys fs.FS, root string, fn fs.WalkDirFunc) error {
return fs.WalkDir(fsys, root, fn)
}
dataNodeOpen = func(dn *borgdatanode.DataNode, name string) (fs.File, error) {
return dn.Open(name)
}
dataNodeReadAll = func(r goio.Reader) ([]byte, error) {
return goio.ReadAll(r)
}
)
// Medium is an in-memory storage backend backed by a Borg DataNode.
// All paths are relative (no leading slash). Thread-safe via RWMutex.
type Medium struct {
dn *borgdatanode.DataNode
dirs map[string]bool // explicit directory tracking
mu sync.RWMutex
}
// New creates a new empty DataNode Medium.
func New() *Medium {
return &Medium{
dn: borgdatanode.New(),
dirs: make(map[string]bool),
}
}
// FromTar creates a Medium from a tarball, restoring all files.
func FromTar(data []byte) (*Medium, error) {
dn, err := borgdatanode.FromTar(data)
if err != nil {
return nil, coreerr.E("datanode.FromTar", "failed to restore", err)
}
return &Medium{
dn: dn,
dirs: make(map[string]bool),
}, nil
}
// Snapshot serializes the entire filesystem to a tarball.
// Use this for crash reports, workspace packaging, or TIM creation.
func (m *Medium) Snapshot() ([]byte, error) {
m.mu.RLock()
defer m.mu.RUnlock()
data, err := m.dn.ToTar()
if err != nil {
return nil, coreerr.E("datanode.Snapshot", "tar failed", err)
}
return data, nil
}
// Restore replaces the filesystem contents from a tarball.
func (m *Medium) Restore(data []byte) error {
dn, err := borgdatanode.FromTar(data)
if err != nil {
return coreerr.E("datanode.Restore", "tar failed", err)
}
m.mu.Lock()
defer m.mu.Unlock()
m.dn = dn
m.dirs = make(map[string]bool)
return nil
}
// DataNode returns the underlying Borg DataNode.
// Use this to wrap the filesystem in a TIM container.
func (m *Medium) DataNode() *borgdatanode.DataNode {
m.mu.RLock()
defer m.mu.RUnlock()
return m.dn
}
// clean normalises a path: strips leading slash, cleans traversal.
func clean(p string) string {
p = strings.TrimPrefix(p, "/")
p = path.Clean(p)
if p == "." {
return ""
}
return p
}
// --- io.Medium interface ---
func (m *Medium) Read(p string) (string, error) {
m.mu.RLock()
defer m.mu.RUnlock()
p = clean(p)
f, err := m.dn.Open(p)
if err != nil {
return "", coreerr.E("datanode.Read", "not found: "+p, os.ErrNotExist)
}
defer f.Close()
info, err := f.Stat()
if err != nil {
return "", coreerr.E("datanode.Read", "stat failed: "+p, err)
}
if info.IsDir() {
return "", coreerr.E("datanode.Read", "is a directory: "+p, os.ErrInvalid)
}
data, err := goio.ReadAll(f)
if err != nil {
return "", coreerr.E("datanode.Read", "read failed: "+p, err)
}
return string(data), nil
}
func (m *Medium) Write(p, content string) error {
m.mu.Lock()
defer m.mu.Unlock()
p = clean(p)
if p == "" {
return coreerr.E("datanode.Write", "empty path", os.ErrInvalid)
}
m.dn.AddData(p, []byte(content))
// ensure parent dirs are tracked
m.ensureDirsLocked(path.Dir(p))
return nil
}
// WriteMode writes content and ignores mode: DataNode does not track
// file permissions, so this is equivalent to Write.
func (m *Medium) WriteMode(p, content string, mode os.FileMode) error {
	return m.Write(p, content)
}
func (m *Medium) EnsureDir(p string) error {
m.mu.Lock()
defer m.mu.Unlock()
p = clean(p)
if p == "" {
return nil
}
m.ensureDirsLocked(p)
return nil
}
// ensureDirsLocked marks a directory and all ancestors as existing.
// Caller must hold m.mu.
func (m *Medium) ensureDirsLocked(p string) {
for p != "" && p != "." {
m.dirs[p] = true
p = path.Dir(p)
if p == "." {
break
}
}
}
func (m *Medium) IsFile(p string) bool {
m.mu.RLock()
defer m.mu.RUnlock()
p = clean(p)
info, err := m.dn.Stat(p)
return err == nil && !info.IsDir()
}
func (m *Medium) FileGet(p string) (string, error) {
return m.Read(p)
}
func (m *Medium) FileSet(p, content string) error {
return m.Write(p, content)
}
func (m *Medium) Delete(p string) error {
m.mu.Lock()
defer m.mu.Unlock()
p = clean(p)
if p == "" {
return coreerr.E("datanode.Delete", "cannot delete root", os.ErrPermission)
}
// Check if it's a file in the DataNode
info, err := m.dn.Stat(p)
if err != nil {
// Check explicit dirs
if m.dirs[p] {
// Check if dir is empty
hasChildren, err := m.hasPrefixLocked(p + "/")
if err != nil {
return coreerr.E("datanode.Delete", "failed to inspect directory: "+p, err)
}
if hasChildren {
return coreerr.E("datanode.Delete", "directory not empty: "+p, os.ErrExist)
}
delete(m.dirs, p)
return nil
}
return coreerr.E("datanode.Delete", "not found: "+p, os.ErrNotExist)
}
if info.IsDir() {
hasChildren, err := m.hasPrefixLocked(p + "/")
if err != nil {
return coreerr.E("datanode.Delete", "failed to inspect directory: "+p, err)
}
if hasChildren {
return coreerr.E("datanode.Delete", "directory not empty: "+p, os.ErrExist)
}
delete(m.dirs, p)
return nil
}
// Remove the file by creating a new DataNode without it
if err := m.removeFileLocked(p); err != nil {
return coreerr.E("datanode.Delete", "failed to delete file: "+p, err)
}
return nil
}
func (m *Medium) DeleteAll(p string) error {
m.mu.Lock()
defer m.mu.Unlock()
p = clean(p)
if p == "" {
return coreerr.E("datanode.DeleteAll", "cannot delete root", os.ErrPermission)
}
prefix := p + "/"
found := false
// Check if p itself is a file
info, err := m.dn.Stat(p)
if err == nil && !info.IsDir() {
if err := m.removeFileLocked(p); err != nil {
return coreerr.E("datanode.DeleteAll", "failed to delete file: "+p, err)
}
found = true
}
// Remove all files under prefix
entries, err := m.collectAllLocked()
if err != nil {
return coreerr.E("datanode.DeleteAll", "failed to inspect tree: "+p, err)
}
for _, name := range entries {
if name == p || strings.HasPrefix(name, prefix) {
if err := m.removeFileLocked(name); err != nil {
return coreerr.E("datanode.DeleteAll", "failed to delete file: "+name, err)
}
found = true
}
}
// Remove explicit dirs under prefix
for d := range m.dirs {
if d == p || strings.HasPrefix(d, prefix) {
delete(m.dirs, d)
found = true
}
}
if !found {
return coreerr.E("datanode.DeleteAll", "not found: "+p, os.ErrNotExist)
}
return nil
}
func (m *Medium) Rename(oldPath, newPath string) error {
m.mu.Lock()
defer m.mu.Unlock()
oldPath = clean(oldPath)
newPath = clean(newPath)
// Check if source is a file
info, err := m.dn.Stat(oldPath)
if err != nil {
return coreerr.E("datanode.Rename", "not found: "+oldPath, os.ErrNotExist)
}
if !info.IsDir() {
// Read old, write new, delete old
data, err := m.readFileLocked(oldPath)
if err != nil {
return coreerr.E("datanode.Rename", "failed to read source file: "+oldPath, err)
}
m.dn.AddData(newPath, data)
m.ensureDirsLocked(path.Dir(newPath))
if err := m.removeFileLocked(oldPath); err != nil {
return coreerr.E("datanode.Rename", "failed to remove source file: "+oldPath, err)
}
return nil
}
// Directory rename: move all files under oldPath to newPath
oldPrefix := oldPath + "/"
newPrefix := newPath + "/"
entries, err := m.collectAllLocked()
if err != nil {
return coreerr.E("datanode.Rename", "failed to inspect tree: "+oldPath, err)
}
for _, name := range entries {
if strings.HasPrefix(name, oldPrefix) {
newName := newPrefix + strings.TrimPrefix(name, oldPrefix)
data, err := m.readFileLocked(name)
if err != nil {
return coreerr.E("datanode.Rename", "failed to read source file: "+name, err)
}
m.dn.AddData(newName, data)
if err := m.removeFileLocked(name); err != nil {
return coreerr.E("datanode.Rename", "failed to remove source file: "+name, err)
}
}
}
// Move explicit dirs
dirsToMove := make(map[string]string)
for d := range m.dirs {
if d == oldPath || strings.HasPrefix(d, oldPrefix) {
newD := newPath + strings.TrimPrefix(d, oldPath)
dirsToMove[d] = newD
}
}
for old, nw := range dirsToMove {
delete(m.dirs, old)
m.dirs[nw] = true
}
return nil
}
func (m *Medium) List(p string) ([]fs.DirEntry, error) {
m.mu.RLock()
defer m.mu.RUnlock()
p = clean(p)
entries, err := m.dn.ReadDir(p)
if err != nil {
// Check explicit dirs
if p == "" || m.dirs[p] {
return []fs.DirEntry{}, nil
}
return nil, coreerr.E("datanode.List", "not found: "+p, os.ErrNotExist)
}
// Also include explicit subdirectories not discovered via files
prefix := p
if prefix != "" {
prefix += "/"
}
seen := make(map[string]bool)
for _, e := range entries {
seen[e.Name()] = true
}
for d := range m.dirs {
if !strings.HasPrefix(d, prefix) {
continue
}
rest := strings.TrimPrefix(d, prefix)
if rest == "" {
continue
}
first := strings.SplitN(rest, "/", 2)[0]
if !seen[first] {
seen[first] = true
entries = append(entries, &dirEntry{name: first})
}
}
slices.SortFunc(entries, func(a, b fs.DirEntry) int {
return cmp.Compare(a.Name(), b.Name())
})
return entries, nil
}
func (m *Medium) Stat(p string) (fs.FileInfo, error) {
m.mu.RLock()
defer m.mu.RUnlock()
p = clean(p)
if p == "" {
return &fileInfo{name: ".", isDir: true, mode: fs.ModeDir | 0755}, nil
}
info, err := m.dn.Stat(p)
if err == nil {
return info, nil
}
if m.dirs[p] {
return &fileInfo{name: path.Base(p), isDir: true, mode: fs.ModeDir | 0755}, nil
}
return nil, coreerr.E("datanode.Stat", "not found: "+p, os.ErrNotExist)
}
func (m *Medium) Open(p string) (fs.File, error) {
m.mu.RLock()
defer m.mu.RUnlock()
p = clean(p)
return m.dn.Open(p)
}
func (m *Medium) Create(p string) (goio.WriteCloser, error) {
p = clean(p)
if p == "" {
return nil, coreerr.E("datanode.Create", "empty path", os.ErrInvalid)
}
return &writeCloser{m: m, path: p}, nil
}
func (m *Medium) Append(p string) (goio.WriteCloser, error) {
	p = clean(p)
	if p == "" {
		return nil, coreerr.E("datanode.Append", "empty path", os.ErrInvalid)
	}
	// Read existing content. Stat directly rather than calling IsFile,
	// which would re-acquire m.mu while we already hold the read lock —
	// a recursive RLock can deadlock if a writer is queued in between.
	var existing []byte
	m.mu.RLock()
	if info, err := m.dn.Stat(p); err == nil && !info.IsDir() {
		data, err := m.readFileLocked(p)
		if err != nil {
			m.mu.RUnlock()
			return nil, coreerr.E("datanode.Append", "failed to read existing content: "+p, err)
		}
		existing = data
	}
	m.mu.RUnlock()
	return &writeCloser{m: m, path: p, buf: existing}, nil
}
func (m *Medium) ReadStream(p string) (goio.ReadCloser, error) {
m.mu.RLock()
defer m.mu.RUnlock()
p = clean(p)
f, err := m.dn.Open(p)
if err != nil {
return nil, coreerr.E("datanode.ReadStream", "not found: "+p, os.ErrNotExist)
}
return f.(goio.ReadCloser), nil
}
func (m *Medium) WriteStream(p string) (goio.WriteCloser, error) {
return m.Create(p)
}
func (m *Medium) Exists(p string) bool {
m.mu.RLock()
defer m.mu.RUnlock()
p = clean(p)
if p == "" {
return true // root always exists
}
_, err := m.dn.Stat(p)
if err == nil {
return true
}
return m.dirs[p]
}
func (m *Medium) IsDir(p string) bool {
m.mu.RLock()
defer m.mu.RUnlock()
p = clean(p)
if p == "" {
return true
}
info, err := m.dn.Stat(p)
if err == nil {
return info.IsDir()
}
return m.dirs[p]
}
// --- internal helpers ---
// hasPrefixLocked checks if any file path starts with prefix. Caller holds lock.
func (m *Medium) hasPrefixLocked(prefix string) (bool, error) {
entries, err := m.collectAllLocked()
if err != nil {
return false, err
}
for _, name := range entries {
if strings.HasPrefix(name, prefix) {
return true, nil
}
}
for d := range m.dirs {
if strings.HasPrefix(d, prefix) {
return true, nil
}
}
return false, nil
}
// collectAllLocked returns all file paths in the DataNode. Caller holds lock.
func (m *Medium) collectAllLocked() ([]string, error) {
var names []string
err := dataNodeWalkDir(m.dn, ".", func(p string, d fs.DirEntry, err error) error {
if err != nil {
return err
}
if !d.IsDir() {
names = append(names, p)
}
return nil
})
return names, err
}
func (m *Medium) readFileLocked(name string) ([]byte, error) {
f, err := dataNodeOpen(m.dn, name)
if err != nil {
return nil, err
}
data, readErr := dataNodeReadAll(f)
closeErr := f.Close()
if readErr != nil {
return nil, readErr
}
if closeErr != nil {
return nil, closeErr
}
return data, nil
}
// removeFileLocked removes a single file by rebuilding the DataNode.
// This is necessary because Borg's DataNode doesn't expose a Remove method.
// Caller must hold m.mu write lock.
func (m *Medium) removeFileLocked(target string) error {
entries, err := m.collectAllLocked()
if err != nil {
return err
}
newDN := borgdatanode.New()
for _, name := range entries {
if name == target {
continue
}
data, err := m.readFileLocked(name)
if err != nil {
return err
}
newDN.AddData(name, data)
}
m.dn = newDN
return nil
}
// --- writeCloser buffers writes and flushes to DataNode on Close ---
type writeCloser struct {
m *Medium
path string
buf []byte
}
func (w *writeCloser) Write(p []byte) (int, error) {
w.buf = append(w.buf, p...)
return len(p), nil
}
func (w *writeCloser) Close() error {
w.m.mu.Lock()
defer w.m.mu.Unlock()
w.m.dn.AddData(w.path, w.buf)
w.m.ensureDirsLocked(path.Dir(w.path))
return nil
}
// --- fs types for explicit directories ---
type dirEntry struct {
name string
}
func (d *dirEntry) Name() string { return d.name }
func (d *dirEntry) IsDir() bool { return true }
func (d *dirEntry) Type() fs.FileMode { return fs.ModeDir }
func (d *dirEntry) Info() (fs.FileInfo, error) {
return &fileInfo{name: d.name, isDir: true, mode: fs.ModeDir | 0755}, nil
}
type fileInfo struct {
name string
size int64
mode fs.FileMode
modTime time.Time
isDir bool
}
func (fi *fileInfo) Name() string { return fi.name }
func (fi *fileInfo) Size() int64 { return fi.size }
func (fi *fileInfo) Mode() fs.FileMode { return fi.mode }
func (fi *fileInfo) ModTime() time.Time { return fi.modTime }
func (fi *fileInfo) IsDir() bool { return fi.isDir }
func (fi *fileInfo) Sys() any { return nil }

datanode/client_test.go (new file, 440 lines)

@@ -0,0 +1,440 @@
package datanode
import (
"errors"
"io"
"io/fs"
"testing"
coreio "dappco.re/go/core/io"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Compile-time check: Medium implements io.Medium.
var _ coreio.Medium = (*Medium)(nil)
func TestReadWrite_Good(t *testing.T) {
m := New()
err := m.Write("hello.txt", "world")
require.NoError(t, err)
got, err := m.Read("hello.txt")
require.NoError(t, err)
assert.Equal(t, "world", got)
}
func TestReadWrite_Bad(t *testing.T) {
m := New()
_, err := m.Read("missing.txt")
assert.Error(t, err)
err = m.Write("", "content")
assert.Error(t, err)
}
func TestNestedPaths_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("a/b/c/deep.txt", "deep"))
got, err := m.Read("a/b/c/deep.txt")
require.NoError(t, err)
assert.Equal(t, "deep", got)
assert.True(t, m.IsDir("a"))
assert.True(t, m.IsDir("a/b"))
assert.True(t, m.IsDir("a/b/c"))
}
func TestLeadingSlash_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("/leading/file.txt", "stripped"))
got, err := m.Read("leading/file.txt")
require.NoError(t, err)
assert.Equal(t, "stripped", got)
got, err = m.Read("/leading/file.txt")
require.NoError(t, err)
assert.Equal(t, "stripped", got)
}
func TestIsFile_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("file.go", "package main"))
assert.True(t, m.IsFile("file.go"))
assert.False(t, m.IsFile("missing.go"))
assert.False(t, m.IsFile("")) // empty path
}
func TestEnsureDir_Good(t *testing.T) {
m := New()
require.NoError(t, m.EnsureDir("foo/bar/baz"))
assert.True(t, m.IsDir("foo"))
assert.True(t, m.IsDir("foo/bar"))
assert.True(t, m.IsDir("foo/bar/baz"))
assert.True(t, m.Exists("foo/bar/baz"))
}
func TestDelete_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("delete-me.txt", "bye"))
assert.True(t, m.Exists("delete-me.txt"))
require.NoError(t, m.Delete("delete-me.txt"))
assert.False(t, m.Exists("delete-me.txt"))
}
func TestDelete_Bad(t *testing.T) {
m := New()
// Delete non-existent
assert.Error(t, m.Delete("ghost.txt"))
// Delete non-empty dir
require.NoError(t, m.Write("dir/file.txt", "content"))
assert.Error(t, m.Delete("dir"))
}
func TestDelete_Bad_DirectoryInspectionFailure(t *testing.T) {
m := New()
require.NoError(t, m.Write("dir/file.txt", "content"))
original := dataNodeWalkDir
dataNodeWalkDir = func(_ fs.FS, _ string, _ fs.WalkDirFunc) error {
return errors.New("walk failed")
}
t.Cleanup(func() {
dataNodeWalkDir = original
})
err := m.Delete("dir")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to inspect directory")
}
func TestDeleteAll_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("tree/a.txt", "a"))
require.NoError(t, m.Write("tree/sub/b.txt", "b"))
require.NoError(t, m.Write("keep.txt", "keep"))
require.NoError(t, m.DeleteAll("tree"))
assert.False(t, m.Exists("tree/a.txt"))
assert.False(t, m.Exists("tree/sub/b.txt"))
assert.True(t, m.Exists("keep.txt"))
}
func TestDeleteAll_Bad_WalkFailure(t *testing.T) {
m := New()
require.NoError(t, m.Write("tree/a.txt", "a"))
original := dataNodeWalkDir
dataNodeWalkDir = func(_ fs.FS, _ string, _ fs.WalkDirFunc) error {
return errors.New("walk failed")
}
t.Cleanup(func() {
dataNodeWalkDir = original
})
err := m.DeleteAll("tree")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to inspect tree")
}
func TestDelete_Bad_RemoveFailure(t *testing.T) {
m := New()
require.NoError(t, m.Write("keep.txt", "keep"))
require.NoError(t, m.Write("bad.txt", "bad"))
original := dataNodeReadAll
dataNodeReadAll = func(_ io.Reader) ([]byte, error) {
return nil, errors.New("read failed")
}
t.Cleanup(func() {
dataNodeReadAll = original
})
err := m.Delete("bad.txt")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to delete file")
}
func TestRename_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("old.txt", "content"))
require.NoError(t, m.Rename("old.txt", "new.txt"))
assert.False(t, m.Exists("old.txt"))
got, err := m.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "content", got)
}
func TestRenameDir_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("src/a.go", "package a"))
require.NoError(t, m.Write("src/sub/b.go", "package b"))
require.NoError(t, m.Rename("src", "dst"))
assert.False(t, m.Exists("src/a.go"))
got, err := m.Read("dst/a.go")
require.NoError(t, err)
assert.Equal(t, "package a", got)
got, err = m.Read("dst/sub/b.go")
require.NoError(t, err)
assert.Equal(t, "package b", got)
}
func TestRenameDir_Bad_ReadFailure(t *testing.T) {
m := New()
require.NoError(t, m.Write("src/a.go", "package a"))
original := dataNodeReadAll
dataNodeReadAll = func(_ io.Reader) ([]byte, error) {
return nil, errors.New("read failed")
}
t.Cleanup(func() {
dataNodeReadAll = original
})
err := m.Rename("src", "dst")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to read source file")
}
func TestList_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("root.txt", "r"))
require.NoError(t, m.Write("pkg/a.go", "a"))
require.NoError(t, m.Write("pkg/b.go", "b"))
require.NoError(t, m.Write("pkg/sub/c.go", "c"))
entries, err := m.List("")
require.NoError(t, err)
names := make([]string, len(entries))
for i, e := range entries {
names[i] = e.Name()
}
assert.Contains(t, names, "root.txt")
assert.Contains(t, names, "pkg")
entries, err = m.List("pkg")
require.NoError(t, err)
names = make([]string, len(entries))
for i, e := range entries {
names[i] = e.Name()
}
assert.Contains(t, names, "a.go")
assert.Contains(t, names, "b.go")
assert.Contains(t, names, "sub")
}
func TestStat_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("stat.txt", "hello"))
info, err := m.Stat("stat.txt")
require.NoError(t, err)
assert.Equal(t, int64(5), info.Size())
assert.False(t, info.IsDir())
// Root stat
info, err = m.Stat("")
require.NoError(t, err)
assert.True(t, info.IsDir())
}
func TestOpen_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("open.txt", "opened"))
f, err := m.Open("open.txt")
require.NoError(t, err)
defer f.Close()
data, err := io.ReadAll(f)
require.NoError(t, err)
assert.Equal(t, "opened", string(data))
}
func TestCreateAppend_Good(t *testing.T) {
m := New()
// Create
w, err := m.Create("new.txt")
require.NoError(t, err)
_, _ = w.Write([]byte("hello"))
require.NoError(t, w.Close())
got, err := m.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "hello", got)
// Append
w, err = m.Append("new.txt")
require.NoError(t, err)
_, _ = w.Write([]byte(" world"))
require.NoError(t, w.Close())
got, err = m.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "hello world", got)
}
func TestAppend_Bad_ReadFailure(t *testing.T) {
m := New()
require.NoError(t, m.Write("new.txt", "hello"))
original := dataNodeReadAll
dataNodeReadAll = func(_ io.Reader) ([]byte, error) {
return nil, errors.New("read failed")
}
t.Cleanup(func() {
dataNodeReadAll = original
})
_, err := m.Append("new.txt")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to read existing content")
}
func TestStreams_Good(t *testing.T) {
m := New()
// WriteStream
ws, err := m.WriteStream("stream.txt")
require.NoError(t, err)
_, _ = ws.Write([]byte("streamed"))
require.NoError(t, ws.Close())
// ReadStream
rs, err := m.ReadStream("stream.txt")
require.NoError(t, err)
data, err := io.ReadAll(rs)
require.NoError(t, err)
assert.Equal(t, "streamed", string(data))
require.NoError(t, rs.Close())
}
func TestFileGetFileSet_Good(t *testing.T) {
m := New()
require.NoError(t, m.FileSet("alias.txt", "via set"))
got, err := m.FileGet("alias.txt")
require.NoError(t, err)
assert.Equal(t, "via set", got)
}
func TestSnapshotRestore_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("a.txt", "alpha"))
require.NoError(t, m.Write("b/c.txt", "charlie"))
snap, err := m.Snapshot()
require.NoError(t, err)
assert.NotEmpty(t, snap)
// Restore into a new Medium
m2, err := FromTar(snap)
require.NoError(t, err)
got, err := m2.Read("a.txt")
require.NoError(t, err)
assert.Equal(t, "alpha", got)
got, err = m2.Read("b/c.txt")
require.NoError(t, err)
assert.Equal(t, "charlie", got)
}
func TestRestore_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("original.txt", "before"))
snap, err := m.Snapshot()
require.NoError(t, err)
// Modify
require.NoError(t, m.Write("original.txt", "after"))
require.NoError(t, m.Write("extra.txt", "extra"))
// Restore to snapshot
require.NoError(t, m.Restore(snap))
got, err := m.Read("original.txt")
require.NoError(t, err)
assert.Equal(t, "before", got)
assert.False(t, m.Exists("extra.txt"))
}
func TestDataNode_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("test.txt", "borg"))
dn := m.DataNode()
assert.NotNil(t, dn)
// Verify we can use the DataNode directly
f, err := dn.Open("test.txt")
require.NoError(t, err)
defer f.Close()
data, err := io.ReadAll(f)
require.NoError(t, err)
assert.Equal(t, "borg", string(data))
}
func TestOverwrite_Good(t *testing.T) {
m := New()
require.NoError(t, m.Write("file.txt", "v1"))
require.NoError(t, m.Write("file.txt", "v2"))
got, err := m.Read("file.txt")
require.NoError(t, err)
assert.Equal(t, "v2", got)
}
func TestExists_Good(t *testing.T) {
m := New()
assert.True(t, m.Exists("")) // root
assert.False(t, m.Exists("x"))
require.NoError(t, m.Write("x", "y"))
assert.True(t, m.Exists("x"))
}
func TestReadDir_Ugly(t *testing.T) {
m := New()
// Reading a regular file path (not a directory) should succeed without error
require.NoError(t, m.Write("file.txt", "content"))
_, err := m.Read("file.txt")
require.NoError(t, err)
}


@@ -1,597 +0,0 @@
// Example: medium := datanode.New()
// Example: _ = medium.Write("jobs/run.log", "started")
// Example: snapshot, _ := medium.Snapshot()
// Example: restored, _ := datanode.FromTar(snapshot)
package datanode
import (
"cmp"
goio "io"
"io/fs"
"path"
"slices"
"sync"
"time"
core "dappco.re/go/core"
borgdatanode "forge.lthn.ai/Snider/Borg/pkg/datanode"
)
var (
dataNodeWalkDir = func(fileSystem fs.FS, root string, callback fs.WalkDirFunc) error {
return fs.WalkDir(fileSystem, root, callback)
}
dataNodeOpen = func(dataNode *borgdatanode.DataNode, filePath string) (fs.File, error) {
return dataNode.Open(filePath)
}
dataNodeReadAll = func(reader goio.Reader) ([]byte, error) {
return goio.ReadAll(reader)
}
)
// Example: medium := datanode.New()
// Example: _ = medium.Write("jobs/run.log", "started")
// Example: snapshot, _ := medium.Snapshot()
type Medium struct {
dataNode *borgdatanode.DataNode
directorySet map[string]bool
lock sync.RWMutex
}
// Example: medium := datanode.New()
// Example: _ = medium.Write("jobs/run.log", "started")
func New() *Medium {
return &Medium{
dataNode: borgdatanode.New(),
directorySet: make(map[string]bool),
}
}
// Example: sourceMedium := datanode.New()
// Example: snapshot, _ := sourceMedium.Snapshot()
// Example: restored, _ := datanode.FromTar(snapshot)
func FromTar(data []byte) (*Medium, error) {
dataNode, err := borgdatanode.FromTar(data)
if err != nil {
return nil, core.E("datanode.FromTar", "failed to restore", err)
}
return &Medium{
dataNode: dataNode,
directorySet: make(map[string]bool),
}, nil
}
// Example: snapshot, _ := medium.Snapshot()
func (medium *Medium) Snapshot() ([]byte, error) {
medium.lock.RLock()
defer medium.lock.RUnlock()
data, err := medium.dataNode.ToTar()
if err != nil {
return nil, core.E("datanode.Snapshot", "tar failed", err)
}
return data, nil
}
// Example: _ = medium.Restore(snapshot)
func (medium *Medium) Restore(data []byte) error {
dataNode, err := borgdatanode.FromTar(data)
if err != nil {
return core.E("datanode.Restore", "tar failed", err)
}
medium.lock.Lock()
defer medium.lock.Unlock()
medium.dataNode = dataNode
medium.directorySet = make(map[string]bool)
return nil
}
// Example: dataNode := medium.DataNode()
func (medium *Medium) DataNode() *borgdatanode.DataNode {
medium.lock.RLock()
defer medium.lock.RUnlock()
return medium.dataNode
}
func normaliseEntryPath(filePath string) string {
filePath = core.TrimPrefix(filePath, "/")
filePath = path.Clean(filePath)
if filePath == "." {
return ""
}
return filePath
}
func (medium *Medium) Read(filePath string) (string, error) {
medium.lock.RLock()
defer medium.lock.RUnlock()
filePath = normaliseEntryPath(filePath)
file, err := medium.dataNode.Open(filePath)
if err != nil {
return "", core.E("datanode.Read", core.Concat("not found: ", filePath), fs.ErrNotExist)
}
defer file.Close()
info, err := file.Stat()
if err != nil {
return "", core.E("datanode.Read", core.Concat("stat failed: ", filePath), err)
}
if info.IsDir() {
return "", core.E("datanode.Read", core.Concat("is a directory: ", filePath), fs.ErrInvalid)
}
data, err := goio.ReadAll(file)
if err != nil {
return "", core.E("datanode.Read", core.Concat("read failed: ", filePath), err)
}
return string(data), nil
}
func (medium *Medium) Write(filePath, content string) error {
medium.lock.Lock()
defer medium.lock.Unlock()
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return core.E("datanode.Write", "empty path", fs.ErrInvalid)
}
medium.dataNode.AddData(filePath, []byte(content))
medium.ensureDirsLocked(path.Dir(filePath))
return nil
}
// WriteMode writes content like Write; the mode argument is ignored because the in-memory store does not track permissions.
func (medium *Medium) WriteMode(filePath, content string, mode fs.FileMode) error {
	return medium.Write(filePath, content)
}
func (medium *Medium) EnsureDir(filePath string) error {
medium.lock.Lock()
defer medium.lock.Unlock()
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return nil
}
medium.ensureDirsLocked(filePath)
return nil
}
func (medium *Medium) ensureDirsLocked(directoryPath string) {
for directoryPath != "" && directoryPath != "." {
medium.directorySet[directoryPath] = true
directoryPath = path.Dir(directoryPath)
if directoryPath == "." {
break
}
}
}
func (medium *Medium) IsFile(filePath string) bool {
medium.lock.RLock()
defer medium.lock.RUnlock()
filePath = normaliseEntryPath(filePath)
info, err := medium.dataNode.Stat(filePath)
return err == nil && !info.IsDir()
}
func (medium *Medium) Delete(filePath string) error {
medium.lock.Lock()
defer medium.lock.Unlock()
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return core.E("datanode.Delete", "cannot delete root", fs.ErrPermission)
}
info, err := medium.dataNode.Stat(filePath)
if err != nil {
if medium.directorySet[filePath] {
hasChildren, err := medium.hasPrefixLocked(filePath + "/")
if err != nil {
return core.E("datanode.Delete", core.Concat("failed to inspect directory: ", filePath), err)
}
if hasChildren {
return core.E("datanode.Delete", core.Concat("directory not empty: ", filePath), fs.ErrExist)
}
delete(medium.directorySet, filePath)
return nil
}
return core.E("datanode.Delete", core.Concat("not found: ", filePath), fs.ErrNotExist)
}
if info.IsDir() {
hasChildren, err := medium.hasPrefixLocked(filePath + "/")
if err != nil {
return core.E("datanode.Delete", core.Concat("failed to inspect directory: ", filePath), err)
}
if hasChildren {
return core.E("datanode.Delete", core.Concat("directory not empty: ", filePath), fs.ErrExist)
}
delete(medium.directorySet, filePath)
return nil
}
if err := medium.removeFileLocked(filePath); err != nil {
return core.E("datanode.Delete", core.Concat("failed to delete file: ", filePath), err)
}
return nil
}
func (medium *Medium) DeleteAll(filePath string) error {
medium.lock.Lock()
defer medium.lock.Unlock()
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return core.E("datanode.DeleteAll", "cannot delete root", fs.ErrPermission)
}
prefix := filePath + "/"
found := false
info, err := medium.dataNode.Stat(filePath)
if err == nil && !info.IsDir() {
if err := medium.removeFileLocked(filePath); err != nil {
return core.E("datanode.DeleteAll", core.Concat("failed to delete file: ", filePath), err)
}
found = true
}
entries, err := medium.collectAllLocked()
if err != nil {
return core.E("datanode.DeleteAll", core.Concat("failed to inspect tree: ", filePath), err)
}
for _, name := range entries {
if name == filePath || core.HasPrefix(name, prefix) {
if err := medium.removeFileLocked(name); err != nil {
return core.E("datanode.DeleteAll", core.Concat("failed to delete file: ", name), err)
}
found = true
}
}
for directoryPath := range medium.directorySet {
if directoryPath == filePath || core.HasPrefix(directoryPath, prefix) {
delete(medium.directorySet, directoryPath)
found = true
}
}
if !found {
return core.E("datanode.DeleteAll", core.Concat("not found: ", filePath), fs.ErrNotExist)
}
return nil
}
func (medium *Medium) Rename(oldPath, newPath string) error {
medium.lock.Lock()
defer medium.lock.Unlock()
oldPath = normaliseEntryPath(oldPath)
newPath = normaliseEntryPath(newPath)
info, err := medium.dataNode.Stat(oldPath)
if err != nil {
return core.E("datanode.Rename", core.Concat("not found: ", oldPath), fs.ErrNotExist)
}
if !info.IsDir() {
data, err := medium.readFileLocked(oldPath)
if err != nil {
return core.E("datanode.Rename", core.Concat("failed to read source file: ", oldPath), err)
}
medium.dataNode.AddData(newPath, data)
medium.ensureDirsLocked(path.Dir(newPath))
if err := medium.removeFileLocked(oldPath); err != nil {
return core.E("datanode.Rename", core.Concat("failed to remove source file: ", oldPath), err)
}
return nil
}
oldPrefix := oldPath + "/"
newPrefix := newPath + "/"
entries, err := medium.collectAllLocked()
if err != nil {
return core.E("datanode.Rename", core.Concat("failed to inspect tree: ", oldPath), err)
}
for _, name := range entries {
if core.HasPrefix(name, oldPrefix) {
newName := core.Concat(newPrefix, core.TrimPrefix(name, oldPrefix))
data, err := medium.readFileLocked(name)
if err != nil {
return core.E("datanode.Rename", core.Concat("failed to read source file: ", name), err)
}
medium.dataNode.AddData(newName, data)
if err := medium.removeFileLocked(name); err != nil {
return core.E("datanode.Rename", core.Concat("failed to remove source file: ", name), err)
}
}
}
dirsToMove := make(map[string]string)
for directoryPath := range medium.directorySet {
if directoryPath == oldPath || core.HasPrefix(directoryPath, oldPrefix) {
newDirectoryPath := core.Concat(newPath, core.TrimPrefix(directoryPath, oldPath))
dirsToMove[directoryPath] = newDirectoryPath
}
}
for oldDirectoryPath, newDirectoryPath := range dirsToMove {
delete(medium.directorySet, oldDirectoryPath)
medium.directorySet[newDirectoryPath] = true
}
return nil
}
func (medium *Medium) List(filePath string) ([]fs.DirEntry, error) {
medium.lock.RLock()
defer medium.lock.RUnlock()
filePath = normaliseEntryPath(filePath)
entries, err := medium.dataNode.ReadDir(filePath)
if err != nil {
if filePath == "" || medium.directorySet[filePath] {
return []fs.DirEntry{}, nil
}
return nil, core.E("datanode.List", core.Concat("not found: ", filePath), fs.ErrNotExist)
}
prefix := filePath
if prefix != "" {
prefix += "/"
}
seen := make(map[string]bool)
for _, entry := range entries {
seen[entry.Name()] = true
}
for directoryPath := range medium.directorySet {
if !core.HasPrefix(directoryPath, prefix) {
continue
}
rest := core.TrimPrefix(directoryPath, prefix)
if rest == "" {
continue
}
first := core.SplitN(rest, "/", 2)[0]
if !seen[first] {
seen[first] = true
entries = append(entries, &dirEntry{name: first})
}
}
slices.SortFunc(entries, func(a, b fs.DirEntry) int {
return cmp.Compare(a.Name(), b.Name())
})
return entries, nil
}
func (medium *Medium) Stat(filePath string) (fs.FileInfo, error) {
medium.lock.RLock()
defer medium.lock.RUnlock()
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return &fileInfo{name: ".", isDir: true, mode: fs.ModeDir | 0755}, nil
}
info, err := medium.dataNode.Stat(filePath)
if err == nil {
return info, nil
}
if medium.directorySet[filePath] {
return &fileInfo{name: path.Base(filePath), isDir: true, mode: fs.ModeDir | 0755}, nil
}
return nil, core.E("datanode.Stat", core.Concat("not found: ", filePath), fs.ErrNotExist)
}
func (medium *Medium) Open(filePath string) (fs.File, error) {
medium.lock.RLock()
defer medium.lock.RUnlock()
filePath = normaliseEntryPath(filePath)
return medium.dataNode.Open(filePath)
}
func (medium *Medium) Create(filePath string) (goio.WriteCloser, error) {
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return nil, core.E("datanode.Create", "empty path", fs.ErrInvalid)
}
return &writeCloser{medium: medium, path: filePath}, nil
}
func (medium *Medium) Append(filePath string) (goio.WriteCloser, error) {
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return nil, core.E("datanode.Append", "empty path", fs.ErrInvalid)
}
var existing []byte
medium.lock.RLock()
// Stat directly under the held read lock; calling IsFile here would
// re-acquire the RLock and risk deadlock if a writer is queued between
// the two acquisitions.
info, statErr := medium.dataNode.Stat(filePath)
if statErr == nil && !info.IsDir() {
	data, err := medium.readFileLocked(filePath)
	if err != nil {
		medium.lock.RUnlock()
		return nil, core.E("datanode.Append", core.Concat("failed to read existing content: ", filePath), err)
	}
	existing = data
}
medium.lock.RUnlock()
return &writeCloser{medium: medium, path: filePath, buffer: existing}, nil
}
func (medium *Medium) ReadStream(filePath string) (goio.ReadCloser, error) {
medium.lock.RLock()
defer medium.lock.RUnlock()
filePath = normaliseEntryPath(filePath)
file, err := medium.dataNode.Open(filePath)
if err != nil {
return nil, core.E("datanode.ReadStream", core.Concat("not found: ", filePath), fs.ErrNotExist)
}
// fs.File already provides Read and Close, so it satisfies goio.ReadCloser directly.
return file, nil
}
func (medium *Medium) WriteStream(filePath string) (goio.WriteCloser, error) {
return medium.Create(filePath)
}
func (medium *Medium) Exists(filePath string) bool {
medium.lock.RLock()
defer medium.lock.RUnlock()
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return true
}
_, err := medium.dataNode.Stat(filePath)
if err == nil {
return true
}
return medium.directorySet[filePath]
}
func (medium *Medium) IsDir(filePath string) bool {
medium.lock.RLock()
defer medium.lock.RUnlock()
filePath = normaliseEntryPath(filePath)
if filePath == "" {
return true
}
info, err := medium.dataNode.Stat(filePath)
if err == nil {
return info.IsDir()
}
return medium.directorySet[filePath]
}
func (medium *Medium) hasPrefixLocked(prefix string) (bool, error) {
entries, err := medium.collectAllLocked()
if err != nil {
return false, err
}
for _, name := range entries {
if core.HasPrefix(name, prefix) {
return true, nil
}
}
for directoryPath := range medium.directorySet {
if core.HasPrefix(directoryPath, prefix) {
return true, nil
}
}
return false, nil
}
func (medium *Medium) collectAllLocked() ([]string, error) {
var names []string
err := dataNodeWalkDir(medium.dataNode, ".", func(filePath string, entry fs.DirEntry, err error) error {
if err != nil {
return err
}
if !entry.IsDir() {
names = append(names, filePath)
}
return nil
})
return names, err
}
func (medium *Medium) readFileLocked(filePath string) ([]byte, error) {
file, err := dataNodeOpen(medium.dataNode, filePath)
if err != nil {
return nil, err
}
data, readErr := dataNodeReadAll(file)
closeErr := file.Close()
if readErr != nil {
return nil, readErr
}
if closeErr != nil {
return nil, closeErr
}
return data, nil
}
func (medium *Medium) removeFileLocked(target string) error {
entries, err := medium.collectAllLocked()
if err != nil {
return err
}
newDataNode := borgdatanode.New()
for _, name := range entries {
if name == target {
continue
}
data, err := medium.readFileLocked(name)
if err != nil {
return err
}
newDataNode.AddData(name, data)
}
medium.dataNode = newDataNode
return nil
}
type writeCloser struct {
medium *Medium
path string
buffer []byte
}
func (writer *writeCloser) Write(data []byte) (int, error) {
writer.buffer = append(writer.buffer, data...)
return len(data), nil
}
func (writer *writeCloser) Close() error {
writer.medium.lock.Lock()
defer writer.medium.lock.Unlock()
writer.medium.dataNode.AddData(writer.path, writer.buffer)
writer.medium.ensureDirsLocked(path.Dir(writer.path))
return nil
}
type dirEntry struct {
name string
}
func (entry *dirEntry) Name() string { return entry.name }
func (entry *dirEntry) IsDir() bool { return true }
func (entry *dirEntry) Type() fs.FileMode { return fs.ModeDir }
func (entry *dirEntry) Info() (fs.FileInfo, error) {
return &fileInfo{name: entry.name, isDir: true, mode: fs.ModeDir | 0755}, nil
}
type fileInfo struct {
name string
size int64
mode fs.FileMode
modTime time.Time
isDir bool
}
func (info *fileInfo) Name() string { return info.name }
func (info *fileInfo) Size() int64 { return info.size }
func (info *fileInfo) Mode() fs.FileMode { return info.mode }
func (info *fileInfo) ModTime() time.Time { return info.modTime }
func (info *fileInfo) IsDir() bool { return info.isDir }
func (info *fileInfo) Sys() any { return nil }


@@ -1,418 +0,0 @@
package datanode
import (
"io"
"io/fs"
"testing"
core "dappco.re/go/core"
coreio "dappco.re/go/core/io"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var _ coreio.Medium = (*Medium)(nil)
func TestDataNode_ReadWrite_Good(t *testing.T) {
dataNodeMedium := New()
err := dataNodeMedium.Write("hello.txt", "world")
require.NoError(t, err)
got, err := dataNodeMedium.Read("hello.txt")
require.NoError(t, err)
assert.Equal(t, "world", got)
}
func TestDataNode_ReadWrite_Bad(t *testing.T) {
dataNodeMedium := New()
_, err := dataNodeMedium.Read("missing.txt")
assert.Error(t, err)
err = dataNodeMedium.Write("", "content")
assert.Error(t, err)
}
func TestDataNode_NestedPaths_Good(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("a/b/c/deep.txt", "deep"))
got, err := dataNodeMedium.Read("a/b/c/deep.txt")
require.NoError(t, err)
assert.Equal(t, "deep", got)
assert.True(t, dataNodeMedium.IsDir("a"))
assert.True(t, dataNodeMedium.IsDir("a/b"))
assert.True(t, dataNodeMedium.IsDir("a/b/c"))
}
func TestDataNode_LeadingSlash_Good(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("/leading/file.txt", "stripped"))
got, err := dataNodeMedium.Read("leading/file.txt")
require.NoError(t, err)
assert.Equal(t, "stripped", got)
got, err = dataNodeMedium.Read("/leading/file.txt")
require.NoError(t, err)
assert.Equal(t, "stripped", got)
}
func TestDataNode_IsFile_Good(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("file.go", "package main"))
assert.True(t, dataNodeMedium.IsFile("file.go"))
assert.False(t, dataNodeMedium.IsFile("missing.go"))
assert.False(t, dataNodeMedium.IsFile(""))
}
func TestDataNode_EnsureDir_Good(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.EnsureDir("foo/bar/baz"))
assert.True(t, dataNodeMedium.IsDir("foo"))
assert.True(t, dataNodeMedium.IsDir("foo/bar"))
assert.True(t, dataNodeMedium.IsDir("foo/bar/baz"))
assert.True(t, dataNodeMedium.Exists("foo/bar/baz"))
}
func TestDataNode_Delete_Good(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("delete-me.txt", "bye"))
assert.True(t, dataNodeMedium.Exists("delete-me.txt"))
require.NoError(t, dataNodeMedium.Delete("delete-me.txt"))
assert.False(t, dataNodeMedium.Exists("delete-me.txt"))
}
func TestDataNode_Delete_Bad(t *testing.T) {
dataNodeMedium := New()
assert.Error(t, dataNodeMedium.Delete("ghost.txt"))
require.NoError(t, dataNodeMedium.Write("dir/file.txt", "content"))
assert.Error(t, dataNodeMedium.Delete("dir"))
}
func TestDataNode_Delete_DirectoryInspectionFailure_Bad(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("dir/file.txt", "content"))
original := dataNodeWalkDir
dataNodeWalkDir = func(_ fs.FS, _ string, _ fs.WalkDirFunc) error {
return core.NewError("walk failed")
}
t.Cleanup(func() {
dataNodeWalkDir = original
})
err := dataNodeMedium.Delete("dir")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to inspect directory")
}
func TestDataNode_DeleteAll_Good(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("tree/a.txt", "a"))
require.NoError(t, dataNodeMedium.Write("tree/sub/b.txt", "b"))
require.NoError(t, dataNodeMedium.Write("keep.txt", "keep"))
require.NoError(t, dataNodeMedium.DeleteAll("tree"))
assert.False(t, dataNodeMedium.Exists("tree/a.txt"))
assert.False(t, dataNodeMedium.Exists("tree/sub/b.txt"))
assert.True(t, dataNodeMedium.Exists("keep.txt"))
}
func TestDataNode_DeleteAll_WalkFailure_Bad(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("tree/a.txt", "a"))
original := dataNodeWalkDir
dataNodeWalkDir = func(_ fs.FS, _ string, _ fs.WalkDirFunc) error {
return core.NewError("walk failed")
}
t.Cleanup(func() {
dataNodeWalkDir = original
})
err := dataNodeMedium.DeleteAll("tree")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to inspect tree")
}
func TestDataNode_Delete_RemoveFailure_Bad(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("keep.txt", "keep"))
require.NoError(t, dataNodeMedium.Write("bad.txt", "bad"))
original := dataNodeReadAll
dataNodeReadAll = func(_ io.Reader) ([]byte, error) {
return nil, core.NewError("read failed")
}
t.Cleanup(func() {
dataNodeReadAll = original
})
err := dataNodeMedium.Delete("bad.txt")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to delete file")
}
func TestDataNode_Rename_Good(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("old.txt", "content"))
require.NoError(t, dataNodeMedium.Rename("old.txt", "new.txt"))
assert.False(t, dataNodeMedium.Exists("old.txt"))
got, err := dataNodeMedium.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "content", got)
}
func TestDataNode_RenameDir_Good(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("src/a.go", "package a"))
require.NoError(t, dataNodeMedium.Write("src/sub/b.go", "package b"))
require.NoError(t, dataNodeMedium.Rename("src", "destination"))
assert.False(t, dataNodeMedium.Exists("src/a.go"))
got, err := dataNodeMedium.Read("destination/a.go")
require.NoError(t, err)
assert.Equal(t, "package a", got)
got, err = dataNodeMedium.Read("destination/sub/b.go")
require.NoError(t, err)
assert.Equal(t, "package b", got)
}
func TestDataNode_RenameDir_ReadFailure_Bad(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("src/a.go", "package a"))
original := dataNodeReadAll
dataNodeReadAll = func(_ io.Reader) ([]byte, error) {
return nil, core.NewError("read failed")
}
t.Cleanup(func() {
dataNodeReadAll = original
})
err := dataNodeMedium.Rename("src", "destination")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to read source file")
}
func TestDataNode_List_Good(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("root.txt", "r"))
require.NoError(t, dataNodeMedium.Write("pkg/a.go", "a"))
require.NoError(t, dataNodeMedium.Write("pkg/b.go", "b"))
require.NoError(t, dataNodeMedium.Write("pkg/sub/c.go", "c"))
entries, err := dataNodeMedium.List("")
require.NoError(t, err)
names := make([]string, len(entries))
for index, entry := range entries {
names[index] = entry.Name()
}
assert.Contains(t, names, "root.txt")
assert.Contains(t, names, "pkg")
entries, err = dataNodeMedium.List("pkg")
require.NoError(t, err)
names = make([]string, len(entries))
for index, entry := range entries {
names[index] = entry.Name()
}
assert.Contains(t, names, "a.go")
assert.Contains(t, names, "b.go")
assert.Contains(t, names, "sub")
}
func TestDataNode_Stat_Good(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("stat.txt", "hello"))
info, err := dataNodeMedium.Stat("stat.txt")
require.NoError(t, err)
assert.Equal(t, int64(5), info.Size())
assert.False(t, info.IsDir())
info, err = dataNodeMedium.Stat("")
require.NoError(t, err)
assert.True(t, info.IsDir())
}
func TestDataNode_Open_Good(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("open.txt", "opened"))
file, err := dataNodeMedium.Open("open.txt")
require.NoError(t, err)
defer file.Close()
data, err := io.ReadAll(file)
require.NoError(t, err)
assert.Equal(t, "opened", string(data))
}
func TestDataNode_CreateAppend_Good(t *testing.T) {
dataNodeMedium := New()
writer, err := dataNodeMedium.Create("new.txt")
require.NoError(t, err)
_, _ = writer.Write([]byte("hello"))
require.NoError(t, writer.Close())
got, err := dataNodeMedium.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "hello", got)
writer, err = dataNodeMedium.Append("new.txt")
require.NoError(t, err)
_, _ = writer.Write([]byte(" world"))
require.NoError(t, writer.Close())
got, err = dataNodeMedium.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "hello world", got)
}
func TestDataNode_Append_ReadFailure_Bad(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("new.txt", "hello"))
original := dataNodeReadAll
dataNodeReadAll = func(_ io.Reader) ([]byte, error) {
return nil, core.NewError("read failed")
}
t.Cleanup(func() {
dataNodeReadAll = original
})
_, err := dataNodeMedium.Append("new.txt")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to read existing content")
}
func TestDataNode_Streams_Good(t *testing.T) {
dataNodeMedium := New()
writeStream, err := dataNodeMedium.WriteStream("stream.txt")
require.NoError(t, err)
_, _ = writeStream.Write([]byte("streamed"))
require.NoError(t, writeStream.Close())
readStream, err := dataNodeMedium.ReadStream("stream.txt")
require.NoError(t, err)
data, err := io.ReadAll(readStream)
require.NoError(t, err)
assert.Equal(t, "streamed", string(data))
require.NoError(t, readStream.Close())
}
func TestDataNode_SnapshotRestore_Good(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("a.txt", "alpha"))
require.NoError(t, dataNodeMedium.Write("b/c.txt", "charlie"))
snapshotData, err := dataNodeMedium.Snapshot()
require.NoError(t, err)
assert.NotEmpty(t, snapshotData)
restoredNode, err := FromTar(snapshotData)
require.NoError(t, err)
got, err := restoredNode.Read("a.txt")
require.NoError(t, err)
assert.Equal(t, "alpha", got)
got, err = restoredNode.Read("b/c.txt")
require.NoError(t, err)
assert.Equal(t, "charlie", got)
}
func TestDataNode_Restore_Good(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("original.txt", "before"))
snapshotData, err := dataNodeMedium.Snapshot()
require.NoError(t, err)
require.NoError(t, dataNodeMedium.Write("original.txt", "after"))
require.NoError(t, dataNodeMedium.Write("extra.txt", "extra"))
require.NoError(t, dataNodeMedium.Restore(snapshotData))
got, err := dataNodeMedium.Read("original.txt")
require.NoError(t, err)
assert.Equal(t, "before", got)
assert.False(t, dataNodeMedium.Exists("extra.txt"))
}
func TestDataNode_DataNode_Good(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("test.txt", "borg"))
dataNode := dataNodeMedium.DataNode()
assert.NotNil(t, dataNode)
file, err := dataNode.Open("test.txt")
require.NoError(t, err)
defer file.Close()
data, err := io.ReadAll(file)
require.NoError(t, err)
assert.Equal(t, "borg", string(data))
}
func TestDataNode_Overwrite_Good(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("file.txt", "v1"))
require.NoError(t, dataNodeMedium.Write("file.txt", "v2"))
got, err := dataNodeMedium.Read("file.txt")
require.NoError(t, err)
assert.Equal(t, "v2", got)
}
func TestDataNode_Exists_Good(t *testing.T) {
dataNodeMedium := New()
assert.True(t, dataNodeMedium.Exists(""))
assert.False(t, dataNodeMedium.Exists("x"))
require.NoError(t, dataNodeMedium.Write("x", "y"))
assert.True(t, dataNodeMedium.Exists("x"))
}
func TestDataNode_ReadExistingFile_Good(t *testing.T) {
dataNodeMedium := New()
require.NoError(t, dataNodeMedium.Write("file.txt", "content"))
got, err := dataNodeMedium.Read("file.txt")
require.NoError(t, err)
assert.Equal(t, "content", got)
}

doc.go

@@ -1,5 +0,0 @@
// Example: medium, _ := io.NewSandboxed("/srv/app")
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
// Example: backup, _ := io.NewSandboxed("/srv/backup")
// Example: _ = io.Copy(medium, "data/report.json", backup, "daily/report.json")
package io


@@ -1,440 +0,0 @@
# RFC-025: Agent Experience (AX) Design Principles
- **Status:** Draft
- **Authors:** Snider, Cladius
- **Date:** 2026-03-19
- **Applies to:** All Core ecosystem packages (CoreGO, CorePHP, CoreTS, core-agent)
## Abstract
Agent Experience (AX) is a design paradigm for software systems where the primary code consumer is an AI agent, not a human developer. AX sits alongside User Experience (UX) and Developer Experience (DX) as the third era of interface design.
This RFC establishes AX as a formal design principle for the Core ecosystem and defines the conventions that follow from it.
## Motivation
As of early 2026, AI agents write, review, and maintain the majority of code in the Core ecosystem. The original author has not manually edited code (outside of Core struct design) since October 2025. Code is processed semantically — agents reason about intent, not characters.
Design patterns inherited from the human-developer era optimise for the wrong consumer:
- **Short names** save keystrokes but increase semantic ambiguity
- **Functional option chains** are fluent for humans but opaque for agents tracing configuration
- **Error-at-every-call-site** produces 50% boilerplate that obscures intent
- **Generic type parameters** force agents to carry type context that the runtime already has
- **Panic-hiding conventions** (`Must*`) create implicit control flow that agents must special-case
AX acknowledges this shift and provides principles for designing code, APIs, file structures, and conventions that serve AI agents as first-class consumers.
## The Three Eras
| Era | Primary Consumer | Optimises For | Key Metric |
|-----|-----------------|---------------|------------|
| UX | End users | Discoverability, forgiveness, visual clarity | Task completion time |
| DX | Developers | Typing speed, IDE support, convention familiarity | Time to first commit |
| AX | AI agents | Predictability, composability, semantic navigation | Correct-on-first-pass rate |
AX does not replace UX or DX. End users still need good UX. Developers still need good DX. But when the primary code author and maintainer is an AI agent, the codebase should be designed for that consumer first.
## Principles
### 1. Predictable Names Over Short Names
Names are tokens that agents pattern-match across languages and contexts. Abbreviations introduce mapping overhead.
```
Config not Cfg
Service not Srv
Embed not Emb
Error not Err (as a subsystem name; err for local variables is fine)
Options not Opts
```
**Rule:** If a name would require a comment to explain, it is too short.
**Exception:** Industry-standard abbreviations that are universally understood (`HTTP`, `URL`, `ID`, `IPC`, `I18n`) are acceptable. The test: would an agent trained on any mainstream language recognise it without context?
### 2. Comments as Usage Examples
The function signature tells WHAT. The comment shows HOW with real values.
```go
// Detect the project type from files present
setup.Detect("/path/to/project")
// Set up a workspace with auto-detected template
setup.Run(setup.Options{Path: ".", Template: "auto"})
// Scaffold a PHP module workspace
setup.Run(setup.Options{Path: "./my-module", Template: "php"})
```
**Rule:** If a comment restates what the type signature already says, delete it. If a comment shows a concrete usage with realistic values, keep it.
**Rationale:** Agents learn from examples more effectively than from descriptions. A comment like "Run executes the setup process" adds zero information. A comment like `setup.Run(setup.Options{Path: ".", Template: "auto"})` teaches an agent exactly how to call the function.
### 3. Path Is Documentation
File and directory paths should be self-describing. An agent navigating the filesystem should understand what it is looking at without reading a README.
```
flow/deploy/to/homelab.yaml — deploy TO the homelab
flow/deploy/from/github.yaml — deploy FROM GitHub
flow/code/review.yaml — code review flow
template/file/go/struct.go.tmpl — Go struct file template
template/dir/workspace/php/ — PHP workspace scaffold
```
**Rule:** If an agent needs to read a file to understand what a directory contains, the directory naming has failed.
**Corollary:** The unified path convention (folder structure = HTTP route = CLI command = test path) is AX-native. One path, every surface.
### 4. Templates Over Freeform
When an agent generates code from a template, the output is constrained to known-good shapes. When an agent writes freeform, the output varies.
```go
// Template-driven — consistent output
lib.RenderFile("php/action", data)
lib.ExtractDir("php", targetDir, data)
// Freeform — variance in output
"write a PHP action class that..."
```
**Rule:** For any code pattern that recurs, provide a template. Templates are guardrails for agents.
**Scope:** Templates apply to file generation, workspace scaffolding, config generation, and commit messages. They do NOT apply to novel logic — agents should write business logic freeform with the domain knowledge available.
### 5. Declarative Over Imperative
Agents reason better about declarations of intent than sequences of operations.
```yaml
# Declarative — agent sees what should happen
steps:
- name: build
flow: tools/docker-build
with:
context: "{{ .app_dir }}"
image_name: "{{ .image_name }}"
- name: deploy
flow: deploy/with/docker
with:
host: "{{ .host }}"
```
```go
// Imperative — agent must trace execution
cmd := exec.Command("docker", "build", "--platform", "linux/amd64", "-t", imageName, ".")
cmd.Dir = appDir
if err := cmd.Run(); err != nil {
return fmt.Errorf("docker build: %w", err)
}
```
**Rule:** Orchestration, configuration, and pipeline logic should be declarative (YAML/JSON). Implementation logic should be imperative (Go/PHP/TS). The boundary is: if an agent needs to compose or modify the logic, make it declarative.
### 6. Universal Types (Core Primitives)
Every component in the ecosystem accepts and returns the same primitive types. An agent processing any level of the tree sees identical shapes.
```go
// Universal contract
setup.Run(core.Options{Path: ".", Template: "auto"})
brain.New(core.Options{Name: "openbrain"})
deploy.Run(core.Options{Flow: "deploy/to/homelab"})
// Fractal — Core itself is a Service
core.New(core.Options{
Services: []core.Service{
process.New(core.Options{Name: "process"}),
brain.New(core.Options{Name: "brain"}),
},
})
```
**Core primitive types:**
| Type | Purpose |
|------|---------|
| `core.Options` | Input configuration (what you want) |
| `core.Config` | Runtime settings (what is active) |
| `core.Data` | Embedded or stored content |
| `core.Service` | A managed component with lifecycle |
| `core.Result[T]` | Return value with OK/fail state |
**What this replaces:**
| Go Convention | Core AX | Why |
|--------------|---------|-----|
| `func With*(v) Option` | `core.Options{Field: v}` | Struct literal is parseable; option chain requires tracing |
| `func Must*(v) T` | `core.Result[T]` | No hidden panics; errors flow through Core |
| `func *For[T](c) T` | `c.Service("name")` | String lookup is greppable; generics require type context |
| `val, err :=` everywhere | Single return via `core.Result` | Intent not obscured by error handling |
| `_ = err` | Never needed | Core handles all errors internally |
### 7. Directory as Semantics
The directory structure tells an agent the intent before it reads a word. Top-level directories are semantic categories, not organisational bins.
```
plans/
├── code/ # Pure primitives — read for WHAT exists
├── project/ # Products — read for WHAT we're building and WHY
└── rfc/ # Contracts — read for constraints and rules
```
**Rule:** An agent should know what kind of document it's reading from the path alone. `code/core/go/io/RFC.md` = a lib primitive spec. `project/ofm/RFC.md` = a product spec that cross-references code/. `rfc/snider/borg/RFC-BORG-006-SMSG-FORMAT.md` = an immutable contract for the Borg SMSG protocol.
**Corollary:** The three-way split (code/project/rfc) extends principle 3 (Path Is Documentation) from files to entire subtrees. The path IS the metadata.
### 8. Lib Never Imports Consumer
Dependency flows one direction. Libraries define primitives. Consumers compose from them. A new feature in a consumer can never break a library.
```
code/core/go/* → lib tier (stable foundation)
code/core/agent/ → consumer tier (composes from go/*)
code/core/cli/ → consumer tier (composes from go/*)
code/core/gui/ → consumer tier (composes from go/*)
```
**Rule:** If package A is in `go/` and package B is in the consumer tier, B may import A but A must never import B. The repo naming convention enforces this: `go-{name}` = lib, bare `{name}` = consumer.
**Why this matters for agents:** When an agent is dispatched to implement a feature in `core/agent`, it can freely import from `go-io`, `go-scm`, `go-process`. But if an agent is dispatched to `go-io`, it knows its changes are foundational — every consumer depends on it, so the contract must not break.
### 9. Issues Are N+(rounds) Deep
Problems in code and specs are layered. Surface issues mask deeper issues. Fixing the surface reveals the next layer. This is not a failure mode — it is the discovery process.
```
Pass 1: Find 16 issues (surface — naming, imports, obvious errors)
Pass 2: Find 11 issues (structural — contradictions, missing types)
Pass 3: Find 5 issues (architectural — signature mismatches, registration gaps)
Pass 4: Find 4 issues (contract — cross-spec API mismatches)
Pass 5: Find 2 issues (mechanical — path format, nil safety)
Pass N: Findings are trivial → spec/code is complete
```
**Rule:** Iteration is required, not a failure. Each pass sees what the previous pass could not, because the context changed. An agent dispatched with the same task on the same repo will find different things each time — this is correct behaviour.
**Corollary:** The cheapest model should do the most passes (surface work). The frontier model should arrive last, when only deep issues remain. Tiered iteration: grunt model grinds → mid model pre-warms → frontier model polishes.
**Anti-pattern:** One-shot generation expecting valid output. No model, no human, produces correct-on-first-pass for non-trivial work. Expecting it wastes the first pass on surface issues that a cheaper pass would have caught.
### 10. CLI Tests as Artifact Validation
Unit tests verify the code. CLI tests verify the binary. The directory structure IS the command structure — path maps to command, Taskfile runs the test.
```
tests/cli/
├── core/
│ └── lint/
│ ├── Taskfile.yaml ← test `core-lint` (root)
│ ├── run/
│ │ ├── Taskfile.yaml ← test `core-lint run`
│ │ └── fixtures/
│ ├── go/
│ │ ├── Taskfile.yaml ← test `core-lint go`
│ │ └── fixtures/
│ └── security/
│ ├── Taskfile.yaml ← test `core-lint security`
│ └── fixtures/
```
**Rule:** Every CLI command has a matching `tests/cli/{path}/Taskfile.yaml`. The Taskfile runs the compiled binary against fixtures with known inputs and validates the output. If the CLI test passes, the underlying actions work — because CLI commands call actions, MCP tools call actions, API endpoints call actions. Test the CLI, trust the rest.
**Pattern:**
```yaml
# tests/cli/core/lint/go/Taskfile.yaml
version: '3'
tasks:
test:
cmds:
- core-lint go --output json fixtures/ > /tmp/result.json
- jq -e '.findings | length > 0' /tmp/result.json
- jq -e '.summary.passed == false' /tmp/result.json
```
**Why this matters for agents:** An agent can validate its own work by running `task test` in the matching `tests/cli/` directory. No test framework, no mocking, no setup — just the binary, fixtures, and `jq` assertions. The agent builds the binary, runs the test, sees the result. If it fails, the agent can read the fixture, read the output, and fix the code.
**Corollary:** Fixtures are planted bugs. Each fixture file has a known issue that the linter must find. If the linter doesn't find it, the test fails. Fixtures are the spec for what the tool must detect — they ARE the test cases, not descriptions of test cases.
## Applying AX to Existing Patterns
### File Structure
```
# AX-native: path describes content
core/agent/
├── go/ # Go source
├── php/ # PHP source
├── ui/ # Frontend source
├── claude/ # Claude Code plugin
└── codex/ # Codex plugin
# Not AX: generic names requiring README
src/
├── lib/
├── utils/
└── helpers/
```
### Error Handling
```go
// AX-native: errors are infrastructure, not application logic
svc := c.Service("brain")
cfg := c.Config().Get("database.host")
// Errors logged by Core. Code reads like a spec.
// Not AX: errors dominate the code
svc, err := ServiceFor[brain.Service](c)
if err != nil {
return fmt.Errorf("get brain service: %w", err)
}
cfg, err := c.Config().Get("database.host")
if err != nil {
_ = err // silenced because "it'll be fine"
}
```
### API Design
```go
// AX-native: one shape, every surface
core.New(core.Options{
Name: "my-app",
Services: []core.Service{...},
Config: core.Config{...},
})
// Not AX: multiple patterns for the same thing
core.New(
core.WithName("my-app"),
core.WithService(factory1),
core.WithService(factory2),
core.WithConfig(cfg),
)
```
## The Plans Convention — AX Development Lifecycle
The `plans/` directory structure encodes a development methodology designed for how generative AI actually works: iterative refinement across structured phases, not one-shot generation.
### The Three-Way Split
```
plans/
├── project/ # 1. WHAT and WHY — start here
├── rfc/ # 2. CONSTRAINTS — immutable contracts
└── code/ # 3. HOW — implementation specs
```
Each directory is a phase. Work flows from project → rfc → code. Each transition forces a refinement pass — you cannot write a code spec without discovering gaps in the project spec, and you cannot write an RFC without discovering assumptions in both.
**Three places for data that can't be written simultaneously = three guaranteed iterations of "actually, this needs changing."** Refinement is baked into the structure, not bolted on as a review step.
### Phase 1: Project (Vision)
Start with `project/`. No code exists yet. Define:
- What the product IS and who it serves
- What existing primitives it consumes (cross-ref to `code/`)
- What constraints it operates under (cross-ref to `rfc/`)
This is where creativity lives. Map features to building blocks. Connect systems. The project spec is integrative — it references everything else.
### Phase 2: RFC (Contracts)
Extract the immutable rules into `rfc/`. These are constraints that don't change with implementation:
- Wire formats, protocols, hash algorithms
- Security properties that must hold
- Compatibility guarantees
RFCs are numbered per component (`RFC-BORG-006-SMSG-FORMAT.md`) and never modified after acceptance. If the contract changes, write a new RFC.
### Phase 3: Code (Implementation Specs)
Define the implementation in `code/`. Each component gets an RFC.md that an agent can implement from:
- Struct definitions (the DTOs — see principle 6)
- Method signatures and behaviour
- Error conditions and edge cases
- Cross-references to other code/ specs
The code spec IS the product. Write the spec → dispatch to an agent → review output → iterate.
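As a toy illustration of the "struct definitions" bullet, a DTO in the shape principle 6 uses. The field set simply echoes the `core.Options` examples earlier in this RFC; the real Core types live in their own spec:

```go
package main

import "fmt"

// Options mirrors the DTO style a code spec pins down: plain exported
// fields, a struct literal at every call site, no option chains. Field
// names echo the core.Options examples in principle 6 (illustrative only).
type Options struct {
	Name     string
	Path     string
	Template string
	Flow     string
}

func main() {
	// An agent reading a call site sees the full configuration in one literal.
	opts := Options{Path: ".", Template: "auto"}
	fmt.Println(opts.Path, opts.Template) // . auto
}
```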
### Pre-Launch: Alignment Protocol
Before dispatching for implementation, verify spec-model alignment:
```
1. REVIEW — The implementation model (Codex/Jules) reads the spec
and reports missing elements. This surfaces the delta between
the model's training and the spec's assumptions.
"I need X, Y, Z to implement this" is the model saying
"I hear you but I'm missing context" — without asking.
2. ADJUST — Update the spec to close the gaps. Add examples,
clarify ambiguities, provide the context the model needs.
This is shared alignment, not compromise.
3. VERIFY — A different model (or sub-agent) reviews the adjusted
spec without the planner's bias. Fresh eyes on the contract.
"Does this make sense to someone who wasn't in the room?"
4. READY — When the review findings are trivial or deployment-related
(not architectural), the spec is ready to dispatch.
```
### Implementation: Iterative Dispatch
Same prompt, multiple runs. Each pass sees deeper because the context evolved:
```
Round 1: Build features (the obvious gaps)
Round 2: Write tests (verify what was built)
Round 3: Harden security (what can go wrong?)
Round 4: Next RFC section (what's still missing?)
Round N: Findings are trivial → implementation is complete
```
Re-running is not failure. It is the process. Each pass changes the codebase, which changes what the next pass can see. The iteration IS the refinement.
### Post-Implementation: Auto-Documentation
The QA/verify chain produces artefacts that feed forward:
- Test results document the contract (what works, what doesn't)
- Coverage reports surface untested paths
- Diff summaries prep the changelog for the next release
- Doc site updates from the spec (the spec IS the documentation)
The output of one cycle is the input to the next. The plans repo stays current because the specs drive the code, not the other way round.
## Compatibility
AX conventions are valid, idiomatic Go/PHP/TS. They do not require language extensions, code generation, or non-standard tooling. An AX-designed codebase compiles, tests, and deploys with standard toolchains.
The conventions diverge from community patterns (functional options, Must/For, etc.) but do not violate language specifications. This is a style choice, not a fork.
## Adoption
AX applies to all new code in the Core ecosystem. Existing code migrates incrementally as it is touched — no big-bang rewrite.
Priority order:
1. **Public APIs** (package-level functions, struct constructors)
2. **File structure** (path naming, template locations)
3. **Internal fields** (struct field names, local variables)
## References
- dAppServer unified path convention (2024)
- CoreGO DTO pattern refactor (2026-03-18)
- Core primitives design (2026-03-19)
- Go Proverbs, Rob Pike (2015) — AX provides an updated lens
## Changelog
- 2026-03-19: Initial draft

File diff suppressed because it is too large

docs/api-contract.md Normal file

@@ -0,0 +1,285 @@
# API Contract
Descriptions use doc comments when present; otherwise they are short code-based summaries.
Test coverage is `Yes` when same-package tests directly execute or reference the exported symbol; otherwise `No`.
`CODEX.md` was not present in the repository at generation time.
| Name | Signature | Package Path | Description | Test Coverage |
| --- | --- | --- | --- | --- |
| `DirEntry` | `type DirEntry struct` | `dappco.re/go/core/io` | DirEntry provides a simple implementation of fs.DirEntry for mock testing. | Yes |
| `FileInfo` | `type FileInfo struct` | `dappco.re/go/core/io` | FileInfo provides a simple implementation of fs.FileInfo for mock testing. | Yes |
| `Medium` | `type Medium interface` | `dappco.re/go/core/io` | Medium defines the standard interface for a storage backend. | Yes |
| `MockFile` | `type MockFile struct` | `dappco.re/go/core/io` | MockFile implements fs.File for MockMedium. | No |
| `MockMedium` | `type MockMedium struct` | `dappco.re/go/core/io` | MockMedium is an in-memory implementation of Medium for testing. | Yes |
| `MockWriteCloser` | `type MockWriteCloser struct` | `dappco.re/go/core/io` | MockWriteCloser implements WriteCloser for MockMedium. | No |
| `Copy` | `func Copy(src Medium, srcPath string, dst Medium, dstPath string) error` | `dappco.re/go/core/io` | Copy copies a file from one medium to another. | Yes |
| `EnsureDir` | `func EnsureDir(m Medium, path string) error` | `dappco.re/go/core/io` | EnsureDir makes sure a directory exists in the given medium. | Yes |
| `IsFile` | `func IsFile(m Medium, path string) bool` | `dappco.re/go/core/io` | IsFile checks if a path exists and is a regular file in the given medium. | Yes |
| `NewMockMedium` | `func NewMockMedium() *MockMedium` | `dappco.re/go/core/io` | NewMockMedium creates a new MockMedium instance. | Yes |
| `NewSandboxed` | `func NewSandboxed(root string) (Medium, error)` | `dappco.re/go/core/io` | NewSandboxed creates a new Medium sandboxed to the given root directory. | No |
| `Read` | `func Read(m Medium, path string) (string, error)` | `dappco.re/go/core/io` | Read retrieves the content of a file from the given medium. | Yes |
| `ReadStream` | `func ReadStream(m Medium, path string) (goio.ReadCloser, error)` | `dappco.re/go/core/io` | ReadStream returns a reader for the file content from the given medium. | No |
| `Write` | `func Write(m Medium, path, content string) error` | `dappco.re/go/core/io` | Write saves the given content to a file in the given medium. | Yes |
| `WriteStream` | `func WriteStream(m Medium, path string) (goio.WriteCloser, error)` | `dappco.re/go/core/io` | WriteStream returns a writer for the file content in the given medium. | No |
| `DirEntry.Info` | `func (DirEntry) Info() (fs.FileInfo, error)` | `dappco.re/go/core/io` | Returns file info for the entry. | No |
| `DirEntry.IsDir` | `func (DirEntry) IsDir() bool` | `dappco.re/go/core/io` | Reports whether the entry represents a directory. | No |
| `DirEntry.Name` | `func (DirEntry) Name() string` | `dappco.re/go/core/io` | Returns the stored entry name. | Yes |
| `DirEntry.Type` | `func (DirEntry) Type() fs.FileMode` | `dappco.re/go/core/io` | Returns the entry type bits. | No |
| `FileInfo.IsDir` | `func (FileInfo) IsDir() bool` | `dappco.re/go/core/io` | Reports whether the entry represents a directory. | Yes |
| `FileInfo.ModTime` | `func (FileInfo) ModTime() time.Time` | `dappco.re/go/core/io` | Returns the stored modification time. | No |
| `FileInfo.Mode` | `func (FileInfo) Mode() fs.FileMode` | `dappco.re/go/core/io` | Returns the stored file mode. | No |
| `FileInfo.Name` | `func (FileInfo) Name() string` | `dappco.re/go/core/io` | Returns the stored entry name. | Yes |
| `FileInfo.Size` | `func (FileInfo) Size() int64` | `dappco.re/go/core/io` | Returns the stored size in bytes. | Yes |
| `FileInfo.Sys` | `func (FileInfo) Sys() any` | `dappco.re/go/core/io` | Returns the underlying system-specific data. | No |
| `Medium.Append` | `Append(path string) (goio.WriteCloser, error)` | `dappco.re/go/core/io` | Append opens the named file for appending, creating it if it doesn't exist. | No |
| `Medium.Create` | `Create(path string) (goio.WriteCloser, error)` | `dappco.re/go/core/io` | Create creates or truncates the named file. | No |
| `Medium.Delete` | `Delete(path string) error` | `dappco.re/go/core/io` | Delete removes a file or empty directory. | Yes |
| `Medium.DeleteAll` | `DeleteAll(path string) error` | `dappco.re/go/core/io` | DeleteAll removes a file or directory and all its contents recursively. | Yes |
| `Medium.EnsureDir` | `EnsureDir(path string) error` | `dappco.re/go/core/io` | EnsureDir makes sure a directory exists, creating it if necessary. | Yes |
| `Medium.Exists` | `Exists(path string) bool` | `dappco.re/go/core/io` | Exists checks if a path exists (file or directory). | Yes |
| `Medium.FileGet` | `FileGet(path string) (string, error)` | `dappco.re/go/core/io` | FileGet is a convenience function that reads a file from the medium. | Yes |
| `Medium.FileSet` | `FileSet(path, content string) error` | `dappco.re/go/core/io` | FileSet is a convenience function that writes a file to the medium. | Yes |
| `Medium.IsDir` | `IsDir(path string) bool` | `dappco.re/go/core/io` | IsDir checks if a path exists and is a directory. | Yes |
| `Medium.IsFile` | `IsFile(path string) bool` | `dappco.re/go/core/io` | IsFile checks if a path exists and is a regular file. | Yes |
| `Medium.List` | `List(path string) ([]fs.DirEntry, error)` | `dappco.re/go/core/io` | List returns the directory entries for the given path. | Yes |
| `Medium.Open` | `Open(path string) (fs.File, error)` | `dappco.re/go/core/io` | Open opens the named file for reading. | No |
| `Medium.Read` | `Read(path string) (string, error)` | `dappco.re/go/core/io` | Read retrieves the content of a file as a string. | Yes |
| `Medium.ReadStream` | `ReadStream(path string) (goio.ReadCloser, error)` | `dappco.re/go/core/io` | ReadStream returns a reader for the file content. | No |
| `Medium.Rename` | `Rename(oldPath, newPath string) error` | `dappco.re/go/core/io` | Rename moves a file or directory from oldPath to newPath. | Yes |
| `Medium.Stat` | `Stat(path string) (fs.FileInfo, error)` | `dappco.re/go/core/io` | Stat returns file information for the given path. | Yes |
| `Medium.Write` | `Write(path, content string) error` | `dappco.re/go/core/io` | Write saves the given content to a file, overwriting it if it exists. | Yes |
| `Medium.WriteMode` | `WriteMode(path, content string, mode os.FileMode) error` | `dappco.re/go/core/io` | WriteMode saves content with explicit file permissions. | No |
| `Medium.WriteStream` | `WriteStream(path string) (goio.WriteCloser, error)` | `dappco.re/go/core/io` | WriteStream returns a writer for the file content. | No |
| `MockFile.Close` | `func (*MockFile) Close() error` | `dappco.re/go/core/io` | Closes the mock file. | No |
| `MockFile.Read` | `func (*MockFile) Read(b []byte) (int, error)` | `dappco.re/go/core/io` | Reads data from the mock file. | No |
| `MockFile.Stat` | `func (*MockFile) Stat() (fs.FileInfo, error)` | `dappco.re/go/core/io` | Returns file metadata for the mock file. | No |
| `MockMedium.Append` | `func (*MockMedium) Append(path string) (goio.WriteCloser, error)` | `dappco.re/go/core/io` | Append opens a file for appending in the mock filesystem. | No |
| `MockMedium.Create` | `func (*MockMedium) Create(path string) (goio.WriteCloser, error)` | `dappco.re/go/core/io` | Create creates a file in the mock filesystem. | No |
| `MockMedium.Delete` | `func (*MockMedium) Delete(path string) error` | `dappco.re/go/core/io` | Delete removes a file or empty directory from the mock filesystem. | Yes |
| `MockMedium.DeleteAll` | `func (*MockMedium) DeleteAll(path string) error` | `dappco.re/go/core/io` | DeleteAll removes a file or directory and all contents from the mock filesystem. | Yes |
| `MockMedium.EnsureDir` | `func (*MockMedium) EnsureDir(path string) error` | `dappco.re/go/core/io` | EnsureDir records that a directory exists in the mock filesystem. | Yes |
| `MockMedium.Exists` | `func (*MockMedium) Exists(path string) bool` | `dappco.re/go/core/io` | Exists checks if a path exists in the mock filesystem. | Yes |
| `MockMedium.FileGet` | `func (*MockMedium) FileGet(path string) (string, error)` | `dappco.re/go/core/io` | FileGet is a convenience function that reads a file from the mock filesystem. | Yes |
| `MockMedium.FileSet` | `func (*MockMedium) FileSet(path, content string) error` | `dappco.re/go/core/io` | FileSet is a convenience function that writes a file to the mock filesystem. | Yes |
| `MockMedium.IsDir` | `func (*MockMedium) IsDir(path string) bool` | `dappco.re/go/core/io` | IsDir checks if a path is a directory in the mock filesystem. | Yes |
| `MockMedium.IsFile` | `func (*MockMedium) IsFile(path string) bool` | `dappco.re/go/core/io` | IsFile checks if a path exists as a file in the mock filesystem. | Yes |
| `MockMedium.List` | `func (*MockMedium) List(path string) ([]fs.DirEntry, error)` | `dappco.re/go/core/io` | List returns directory entries for the mock filesystem. | Yes |
| `MockMedium.Open` | `func (*MockMedium) Open(path string) (fs.File, error)` | `dappco.re/go/core/io` | Open opens a file from the mock filesystem. | No |
| `MockMedium.Read` | `func (*MockMedium) Read(path string) (string, error)` | `dappco.re/go/core/io` | Read retrieves the content of a file from the mock filesystem. | Yes |
| `MockMedium.ReadStream` | `func (*MockMedium) ReadStream(path string) (goio.ReadCloser, error)` | `dappco.re/go/core/io` | ReadStream returns a reader for the file content in the mock filesystem. | No |
| `MockMedium.Rename` | `func (*MockMedium) Rename(oldPath, newPath string) error` | `dappco.re/go/core/io` | Rename moves a file or directory in the mock filesystem. | Yes |
| `MockMedium.Stat` | `func (*MockMedium) Stat(path string) (fs.FileInfo, error)` | `dappco.re/go/core/io` | Stat returns file information for the mock filesystem. | Yes |
| `MockMedium.Write` | `func (*MockMedium) Write(path, content string) error` | `dappco.re/go/core/io` | Write saves the given content to a file in the mock filesystem. | Yes |
| `MockMedium.WriteMode` | `func (*MockMedium) WriteMode(path, content string, mode os.FileMode) error` | `dappco.re/go/core/io` | Writes content using an explicit file mode. | No |
| `MockMedium.WriteStream` | `func (*MockMedium) WriteStream(path string) (goio.WriteCloser, error)` | `dappco.re/go/core/io` | WriteStream returns a writer for the file content in the mock filesystem. | No |
| `MockWriteCloser.Close` | `func (*MockWriteCloser) Close() error` | `dappco.re/go/core/io` | Closes the mock writer. | No |
| `MockWriteCloser.Write` | `func (*MockWriteCloser) Write(p []byte) (int, error)` | `dappco.re/go/core/io` | Writes data to the mock writer. | No |
| `Medium` | `type Medium struct` | `dappco.re/go/core/io/datanode` | Medium is an in-memory storage backend backed by a Borg DataNode. | Yes |
| `FromTar` | `func FromTar(data []byte) (*Medium, error)` | `dappco.re/go/core/io/datanode` | FromTar creates a Medium from a tarball, restoring all files. | Yes |
| `New` | `func New() *Medium` | `dappco.re/go/core/io/datanode` | New creates a new empty DataNode Medium. | Yes |
| `Medium.Append` | `func (*Medium) Append(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/datanode` | Opens the named file for appending, creating it if needed. | Yes |
| `Medium.Create` | `func (*Medium) Create(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/datanode` | Creates or truncates the named file and returns a writer. | Yes |
| `Medium.DataNode` | `func (*Medium) DataNode() *datanode.DataNode` | `dappco.re/go/core/io/datanode` | DataNode returns the underlying Borg DataNode. | Yes |
| `Medium.Delete` | `func (*Medium) Delete(p string) error` | `dappco.re/go/core/io/datanode` | Removes a file, key, or empty directory. | Yes |
| `Medium.DeleteAll` | `func (*Medium) DeleteAll(p string) error` | `dappco.re/go/core/io/datanode` | Removes a file or directory tree recursively. | Yes |
| `Medium.EnsureDir` | `func (*Medium) EnsureDir(p string) error` | `dappco.re/go/core/io/datanode` | Ensures a directory path exists. | Yes |
| `Medium.Exists` | `func (*Medium) Exists(p string) bool` | `dappco.re/go/core/io/datanode` | Reports whether the path exists. | Yes |
| `Medium.FileGet` | `func (*Medium) FileGet(p string) (string, error)` | `dappco.re/go/core/io/datanode` | Reads a file or key through the convenience accessor. | Yes |
| `Medium.FileSet` | `func (*Medium) FileSet(p, content string) error` | `dappco.re/go/core/io/datanode` | Writes a file or key through the convenience accessor. | Yes |
| `Medium.IsDir` | `func (*Medium) IsDir(p string) bool` | `dappco.re/go/core/io/datanode` | Reports whether the entry represents a directory. | Yes |
| `Medium.IsFile` | `func (*Medium) IsFile(p string) bool` | `dappco.re/go/core/io/datanode` | Reports whether the path exists as a regular file. | Yes |
| `Medium.List` | `func (*Medium) List(p string) ([]fs.DirEntry, error)` | `dappco.re/go/core/io/datanode` | Lists directory entries beneath the given path. | Yes |
| `Medium.Open` | `func (*Medium) Open(p string) (fs.File, error)` | `dappco.re/go/core/io/datanode` | Opens the named file for reading. | Yes |
| `Medium.Read` | `func (*Medium) Read(p string) (string, error)` | `dappco.re/go/core/io/datanode` | Reads the named file's content as a string. | Yes |
| `Medium.ReadStream` | `func (*Medium) ReadStream(p string) (goio.ReadCloser, error)` | `dappco.re/go/core/io/datanode` | Opens a streaming reader for the file content. | Yes |
| `Medium.Rename` | `func (*Medium) Rename(oldPath, newPath string) error` | `dappco.re/go/core/io/datanode` | Moves a file or directory to a new path. | Yes |
| `Medium.Restore` | `func (*Medium) Restore(data []byte) error` | `dappco.re/go/core/io/datanode` | Restore replaces the filesystem contents from a tarball. | Yes |
| `Medium.Snapshot` | `func (*Medium) Snapshot() ([]byte, error)` | `dappco.re/go/core/io/datanode` | Snapshot serializes the entire filesystem to a tarball. | Yes |
| `Medium.Stat` | `func (*Medium) Stat(p string) (fs.FileInfo, error)` | `dappco.re/go/core/io/datanode` | Returns file metadata for the given path. | Yes |
| `Medium.Write` | `func (*Medium) Write(p, content string) error` | `dappco.re/go/core/io/datanode` | Writes content to the named file. | Yes |
| `Medium.WriteMode` | `func (*Medium) WriteMode(p, content string, mode os.FileMode) error` | `dappco.re/go/core/io/datanode` | Writes content using an explicit file mode. | No |
| `Medium.WriteStream` | `func (*Medium) WriteStream(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/datanode` | Opens a streaming writer for the file content. | Yes |
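The `Snapshot`/`Restore`/`FromTar` trio above implies a tar round-trip of the whole in-memory tree. A minimal stdlib-only sketch of that contract (the helper names and the flat path→content map are illustrative assumptions, not the package's actual internals):

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
)

// snapshot serialises a path→content map to a tar archive, mirroring
// what Medium.Snapshot does for the DataNode tree.
func snapshot(files map[string]string) ([]byte, error) {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	for name, content := range files {
		hdr := &tar.Header{Name: name, Mode: 0o644, Size: int64(len(content))}
		if err := tw.WriteHeader(hdr); err != nil {
			return nil, err
		}
		if _, err := tw.Write([]byte(content)); err != nil {
			return nil, err
		}
	}
	if err := tw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// restore rebuilds the map from a tar archive, as FromTar/Restore do.
func restore(data []byte) (map[string]string, error) {
	files := map[string]string{}
	tr := tar.NewReader(bytes.NewReader(data))
	for {
		hdr, err := tr.Next()
		if err != nil {
			break // io.EOF terminates the archive
		}
		var buf bytes.Buffer
		if _, err := buf.ReadFrom(tr); err != nil {
			return nil, err
		}
		files[hdr.Name] = buf.String()
	}
	return files, nil
}

func main() {
	data, _ := snapshot(map[string]string{"a.txt": "hello"})
	out, _ := restore(data)
	fmt.Println(out["a.txt"])
}
```

Because the archive is just bytes, the same snapshot can be fed to `FromTar` on another process to clone state.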
| `Medium` | `type Medium struct` | `dappco.re/go/core/io/local` | Medium is a local filesystem storage backend. | Yes |
| `New` | `func New(root string) (*Medium, error)` | `dappco.re/go/core/io/local` | New creates a new local Medium rooted at the given directory. | Yes |
| `Medium.Append` | `func (*Medium) Append(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/local` | Append opens the named file for appending, creating it if it doesn't exist. | No |
| `Medium.Create` | `func (*Medium) Create(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/local` | Create creates or truncates the named file. | Yes |
| `Medium.Delete` | `func (*Medium) Delete(p string) error` | `dappco.re/go/core/io/local` | Delete removes a file or empty directory. | Yes |
| `Medium.DeleteAll` | `func (*Medium) DeleteAll(p string) error` | `dappco.re/go/core/io/local` | DeleteAll removes a file or directory recursively. | Yes |
| `Medium.EnsureDir` | `func (*Medium) EnsureDir(p string) error` | `dappco.re/go/core/io/local` | EnsureDir creates directory if it doesn't exist. | Yes |
| `Medium.Exists` | `func (*Medium) Exists(p string) bool` | `dappco.re/go/core/io/local` | Exists returns true if path exists. | Yes |
| `Medium.FileGet` | `func (*Medium) FileGet(p string) (string, error)` | `dappco.re/go/core/io/local` | FileGet is an alias for Read. | Yes |
| `Medium.FileSet` | `func (*Medium) FileSet(p, content string) error` | `dappco.re/go/core/io/local` | FileSet is an alias for Write. | Yes |
| `Medium.IsDir` | `func (*Medium) IsDir(p string) bool` | `dappco.re/go/core/io/local` | IsDir returns true if path is a directory. | Yes |
| `Medium.IsFile` | `func (*Medium) IsFile(p string) bool` | `dappco.re/go/core/io/local` | IsFile returns true if path is a regular file. | Yes |
| `Medium.List` | `func (*Medium) List(p string) ([]fs.DirEntry, error)` | `dappco.re/go/core/io/local` | List returns directory entries. | Yes |
| `Medium.Open` | `func (*Medium) Open(p string) (fs.File, error)` | `dappco.re/go/core/io/local` | Open opens the named file for reading. | Yes |
| `Medium.Read` | `func (*Medium) Read(p string) (string, error)` | `dappco.re/go/core/io/local` | Read returns file contents as string. | Yes |
| `Medium.ReadStream` | `func (*Medium) ReadStream(path string) (goio.ReadCloser, error)` | `dappco.re/go/core/io/local` | ReadStream returns a reader for the file content. | Yes |
| `Medium.Rename` | `func (*Medium) Rename(oldPath, newPath string) error` | `dappco.re/go/core/io/local` | Rename moves a file or directory. | Yes |
| `Medium.Stat` | `func (*Medium) Stat(p string) (fs.FileInfo, error)` | `dappco.re/go/core/io/local` | Stat returns file info. | Yes |
| `Medium.Write` | `func (*Medium) Write(p, content string) error` | `dappco.re/go/core/io/local` | Write saves content to file, creating parent directories as needed. | Yes |
| `Medium.WriteMode` | `func (*Medium) WriteMode(p, content string, mode os.FileMode) error` | `dappco.re/go/core/io/local` | WriteMode saves content to file with explicit permissions. | Yes |
| `Medium.WriteStream` | `func (*Medium) WriteStream(path string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/local` | WriteStream returns a writer for the file content. | Yes |
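A rooted local backend like this must resolve every path inside its root. A sketch of the usual confinement guard (the `resolve` helper is a hypothetical stand-in, not the package's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
	"path/filepath"
	"strings"
)

// resolve confines p beneath root: rooting the path before Clean means
// ".." segments cannot climb above root, and the prefix check rejects
// anything that still escapes.
func resolve(root, p string) (string, error) {
	full := filepath.Join(root, filepath.Clean("/"+p))
	if full != root && !strings.HasPrefix(full, root+string(filepath.Separator)) {
		return "", errors.New("path escapes sandbox: " + p)
	}
	return full, nil
}

func main() {
	// The traversal attempt is neutralised rather than rejected:
	// Clean("/../etc/passwd") collapses to "/etc/passwd" under root.
	got, _ := resolve("/srv/data", "../etc/passwd")
	fmt.Println(got)
}
```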
| `Node` | `type Node struct` | `dappco.re/go/core/io/node` | Node is an in-memory filesystem that implements coreio.Node (and therefore coreio.Medium). | Yes |
| `WalkOptions` | `type WalkOptions struct` | `dappco.re/go/core/io/node` | WalkOptions configures the behaviour of Walk. | Yes |
| `FromTar` | `func FromTar(data []byte) (*Node, error)` | `dappco.re/go/core/io/node` | FromTar creates a new Node from a tar archive. | Yes |
| `New` | `func New() *Node` | `dappco.re/go/core/io/node` | New creates a new, empty Node. | Yes |
| `Node.AddData` | `func (*Node) AddData(name string, content []byte)` | `dappco.re/go/core/io/node` | AddData stages content in the in-memory filesystem. | Yes |
| `Node.Append` | `func (*Node) Append(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/node` | Append opens the named file for appending, creating it if needed. | No |
| `Node.CopyFile` | `func (*Node) CopyFile(src, dst string, perm fs.FileMode) error` | `dappco.re/go/core/io/node` | CopyFile copies a file from the in-memory tree to the local filesystem. | Yes |
| `Node.CopyTo` | `func (*Node) CopyTo(target coreio.Medium, sourcePath, destPath string) error` | `dappco.re/go/core/io/node` | CopyTo copies a file (or directory tree) from the node to any Medium. | No |
| `Node.Create` | `func (*Node) Create(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/node` | Create creates or truncates the named file, returning a WriteCloser. | No |
| `Node.Delete` | `func (*Node) Delete(p string) error` | `dappco.re/go/core/io/node` | Delete removes a single file. | No |
| `Node.DeleteAll` | `func (*Node) DeleteAll(p string) error` | `dappco.re/go/core/io/node` | DeleteAll removes a file or directory and all children. | No |
| `Node.EnsureDir` | `func (*Node) EnsureDir(_ string) error` | `dappco.re/go/core/io/node` | EnsureDir is a no-op because directories are implicit in Node. | No |
| `Node.Exists` | `func (*Node) Exists(p string) bool` | `dappco.re/go/core/io/node` | Exists checks if a path exists (file or directory). | Yes |
| `Node.FileGet` | `func (*Node) FileGet(p string) (string, error)` | `dappco.re/go/core/io/node` | FileGet is an alias for Read. | No |
| `Node.FileSet` | `func (*Node) FileSet(p, content string) error` | `dappco.re/go/core/io/node` | FileSet is an alias for Write. | No |
| `Node.IsDir` | `func (*Node) IsDir(p string) bool` | `dappco.re/go/core/io/node` | IsDir checks if a path exists and is a directory. | No |
| `Node.IsFile` | `func (*Node) IsFile(p string) bool` | `dappco.re/go/core/io/node` | IsFile checks if a path exists and is a regular file. | No |
| `Node.List` | `func (*Node) List(p string) ([]fs.DirEntry, error)` | `dappco.re/go/core/io/node` | List returns directory entries for the given path. | No |
| `Node.LoadTar` | `func (*Node) LoadTar(data []byte) error` | `dappco.re/go/core/io/node` | LoadTar replaces the in-memory tree with the contents of a tar archive. | Yes |
| `Node.Open` | `func (*Node) Open(name string) (fs.File, error)` | `dappco.re/go/core/io/node` | Open opens a file from the Node. | Yes |
| `Node.Read` | `func (*Node) Read(p string) (string, error)` | `dappco.re/go/core/io/node` | Read retrieves the content of a file as a string. | No |
| `Node.ReadDir` | `func (*Node) ReadDir(name string) ([]fs.DirEntry, error)` | `dappco.re/go/core/io/node` | ReadDir reads and returns all directory entries for the named directory. | Yes |
| `Node.ReadFile` | `func (*Node) ReadFile(name string) ([]byte, error)` | `dappco.re/go/core/io/node` | ReadFile returns the content of the named file as a byte slice. | Yes |
| `Node.ReadStream` | `func (*Node) ReadStream(p string) (goio.ReadCloser, error)` | `dappco.re/go/core/io/node` | ReadStream returns a ReadCloser for the file content. | No |
| `Node.Rename` | `func (*Node) Rename(oldPath, newPath string) error` | `dappco.re/go/core/io/node` | Rename moves a file from oldPath to newPath. | No |
| `Node.Stat` | `func (*Node) Stat(name string) (fs.FileInfo, error)` | `dappco.re/go/core/io/node` | Stat returns file information for the given path. | Yes |
| `Node.ToTar` | `func (*Node) ToTar() ([]byte, error)` | `dappco.re/go/core/io/node` | ToTar serialises the entire in-memory tree to a tar archive. | Yes |
| `Node.Walk` | `func (*Node) Walk(root string, fn fs.WalkDirFunc, opts ...WalkOptions) error` | `dappco.re/go/core/io/node` | Walk walks the in-memory tree with optional WalkOptions. | Yes |
| `Node.WalkNode` | `func (*Node) WalkNode(root string, fn fs.WalkDirFunc) error` | `dappco.re/go/core/io/node` | WalkNode walks the in-memory tree, calling fn for each entry. | No |
| `Node.Write` | `func (*Node) Write(p, content string) error` | `dappco.re/go/core/io/node` | Write saves the given content to a file, overwriting it if it exists. | No |
| `Node.WriteMode` | `func (*Node) WriteMode(p, content string, mode os.FileMode) error` | `dappco.re/go/core/io/node` | WriteMode saves content with explicit permissions (no-op for in-memory node). | No |
| `Node.WriteStream` | `func (*Node) WriteStream(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/node` | WriteStream returns a WriteCloser for the file content. | No |
| `Medium` | `type Medium struct` | `dappco.re/go/core/io/s3` | Medium is an S3-backed storage backend implementing the io.Medium interface. | Yes |
| `Option` | `type Option func(*Medium)` | `dappco.re/go/core/io/s3` | Option configures a Medium. | Yes |
| `New` | `func New(bucket string, opts ...Option) (*Medium, error)` | `dappco.re/go/core/io/s3` | New creates a new S3 Medium for the given bucket. | Yes |
| `WithClient` | `func WithClient(client *s3.Client) Option` | `dappco.re/go/core/io/s3` | WithClient sets the S3 client for dependency injection. | No |
| `WithPrefix` | `func WithPrefix(prefix string) Option` | `dappco.re/go/core/io/s3` | WithPrefix sets an optional key prefix for all operations. | Yes |
| `Medium.Append` | `func (*Medium) Append(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/s3` | Append opens the named file for appending. | Yes |
| `Medium.Create` | `func (*Medium) Create(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/s3` | Create creates or truncates the named file. | Yes |
| `Medium.Delete` | `func (*Medium) Delete(p string) error` | `dappco.re/go/core/io/s3` | Delete removes a single object. | Yes |
| `Medium.DeleteAll` | `func (*Medium) DeleteAll(p string) error` | `dappco.re/go/core/io/s3` | DeleteAll removes all objects under the given prefix. | Yes |
| `Medium.EnsureDir` | `func (*Medium) EnsureDir(_ string) error` | `dappco.re/go/core/io/s3` | EnsureDir is a no-op for S3 (S3 has no real directories). | Yes |
| `Medium.Exists` | `func (*Medium) Exists(p string) bool` | `dappco.re/go/core/io/s3` | Exists checks if a path exists (file or directory prefix). | Yes |
| `Medium.FileGet` | `func (*Medium) FileGet(p string) (string, error)` | `dappco.re/go/core/io/s3` | FileGet is a convenience function that reads a file from the medium. | Yes |
| `Medium.FileSet` | `func (*Medium) FileSet(p, content string) error` | `dappco.re/go/core/io/s3` | FileSet is a convenience function that writes a file to the medium. | Yes |
| `Medium.IsDir` | `func (*Medium) IsDir(p string) bool` | `dappco.re/go/core/io/s3` | IsDir checks if a path exists and is a directory (has objects under it as a prefix). | Yes |
| `Medium.IsFile` | `func (*Medium) IsFile(p string) bool` | `dappco.re/go/core/io/s3` | IsFile checks if a path exists and is a regular file (not a "directory" prefix). | Yes |
| `Medium.List` | `func (*Medium) List(p string) ([]fs.DirEntry, error)` | `dappco.re/go/core/io/s3` | List returns directory entries for the given path using ListObjectsV2 with delimiter. | Yes |
| `Medium.Open` | `func (*Medium) Open(p string) (fs.File, error)` | `dappco.re/go/core/io/s3` | Open opens the named file for reading. | Yes |
| `Medium.Read` | `func (*Medium) Read(p string) (string, error)` | `dappco.re/go/core/io/s3` | Read retrieves the content of a file as a string. | Yes |
| `Medium.ReadStream` | `func (*Medium) ReadStream(p string) (goio.ReadCloser, error)` | `dappco.re/go/core/io/s3` | ReadStream returns a reader for the file content. | Yes |
| `Medium.Rename` | `func (*Medium) Rename(oldPath, newPath string) error` | `dappco.re/go/core/io/s3` | Rename moves an object by copying then deleting the original. | Yes |
| `Medium.Stat` | `func (*Medium) Stat(p string) (fs.FileInfo, error)` | `dappco.re/go/core/io/s3` | Stat returns file information for the given path using HeadObject. | Yes |
| `Medium.Write` | `func (*Medium) Write(p, content string) error` | `dappco.re/go/core/io/s3` | Write saves the given content to a file, overwriting it if it exists. | Yes |
| `Medium.WriteStream` | `func (*Medium) WriteStream(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/s3` | WriteStream returns a writer for the file content. | Yes |
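`Medium.Rename` above is documented as copy-then-delete, since S3 offers no atomic rename. A minimal sketch of that pattern against a map-backed object store (the `objectStore` type is a hypothetical stand-in for a bucket, not the package's client):

```go
package main

import (
	"errors"
	"fmt"
)

// objectStore mimics a bucket's flat keyspace.
type objectStore map[string][]byte

// rename copies the object to the new key, then deletes the original,
// mirroring the CopyObject + DeleteObject sequence an S3 rename needs.
// Note the two steps are not atomic: a crash between them leaves both keys.
func (s objectStore) rename(oldKey, newKey string) error {
	data, ok := s[oldKey]
	if !ok {
		return errors.New("no such key: " + oldKey)
	}
	s[newKey] = append([]byte(nil), data...) // CopyObject
	delete(s, oldKey)                        // DeleteObject
	return nil
}

func main() {
	s := objectStore{"a/b.txt": []byte("hi")}
	if err := s.rename("a/b.txt", "c/d.txt"); err != nil {
		panic(err)
	}
	fmt.Println(string(s["c/d.txt"]))
}
```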
| `Base64Sigil` | `type Base64Sigil struct` | `dappco.re/go/core/io/sigil` | Base64Sigil is a Sigil that encodes/decodes data to/from base64. | Yes |
| `ChaChaPolySigil` | `type ChaChaPolySigil struct` | `dappco.re/go/core/io/sigil` | ChaChaPolySigil is a Sigil that encrypts/decrypts data using ChaCha20-Poly1305. | Yes |
| `GzipSigil` | `type GzipSigil struct` | `dappco.re/go/core/io/sigil` | GzipSigil is a Sigil that compresses/decompresses data using gzip. | Yes |
| `HashSigil` | `type HashSigil struct` | `dappco.re/go/core/io/sigil` | HashSigil is a Sigil that hashes the data using a specified algorithm. | Yes |
| `HexSigil` | `type HexSigil struct` | `dappco.re/go/core/io/sigil` | HexSigil is a Sigil that encodes/decodes data to/from hexadecimal. | Yes |
| `JSONSigil` | `type JSONSigil struct` | `dappco.re/go/core/io/sigil` | JSONSigil is a Sigil that compacts or indents JSON data. | Yes |
| `PreObfuscator` | `type PreObfuscator interface` | `dappco.re/go/core/io/sigil` | PreObfuscator applies a reversible transformation to data before encryption. | Yes |
| `ReverseSigil` | `type ReverseSigil struct` | `dappco.re/go/core/io/sigil` | ReverseSigil is a Sigil that reverses the bytes of the payload. | Yes |
| `ShuffleMaskObfuscator` | `type ShuffleMaskObfuscator struct` | `dappco.re/go/core/io/sigil` | ShuffleMaskObfuscator provides stronger obfuscation through byte shuffling and masking. | Yes |
| `Sigil` | `type Sigil interface` | `dappco.re/go/core/io/sigil` | Sigil defines the interface for a data transformer. | Yes |
| `XORObfuscator` | `type XORObfuscator struct` | `dappco.re/go/core/io/sigil` | XORObfuscator performs XOR-based obfuscation using an entropy-derived key stream. | Yes |
| `GetNonceFromCiphertext` | `func GetNonceFromCiphertext(ciphertext []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | GetNonceFromCiphertext extracts the nonce from encrypted output. | Yes |
| `NewChaChaPolySigil` | `func NewChaChaPolySigil(key []byte) (*ChaChaPolySigil, error)` | `dappco.re/go/core/io/sigil` | NewChaChaPolySigil creates a new encryption sigil with the given key. | Yes |
| `NewChaChaPolySigilWithObfuscator` | `func NewChaChaPolySigilWithObfuscator(key []byte, obfuscator PreObfuscator) (*ChaChaPolySigil, error)` | `dappco.re/go/core/io/sigil` | NewChaChaPolySigilWithObfuscator creates a new encryption sigil with custom obfuscator. | Yes |
| `NewHashSigil` | `func NewHashSigil(h crypto.Hash) *HashSigil` | `dappco.re/go/core/io/sigil` | NewHashSigil creates a new HashSigil. | Yes |
| `NewSigil` | `func NewSigil(name string) (Sigil, error)` | `dappco.re/go/core/io/sigil` | NewSigil is a factory function that returns a Sigil based on a string name. | Yes |
| `Transmute` | `func Transmute(data []byte, sigils []Sigil) ([]byte, error)` | `dappco.re/go/core/io/sigil` | Transmute applies a series of sigils to data in sequence. | Yes |
| `Untransmute` | `func Untransmute(data []byte, sigils []Sigil) ([]byte, error)` | `dappco.re/go/core/io/sigil` | Untransmute reverses a transmutation by applying Out in reverse order. | Yes |
| `Base64Sigil.In` | `func (*Base64Sigil) In(data []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | In encodes the data to base64. | Yes |
| `Base64Sigil.Out` | `func (*Base64Sigil) Out(data []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | Out decodes the data from base64. | Yes |
| `ChaChaPolySigil.In` | `func (*ChaChaPolySigil) In(data []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | In encrypts the data with pre-obfuscation. | Yes |
| `ChaChaPolySigil.Out` | `func (*ChaChaPolySigil) Out(data []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | Out decrypts the data and reverses obfuscation. | Yes |
| `GzipSigil.In` | `func (*GzipSigil) In(data []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | In compresses the data using gzip. | Yes |
| `GzipSigil.Out` | `func (*GzipSigil) Out(data []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | Out decompresses the data using gzip. | Yes |
| `HashSigil.In` | `func (*HashSigil) In(data []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | In hashes the data. | Yes |
| `HashSigil.Out` | `func (*HashSigil) Out(data []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | Out is a no-op for HashSigil. | Yes |
| `HexSigil.In` | `func (*HexSigil) In(data []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | In encodes the data to hexadecimal. | Yes |
| `HexSigil.Out` | `func (*HexSigil) Out(data []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | Out decodes the data from hexadecimal. | Yes |
| `JSONSigil.In` | `func (*JSONSigil) In(data []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | In compacts or indents the JSON data. | Yes |
| `JSONSigil.Out` | `func (*JSONSigil) Out(data []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | Out is a no-op for JSONSigil. | Yes |
| `PreObfuscator.Deobfuscate` | `Deobfuscate(data []byte, entropy []byte) []byte` | `dappco.re/go/core/io/sigil` | Deobfuscate reverses the transformation after decryption. | Yes |
| `PreObfuscator.Obfuscate` | `Obfuscate(data []byte, entropy []byte) []byte` | `dappco.re/go/core/io/sigil` | Obfuscate transforms plaintext before encryption using the provided entropy. | Yes |
| `ReverseSigil.In` | `func (*ReverseSigil) In(data []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | In reverses the bytes of the data. | Yes |
| `ReverseSigil.Out` | `func (*ReverseSigil) Out(data []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | Out reverses the bytes of the data. | Yes |
| `ShuffleMaskObfuscator.Deobfuscate` | `func (*ShuffleMaskObfuscator) Deobfuscate(data []byte, entropy []byte) []byte` | `dappco.re/go/core/io/sigil` | Deobfuscate reverses the shuffle and mask operations. | Yes |
| `ShuffleMaskObfuscator.Obfuscate` | `func (*ShuffleMaskObfuscator) Obfuscate(data []byte, entropy []byte) []byte` | `dappco.re/go/core/io/sigil` | Obfuscate shuffles bytes and applies a mask derived from entropy. | Yes |
| `Sigil.In` | `In(data []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | In applies the forward transformation to the data. | Yes |
| `Sigil.Out` | `Out(data []byte) ([]byte, error)` | `dappco.re/go/core/io/sigil` | Out applies the reverse transformation to the data. | Yes |
| `XORObfuscator.Deobfuscate` | `func (*XORObfuscator) Deobfuscate(data []byte, entropy []byte) []byte` | `dappco.re/go/core/io/sigil` | Deobfuscate reverses the XOR transformation (XOR is symmetric). | Yes |
| `XORObfuscator.Obfuscate` | `func (*XORObfuscator) Obfuscate(data []byte, entropy []byte) []byte` | `dappco.re/go/core/io/sigil` | Obfuscate XORs the data with a key stream derived from the entropy. | Yes |
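`Transmute` applies each sigil's `In` in order, and `Untransmute` undoes the chain by calling `Out` in reverse order. A self-contained sketch of that contract with two toy sigils (reverse and base64, built on the stdlib; the lowercase names are local stand-ins, not the package's types):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// sigil mirrors the two-method Sigil contract: In is the forward
// transform, Out its inverse.
type sigil interface {
	In([]byte) ([]byte, error)
	Out([]byte) ([]byte, error)
}

type reverseSigil struct{}

func (reverseSigil) In(d []byte) ([]byte, error) {
	out := make([]byte, len(d))
	for i, b := range d {
		out[len(d)-1-i] = b
	}
	return out, nil
}
func (r reverseSigil) Out(d []byte) ([]byte, error) { return r.In(d) } // self-inverse

type base64Sigil struct{}

func (base64Sigil) In(d []byte) ([]byte, error) {
	return []byte(base64.StdEncoding.EncodeToString(d)), nil
}
func (base64Sigil) Out(d []byte) ([]byte, error) {
	return base64.StdEncoding.DecodeString(string(d))
}

// transmute applies sigils left to right.
func transmute(data []byte, sigils []sigil) ([]byte, error) {
	var err error
	for _, s := range sigils {
		if data, err = s.In(data); err != nil {
			return nil, err
		}
	}
	return data, nil
}

// untransmute calls Out in reverse order, undoing the chain.
func untransmute(data []byte, sigils []sigil) ([]byte, error) {
	var err error
	for i := len(sigils) - 1; i >= 0; i-- {
		if data, err = sigils[i].Out(data); err != nil {
			return nil, err
		}
	}
	return data, nil
}

func main() {
	chain := []sigil{reverseSigil{}, base64Sigil{}}
	enc, _ := transmute([]byte("hello"), chain)
	dec, _ := untransmute(enc, chain)
	fmt.Println(string(dec)) // round-trips to the original
}
```

One-way sigils such as `HashSigil` fit the same interface by making `Out` a no-op, which is why a hash must sit last in any chain that needs to be reversed.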
| `Medium` | `type Medium struct` | `dappco.re/go/core/io/sqlite` | Medium is a SQLite-backed storage backend implementing the io.Medium interface. | Yes |
| `Option` | `type Option func(*Medium)` | `dappco.re/go/core/io/sqlite` | Option configures a Medium. | Yes |
| `New` | `func New(dbPath string, opts ...Option) (*Medium, error)` | `dappco.re/go/core/io/sqlite` | New creates a new SQLite Medium at the given database path. | Yes |
| `WithTable` | `func WithTable(table string) Option` | `dappco.re/go/core/io/sqlite` | WithTable sets the table name (default: "files"). | Yes |
| `Medium.Append` | `func (*Medium) Append(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/sqlite` | Append opens the named file for appending, creating it if it doesn't exist. | Yes |
| `Medium.Close` | `func (*Medium) Close() error` | `dappco.re/go/core/io/sqlite` | Close closes the underlying database connection. | Yes |
| `Medium.Create` | `func (*Medium) Create(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/sqlite` | Create creates or truncates the named file. | Yes |
| `Medium.Delete` | `func (*Medium) Delete(p string) error` | `dappco.re/go/core/io/sqlite` | Delete removes a file or empty directory. | Yes |
| `Medium.DeleteAll` | `func (*Medium) DeleteAll(p string) error` | `dappco.re/go/core/io/sqlite` | DeleteAll removes a file or directory and all its contents recursively. | Yes |
| `Medium.EnsureDir` | `func (*Medium) EnsureDir(p string) error` | `dappco.re/go/core/io/sqlite` | EnsureDir makes sure a directory exists, creating it if necessary. | Yes |
| `Medium.Exists` | `func (*Medium) Exists(p string) bool` | `dappco.re/go/core/io/sqlite` | Exists checks if a path exists (file or directory). | Yes |
| `Medium.FileGet` | `func (*Medium) FileGet(p string) (string, error)` | `dappco.re/go/core/io/sqlite` | FileGet is a convenience function that reads a file from the medium. | Yes |
| `Medium.FileSet` | `func (*Medium) FileSet(p, content string) error` | `dappco.re/go/core/io/sqlite` | FileSet is a convenience function that writes a file to the medium. | Yes |
| `Medium.IsDir` | `func (*Medium) IsDir(p string) bool` | `dappco.re/go/core/io/sqlite` | IsDir checks if a path exists and is a directory. | Yes |
| `Medium.IsFile` | `func (*Medium) IsFile(p string) bool` | `dappco.re/go/core/io/sqlite` | IsFile checks if a path exists and is a regular file. | Yes |
| `Medium.List` | `func (*Medium) List(p string) ([]fs.DirEntry, error)` | `dappco.re/go/core/io/sqlite` | List returns the directory entries for the given path. | Yes |
| `Medium.Open` | `func (*Medium) Open(p string) (fs.File, error)` | `dappco.re/go/core/io/sqlite` | Open opens the named file for reading. | Yes |
| `Medium.Read` | `func (*Medium) Read(p string) (string, error)` | `dappco.re/go/core/io/sqlite` | Read retrieves the content of a file as a string. | Yes |
| `Medium.ReadStream` | `func (*Medium) ReadStream(p string) (goio.ReadCloser, error)` | `dappco.re/go/core/io/sqlite` | ReadStream returns a reader for the file content. | Yes |
| `Medium.Rename` | `func (*Medium) Rename(oldPath, newPath string) error` | `dappco.re/go/core/io/sqlite` | Rename moves a file or directory from oldPath to newPath. | Yes |
| `Medium.Stat` | `func (*Medium) Stat(p string) (fs.FileInfo, error)` | `dappco.re/go/core/io/sqlite` | Stat returns file information for the given path. | Yes |
| `Medium.Write` | `func (*Medium) Write(p, content string) error` | `dappco.re/go/core/io/sqlite` | Write saves the given content to a file, overwriting it if it exists. | Yes |
| `Medium.WriteStream` | `func (*Medium) WriteStream(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/sqlite` | WriteStream returns a writer for the file content. | Yes |
| `Medium` | `type Medium struct` | `dappco.re/go/core/io/store` | Medium wraps a Store to satisfy the io.Medium interface. | Yes |
| `Store` | `type Store struct` | `dappco.re/go/core/io/store` | Store is a group-namespaced key-value store backed by SQLite. | Yes |
| `New` | `func New(dbPath string) (*Store, error)` | `dappco.re/go/core/io/store` | New creates a Store at the given SQLite path. | Yes |
| `NewMedium` | `func NewMedium(dbPath string) (*Medium, error)` | `dappco.re/go/core/io/store` | NewMedium creates an io.Medium backed by a KV store at the given SQLite path. | Yes |
| `Medium.Append` | `func (*Medium) Append(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/store` | Append opens a key for appending. | Yes |
| `Medium.Close` | `func (*Medium) Close() error` | `dappco.re/go/core/io/store` | Close closes the underlying store. | Yes |
| `Medium.Create` | `func (*Medium) Create(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/store` | Create creates or truncates a key. | Yes |
| `Medium.Delete` | `func (*Medium) Delete(p string) error` | `dappco.re/go/core/io/store` | Delete removes a key, or checks that a group is empty. | Yes |
| `Medium.DeleteAll` | `func (*Medium) DeleteAll(p string) error` | `dappco.re/go/core/io/store` | DeleteAll removes a key, or all keys in a group. | Yes |
| `Medium.EnsureDir` | `func (*Medium) EnsureDir(_ string) error` | `dappco.re/go/core/io/store` | EnsureDir is a no-op — groups are created implicitly on Set. | No |
| `Medium.Exists` | `func (*Medium) Exists(p string) bool` | `dappco.re/go/core/io/store` | Exists returns true if a group or key exists. | Yes |
| `Medium.FileGet` | `func (*Medium) FileGet(p string) (string, error)` | `dappco.re/go/core/io/store` | FileGet is an alias for Read. | No |
| `Medium.FileSet` | `func (*Medium) FileSet(p, content string) error` | `dappco.re/go/core/io/store` | FileSet is an alias for Write. | No |
| `Medium.IsDir` | `func (*Medium) IsDir(p string) bool` | `dappco.re/go/core/io/store` | IsDir returns true if the path is a group with entries. | Yes |
| `Medium.IsFile` | `func (*Medium) IsFile(p string) bool` | `dappco.re/go/core/io/store` | IsFile returns true if a group/key pair exists. | Yes |
| `Medium.List` | `func (*Medium) List(p string) ([]fs.DirEntry, error)` | `dappco.re/go/core/io/store` | List returns directory entries. | Yes |
| `Medium.Open` | `func (*Medium) Open(p string) (fs.File, error)` | `dappco.re/go/core/io/store` | Open opens a key for reading. | Yes |
| `Medium.Read` | `func (*Medium) Read(p string) (string, error)` | `dappco.re/go/core/io/store` | Read retrieves the value at group/key. | Yes |
| `Medium.ReadStream` | `func (*Medium) ReadStream(p string) (goio.ReadCloser, error)` | `dappco.re/go/core/io/store` | ReadStream returns a reader for the value. | No |
| `Medium.Rename` | `func (*Medium) Rename(oldPath, newPath string) error` | `dappco.re/go/core/io/store` | Rename moves a key from one path to another. | Yes |
| `Medium.Stat` | `func (*Medium) Stat(p string) (fs.FileInfo, error)` | `dappco.re/go/core/io/store` | Stat returns file info for a group (dir) or key (file). | Yes |
| `Medium.Store` | `func (*Medium) Store() *Store` | `dappco.re/go/core/io/store` | Store returns the underlying KV store for direct access. | No |
| `Medium.Write` | `func (*Medium) Write(p, content string) error` | `dappco.re/go/core/io/store` | Write stores a value at group/key. | Yes |
| `Medium.WriteStream` | `func (*Medium) WriteStream(p string) (goio.WriteCloser, error)` | `dappco.re/go/core/io/store` | WriteStream returns a writer. | No |
| `Store.AsMedium` | `func (*Store) AsMedium() *Medium` | `dappco.re/go/core/io/store` | AsMedium returns a Medium adapter for an existing Store. | Yes |
| `Store.Close` | `func (*Store) Close() error` | `dappco.re/go/core/io/store` | Close closes the underlying database. | Yes |
| `Store.Count` | `func (*Store) Count(group string) (int, error)` | `dappco.re/go/core/io/store` | Count returns the number of keys in a group. | Yes |
| `Store.Delete` | `func (*Store) Delete(group, key string) error` | `dappco.re/go/core/io/store` | Delete removes a single key from a group. | Yes |
| `Store.DeleteGroup` | `func (*Store) DeleteGroup(group string) error` | `dappco.re/go/core/io/store` | DeleteGroup removes all keys in a group. | Yes |
| `Store.Get` | `func (*Store) Get(group, key string) (string, error)` | `dappco.re/go/core/io/store` | Get retrieves a value by group and key. | Yes |
| `Store.GetAll` | `func (*Store) GetAll(group string) (map[string]string, error)` | `dappco.re/go/core/io/store` | GetAll returns all key-value pairs in a group. | Yes |
| `Store.Render` | `func (*Store) Render(tmplStr, group string) (string, error)` | `dappco.re/go/core/io/store` | Render loads all key-value pairs from a group and renders a Go template. | Yes |
| `Store.Set` | `func (*Store) Set(group, key, value string) error` | `dappco.re/go/core/io/store` | Set stores a value by group and key, overwriting if exists. | Yes |
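`Store.Render` feeds a group's key-value pairs into a Go `text/template`, so each key is addressable as `{{.key}}`. A sketch of that behaviour with a plain map standing in for the SQLite-backed group (the `renderGroup` helper is illustrative, not the package's code):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderGroup parses tmplStr and executes it with the group's
// key-value map as template data; map keys resolve via {{.key}}.
func renderGroup(tmplStr string, kv map[string]string) (string, error) {
	tmpl, err := template.New("group").Parse(tmplStr)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, kv); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	kv := map[string]string{"host": "localhost", "port": "5432"}
	out, _ := renderGroup("{{.host}}:{{.port}}", kv)
	fmt.Println(out)
}
```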
| `Service` | `type Service struct` | `dappco.re/go/core/io/workspace` | Service implements the Workspace interface. | Yes |
| `Workspace` | `type Workspace interface` | `dappco.re/go/core/io/workspace` | Workspace provides management for encrypted user workspaces. | No |
| `New` | `func New(c *core.Core, crypt ...cryptProvider) (any, error)` | `dappco.re/go/core/io/workspace` | New creates a new Workspace service instance. | Yes |
| `Service.CreateWorkspace` | `func (*Service) CreateWorkspace(identifier, password string) (string, error)` | `dappco.re/go/core/io/workspace` | CreateWorkspace creates a new encrypted workspace. | Yes |
| `Service.HandleIPCEvents` | `func (*Service) HandleIPCEvents(c *core.Core, msg core.Message) core.Result` | `dappco.re/go/core/io/workspace` | HandleIPCEvents handles workspace-related IPC messages. | No |
| `Service.SwitchWorkspace` | `func (*Service) SwitchWorkspace(name string) error` | `dappco.re/go/core/io/workspace` | SwitchWorkspace changes the active workspace. | Yes |
| `Service.WorkspaceFileGet` | `func (*Service) WorkspaceFileGet(filename string) (string, error)` | `dappco.re/go/core/io/workspace` | WorkspaceFileGet retrieves the content of a file from the active workspace. | Yes |
| `Service.WorkspaceFileSet` | `func (*Service) WorkspaceFileSet(filename, content string) error` | `dappco.re/go/core/io/workspace` | WorkspaceFileSet saves content to a file in the active workspace. | Yes |
| `Workspace.CreateWorkspace` | `CreateWorkspace(identifier, password string) (string, error)` | `dappco.re/go/core/io/workspace` | Creates a new encrypted workspace and returns its ID. | Yes |
| `Workspace.SwitchWorkspace` | `SwitchWorkspace(name string) error` | `dappco.re/go/core/io/workspace` | Switches the active workspace. | Yes |
| `Workspace.WorkspaceFileGet` | `WorkspaceFileGet(filename string) (string, error)` | `dappco.re/go/core/io/workspace` | Reads a file from the active workspace. | Yes |
| `Workspace.WorkspaceFileSet` | `WorkspaceFileSet(filename, content string) error` | `dappco.re/go/core/io/workspace` | Writes a file into the active workspace. | Yes |


@@ -25,7 +25,7 @@ The `Medium` interface is defined in `io.go`. It is the only type that consuming
 - **`io.Local`** — a package-level variable initialised in `init()` via `local.New("/")`. This gives unsandboxed access to the host filesystem, mirroring the behaviour of the standard `os` package.
 - **`io.NewSandboxed(root)`** — creates a `local.Medium` restricted to `root`. All path resolution is confined within that directory.
 - **`io.Copy(src, srcPath, dst, dstPath)`** — copies a file between any two mediums by reading from one and writing to the other.
-- **`io.NewMemoryMedium()`** — a fully functional in-memory implementation for unit tests. It tracks files, directories, and modification times in plain maps.
+- **`io.MockMedium`** — a fully functional in-memory implementation for unit tests. It tracks files, directories, and modification times in plain maps.
### FileInfo and DirEntry (root package)
@ -36,7 +36,7 @@ Simple struct implementations of `fs.FileInfo` and `fs.DirEntry` are exported fr
### local.Medium
**File:** `local/medium.go`
**File:** `local/client.go`
The local backend wraps the standard `os` package with two layers of path protection:
@ -60,7 +60,7 @@ The S3 backend translates `Medium` operations into AWS SDK calls. Key design dec
- **Directory semantics:** S3 has no real directories. `EnsureDir` is a no-op. `IsDir` and `Exists` for directory-like paths use `ListObjectsV2` with `MaxKeys: 1` to check for objects under the prefix.
- **Rename:** Implemented as copy-then-delete, since S3 has no atomic rename.
- **Append:** Downloads existing content, appends in memory, re-uploads on `Close()`. This is the only viable approach given S3's immutable-object model.
- **Testability:** The `Client` interface abstracts the six SDK methods used. Tests inject a `mockS3` that stores objects in a `map[string][]byte` with a `sync.RWMutex`.
- **Testability:** The `s3API` interface (unexported) abstracts the six SDK methods used. Tests inject a `mockS3` that stores objects in a `map[string][]byte` with a `sync.RWMutex`.
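The copy-then-delete rename can be sketched against a toy object map (a stand-in for the bucket, not the AWS SDK):

```go
package main

import "fmt"

// rename emulates S3's lack of atomic rename: copy the object to the
// new key, then delete the old key. Toy map-backed sketch, not the SDK.
func rename(objects map[string][]byte, oldKey, newKey string) error {
	data, ok := objects[oldKey]
	if !ok {
		return fmt.Errorf("no such key: %s", oldKey)
	}
	objects[newKey] = append([]byte(nil), data...) // copy
	delete(objects, oldKey)                        // then delete
	return nil
}

func main() {
	objs := map[string][]byte{"uploads/a.jpg": []byte("img")}
	_ = rename(objs, "uploads/a.jpg", "uploads/b.jpg")
	fmt.Println(string(objs["uploads/b.jpg"])) // img
}
```

Note the non-atomicity: a crash between the copy and the delete leaves both keys present, which is exactly the failure mode the real backend inherits.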
### sqlite.Medium
@ -81,7 +81,7 @@ CREATE TABLE IF NOT EXISTS files (
- **WAL mode** is enabled at connection time for better concurrent read performance.
- **Path cleaning** uses the same `path.Clean("/" + p)` pattern as other backends.
- **Rename** is transactional: it reads the source row, inserts at the destination, deletes the source, and moves all children (if it is a directory) within a single transaction.
- **Custom tables** are supported via `sqlite.Options{Path: ":memory:", Table: "name"}` to allow multiple logical filesystems in one database.
- **Custom tables** are supported via `WithTable("name")` to allow multiple logical filesystems in one database.
- **`:memory:`** databases work out of the box for tests.
### node.Node
@ -100,7 +100,7 @@ Key capabilities beyond `Medium`:
### datanode.Medium
**File:** `datanode/medium.go`
**File:** `datanode/client.go`
A thread-safe `Medium` backed by Borg's `DataNode` (an in-memory `fs.FS` with tar serialisation). It adds:
@ -117,7 +117,7 @@ A thread-safe `Medium` backed by Borg's `DataNode` (an in-memory `fs.FS` with ta
The store package provides two complementary APIs:
### KeyValueStore (key-value)
### Store (key-value)
A group-namespaced key-value store backed by SQLite:
@ -135,23 +135,22 @@ Operations: `Get`, `Set`, `Delete`, `Count`, `DeleteGroup`, `GetAll`, `Render`.
The `Render` method loads all key-value pairs from a group into a `map[string]string` and executes a Go `text/template` against them:
```go
keyValueStore, _ := store.New(store.Options{Path: ":memory:"})
keyValueStore.Set("user", "pool", "pool.lthn.io:3333")
keyValueStore.Set("user", "wallet", "iz...")
renderedText, _ := keyValueStore.Render(`{"pool":"{{ .pool }}"}`, "user")
assert.Equal(t, `{"pool":"pool.lthn.io:3333"}`, renderedText)
s.Set("user", "pool", "pool.lthn.io:3333")
s.Set("user", "wallet", "iz...")
out, _ := s.Render(`{"pool":"{{ .pool }}"}`, "user")
// out: {"pool":"pool.lthn.io:3333"}
```
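The template step behind `Render` can be sketched with the stdlib alone; this stand-in skips the SQLite load and works from a plain map (hypothetical helper, not the store package's implementation):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderGroup mimics what Render does after loading a group's key-value
// pairs: execute a text/template against the map.
func renderGroup(tmpl string, pairs map[string]string) (string, error) {
	t, err := template.New("render").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, pairs); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	pairs := map[string]string{"pool": "pool.lthn.io:3333", "wallet": "iz..."}
	out, _ := renderGroup(`{"pool":"{{ .pool }}"}`, pairs)
	fmt.Println(out) // {"pool":"pool.lthn.io:3333"}
}
```

`text/template` resolves `{{ .pool }}` as a map lookup when the data is a `map[string]string`, which is why the group's keys are addressable directly.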
### store.Medium (Medium adapter)
Wraps a `KeyValueStore` to satisfy the `Medium` interface. Paths are split as `group/key`:
Wraps a `Store` to satisfy the `Medium` interface. Paths are split as `group/key`:
- `Read("config/theme")` calls `Get("config", "theme")`
- `List("")` returns all groups as directories
- `List("config")` returns all keys in the `config` group as files
- `IsDir("config")` returns true if the group has entries
You can create it directly (`store.NewMedium(store.Options{Path: ":memory:"})`) or adapt an existing store (`keyValueStore.AsMedium()`).
You can create it directly (`NewMedium(":memory:")`) or adapt an existing store (`store.AsMedium()`).
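The path-to-`group/key` split can be illustrated in miniature with a toy map-backed store (names here are hypothetical, not the real adapter):

```go
package main

import (
	"fmt"
	"strings"
)

// kv is a toy stand-in for the key-value store: group -> key -> value.
type kv map[string]map[string]string

// read splits a Medium-style path into group and key, as the adapter does:
// the first segment selects the group, the remainder is the key.
func (s kv) read(p string) (string, error) {
	parts := strings.SplitN(strings.Trim(p, "/"), "/", 2)
	if len(parts) != 2 {
		return "", fmt.Errorf("path %q must be group/key", p)
	}
	group, key := parts[0], parts[1]
	v, ok := s[group][key]
	if !ok {
		return "", fmt.Errorf("not found: %s", p)
	}
	return v, nil
}

func main() {
	s := kv{"config": {"theme": "dark"}}
	v, _ := s.read("config/theme")
	fmt.Println(v) // dark
}
```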
## sigil Package
@ -164,8 +163,8 @@ The sigil package implements composable, reversible data transformations.
```go
type Sigil interface {
In(data []byte) ([]byte, error)
Out(data []byte) ([]byte, error)
In(data []byte) ([]byte, error) // forward transform
Out(data []byte) ([]byte, error) // reverse transform
}
```
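A minimal implementation of the interface helps show the round-trip contract; this hex sigil is a self-contained sketch (the interface is restated locally so the snippet compiles on its own):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// Sigil restated locally for a self-contained example.
type Sigil interface {
	In(data []byte) ([]byte, error)  // forward transform
	Out(data []byte) ([]byte, error) // reverse transform
}

// hexSigil hex-encodes on the way in and decodes on the way out,
// so Out(In(x)) == x for any input.
type hexSigil struct{}

func (hexSigil) In(data []byte) ([]byte, error) {
	dst := make([]byte, hex.EncodedLen(len(data)))
	hex.Encode(dst, data)
	return dst, nil
}

func (hexSigil) Out(data []byte) ([]byte, error) {
	dst := make([]byte, hex.DecodedLen(len(data)))
	n, err := hex.Decode(dst, data)
	return dst[:n], err
}

func main() {
	var s Sigil = hexSigil{}
	enc, _ := s.In([]byte("hi"))
	dec, _ := s.Out(enc)
	fmt.Printf("%s %s\n", enc, dec) // 6869 hi
}
```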
@ -199,8 +198,10 @@ Created via `NewSigil(name)`:
### Pipeline Functions
```go
// Apply sigils left-to-right.
encoded, _ := sigil.Transmute(data, []sigil.Sigil{gzipSigil, hexSigil})
// Reverse sigils right-to-left.
original, _ := sigil.Untransmute(encoded, []sigil.Sigil{gzipSigil, hexSigil})
```
@ -229,11 +230,12 @@ The pre-obfuscation layer ensures that raw plaintext patterns are never sent dir
key := make([]byte, 32)
rand.Read(key)
cipherSigil, _ := sigil.NewChaChaPolySigil(key, nil)
ciphertext, _ := cipherSigil.In([]byte("secret"))
plaintext, _ := cipherSigil.Out(ciphertext)
s, _ := sigil.NewChaChaPolySigil(key)
ciphertext, _ := s.In([]byte("secret"))
plaintext, _ := s.Out(ciphertext)
shuffleCipherSigil, _ := sigil.NewChaChaPolySigil(key, &sigil.ShuffleMaskObfuscator{})
// With stronger obfuscation:
s2, _ := sigil.NewChaChaPolySigilWithObfuscator(key, &sigil.ShuffleMaskObfuscator{})
```
Each call to `In` generates a fresh random nonce, so encrypting the same plaintext twice produces different ciphertexts.
@ -268,8 +270,8 @@ Application code
+-- sqlite.Medium --> modernc.org/sqlite
+-- node.Node --> in-memory map + tar serialisation
+-- datanode.Medium --> Borg DataNode + sync.RWMutex
+-- store.Medium --> store.KeyValueStore (SQLite KV) --> Medium adapter
+-- MemoryMedium --> map[string]string (for tests)
+-- store.Medium --> store.Store (SQLite KV) --> Medium adapter
+-- MockMedium --> map[string]string (for tests)
```
Every backend normalises paths using the same `path.Clean("/" + p)` pattern, ensuring consistent behaviour regardless of which backend is in use.
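The normalisation pattern is small enough to show directly (the helper name is illustrative, not the library's API):

```go
package main

import (
	"fmt"
	"path"
)

// cleanPath mirrors the path.Clean("/" + p) convention used by every
// backend: the leading slash anchors the path so ".." segments cannot
// climb above the logical root.
func cleanPath(p string) string {
	return path.Clean("/" + p)
}

func main() {
	fmt.Println(cleanPath("../etc/passwd")) // /etc/passwd — traversal collapses
	fmt.Println(cleanPath("a/b/../c"))      // /a/c
	fmt.Println(cleanPath(""))              // /
}
```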

View file

@ -0,0 +1,125 @@
<!-- SPDX-License-Identifier: EUPL-1.2 -->
# Convention Drift Audit
Date: 2026-03-23
Scope: tracked module files in the main repo surface (`*.go`, `*.md`), excluding `.core/`, `.github/`, `.idea/`, `go.mod`, `go.sum`, and generated coverage output.
Conventions used: `CLAUDE.md`, `docs/development.md`, `docs/index.md`, and `docs/architecture.md`.
Limitation: `CODEX.md` is not present in this repository. The `stdlib -> core.*` and usage-example findings below are therefore inferred from the documented guidance already in-tree.
## Missing SPDX Headers
- `CLAUDE.md:1`
- `bench_test.go:1`
- `client_test.go:1`
- `datanode/client.go:1`
- `datanode/client_test.go:1`
- `docs/architecture.md:1`
- `docs/development.md:1`
- `docs/index.md:1`
- `io.go:1`
- `local/client.go:1`
- `local/client_test.go:1`
- `node/node.go:1`
- `node/node_test.go:1`
- `s3/s3.go:1`
- `s3/s3_test.go:1`
- `sigil/crypto_sigil.go:1`
- `sigil/crypto_sigil_test.go:1`
- `sigil/sigil.go:1`
- `sigil/sigil_test.go:1`
- `sigil/sigils.go:1`
- `sqlite/sqlite.go:1`
- `sqlite/sqlite_test.go:1`
- `store/medium.go:1`
- `store/medium_test.go:1`
- `store/store.go:1`
- `store/store_test.go:1`
- `workspace/service.go:1`
- `workspace/service_test.go:1`
## `stdlib -> core.*` Drift
Interpretation note: `CLAUDE.md` only makes one direct stdlib replacement rule explicit: do not use raw `os` / `filepath` outside the backend boundary. The concrete drift in this repo therefore falls into two buckets: stale pre-`forge.lthn.ai` core import paths, and direct host-filesystem/path handling in non-backend production code.
- `go.mod:1` still declares `module dappco.re/go/core/io` while the repo documentation identifies the module as `forge.lthn.ai/core/go-io`.
- `go.mod:6` still depends on `dappco.re/go/core` while the repo docs list `forge.lthn.ai/core/go` as the current Core dependency.
- `io.go:12` imports `dappco.re/go/core/io/local` instead of the documented `forge.lthn.ai/core/go-io/local`.
- `node/node.go:18` imports `dappco.re/go/core/io` instead of the documented `forge.lthn.ai/core/go-io`.
- `workspace/service.go:10` imports `dappco.re/go/core` instead of the documented Core package path.
- `workspace/service.go:13` imports `dappco.re/go/core/io` instead of the documented `forge.lthn.ai/core/go-io`.
- `workspace/service_test.go:7` still imports `dappco.re/go/core`.
- `datanode/client_test.go:7` still imports `dappco.re/go/core/io`.
- `workspace/service.go:6` uses raw `os.UserHomeDir()` in non-backend production code, despite the repo guidance that filesystem access must go through the `io.Medium` abstraction.
- `workspace/service.go:7` builds runtime filesystem paths with `filepath.Join()` in non-backend production code, again bypassing the documented abstraction boundary.
## UK English Drift
- `datanode/client.go:3` uses `serializes`; `docs/development.md` calls for UK English (`serialises`).
- `datanode/client.go:52` uses `serializes`; `docs/development.md` calls for UK English (`serialises`).
- `sigil/crypto_sigil.go:3` uses `defense-in-depth`; `docs/development.md` calls for UK English (`defence-in-depth`).
- `sigil/crypto_sigil.go:38` uses `defense`; `docs/development.md` calls for UK English (`defence`).
## Missing Tests
Basis: `GOWORK=off go test -coverprofile=coverage.out ./...` and `go tool cover -func=coverage.out` on 2026-03-23. This list focuses on public or semantically meaningful API entrypoints at `0.0%` coverage and omits trivial one-line accessor helpers.
- `io.go:126` `NewSandboxed`
- `io.go:143` `ReadStream`
- `io.go:148` `WriteStream`
- `io.go:208` `(*MockMedium).WriteMode`
- `io.go:358` `(*MockMedium).Open`
- `io.go:370` `(*MockMedium).Create`
- `io.go:378` `(*MockMedium).Append`
- `io.go:388` `(*MockMedium).ReadStream`
- `io.go:393` `(*MockMedium).WriteStream`
- `datanode/client.go:138` `(*Medium).WriteMode`
- `local/client.go:231` `(*Medium).Append`
- `node/node.go:128` `(*Node).WalkNode`
- `node/node.go:218` `(*Node).CopyTo`
- `node/node.go:349` `(*Node).Read`
- `node/node.go:359` `(*Node).Write`
- `node/node.go:365` `(*Node).WriteMode`
- `node/node.go:370` `(*Node).FileGet`
- `node/node.go:375` `(*Node).FileSet`
- `node/node.go:380` `(*Node).EnsureDir`
- `node/node.go:393` `(*Node).IsFile`
- `node/node.go:400` `(*Node).IsDir`
- `node/node.go:411` `(*Node).Delete`
- `node/node.go:421` `(*Node).DeleteAll`
- `node/node.go:445` `(*Node).Rename`
- `node/node.go:461` `(*Node).List`
- `node/node.go:473` `(*Node).Create`
- `node/node.go:480` `(*Node).Append`
- `node/node.go:491` `(*Node).ReadStream`
- `node/node.go:500` `(*Node).WriteStream`
- `s3/s3.go:55` `WithClient`
- `store/medium.go:37` `(*Medium).Store`
- `store/medium.go:80` `(*Medium).EnsureDir`
- `store/medium.go:95` `(*Medium).FileGet`
- `store/medium.go:100` `(*Medium).FileSet`
- `store/medium.go:246` `(*Medium).ReadStream`
- `store/medium.go:259` `(*Medium).WriteStream`
- `workspace/service.go:150` `(*Service).HandleIPCEvents`
## Missing Usage-Example Comments
Interpretation note: because `CODEX.md` is absent, this section flags public entrypoints that expose the package's main behaviour but do not have a nearby comment block showing concrete usage. `sigil/sigil.go` is the only production file in the repo that currently includes an explicit `Example usage:` comment block.
- `io.go:123` `NewSandboxed`
- `local/client.go:22` `New`
- `s3/s3.go:68` `New`
- `sqlite/sqlite.go:35` `New`
- `node/node.go:32` `New`
- `node/node.go:217` `CopyTo`
- `datanode/client.go:32` `New`
- `datanode/client.go:40` `FromTar`
- `store/store.go:21` `New`
- `store/store.go:124` `Render`
- `store/medium.go:22` `NewMedium`
- `workspace/service.go:39` `New`
- `sigil/crypto_sigil.go:247` `NewChaChaPolySigil`
- `sigil/crypto_sigil.go:263` `NewChaChaPolySigilWithObfuscator`

View file

@ -88,31 +88,30 @@ func TestDelete_Bad_DirNotEmpty(t *testing.T) { /* returns error for non-empty d
## Writing Tests Against Medium
Use `MemoryMedium` from the root package for unit tests that need a storage backend but should not touch disk:
Use `MockMedium` from the root package for unit tests that need a storage backend but should not touch disk:
```go
func TestMyFeature(t *testing.T) {
memoryMedium := io.NewMemoryMedium()
_ = memoryMedium.Write("config.yaml", "key: value")
_ = memoryMedium.EnsureDir("data")
m := io.NewMockMedium()
m.Files["config.yaml"] = "key: value"
m.Dirs["data"] = true
result, err := myFunction(memoryMedium)
// Your code under test receives m as an io.Medium
result, err := myFunction(m)
assert.NoError(t, err)
output, err := memoryMedium.Read("output.txt")
require.NoError(t, err)
assert.Equal(t, "expected", output)
assert.Equal(t, "expected", m.Files["output.txt"])
}
```
For tests that need a temporary filesystem, use `local.New` with `t.TempDir()`:
For tests that need a real but ephemeral filesystem, use `local.New` with `t.TempDir()`:
```go
func TestLocalMedium_RoundTrip_Good(t *testing.T) {
localMedium, err := local.New(t.TempDir())
func TestWithRealFS(t *testing.T) {
m, err := local.New(t.TempDir())
require.NoError(t, err)
_ = localMedium.Write("file.txt", "hello")
content, _ := localMedium.Read("file.txt")
_ = m.Write("file.txt", "hello")
content, _ := m.Read("file.txt")
assert.Equal(t, "hello", content)
}
```
@ -120,12 +119,12 @@ func TestLocalMedium_RoundTrip_Good(t *testing.T) {
For SQLite-backed tests, use `:memory:`:
```go
func TestSqliteMedium_RoundTrip_Good(t *testing.T) {
sqliteMedium, err := sqlite.New(sqlite.Options{Path: ":memory:"})
func TestWithSQLite(t *testing.T) {
m, err := sqlite.New(":memory:")
require.NoError(t, err)
defer sqliteMedium.Close()
defer m.Close()
_ = sqliteMedium.Write("file.txt", "hello")
_ = m.Write("file.txt", "hello")
}
```
@ -135,7 +134,7 @@ func TestSqliteMedium_RoundTrip_Good(t *testing.T) {
To add a new `Medium` implementation:
1. Create a new package directory (e.g., `sftp/`).
2. Define a struct that implements all 17 methods of `io.Medium`.
2. Define a struct that implements all 18 methods of `io.Medium`.
3. Add a compile-time check at the top of your file:
```go
@ -143,7 +142,7 @@ var _ coreio.Medium = (*Medium)(nil)
```
4. Normalise paths using `path.Clean("/" + p)` to prevent traversal escapes. This is the convention followed by every existing backend.
5. Handle `nil` and empty input consistently: check how `MemoryMedium` and `local.Medium` behave and match that behaviour.
5. Handle `nil` and empty input consistently: check how `MockMedium` and `local.Medium` behave and match that behaviour.
6. Write tests using the `_Good` / `_Bad` / `_Ugly` naming convention.
7. Add your package to the table in `docs/index.md`.
@ -172,13 +171,13 @@ To add a new data transformation:
```
go-io/
├── io.go # Medium interface, helpers, MemoryMedium
├── medium_test.go # Tests for MemoryMedium and helpers
├── io.go # Medium interface, helpers, MockMedium
├── client_test.go # Tests for MockMedium and helpers
├── bench_test.go # Benchmarks
├── go.mod
├── local/
│ ├── medium.go # Local filesystem backend
│ └── medium_test.go
│ ├── client.go # Local filesystem backend
│ └── client_test.go
├── s3/
│ ├── s3.go # S3 backend
│ └── s3_test.go
@ -189,8 +188,8 @@ go-io/
│ ├── node.go # In-memory fs.FS + Medium
│ └── node_test.go
├── datanode/
│ ├── medium.go # Borg DataNode Medium wrapper
│ └── medium_test.go
│ ├── client.go # Borg DataNode Medium wrapper
│ └── client_test.go
├── store/
│ ├── store.go # KV store
│ ├── medium.go # Medium adapter for KV store

View file

@ -19,17 +19,21 @@ import (
"forge.lthn.ai/core/go-io/node"
)
// Use the pre-initialised local filesystem (unsandboxed, rooted at "/").
content, _ := io.Local.Read("/etc/hostname")
sandboxMedium, _ := io.NewSandboxed("/var/data/myapp")
_ = sandboxMedium.Write("config.yaml", "key: value")
// Create a sandboxed medium restricted to a single directory.
sandbox, _ := io.NewSandboxed("/var/data/myapp")
_ = sandbox.Write("config.yaml", "key: value")
nodeTree := node.New()
nodeTree.AddData("hello.txt", []byte("world"))
tarball, _ := nodeTree.ToTar()
// In-memory filesystem with tar serialisation.
mem := node.New()
mem.AddData("hello.txt", []byte("world"))
tarball, _ := mem.ToTar()
s3Medium, _ := s3.New(s3.Options{Bucket: "my-bucket", Client: awsClient, Prefix: "uploads/"})
_ = s3Medium.Write("photo.jpg", rawData)
// S3 backend (requires an *s3.Client from the AWS SDK).
bucket, _ := s3.New("my-bucket", s3.WithClient(awsClient), s3.WithPrefix("uploads/"))
_ = bucket.Write("photo.jpg", rawData)
```
@ -37,7 +41,7 @@ _ = s3Medium.Write("photo.jpg", rawData)
| Package | Import Path | Purpose |
|---------|-------------|---------|
| `io` (root) | `forge.lthn.ai/core/go-io` | `Medium` interface, helper functions, `MemoryMedium` for tests |
| `io` (root) | `forge.lthn.ai/core/go-io` | `Medium` interface, helper functions, `MockMedium` for tests |
| `local` | `forge.lthn.ai/core/go-io/local` | Local filesystem backend with path sandboxing and symlink-escape protection |
| `s3` | `forge.lthn.ai/core/go-io/s3` | Amazon S3 / S3-compatible backend (Garage, MinIO, etc.) |
| `sqlite` | `forge.lthn.ai/core/go-io/sqlite` | SQLite-backed virtual filesystem (pure Go driver, no CGO) |
@ -50,28 +54,34 @@ _ = s3Medium.Write("photo.jpg", rawData)
## The Medium Interface
Every storage backend implements the same 17-method interface:
Every storage backend implements the same 18-method interface:
```go
type Medium interface {
// Content operations
Read(path string) (string, error)
Write(path, content string) error
WriteMode(path, content string, mode fs.FileMode) error
FileGet(path string) (string, error) // alias for Read
FileSet(path, content string) error // alias for Write
// Streaming (for large files)
ReadStream(path string) (io.ReadCloser, error)
WriteStream(path string) (io.WriteCloser, error)
Open(path string) (fs.File, error)
Create(path string) (io.WriteCloser, error)
Append(path string) (io.WriteCloser, error)
// Directory operations
EnsureDir(path string) error
List(path string) ([]fs.DirEntry, error)
// Metadata
Stat(path string) (fs.FileInfo, error)
Exists(path string) bool
IsFile(path string) bool
IsDir(path string) bool
// Mutation
Delete(path string) error
DeleteAll(path string) error
Rename(oldPath, newPath string) error
@ -86,12 +96,12 @@ All backends implement this interface fully. Backends where a method has no natu
The root package provides helper functions that accept any `Medium`:
```go
sourceMedium := io.Local
destinationMedium := io.NewMemoryMedium()
err := io.Copy(sourceMedium, "source.txt", destinationMedium, "dest.txt")
// Copy a file between any two backends.
err := io.Copy(localMedium, "source.txt", s3Medium, "dest.txt")
content, err := io.Read(destinationMedium, "path")
err = io.Write(destinationMedium, "path", "content")
// Read/Write wrappers that take an explicit medium.
content, err := io.Read(medium, "path")
err := io.Write(medium, "path", "content")
```

View file

@ -0,0 +1,168 @@
# Security Attack Vector Mapping
`CODEX.md` was not present under `/workspace`, so this mapping follows [`CLAUDE.md`](/workspace/CLAUDE.md) and the current source tree.
Scope:
- Included: exported functions and methods that accept caller-controlled data or parse external payloads, plus public writer types returned from those APIs.
- Omitted: zero-argument accessors and teardown helpers such as `Close`, `Snapshot`, `Store`, `AsMedium`, `DataNode`, and `fs.FileInfo` getters because they are not ingress points.
Notes:
- `local` is the in-repo filesystem containment layer. Its protection depends on `validatePath`, but most mutating operations still have a post-validation TOCTOU window before the final `os.*` call.
- `workspace.Service` uses `io.Local` rooted at `/`, so its path joins are not sandboxed by this repository.
- `datanode.FromTar` and `datanode.Restore` inherit the Borg `datanode.FromTar` behaviour from `forge.lthn.ai/Snider/Borg` v0.3.1: it trims leading `/`, preserves symlink tar entries, and does not reject `..` segments or large archives.
## `io` Facade And `MockMedium`
| Function | File:Line | Input source | What it flows into | Current validation | Potential attack vector |
| --- | --- | --- | --- | --- | --- |
| `io.NewSandboxed` | `io.go:126` | Caller-supplied sandbox root | Delegates to `local.New(root)` and stores the resolved root in a `local.Medium` | `local.New` absolutizes and best-effort resolves root symlinks; no policy check on `/` or broad roots | Misconfiguration can disable containment entirely by choosing `/` or an overly broad root |
| `io.Read` | `io.go:133` | Caller path plus chosen backend | Direct `m.Read(path)` dispatch | No facade-level validation | Inherits backend read, enumeration, and path-handling attack surface |
| `io.Write` | `io.go:138` | Caller path/content plus chosen backend | Direct `m.Write(path, content)` dispatch | No facade-level validation | Inherits backend overwrite, creation, and storage-exhaustion attack surface |
| `io.ReadStream` | `io.go:143` | Caller path plus chosen backend | Direct `m.ReadStream(path)` dispatch | No facade-level validation | Inherits backend streaming-read surface and any unbounded downstream consumption risk |
| `io.WriteStream` | `io.go:148` | Caller path plus chosen backend; later streamed bytes from returned writer | Direct `m.WriteStream(path)` dispatch | No facade-level validation | Inherits backend streaming-write surface, including arbitrary object/file creation and unbounded buffering/disk growth |
| `io.EnsureDir` | `io.go:153` | Caller path plus chosen backend | Direct `m.EnsureDir(path)` dispatch | No facade-level validation | Inherits backend directory-creation semantics; on no-op backends this can create false assumptions about isolation |
| `io.IsFile` | `io.go:158` | Caller path plus chosen backend | Direct `m.IsFile(path)` dispatch | No facade-level validation | Inherits backend existence-oracle and metadata-disclosure surface |
| `io.Copy` | `io.go:163` | Caller-selected source/destination mediums and paths | `src.Read(srcPath)` loads full content into memory, then `dst.Write(dstPath, content)` | Validation delegated to both backends | Large source content can exhaust memory; the call can also bridge trust zones and copy attacker-controlled names/content across backends |
| `(*io.MockMedium).Read`, `FileGet`, `Open`, `ReadStream`, `List`, `Stat`, `Exists`, `IsFile`, `IsDir` | `io.go:193`, `225`, `358`, `388`, `443`, `552`, `576`, `219`, `587` | Caller path | Direct map lookup and prefix scans in in-memory maps | No normalization, auth, or path restrictions | If reused outside tests, it becomes a trivial key/value disclosure and enumeration surface |
| `(*io.MockMedium).Write`, `WriteMode`, `FileSet`, `EnsureDir` | `io.go:202`, `208`, `230`, `213` | Caller path/content/mode | Direct map writes; `WriteMode` ignores `mode` | No validation; permissions are ignored | Arbitrary overwrite/creation of entries and silent permission-policy bypass |
| `(*io.MockMedium).Create`, `Append`, `WriteStream`, `(*io.MockWriteCloser).Write` | `io.go:370`, `378`, `393`, `431` | Caller path; streamed caller bytes | Buffers bytes in memory until `Close`, then commits to `Files[path]` | No validation or size limits | Memory exhaustion and arbitrary entry overwrite if used as anything other than a test double |
| `(*io.MockMedium).Delete`, `DeleteAll`, `Rename` | `io.go:235`, `263`, `299` | Caller path(s) | Direct map mutation and prefix scans | No normalization or authorization | Arbitrary delete/rename of entries; prefix-based operations can remove more than a caller expects |
## `local`
| Function | File:Line | Input source | What it flows into | Current validation | Potential attack vector |
| --- | --- | --- | --- | --- | --- |
| `local.New` | `local/client.go:24` | Caller-supplied root path | `filepath.Abs`, optional `filepath.EvalSymlinks`, stored as `Medium.root` | Absolutizes root and resolves root symlink when possible | Passing `/` creates unsandboxed host filesystem access; broad roots widen blast radius |
| `(*local.Medium).Read`, `FileGet` | `local/client.go:114`, `300` | Caller path | `validatePath` then `os.ReadFile` | `validatePath` cleans path, walks symlinks component-by-component, and blocks resolved escapes from `root` | Arbitrary read of anything reachable inside the sandbox; TOCTOU symlink swap remains possible after validation and before the final read |
| `(*local.Medium).Open`, `ReadStream` | `local/client.go:210`, `248` | Caller path | `validatePath` then `os.Open`; `ReadStream` delegates to `Open` | Same `validatePath` containment check | Same read/disclosure surface as `Read`, plus a validated path can still be swapped before `os.Open` |
| `(*local.Medium).List`, `Stat`, `Exists`, `IsFile`, `IsDir` | `local/client.go:192`, `201`, `182`, `169`, `156` | Caller path | `validatePath` then `os.ReadDir` or `os.Stat` | Same `validatePath` containment check | Metadata enumeration for any path inside the sandbox; TOCTOU can still skew the checked object before the final syscall |
| `(*local.Medium).Write`, `FileSet` | `local/client.go:129`, `305` | Caller path/content | Delegates to `WriteMode(..., 0644)` | Path containment only | Arbitrary overwrite inside the sandbox; default `0644` can expose secrets if higher layers use it for sensitive data |
| `(*local.Medium).WriteMode` | `local/client.go:135` | Caller path/content/mode | `validatePath`, `os.MkdirAll`, `os.WriteFile` | Path containment only; caller controls file mode | Arbitrary file write inside the sandbox; caller can choose overly broad modes; TOCTOU after validation can retarget the write |
| `(*local.Medium).Create`, `WriteStream`, `Append` | `local/client.go:219`, `258`, `231` | Caller path; later bytes written through the returned `*os.File` | `validatePath`, `os.MkdirAll`, `os.Create` or `os.OpenFile(..., O_APPEND)` | Path containment only | Arbitrary truncate/append within the sandbox, unbounded disk growth, and the same post-validation race window |
| `(*local.Medium).EnsureDir` | `local/client.go:147` | Caller path | `validatePath` then `os.MkdirAll` | Path containment only | Arbitrary directory creation inside the sandbox; TOCTOU race can still redirect the mkdir target |
| `(*local.Medium).Delete` | `local/client.go:263` | Caller path | `validatePath` then `os.Remove` | Path containment; explicit guard blocks `/` and `$HOME` | Arbitrary file or empty-dir deletion inside the sandbox; guard does not protect other critical paths if root is too broad; TOCTOU applies |
| `(*local.Medium).DeleteAll` | `local/client.go:275` | Caller path | `validatePath` then `os.RemoveAll` | Path containment; explicit guard blocks `/` and `$HOME` | Recursive delete of any sandboxed subtree; if the medium root is broad, the blast radius is broad too |
| `(*local.Medium).Rename` | `local/client.go:287` | Caller old/new paths | `validatePath` on both sides, then `os.Rename` | Path containment on both paths | Arbitrary move/overwrite inside the sandbox; attacker-controlled rename targets can be swapped after validation |
## `sqlite`
| Function | File:Line | Input source | What it flows into | Current validation | Potential attack vector |
| --- | --- | --- | --- | --- | --- |
| `sqlite.WithTable` | `sqlite/sqlite.go:29` | Caller-supplied table name option | Stored on `Medium.table` and concatenated into every SQL statement | No quoting or identifier validation | SQL injection or malformed SQL if an attacker can choose the table name |
| `sqlite.New` | `sqlite/sqlite.go:37` | Caller DB path/URI and options | `sql.Open("sqlite", dbPath)`, `PRAGMA`, `CREATE TABLE` using concatenated table name | Rejects empty `dbPath`; no table-name validation | Arbitrary SQLite file/URI selection and inherited SQL injection risk from `WithTable` |
| `(*sqlite.Medium).Read`, `FileGet`, `Open`, `ReadStream` | `sqlite/sqlite.go:94`, `172`, `455`, `521` | Caller path | `cleanPath` then parameterized `SELECT`; `Open`/`ReadStream` materialize the whole BLOB in memory | Leading-slash `path.Clean` collapses traversal and rejects empty/root keys; path value is parameterized, table name is not | Arbitrary logical-key read, existence disclosure, canonicalization collisions such as `../x -> x`, and memory exhaustion on large BLOBs |
| `(*sqlite.Medium).Write`, `FileSet` | `sqlite/sqlite.go:118`, `177` | Caller path/content | `cleanPath` then parameterized upsert | Same path normalization; table name still concatenated | Arbitrary logical-key overwrite and unbounded DB growth; different raw paths can alias to the same normalized key |
| `(*sqlite.Medium).Create`, `WriteStream`, `Append`, `(*sqlite.sqliteWriteCloser).Write` | `sqlite/sqlite.go:487`, `546`, `499`, `654` | Caller path; streamed caller bytes | `cleanPath`, optional preload of existing BLOB, in-memory buffering, then upsert on `Close` | Non-empty normalized key only | Memory exhaustion from buffering and append preloads; arbitrary overwrite/append of normalized keys |
| `(*sqlite.Medium).EnsureDir` | `sqlite/sqlite.go:136` | Caller path | `cleanPath` then inserts a directory marker row | Root becomes a no-op; other paths are normalized only | Arbitrary logical directory creation and aliasing through normalized names |
| `(*sqlite.Medium).List`, `Stat`, `Exists`, `IsFile`, `IsDir` | `sqlite/sqlite.go:349`, `424`, `551`, `155`, `569` | Caller path | `cleanPath` then parameterized listing/count/stat queries | Same normalized key handling; table name still concatenated | Namespace enumeration and metadata disclosure; canonicalization collisions can hide the caller's original path spelling |
| `(*sqlite.Medium).Delete` | `sqlite/sqlite.go:182` | Caller path | `cleanPath`, directory-child count, then `DELETE` | Rejects empty/root path and non-empty dirs | Arbitrary logical-key deletion |
| `(*sqlite.Medium).DeleteAll` | `sqlite/sqlite.go:227` | Caller path | `cleanPath` then `DELETE WHERE path = ? OR path LIKE ?` | Rejects empty/root path | Bulk deletion of any logical subtree |
| `(*sqlite.Medium).Rename` | `sqlite/sqlite.go:251` | Caller old/new paths | `cleanPath` both paths, then transactional copy/delete of entry and children | Requires non-empty normalized source and destination | Arbitrary move/overwrite of logical subtrees; normalized-path aliasing can redirect or collapse entries |
## `s3`
| Function | File:Line | Input source | What it flows into | Current validation | Potential attack vector |
| --- | --- | --- | --- | --- | --- |
| `s3.WithPrefix` | `s3/s3.go:44` | Caller-supplied prefix | Stored on `Medium.prefix` and prepended to every key | Only ensures a trailing `/` when non-empty | Cross-tenant namespace expansion or contraction if untrusted callers can choose the prefix; empty prefix exposes the whole bucket |
| `s3.WithClient` | `s3/s3.go:55` | Caller-supplied S3 client | Stored as `Medium.client` and trusted for all I/O | No validation | Malicious or wrapped clients can exfiltrate data, fake results, or bypass expected transport controls |
| `s3.New` | `s3/s3.go:69` | Caller bucket name and options | Stores bucket/prefix/client on `Medium` | Rejects empty bucket and missing client only | Redirecting operations to attacker-chosen buckets or prefixes if config is not trusted |
| `(*s3.Medium).EnsureDir` | `s3/s3.go:144` | Caller path (ignored) | No-op | Input is ignored entirely | Semantic mismatch: callers may believe a directory boundary now exists when S3 still has only object keys |
| `(*s3.Medium).Read`, `FileGet`, `Open` | `s3/s3.go:103`, `166`, `388` | Caller path | `key(p)` then `GetObject`; `Read`/`Open` read the whole body into memory | Leading-slash `path.Clean` keeps the key under `prefix`; rejects empty key | Arbitrary read inside the configured bucket/prefix, canonicalization collisions, and memory exhaustion on large objects |
| `(*s3.Medium).ReadStream` | `s3/s3.go:464` | Caller path | `key(p)` then `GetObject`, returning the raw response body | Same normalized key handling; no size/content checks | Delivers arbitrary remote object bodies to downstream consumers without integrity, type, or size enforcement |
| `(*s3.Medium).Write`, `FileSet` | `s3/s3.go:126`, `171` | Caller path/content | `key(p)` then `PutObject` | Same normalized key handling | Arbitrary object overwrite or creation within the configured prefix |
| `(*s3.Medium).Create`, `WriteStream`, `Append`, `(*s3.s3WriteCloser).Write` | `s3/s3.go:427`, `481`, `440`, `609` | Caller path; streamed caller bytes | `key(p)`, optional preload of existing object for append, in-memory buffer, then `PutObject` on `Close` | Non-empty normalized key only | Memory exhaustion from buffering and append preloads; arbitrary overwrite/append of objects under the prefix |
| `(*s3.Medium).List`, `Stat`, `Exists`, `IsFile`, `IsDir` | `s3/s3.go:282`, `355`, `486`, `149`, `518` | Caller path | `key(p)` then `ListObjectsV2` or `HeadObject` | Normalized key stays under `prefix`; no authz or tenancy checks beyond config | Namespace enumeration and metadata disclosure across any objects reachable by the configured prefix |
| `(*s3.Medium).Delete` | `s3/s3.go:176` | Caller path | `key(p)` then `DeleteObject` | Non-empty normalized key only | Arbitrary object deletion inside the configured prefix |
| `(*s3.Medium).DeleteAll` | `s3/s3.go:193` | Caller path | `key(p)`, then exact delete plus prefix-based `ListObjectsV2` and batched `DeleteObjects` | Non-empty normalized key only | Bulk deletion of every object under a caller-chosen logical subtree |
| `(*s3.Medium).Rename` | `s3/s3.go:252` | Caller old/new paths | `key(p)` on both paths, then `CopyObject` followed by `DeleteObject` | Non-empty normalized keys only | Arbitrary move/overwrite of objects within the configured prefix; special characters in `oldPath` can also make `CopySource` handling fragile |
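The prefix and key behaviour in the rows above can be illustrated with hypothetical reconstructions of `WithPrefix` and `key` (names and logic assumed from the descriptions, not copied from `s3/s3.go`). The leading-slash `path.Clean` trick is what keeps traversal under the prefix; the empty prefix is what exposes the whole bucket:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// withPrefix mirrors the described option: only a trailing "/" is
// ensured, and the empty prefix is accepted as-is.
func withPrefix(p string) string {
	if p != "" && !strings.HasSuffix(p, "/") {
		p += "/"
	}
	return p
}

// key mirrors the described leading-slash path.Clean normalization.
func key(prefix, p string) string {
	return prefix + strings.TrimPrefix(path.Clean("/"+p), "/")
}

func main() {
	// Traversal segments cannot climb above the prefix...
	fmt.Println(key(withPrefix("tenant-a"), "../tenant-b/secret")) // tenant-a/tenant-b/secret
	// ...but an empty prefix lets every caller path address the whole bucket.
	fmt.Println(key(withPrefix(""), "any/object")) // any/object
}
```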
## `store`
### `store.Store`
| Function | File:Line | Input source | What it flows into | Current validation | Potential attack vector |
| --- | --- | --- | --- | --- | --- |
| `store.New` | `store/store.go:22` | Caller DB path/URI | `sql.Open("sqlite", dbPath)`, `PRAGMA`, schema creation | No validation beyond driver errors | Arbitrary SQLite file/URI selection if configuration is attacker-controlled |
| `(*store.Store).Get` | `store/store.go:49` | Caller group/key | Parameterized `SELECT value FROM kv WHERE grp = ? AND key = ?` | Uses placeholders; no group/key policy | Arbitrary secret/config disclosure for any reachable group/key |
| `(*store.Store).Set` | `store/store.go:62` | Caller group/key/value | Parameterized upsert into `kv` | Uses placeholders; no group/key policy | Arbitrary overwrite or creation of stored values |
| `(*store.Store).Delete`, `DeleteGroup` | `store/store.go:75`, `94` | Caller group and optional key | Parameterized `DELETE` statements | Uses placeholders; no authorization or namespace policy | Single-key or whole-group deletion |
| `(*store.Store).Count`, `GetAll` | `store/store.go:84`, `103` | Caller group | Parameterized count or full scan of the group | Uses placeholders; no access control | Group enumeration and bulk disclosure of every key/value in a group |
| `(*store.Store).Render` | `store/store.go:125` | Caller template string and group name | Loads all `group` values into a map, then `template.Parse` and `template.Execute` | No template allowlist or output escaping; template funcs are default-only | Template-driven exfiltration of all values in the chosen group; downstream output injection if rendered text is later used in HTML, shell, or config sinks |
### `store.Medium`
| Function | File:Line | Input source | What it flows into | Current validation | Potential attack vector |
| --- | --- | --- | --- | --- | --- |
| `store.NewMedium` | `store/medium.go:23` | Caller DB path/URI | Delegates to `store.New(dbPath)` | No extra validation | Same arbitrary-DB selection risk as `store.New` |
| `(*store.Medium).EnsureDir` | `store/medium.go:80` | Caller path (ignored) | No-op | Input is ignored | Semantic mismatch: callers may assume they created a boundary when the store still treats group creation as implicit |
| `(*store.Medium).Read`, `FileGet`, `Open`, `ReadStream` | `store/medium.go:62`, `95`, `214`, `246` | Caller medium path | `splitPath` then `Store.Get`; `Open`/`ReadStream` materialize value bytes or a string reader | `path.Clean`, strip leading `/`, require `group/key`; does not forbid odd group names like `..` | Arbitrary logical-key disclosure and group/key aliasing if higher layers treat raw paths as identity |
| `(*store.Medium).Write`, `FileSet` | `store/medium.go:71`, `100` | Caller path/content | `splitPath` then `Store.Set` | Same `group/key` check only | Arbitrary overwrite of any reachable group/key |
| `(*store.Medium).Create`, `WriteStream`, `Append`, `(*store.kvWriteCloser).Write` | `store/medium.go:227`, `259`, `236`, `343` | Caller path; streamed caller bytes | `splitPath`, optional preload of existing value for append, in-memory buffer, then `Store.Set` on `Close` | Requires `group/key`; no size limit | Memory exhaustion and arbitrary value overwrite/append |
| `(*store.Medium).Delete` | `store/medium.go:105` | Caller path | `splitPath`; group-only paths call `Count`, group/key paths call `Store.Delete` | Rejects empty path; refuses non-empty group deletes | Arbitrary single-key deletion and group-existence probing |
| `(*store.Medium).DeleteAll` | `store/medium.go:124` | Caller path | `splitPath`; group-only paths call `DeleteGroup`, group/key paths call `Delete` | Rejects empty path | Whole-group deletion or single-key deletion |
| `(*store.Medium).Rename` | `store/medium.go:136` | Caller old/new paths | `splitPath`, `Store.Get`, `Store.Set`, `Store.Delete` | Requires both paths to include `group/key` | Arbitrary cross-group data movement and destination overwrite |
| `(*store.Medium).List` | `store/medium.go:154` | Caller path | Empty path lists groups; group path loads all keys via `GetAll` | `splitPath` only; no auth | Group and key enumeration; value lengths leak through returned file info sizes |
| `(*store.Medium).Stat`, `Exists`, `IsFile`, `IsDir` | `store/medium.go:191`, `264`, `85`, `278` | Caller path | `splitPath`, then `Count` or `Get` | Same `splitPath` behavior | Existence oracle and metadata disclosure for groups and keys |
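The `splitPath` caveat in the rows above — odd group names like `..` are not forbidden — can be sketched with a hypothetical reconstruction (the real helper lives in `store/medium.go`):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// splitPath is a hypothetical reconstruction: path.Clean, strip a leading
// "/", then require a group/key shape. Nothing rejects a group named "..".
func splitPath(p string) (group, key string, ok bool) {
	p = strings.TrimPrefix(path.Clean(p), "/")
	group, key, found := strings.Cut(p, "/")
	if !found || group == "" || key == "" {
		return "", "", false
	}
	return group, key, true
}

func main() {
	// path.Clean cannot remove a leading ".." from a relative path, so it
	// survives as a literal group name in the kv table.
	g, k, ok := splitPath("../token")
	fmt.Println(g, k, ok) // .. token true
}
```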
## `node`
| Function | File:Line | Input source | What it flows into | Current validation | Potential attack vector |
| --- | --- | --- | --- | --- | --- |
| `node.AddData` | `node/node.go:40` | Caller file name and content | Stores `name` as a map key and `content` as in-memory bytes | Strips a leading `/`; ignores empty names and trailing `/`; does not clean `.` or `..` | Path-confusion payloads such as `../x` or `./x` persist verbatim and can later become traversal gadgets when copied out or tarred |
| `node.FromTar`, `(*node.Node).LoadTar` | `node/node.go:84`, `93` | Caller-supplied tar archive bytes | `archive/tar` reader, `io.ReadAll` per regular file, then `newFiles[name] = ...` | Trims a leading `/`; ignores empty names and directory entries; no `path.Clean`, no `..` rejection, no size limits | Tar-slip-style names survive in memory and can be exported later; huge or duplicate entries can exhaust memory or overwrite earlier entries |
| `(*node.Node).Read`, `FileGet`, `ReadFile`, `Open`, `ReadStream` | `node/node.go:349`, `370`, `187`, `259`, `491` | Caller path/name | Direct map lookup or directory inference; `Read` and `ReadFile` copy/convert content to memory | Only strips a leading `/` | Arbitrary access to weird literal names and confusion if callers assume canonical path handling |
| `(*node.Node).Write`, `WriteMode`, `FileSet` | `node/node.go:359`, `365`, `375` | Caller path/content/mode | Delegates to `AddData`; `WriteMode` ignores `mode` | Same minimal trimming as `AddData` | Arbitrary overwrite of any key, including attacker-planted `../` names; false sense of permission control |
| `(*node.Node).Create`, `WriteStream`, `Append`, `(*node.nodeWriter).Write` | `node/node.go:473`, `500`, `480`, `513` | Caller path; streamed caller bytes | Buffer bytes in memory and commit them as a map entry on `Close` | Only strips a leading `/`; no size limit | Memory exhaustion and creation of path-confusion payloads that can escape on later export |
| `(*node.Node).Delete`, `DeleteAll`, `Rename` | `node/node.go:411`, `421`, `445` | Caller path(s) | Direct map mutation keyed by caller-supplied names | Only strips a leading `/` | Arbitrary delete/rename of any key, including `../`-style names; no directory-safe rename logic |
| `(*node.Node).Stat`, `List`, `ReadDir`, `Exists`, `IsFile`, `IsDir` | `node/node.go:278`, `461`, `297`, `387`, `393`, `400` | Caller path/name | Directory inference from map keys and `fs` adapter methods | Only strips a leading `/` | Namespace enumeration and ambiguity around equivalent-looking path spellings |
| `(*node.Node).WalkNode`, `Walk` | `node/node.go:128`, `145` | Caller root path, callback, filters | `fs.WalkDir` over the in-memory tree | No root normalization beyond whatever `Node` already does | Attackers who can plant names can force callback traversal over weird paths; `SkipErrors` can suppress unexpected failures |
| `(*node.Node).CopyFile` | `node/node.go:200` | Caller source key, destination host path, permissions | Reads node content and calls `os.WriteFile(dst, ...)` directly | Only checks that `src` exists and is not a directory | Arbitrary host filesystem write to a caller-chosen `dst` path |
| `(*node.Node).CopyTo` | `node/node.go:218` | Caller target medium, source path, destination path | Reads node entries and calls `target.Write(destPath or destPath/rel, content)` | Only checks that the source exists | Stored `../`-style node keys can propagate into destination paths, enabling traversal or overwrite depending on the target backend |
| `(*node.Node).EnsureDir` | `node/node.go:380` | Caller path (ignored) | No-op | Input is ignored | Semantic mismatch: callers may assume a directory boundary was created when directories remain implicit |
## `datanode`
| Function | File:Line | Input source | What it flows into | Current validation | Potential attack vector |
| --- | --- | --- | --- | --- | --- |
| `datanode.FromTar`, `(*datanode.Medium).Restore` | `datanode/client.go:41`, `65` | Caller-supplied tar archive bytes | Delegates to Borg `datanode.FromTar(data)` and replaces the in-memory filesystem | Wrapper adds no checks; inherited Borg behavior trims leading `/` only and accepts symlink tar entries | Archive bombs, preserved symlink entries, and `../`-style names can be restored into the in-memory tree |
| `(*datanode.Medium).Read`, `FileGet`, `Open`, `ReadStream` | `datanode/client.go:97`, `175`, `394`, `429` | Caller path | `clean(p)` then `dn.Open`/`dn.Stat`; `Read` loads the full file into memory | `clean` strips a leading `/` and runs `path.Clean`, but it does not sandbox `..` at the start of the path | Arbitrary logical-key reads, including odd names such as `../x`; full reads can exhaust memory on large files |
| `(*datanode.Medium).Write`, `WriteMode`, `FileSet` | `datanode/client.go:123`, `138`, `179` | Caller path/content/mode | `clean(p)`, then `dn.AddData` and explicit parent-dir tracking | Rejects empty path only; `WriteMode` ignores `mode` | Arbitrary overwrite/creation of logical entries, including `../`-style names; canonicalization can also collapse some raw paths onto the same key |
| `(*datanode.Medium).Create`, `WriteStream`, `Append`, `(*datanode.writeCloser).Write` | `datanode/client.go:402`, `441`, `410`, `540` | Caller path; streamed caller bytes | `clean(p)`, optional preload of existing data for append, in-memory buffer, then `dn.AddData` on `Close` | Rejects empty path; no size limit | Memory exhaustion and arbitrary overwrite/append of logical entries |
| `(*datanode.Medium).EnsureDir` | `datanode/client.go:142` | Caller path | `clean(p)` then marks explicit directories in `m.dirs` | Empty path becomes a no-op; no policy on `..`-style names | Arbitrary logical directory creation and enumeration under attacker-chosen names |
| `(*datanode.Medium).Delete` | `datanode/client.go:183` | Caller path | `clean(p)`, then file removal or explicit-dir removal | Blocks deleting the empty/root path; otherwise no path policy | Arbitrary logical deletion of files or empty directories |
| `(*datanode.Medium).DeleteAll` | `datanode/client.go:220` | Caller path | `clean(p)`, then subtree walk and removal | Blocks deleting the empty/root path | Recursive deletion of any logical subtree |
| `(*datanode.Medium).Rename` | `datanode/client.go:262` | Caller old/new paths | `clean` both paths, then read-add-delete for files or subtree move for dirs | Existence checks only; no destination restrictions | Arbitrary subtree move/overwrite, including `../`-style names that later escape on export or copy-out |
| `(*datanode.Medium).List`, `Stat`, `Exists`, `IsFile`, `IsDir` | `datanode/client.go:327`, `374`, `445`, `166`, `460` | Caller path | `clean(p)`, then `dn.ReadDir`/`dn.Stat`/explicit-dir map lookups | Same non-sandboxing `clean` behavior | Namespace enumeration and metadata disclosure for weird or traversal-looking logical names |
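The difference between datanode's described `clean` and a sandboxing variant is one rooted slash: running `path.Clean` on a rooted path is what neutralizes a leading `..` (both helpers here are hypothetical sketches of the two behaviours):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// clean mirrors the described behaviour: trim a leading "/", then
// path.Clean. A leading ".." on a relative path survives intact.
func clean(p string) string {
	return path.Clean(strings.TrimPrefix(p, "/"))
}

// sandboxedClean roots the path first, so ".." cannot climb above the root.
func sandboxedClean(p string) string {
	return strings.TrimPrefix(path.Clean("/"+p), "/")
}

func main() {
	fmt.Println(clean("../x"))          // ../x — traversal-looking key persists
	fmt.Println(sandboxedClean("../x")) // x    — the ".." is neutralized
}
```

Persisted `../x` keys are harmless inside the in-memory tree but become traversal gadgets the moment the tree is exported to a real filesystem.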
## `workspace`
| Function | File:Line | Input source | What it flows into | Current validation | Potential attack vector |
| --- | --- | --- | --- | --- | --- |
| `workspace.New` | `workspace/service.go:41` | Caller `core.Core` and optional `cryptProvider` | Resolves `$HOME`, sets `rootPath = ~/.core/workspaces`, and binds `medium = io.Local` | Ensures the root directory exists; no sandboxing because `io.Local` is rooted at `/` | All later workspace path joins operate on the real host filesystem, not a project sandbox |
| `(*workspace.Service).CreateWorkspace` | `workspace/service.go:68` | Caller identifier and password | SHA-256 hashes `identifier` into `wsID`, creates directories under `rootPath`, calls `crypt.CreateKeyPair`, writes `keys/private.key` | Requires `crypt` to exist, checks for workspace existence, writes key with mode `0600`; no password policy or identifier validation | Predictable unsalted workspace IDs can leak identifier privacy through offline guessing; creates real host directories/files if exposed remotely |
| `(*workspace.Service).SwitchWorkspace` | `workspace/service.go:103` | Caller workspace name | `filepath.Join(rootPath, name)` then `medium.IsDir`, stores `activeWorkspace = name` | Only checks that the joined path currently exists as a directory | Path traversal via `name` can escape `rootPath` and bind the service to arbitrary host directories |
| `(*workspace.Service).WorkspaceFileGet` | `workspace/service.go:126` | Caller filename | `activeFilePath` uses `filepath.Join(rootPath, activeWorkspace, "files", filename)`, then `medium.Read` | Only checks that an active workspace is set; no filename containment check | `filename` can escape the `files/` directory, and a malicious `activeWorkspace` can turn reads into arbitrary host-file access |
| `(*workspace.Service).WorkspaceFileSet` | `workspace/service.go:138` | Caller filename and content | Same `activeFilePath` join, then `medium.Write` | Only checks that an active workspace is set; no filename containment check | Arbitrary host-file write if `activeWorkspace` or `filename` contains traversal segments |
| `(*workspace.Service).HandleIPCEvents` | `workspace/service.go:150` | Untrusted `core.Message` payload, typically `map[string]any` from IPC | Extracts `"action"` and dispatches to `CreateWorkspace` or `SwitchWorkspace` | Only loose type assertions; no schema, authz, or audit response on failure | Remote IPC callers can trigger workspace creation or retarget the service to arbitrary directories because downstream helpers do not enforce containment |
## `sigil`
| Function | File:Line | Input source | What it flows into | Current validation | Potential attack vector |
| --- | --- | --- | --- | --- | --- |
| `sigil.Transmute` | `sigil/sigil.go:46` | Caller data bytes and sigil chain | Sequential `Sigil.In` calls | No chain policy; relies on each sigil | Attacker-chosen chains can trigger expensive transforms or weaken policy if callers let the attacker choose the sigils |
| `sigil.Untransmute` | `sigil/sigil.go:62` | Caller data bytes and sigil chain | Reverse-order `Sigil.Out` calls | No chain policy; relies on each sigil | Expensive or mismatched reverse chains can become a CPU/memory DoS surface |
| `(*sigil.ReverseSigil).In`, `Out` | `sigil/sigils.go:29`, `41` | Caller data bytes | Allocates a new buffer and reverses it | Nil-safe only | Large inputs allocate a second full-sized buffer; otherwise low risk |
| `(*sigil.HexSigil).In`, `Out` | `sigil/sigils.go:50`, `60` | Caller data bytes | Hex encode/decode into fresh buffers | Nil-safe only; decode returns errors from `hex.Decode` | Large or malformed input can still drive allocation and CPU usage |
| `(*sigil.Base64Sigil).In`, `Out` | `sigil/sigils.go:74`, `84` | Caller data bytes | Base64 encode/decode into fresh buffers | Nil-safe only; decode returns errors from `StdEncoding.Decode` | Large or malformed input can still drive allocation and CPU usage |
| `(*sigil.GzipSigil).In` | `sigil/sigils.go:100` | Caller data bytes | `gzip.NewWriter`, compression into a `bytes.Buffer` | Nil-safe only | Large input can consume significant CPU and memory while compressing |
| `(*sigil.GzipSigil).Out` | `sigil/sigils.go:120` | Caller compressed bytes | `gzip.NewReader` then `io.ReadAll` | Nil-safe only; malformed gzip errors out | Zip-bomb style payloads can decompress to unbounded memory |
| `(*sigil.JSONSigil).In`, `Out` | `sigil/sigils.go:137`, `149` | Caller JSON bytes | `json.Compact`/`json.Indent`; `Out` is a pass-through | No schema validation; `Out` does nothing | Large inputs can consume CPU/memory; callers may wrongly assume `Out` validates or normalizes JSON |
| `sigil.NewHashSigil`, `(*sigil.HashSigil).In`, `Out` | `sigil/sigils.go:161`, `166`, `215` | Caller hash enum and data bytes | Selects a hash implementation, hashes input, and leaves `Out` as pass-through | Unsupported hashes error out; weak algorithms are still allowed | If algorithm choice is attacker-controlled, callers can be downgraded to weak digests such as MD4/MD5/SHA1; large inputs can still be CPU-heavy |
| `sigil.NewSigil` | `sigil/sigils.go:221` | Caller sigil name | Factory switch returning encoding, compression, formatting, hashing, or weak hash sigils | Fixed allowlist only | If exposed as user config, attackers can select weak or semantically wrong transforms and bypass higher-level crypto expectations |
| `(*sigil.XORObfuscator).Obfuscate`, `Deobfuscate` | `sigil/crypto_sigil.go:65`, `73` | Caller data and entropy bytes | SHA-256-derived keystream then XOR over a full-size output buffer | No validation | Safe only as a subroutine; if misused as standalone protection, it is merely obfuscation and still a CPU/memory surface on large input |
| `(*sigil.ShuffleMaskObfuscator).Obfuscate`, `Deobfuscate` | `sigil/crypto_sigil.go:127`, `154` | Caller data and entropy bytes | Deterministic permutation and XOR-mask over full-size buffers | No validation | Large inputs drive multiple full-size allocations and CPU work; still only obfuscation if used outside authenticated encryption |
| `sigil.NewChaChaPolySigil` | `sigil/crypto_sigil.go:247` | Caller key bytes | Copies key into `ChaChaPolySigil` state | Validates only that the key is exactly 32 bytes | Weak but correctly-sized keys are accepted; long-lived key material stays resident in process memory |
| `sigil.NewChaChaPolySigilWithObfuscator` | `sigil/crypto_sigil.go:263` | Caller key bytes and custom obfuscator | Builds a `ChaChaPolySigil` and optionally swaps the obfuscator | Key length is validated; obfuscator is trusted if non-nil | Malicious or buggy obfuscators can break the intended defense-in-depth model or leak patterns |
| `(*sigil.ChaChaPolySigil).In` | `sigil/crypto_sigil.go:276` | Caller plaintext bytes | `rand.Reader` nonce, optional obfuscation, then `chacha20poly1305.Seal` | Requires a configured key; nil input is allowed | Large plaintexts allocate full ciphertexts; if `randReader` is replaced in tests or DI, nonce quality becomes attacker-influenced |
| `(*sigil.ChaChaPolySigil).Out` | `sigil/crypto_sigil.go:315` | Caller ciphertext bytes | Nonce extraction, `aead.Open`, optional deobfuscation | Requires a configured key, checks minimum length, and relies on AEAD authentication | Primarily a CPU DoS surface on repeated bogus ciphertext; integrity is otherwise strong |
| `sigil.GetNonceFromCiphertext` | `sigil/crypto_sigil.go:359` | Caller ciphertext bytes | Copies the first 24 bytes as a nonce | Length check only | Low-risk parser surface; malformed short inputs just error |

go.mod (3 changed lines)

@@ -3,7 +3,8 @@ module dappco.re/go/core/io
go 1.26.0
require (
dappco.re/go/core v0.8.0-alpha.1
dappco.re/go/core v0.6.0
dappco.re/go/core/log v0.1.0
forge.lthn.ai/Snider/Borg v0.3.1
github.com/aws/aws-sdk-go-v2 v1.41.4
github.com/aws/aws-sdk-go-v2/service/s3 v1.97.1

go.sum (6 changed lines)

@@ -1,5 +1,7 @@
dappco.re/go/core v0.8.0-alpha.1 h1:gj7+Scv+L63Z7wMxbJYHhaRFkHJo2u4MMPuUSv/Dhtk=
dappco.re/go/core v0.8.0-alpha.1/go.mod h1:f2/tBZ3+3IqDrg2F5F598llv0nmb/4gJVCFzM5geE4A=
dappco.re/go/core v0.6.0 h1:0wmuO/UmCWXxJkxQ6XvVLnqkAuWitbd49PhxjCsplyk=
dappco.re/go/core v0.6.0/go.mod h1:f2/tBZ3+3IqDrg2F5F598llv0nmb/4gJVCFzM5geE4A=
dappco.re/go/core/log v0.1.0 h1:pa71Vq2TD2aoEUQWFKwNcaJ3GBY8HbaNGqtE688Unyc=
dappco.re/go/core/log v0.1.0/go.mod h1:Nkqb8gsXhZAO8VLpx7B8i1iAmohhzqA20b9Zr8VUcJs=
forge.lthn.ai/Snider/Borg v0.3.1 h1:gfC1ZTpLoZai07oOWJiVeQ8+qJYK8A795tgVGJHbVL8=
forge.lthn.ai/Snider/Borg v0.3.1/go.mod h1:Z7DJD0yHXsxSyM7Mjl6/g4gH1NBsIz44Bf5AFlV76Wg=
github.com/aws/aws-sdk-go-v2 v1.41.4 h1:10f50G7WyU02T56ox1wWXq+zTX9I1zxG46HYuG1hH/k=

io.go (716 changed lines)

@@ -1,76 +1,84 @@
package io
import (
"bytes"
"cmp"
goio "io"
"io/fs"
"path"
"slices"
"os"
"strings"
"time"
core "dappco.re/go/core"
coreerr "dappco.re/go/core/log"
"dappco.re/go/core/io/local"
)
// Example: medium, _ := io.NewSandboxed("/srv/app")
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
// Example: backup, _ := io.NewSandboxed("/srv/backup")
// Example: _ = io.Copy(medium, "data/report.json", backup, "daily/report.json")
// Medium defines the standard interface for a storage backend.
// This allows for different implementations (e.g., local disk, S3, SFTP)
// to be used interchangeably.
type Medium interface {
// Example: content, _ := medium.Read("config/app.yaml")
// Read retrieves the content of a file as a string.
Read(path string) (string, error)
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
// Write saves the given content to a file, overwriting it if it exists.
// Default permissions: 0644. For sensitive files, use WriteMode.
Write(path, content string) error
// Example: _ = medium.WriteMode("keys/private.key", key, 0600)
WriteMode(path, content string, mode fs.FileMode) error
// WriteMode saves content with explicit file permissions.
// Use 0600 for sensitive files (keys, secrets, encrypted output).
WriteMode(path, content string, mode os.FileMode) error
// Example: _ = medium.EnsureDir("config/app")
// EnsureDir makes sure a directory exists, creating it if necessary.
EnsureDir(path string) error
// Example: isFile := medium.IsFile("config/app.yaml")
// IsFile checks if a path exists and is a regular file.
IsFile(path string) bool
// Example: _ = medium.Delete("config/app.yaml")
// FileGet is a convenience function that reads a file from the medium.
FileGet(path string) (string, error)
// FileSet is a convenience function that writes a file to the medium.
FileSet(path, content string) error
// Delete removes a file or empty directory.
Delete(path string) error
// Example: _ = medium.DeleteAll("logs/archive")
// DeleteAll removes a file or directory and all its contents recursively.
DeleteAll(path string) error
// Example: _ = medium.Rename("drafts/todo.txt", "archive/todo.txt")
// Rename moves a file or directory from oldPath to newPath.
Rename(oldPath, newPath string) error
// Example: entries, _ := medium.List("config")
// List returns the directory entries for the given path.
List(path string) ([]fs.DirEntry, error)
// Example: info, _ := medium.Stat("config/app.yaml")
// Stat returns file information for the given path.
Stat(path string) (fs.FileInfo, error)
// Example: file, _ := medium.Open("config/app.yaml")
// Open opens the named file for reading.
Open(path string) (fs.File, error)
// Example: writer, _ := medium.Create("logs/app.log")
// Create creates or truncates the named file.
Create(path string) (goio.WriteCloser, error)
// Example: writer, _ := medium.Append("logs/app.log")
// Append opens the named file for appending, creating it if it doesn't exist.
Append(path string) (goio.WriteCloser, error)
// Example: reader, _ := medium.ReadStream("logs/app.log")
// ReadStream returns a reader for the file content.
// Use this for large files to avoid loading the entire content into memory.
ReadStream(path string) (goio.ReadCloser, error)
// Example: writer, _ := medium.WriteStream("logs/app.log")
// WriteStream returns a writer for the file content.
// Use this for large files to avoid loading the entire content into memory.
WriteStream(path string) (goio.WriteCloser, error)
// Example: exists := medium.Exists("config/app.yaml")
// Exists checks if a path exists (file or directory).
Exists(path string) bool
// Example: isDirectory := medium.IsDir("config")
// IsDir checks if a path exists and is a directory.
IsDir(path string) bool
}
// Example: info := io.NewFileInfo("app.yaml", 8, 0644, time.Unix(0, 0), false)
// FileInfo provides a simple implementation of fs.FileInfo for mock testing.
type FileInfo struct {
name string
size int64
@@ -79,22 +87,14 @@ type FileInfo struct {
isDir bool
}
var _ fs.FileInfo = FileInfo{}
func (fi FileInfo) Name() string { return fi.name }
func (fi FileInfo) Size() int64 { return fi.size }
func (fi FileInfo) Mode() fs.FileMode { return fi.mode }
func (fi FileInfo) ModTime() time.Time { return fi.modTime }
func (fi FileInfo) IsDir() bool { return fi.isDir }
func (fi FileInfo) Sys() any { return nil }
func (info FileInfo) Name() string { return info.name }
func (info FileInfo) Size() int64 { return info.size }
func (info FileInfo) Mode() fs.FileMode { return info.mode }
func (info FileInfo) ModTime() time.Time { return info.modTime }
func (info FileInfo) IsDir() bool { return info.isDir }
func (info FileInfo) Sys() any { return nil }
// Example: info := io.NewFileInfo("app.yaml", 8, 0644, time.Unix(0, 0), false)
// Example: entry := io.NewDirEntry("app.yaml", false, 0644, info)
// DirEntry provides a simple implementation of fs.DirEntry for mock testing.
type DirEntry struct {
name string
isDir bool
@@ -102,563 +102,489 @@ type DirEntry struct {
info fs.FileInfo
}
var _ fs.DirEntry = DirEntry{}
func (de DirEntry) Name() string { return de.name }
func (de DirEntry) IsDir() bool { return de.isDir }
func (de DirEntry) Type() fs.FileMode { return de.mode.Type() }
func (de DirEntry) Info() (fs.FileInfo, error) { return de.info, nil }
func (entry DirEntry) Name() string { return entry.name }
func (entry DirEntry) IsDir() bool { return entry.isDir }
func (entry DirEntry) Type() fs.FileMode { return entry.mode.Type() }
func (entry DirEntry) Info() (fs.FileInfo, error) { return entry.info, nil }
// Example: info := io.NewFileInfo("app.yaml", 8, 0644, time.Unix(0, 0), false)
func NewFileInfo(name string, size int64, mode fs.FileMode, modTime time.Time, isDir bool) FileInfo {
return FileInfo{
name: name,
size: size,
mode: mode,
modTime: modTime,
isDir: isDir,
}
}
// Example: info := io.NewFileInfo("app.yaml", 8, 0644, time.Unix(0, 0), false)
// Example: entry := io.NewDirEntry("app.yaml", false, 0644, info)
func NewDirEntry(name string, isDir bool, mode fs.FileMode, info fs.FileInfo) DirEntry {
return DirEntry{
name: name,
isDir: isDir,
mode: mode,
info: info,
}
}
// Example: _ = io.Local.Read("/etc/hostname")
// Local is a pre-initialised medium for the local filesystem.
// It uses "/" as root, providing unsandboxed access to the filesystem.
// For sandboxed access, use NewSandboxed with a specific root path.
var Local Medium
var _ Medium = (*local.Medium)(nil)
func init() {
var err error
Local, err = local.New("/")
if err != nil {
core.Warn("io.Local init failed", "error", err)
coreerr.Warn("io: failed to initialise Local medium, io.Local will be nil", "error", err)
}
}
// Example: medium, _ := io.NewSandboxed("/srv/app")
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
// NewSandboxed creates a new Medium sandboxed to the given root directory.
// All file operations are restricted to paths within the root.
// The root directory will be created if it doesn't exist.
func NewSandboxed(root string) (Medium, error) {
return local.New(root)
}
// Example: content, _ := io.Read(medium, "config/app.yaml")
func Read(medium Medium, path string) (string, error) {
return medium.Read(path)
// --- Helper Functions ---
// Read retrieves the content of a file from the given medium.
func Read(m Medium, path string) (string, error) {
return m.Read(path)
}
// Example: _ = io.Write(medium, "config/app.yaml", "port: 8080")
func Write(medium Medium, path, content string) error {
return medium.Write(path, content)
// Write saves the given content to a file in the given medium.
func Write(m Medium, path, content string) error {
return m.Write(path, content)
}
// Example: reader, _ := io.ReadStream(medium, "logs/app.log")
func ReadStream(medium Medium, path string) (goio.ReadCloser, error) {
return medium.ReadStream(path)
// ReadStream returns a reader for the file content from the given medium.
func ReadStream(m Medium, path string) (goio.ReadCloser, error) {
return m.ReadStream(path)
}
// Example: writer, _ := io.WriteStream(medium, "logs/app.log")
func WriteStream(medium Medium, path string) (goio.WriteCloser, error) {
return medium.WriteStream(path)
// WriteStream returns a writer for the file content in the given medium.
func WriteStream(m Medium, path string) (goio.WriteCloser, error) {
return m.WriteStream(path)
}
// Example: _ = io.EnsureDir(medium, "config")
func EnsureDir(medium Medium, path string) error {
return medium.EnsureDir(path)
// EnsureDir makes sure a directory exists in the given medium.
func EnsureDir(m Medium, path string) error {
return m.EnsureDir(path)
}
// Example: isFile := io.IsFile(medium, "config/app.yaml")
func IsFile(medium Medium, path string) bool {
return medium.IsFile(path)
// IsFile checks if a path exists and is a regular file in the given medium.
func IsFile(m Medium, path string) bool {
return m.IsFile(path)
}
// Example: _ = io.Copy(sourceMedium, "input.txt", destinationMedium, "backup/input.txt")
func Copy(sourceMedium Medium, sourcePath string, destinationMedium Medium, destinationPath string) error {
content, err := sourceMedium.Read(sourcePath)
// Copy copies a file from one medium to another.
func Copy(src Medium, srcPath string, dst Medium, dstPath string) error {
content, err := src.Read(srcPath)
if err != nil {
return core.E("io.Copy", core.Concat("read failed: ", sourcePath), err)
return coreerr.E("io.Copy", "read failed: "+srcPath, err)
}
if err := destinationMedium.Write(destinationPath, content); err != nil {
return core.E("io.Copy", core.Concat("write failed: ", destinationPath), err)
if err := dst.Write(dstPath, content); err != nil {
return coreerr.E("io.Copy", "write failed: "+dstPath, err)
}
return nil
}
// --- MemoryMedium ---
// MemoryMedium is an in-memory implementation of Medium for testing.
// Example: medium := io.NewMemoryMedium()
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
type MemoryMedium struct {
	fileContents      map[string]string
	fileModes         map[string]fs.FileMode
	directories       map[string]bool
	modificationTimes map[string]time.Time
}
var _ Medium = (*MemoryMedium)(nil)
// NewMemoryMedium creates a new MemoryMedium instance.
// Example: medium := io.NewMemoryMedium()
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
func NewMemoryMedium() *MemoryMedium {
	return &MemoryMedium{
		fileContents:      make(map[string]string),
		fileModes:         make(map[string]fs.FileMode),
		directories:       make(map[string]bool),
		modificationTimes: make(map[string]time.Time),
	}
}
// ensureAncestorDirectories marks every ancestor of filePath as an existing directory.
func (medium *MemoryMedium) ensureAncestorDirectories(filePath string) {
parentPath := path.Dir(filePath)
for parentPath != "." && parentPath != "" {
medium.directories[parentPath] = true
nextParentPath := path.Dir(parentPath)
if nextParentPath == parentPath {
break
}
parentPath = nextParentPath
}
}
// directoryExists reports whether path is an explicit directory or a prefix of any stored entry.
func (medium *MemoryMedium) directoryExists(path string) bool {
if path == "" {
return false
}
if _, ok := medium.directories[path]; ok {
return true
}
prefix := path
if !core.HasSuffix(prefix, "/") {
prefix += "/"
}
for filePath := range medium.fileContents {
if core.HasPrefix(filePath, prefix) {
return true
}
}
for directoryPath := range medium.directories {
if directoryPath != path && core.HasPrefix(directoryPath, prefix) {
return true
}
}
return false
}
// Read retrieves the content of a file from the in-memory medium.
// Example: value, _ := io.NewMemoryMedium().Read("notes.txt")
func (medium *MemoryMedium) Read(path string) (string, error) {
	content, ok := medium.fileContents[path]
	if !ok {
		return "", core.E("io.MemoryMedium.Read", core.Concat("file not found: ", path), fs.ErrNotExist)
	}
	return content, nil
}
// Write saves the given content to a file with the default 0644 mode.
// Example: _ = io.NewMemoryMedium().Write("notes.txt", "hello")
func (medium *MemoryMedium) Write(path, content string) error {
	return medium.WriteMode(path, content, 0644)
}
// WriteMode saves the given content to a file with an explicit file mode.
// Example: _ = io.NewMemoryMedium().WriteMode("keys/private.key", "secret", 0600)
func (medium *MemoryMedium) WriteMode(path, content string, mode fs.FileMode) error {
	medium.ensureAncestorDirectories(path)
	medium.fileContents[path] = content
	medium.fileModes[path] = mode
	medium.modificationTimes[path] = time.Now()
	return nil
}
// EnsureDir records that a directory exists in the in-memory medium.
// Example: _ = io.NewMemoryMedium().EnsureDir("config/app")
func (medium *MemoryMedium) EnsureDir(path string) error {
	medium.ensureAncestorDirectories(path)
	medium.directories[path] = true
	return nil
}
// IsFile checks if a path exists as a file in the in-memory medium.
// Example: ok := io.NewMemoryMedium().IsFile("notes.txt")
func (medium *MemoryMedium) IsFile(path string) bool {
	_, ok := medium.fileContents[path]
	return ok
}
// Delete removes a file or empty directory from the in-memory medium.
// Example: _ = io.NewMemoryMedium().Delete("old.txt")
func (medium *MemoryMedium) Delete(path string) error {
	if _, ok := medium.fileContents[path]; ok {
		delete(medium.fileContents, path)
		delete(medium.fileModes, path)
		delete(medium.modificationTimes, path)
		return nil
	}
	if medium.directoryExists(path) {
		// Check the directory is empty (no files or subdirectories with this prefix).
		prefix := path
		if !core.HasSuffix(prefix, "/") {
			prefix += "/"
		}
		hasChildren := false
		for filePath := range medium.fileContents {
			if core.HasPrefix(filePath, prefix) {
				hasChildren = true
				break
			}
		}
		if !hasChildren {
			for directoryPath := range medium.directories {
				if directoryPath != path && core.HasPrefix(directoryPath, prefix) {
					hasChildren = true
					break
				}
			}
		}
		if hasChildren {
			return core.E("io.MemoryMedium.Delete", core.Concat("directory not empty: ", path), fs.ErrExist)
		}
		delete(medium.directories, path)
		return nil
	}
	return core.E("io.MemoryMedium.Delete", core.Concat("path not found: ", path), fs.ErrNotExist)
}
// FileGet is a convenience function that reads a file from the in-memory medium.
func (medium *MemoryMedium) FileGet(path string) (string, error) {
	return medium.Read(path)
}
// FileSet is a convenience function that writes a file to the in-memory medium.
func (medium *MemoryMedium) FileSet(path, content string) error {
	return medium.Write(path, content)
}
// DeleteAll removes a file or directory and all its contents from the in-memory medium.
// Example: _ = io.NewMemoryMedium().DeleteAll("logs")
func (medium *MemoryMedium) DeleteAll(path string) error {
	found := false
	if _, ok := medium.fileContents[path]; ok {
		delete(medium.fileContents, path)
		delete(medium.fileModes, path)
		delete(medium.modificationTimes, path)
		found = true
	}
	if _, ok := medium.directories[path]; ok {
		delete(medium.directories, path)
		found = true
	}
	// Delete all entries under this path.
	prefix := path
	if !core.HasSuffix(prefix, "/") {
		prefix += "/"
	}
	for filePath := range medium.fileContents {
		if core.HasPrefix(filePath, prefix) {
			delete(medium.fileContents, filePath)
			delete(medium.fileModes, filePath)
			delete(medium.modificationTimes, filePath)
			found = true
		}
	}
	for directoryPath := range medium.directories {
		if core.HasPrefix(directoryPath, prefix) {
			delete(medium.directories, directoryPath)
			found = true
		}
	}
	if !found {
		return core.E("io.MemoryMedium.DeleteAll", core.Concat("path not found: ", path), fs.ErrNotExist)
	}
	return nil
}
// Rename moves a file or directory in the in-memory medium.
// Example: _ = io.NewMemoryMedium().Rename("drafts/todo.txt", "archive/todo.txt")
func (medium *MemoryMedium) Rename(oldPath, newPath string) error {
	if content, ok := medium.fileContents[oldPath]; ok {
		medium.fileContents[newPath] = content
		delete(medium.fileContents, oldPath)
		if mode, ok := medium.fileModes[oldPath]; ok {
			medium.fileModes[newPath] = mode
			delete(medium.fileModes, oldPath)
		}
		if modTime, ok := medium.modificationTimes[oldPath]; ok {
			medium.modificationTimes[newPath] = modTime
			delete(medium.modificationTimes, oldPath)
		}
		return nil
	}
	if medium.directoryExists(oldPath) {
		// Move the directory and all its contents.
		medium.directories[newPath] = true
		if _, ok := medium.directories[oldPath]; ok {
			delete(medium.directories, oldPath)
		}
		oldPrefix := oldPath
		if !core.HasSuffix(oldPrefix, "/") {
			oldPrefix += "/"
		}
		newPrefix := newPath
		if !core.HasSuffix(newPrefix, "/") {
			newPrefix += "/"
		}
		// Collect files to move first (don't mutate during iteration).
		filesToMove := make(map[string]string)
		for filePath := range medium.fileContents {
			if core.HasPrefix(filePath, oldPrefix) {
				filesToMove[filePath] = core.Concat(newPrefix, core.TrimPrefix(filePath, oldPrefix))
			}
		}
		for oldFilePath, newFilePath := range filesToMove {
			medium.fileContents[newFilePath] = medium.fileContents[oldFilePath]
			delete(medium.fileContents, oldFilePath)
			if mode, ok := medium.fileModes[oldFilePath]; ok {
				medium.fileModes[newFilePath] = mode
				delete(medium.fileModes, oldFilePath)
			}
			if modTime, ok := medium.modificationTimes[oldFilePath]; ok {
				medium.modificationTimes[newFilePath] = modTime
				delete(medium.modificationTimes, oldFilePath)
			}
		}
		// Collect directories to move first.
		dirsToMove := make(map[string]string)
		for directoryPath := range medium.directories {
			if core.HasPrefix(directoryPath, oldPrefix) {
				dirsToMove[directoryPath] = core.Concat(newPrefix, core.TrimPrefix(directoryPath, oldPrefix))
			}
		}
		for oldDirectoryPath, newDirectoryPath := range dirsToMove {
			medium.directories[newDirectoryPath] = true
			delete(medium.directories, oldDirectoryPath)
		}
		return nil
	}
	return core.E("io.MemoryMedium.Rename", core.Concat("path not found: ", oldPath), fs.ErrNotExist)
}
// Open opens a file from the in-memory medium as an fs.File.
// Example: file, _ := io.NewMemoryMedium().Open("notes.txt")
func (medium *MemoryMedium) Open(path string) (fs.File, error) {
	content, ok := medium.fileContents[path]
	if !ok {
		return nil, core.E("io.MemoryMedium.Open", core.Concat("file not found: ", path), fs.ErrNotExist)
	}
	return &MemoryFile{
		name:    core.PathBase(path),
		content: []byte(content),
		mode:    medium.modeForPath(path),
		modTime: medium.modificationTimeForPath(path),
	}, nil
}
// Create creates a file in the in-memory medium.
// Example: writer, _ := io.NewMemoryMedium().Create("notes.txt")
func (medium *MemoryMedium) Create(path string) (goio.WriteCloser, error) {
	return &MemoryWriteCloser{
		medium: medium,
		path:   path,
		mode:   0644,
	}, nil
}
// Append opens a file for appending in the in-memory medium.
// Example: writer, _ := io.NewMemoryMedium().Append("notes.txt")
func (medium *MemoryMedium) Append(path string) (goio.WriteCloser, error) {
	content := medium.fileContents[path]
	return &MemoryWriteCloser{
		medium: medium,
		path:   path,
		data:   []byte(content),
		mode:   medium.modeForPath(path),
	}, nil
}
// ReadStream returns a reader for the file content.
// Example: reader, _ := io.NewMemoryMedium().ReadStream("notes.txt")
func (medium *MemoryMedium) ReadStream(path string) (goio.ReadCloser, error) {
	return medium.Open(path)
}
// WriteStream returns a writer for the file content.
// Example: writer, _ := io.NewMemoryMedium().WriteStream("notes.txt")
func (medium *MemoryMedium) WriteStream(path string) (goio.WriteCloser, error) {
	return medium.Create(path)
}
// MemoryFile implements fs.File for MemoryMedium.
// Example: file, _ := io.NewMemoryMedium().Open("notes.txt")
type MemoryFile struct {
	name    string
	content []byte
	offset  int64
	mode    fs.FileMode
	modTime time.Time
}
var _ fs.File = (*MemoryFile)(nil)
var _ goio.ReadCloser = (*MemoryFile)(nil)
func (file *MemoryFile) Stat() (fs.FileInfo, error) {
	return NewFileInfo(file.name, int64(len(file.content)), file.mode, file.modTime, false), nil
}
func (file *MemoryFile) Read(buffer []byte) (int, error) {
	if file.offset >= int64(len(file.content)) {
		return 0, goio.EOF
	}
	readCount := copy(buffer, file.content[file.offset:])
	file.offset += int64(readCount)
	return readCount, nil
}
func (file *MemoryFile) Close() error {
	return nil
}
// MemoryWriteCloser implements io.WriteCloser for MemoryMedium.
// Example: writer, _ := io.NewMemoryMedium().Create("notes.txt")
type MemoryWriteCloser struct {
	medium *MemoryMedium
	path   string
	data   []byte
	mode   fs.FileMode
}
var _ goio.WriteCloser = (*MemoryWriteCloser)(nil)
func (writeCloser *MemoryWriteCloser) Write(data []byte) (int, error) {
	writeCloser.data = append(writeCloser.data, data...)
	return len(data), nil
}
func (writeCloser *MemoryWriteCloser) Close() error {
	writeCloser.medium.ensureAncestorDirectories(writeCloser.path)
	writeCloser.medium.fileContents[writeCloser.path] = string(writeCloser.data)
	writeCloser.medium.fileModes[writeCloser.path] = writeCloser.mode
	writeCloser.medium.modificationTimes[writeCloser.path] = time.Now()
	return nil
}
// modeForPath returns the recorded file mode for path, defaulting to 0644.
func (medium *MemoryMedium) modeForPath(path string) fs.FileMode {
if mode, ok := medium.fileModes[path]; ok {
return mode
}
return 0644
}
// modificationTimeForPath returns the recorded modification time for path, or the zero time.
func (medium *MemoryMedium) modificationTimeForPath(path string) time.Time {
if modTime, ok := medium.modificationTimes[path]; ok {
return modTime
}
return time.Time{}
}
// List returns directory entries for the in-memory medium.
// Example: entries, _ := io.NewMemoryMedium().List("config")
func (medium *MemoryMedium) List(path string) ([]fs.DirEntry, error) {
	if _, ok := medium.directories[path]; !ok {
		// Check whether the path is the root or has children.
		hasChildren := false
		prefix := path
		if path != "" && !core.HasSuffix(prefix, "/") {
			prefix += "/"
		}
		for filePath := range medium.fileContents {
			if core.HasPrefix(filePath, prefix) {
				hasChildren = true
				break
			}
		}
		if !hasChildren {
			for directoryPath := range medium.directories {
				if core.HasPrefix(directoryPath, prefix) {
					hasChildren = true
					break
				}
			}
		}
		if !hasChildren && path != "" {
			return nil, core.E("io.MemoryMedium.List", core.Concat("directory not found: ", path), fs.ErrNotExist)
		}
	}
	prefix := path
	if path != "" && !core.HasSuffix(prefix, "/") {
		prefix += "/"
	}
	seen := make(map[string]bool)
	var entries []fs.DirEntry
	// Find immediate children that are files; deeper files imply subdirectories.
	for filePath, content := range medium.fileContents {
		if !core.HasPrefix(filePath, prefix) {
			continue
		}
		rest := core.TrimPrefix(filePath, prefix)
		if rest == "" || core.Contains(rest, "/") {
			// Not an immediate child: record the first path segment as a subdirectory.
			if idx := bytes.IndexByte([]byte(rest), '/'); idx != -1 {
				dirName := rest[:idx]
				if !seen[dirName] {
					seen[dirName] = true
					entries = append(entries, NewDirEntry(
						dirName,
						true,
						fs.ModeDir|0755,
						NewFileInfo(dirName, 0, fs.ModeDir|0755, time.Time{}, true),
					))
				}
			}
			continue
		}
		if !seen[rest] {
			seen[rest] = true
			filePath := core.Concat(prefix, rest)
			entries = append(entries, NewDirEntry(
				rest,
				false,
				medium.modeForPath(filePath),
				NewFileInfo(rest, int64(len(content)), medium.modeForPath(filePath), medium.modificationTimeForPath(filePath), false),
			))
		}
	}
	// Find immediate subdirectories recorded explicitly.
	for directoryPath := range medium.directories {
		if !core.HasPrefix(directoryPath, prefix) {
			continue
		}
		rest := core.TrimPrefix(directoryPath, prefix)
		if rest == "" {
			continue
		}
		// Keep only the immediate child segment.
		if idx := bytes.IndexByte([]byte(rest), '/'); idx != -1 {
			rest = rest[:idx]
		}
		if !seen[rest] {
			seen[rest] = true
			entries = append(entries, NewDirEntry(
				rest,
				true,
				fs.ModeDir|0755,
				NewFileInfo(rest, 0, fs.ModeDir|0755, time.Time{}, true),
			))
		}
	}
	slices.SortFunc(entries, func(a, b fs.DirEntry) int {
		return cmp.Compare(a.Name(), b.Name())
	})
	return entries, nil
}
// Stat returns file information from the in-memory medium.
// Example: info, _ := io.NewMemoryMedium().Stat("notes.txt")
func (medium *MemoryMedium) Stat(path string) (fs.FileInfo, error) {
	if content, ok := medium.fileContents[path]; ok {
		modTime, ok := medium.modificationTimes[path]
		if !ok {
			modTime = time.Now()
		}
		return NewFileInfo(core.PathBase(path), int64(len(content)), medium.modeForPath(path), modTime, false), nil
	}
	if medium.directoryExists(path) {
		return NewFileInfo(core.PathBase(path), 0, fs.ModeDir|0755, time.Time{}, true), nil
	}
	return nil, core.E("io.MemoryMedium.Stat", core.Concat("path not found: ", path), fs.ErrNotExist)
}
// Exists checks if a path exists in the in-memory medium.
// Example: ok := io.NewMemoryMedium().Exists("notes.txt")
func (medium *MemoryMedium) Exists(path string) bool {
	if _, ok := medium.fileContents[path]; ok {
		return true
	}
	return medium.directoryExists(path)
}
// IsDir checks if a path is a directory in the in-memory medium.
// Example: ok := io.NewMemoryMedium().IsDir("config")
func (medium *MemoryMedium) IsDir(path string) bool {
	return medium.directoryExists(path)
}

local/client.go (new file, 438 lines)
// Package local provides a local filesystem implementation of the io.Medium interface.
package local
import (
"fmt"
goio "io"
"io/fs"
"os"
"strings"
"time"
core "dappco.re/go/core"
coreerr "dappco.re/go/core/log"
)
// Medium is a local filesystem storage backend.
type Medium struct {
root string
}
// New creates a new local Medium rooted at the given directory.
// Pass "/" for full filesystem access, or a specific path to sandbox.
func New(root string) (*Medium, error) {
abs := absolutePath(root)
// Resolve symlinks so sandbox checks compare like-for-like.
// On macOS, /var is a symlink to /private/var — without this,
// resolving child paths resolves to /private/var/... while
// root stays /var/..., causing false sandbox escape detections.
if resolved, err := resolveSymlinksPath(abs); err == nil {
abs = resolved
}
return &Medium{root: abs}, nil
}
func dirSeparator() string {
if sep := core.Env("DS"); sep != "" {
return sep
}
return string(os.PathSeparator)
}
func normalisePath(p string) string {
sep := dirSeparator()
if sep == "/" {
return strings.ReplaceAll(p, "\\", sep)
}
return strings.ReplaceAll(p, "/", sep)
}
func currentWorkingDir() string {
if cwd, err := os.Getwd(); err == nil && cwd != "" {
return cwd
}
if cwd := core.Env("DIR_CWD"); cwd != "" {
return cwd
}
return "."
}
func absolutePath(p string) string {
p = normalisePath(p)
if core.PathIsAbs(p) {
return core.Path(p)
}
return core.Path(currentWorkingDir(), p)
}
func cleanSandboxPath(p string) string {
return core.Path(dirSeparator() + normalisePath(p))
}
func splitPathParts(p string) []string {
trimmed := strings.TrimPrefix(p, dirSeparator())
if trimmed == "" {
return nil
}
var parts []string
for _, part := range strings.Split(trimmed, dirSeparator()) {
if part == "" {
continue
}
parts = append(parts, part)
}
return parts
}
func resolveSymlinksPath(p string) (string, error) {
return resolveSymlinksRecursive(absolutePath(p), map[string]struct{}{})
}
func resolveSymlinksRecursive(p string, seen map[string]struct{}) (string, error) {
p = core.Path(p)
if p == dirSeparator() {
return p, nil
}
current := dirSeparator()
for _, part := range splitPathParts(p) {
next := core.Path(current, part)
info, err := os.Lstat(next)
if err != nil {
if os.IsNotExist(err) {
current = next
continue
}
return "", err
}
if info.Mode()&os.ModeSymlink == 0 {
current = next
continue
}
target, err := os.Readlink(next)
if err != nil {
return "", err
}
target = normalisePath(target)
if !core.PathIsAbs(target) {
target = core.Path(current, target)
} else {
target = core.Path(target)
}
if _, ok := seen[target]; ok {
return "", coreerr.E("local.resolveSymlinksPath", "symlink cycle: "+target, os.ErrInvalid)
}
seen[target] = struct{}{}
resolved, err := resolveSymlinksRecursive(target, seen)
delete(seen, target)
if err != nil {
return "", err
}
current = resolved
}
return current, nil
}
func isWithinRoot(root, target string) bool {
root = core.Path(root)
target = core.Path(target)
if root == dirSeparator() {
return true
}
return target == root || strings.HasPrefix(target, root+dirSeparator())
}
func canonicalPath(p string) string {
if p == "" {
return ""
}
if resolved, err := resolveSymlinksPath(p); err == nil {
return resolved
}
return absolutePath(p)
}
func isProtectedPath(full string) bool {
full = canonicalPath(full)
protected := map[string]struct{}{
canonicalPath(dirSeparator()): {},
}
for _, home := range []string{core.Env("HOME"), core.Env("DIR_HOME")} {
if home == "" {
continue
}
protected[canonicalPath(home)] = struct{}{}
}
_, ok := protected[full]
return ok
}
func logSandboxEscape(root, path, attempted string) {
username := core.Env("USER")
if username == "" {
username = "unknown"
}
fmt.Fprintf(os.Stderr, "[%s] SECURITY sandbox escape detected root=%s path=%s attempted=%s user=%s\n",
time.Now().Format(time.RFC3339), root, path, attempted, username)
}
// path sanitises and returns the full path.
// Absolute paths are sandboxed under root (unless root is "/").
func (m *Medium) path(p string) string {
if p == "" {
return m.root
}
// If the path is relative and the medium is rooted at "/",
// treat it as relative to the current working directory.
// This makes io.Local behave more like the standard 'os' package.
if m.root == dirSeparator() && !core.PathIsAbs(normalisePath(p)) {
return core.Path(currentWorkingDir(), normalisePath(p))
}
// Use a cleaned absolute path to resolve all .. and . internally
// before joining with the root. This is a standard way to sandbox paths.
clean := cleanSandboxPath(p)
// If root is "/", allow absolute paths through
if m.root == dirSeparator() {
return clean
}
// Join cleaned relative path with root
return core.Path(m.root, strings.TrimPrefix(clean, dirSeparator()))
}
// validatePath ensures the path is within the sandbox, following symlinks if they exist.
func (m *Medium) validatePath(p string) (string, error) {
if m.root == dirSeparator() {
return m.path(p), nil
}
// Split the cleaned path into components
parts := splitPathParts(cleanSandboxPath(p))
current := m.root
for _, part := range parts {
next := core.Path(current, part)
realNext, err := resolveSymlinksPath(next)
if err != nil {
if os.IsNotExist(err) {
// Part doesn't exist, we can't follow symlinks anymore.
// Since the path is already Cleaned and current is safe,
// appending a component to current will not escape.
current = next
continue
}
return "", err
}
// Verify the resolved part is still within the root
if !isWithinRoot(m.root, realNext) {
// Security event: sandbox escape attempt
logSandboxEscape(m.root, p, realNext)
return "", os.ErrPermission // Path escapes sandbox
}
current = realNext
}
return current, nil
}
// Read returns file contents as string.
func (m *Medium) Read(p string) (string, error) {
full, err := m.validatePath(p)
if err != nil {
return "", err
}
data, err := os.ReadFile(full)
if err != nil {
return "", err
}
return string(data), nil
}
// Write saves content to file, creating parent directories as needed.
// Files are created with mode 0644. For sensitive files (keys, secrets),
// use WriteMode with 0600.
func (m *Medium) Write(p, content string) error {
return m.WriteMode(p, content, 0644)
}
// WriteMode saves content to file with explicit permissions.
// Use 0600 for sensitive files (encryption output, private keys, auth hashes).
func (m *Medium) WriteMode(p, content string, mode os.FileMode) error {
full, err := m.validatePath(p)
if err != nil {
return err
}
if err := os.MkdirAll(core.PathDir(full), 0755); err != nil {
return err
}
return os.WriteFile(full, []byte(content), mode)
}
// EnsureDir creates directory if it doesn't exist.
func (m *Medium) EnsureDir(p string) error {
full, err := m.validatePath(p)
if err != nil {
return err
}
return os.MkdirAll(full, 0755)
}
// IsDir returns true if path is a directory.
func (m *Medium) IsDir(p string) bool {
if p == "" {
return false
}
full, err := m.validatePath(p)
if err != nil {
return false
}
info, err := os.Stat(full)
return err == nil && info.IsDir()
}
// IsFile returns true if path is a regular file.
func (m *Medium) IsFile(p string) bool {
if p == "" {
return false
}
full, err := m.validatePath(p)
if err != nil {
return false
}
info, err := os.Stat(full)
return err == nil && info.Mode().IsRegular()
}
// Exists returns true if path exists.
func (m *Medium) Exists(p string) bool {
full, err := m.validatePath(p)
if err != nil {
return false
}
_, err = os.Stat(full)
return err == nil
}
// List returns directory entries.
func (m *Medium) List(p string) ([]fs.DirEntry, error) {
full, err := m.validatePath(p)
if err != nil {
return nil, err
}
return os.ReadDir(full)
}
// Stat returns file info.
func (m *Medium) Stat(p string) (fs.FileInfo, error) {
full, err := m.validatePath(p)
if err != nil {
return nil, err
}
return os.Stat(full)
}
// Open opens the named file for reading.
func (m *Medium) Open(p string) (fs.File, error) {
full, err := m.validatePath(p)
if err != nil {
return nil, err
}
return os.Open(full)
}
// Create creates or truncates the named file.
func (m *Medium) Create(p string) (goio.WriteCloser, error) {
full, err := m.validatePath(p)
if err != nil {
return nil, err
}
if err := os.MkdirAll(core.PathDir(full), 0755); err != nil {
return nil, err
}
return os.Create(full)
}
// Append opens the named file for appending, creating it if it doesn't exist.
func (m *Medium) Append(p string) (goio.WriteCloser, error) {
full, err := m.validatePath(p)
if err != nil {
return nil, err
}
if err := os.MkdirAll(core.PathDir(full), 0755); err != nil {
return nil, err
}
return os.OpenFile(full, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
}
// ReadStream returns a reader for the file content.
//
// This is a convenience wrapper around Open that exposes a streaming-oriented
// API, as required by the io.Medium interface, while Open provides the more
// general filesystem-level operation. Both methods are kept for semantic
// clarity and backward compatibility.
func (m *Medium) ReadStream(path string) (goio.ReadCloser, error) {
return m.Open(path)
}
// WriteStream returns a writer for the file content.
//
// This is a convenience wrapper around Create that exposes a streaming-oriented
// API, as required by the io.Medium interface, while Create provides the more
// general filesystem-level operation. Both methods are kept for semantic
// clarity and backward compatibility.
func (m *Medium) WriteStream(path string) (goio.WriteCloser, error) {
return m.Create(path)
}
// Delete removes a file or empty directory.
func (m *Medium) Delete(p string) error {
full, err := m.validatePath(p)
if err != nil {
return err
}
if isProtectedPath(full) {
return coreerr.E("local.Delete", "refusing to delete protected path: "+full, nil)
}
return os.Remove(full)
}
// DeleteAll removes a file or directory recursively.
func (m *Medium) DeleteAll(p string) error {
full, err := m.validatePath(p)
if err != nil {
return err
}
if isProtectedPath(full) {
return coreerr.E("local.DeleteAll", "refusing to delete protected path: "+full, nil)
}
return os.RemoveAll(full)
}
// Rename moves a file or directory.
func (m *Medium) Rename(oldPath, newPath string) error {
oldFull, err := m.validatePath(oldPath)
if err != nil {
return err
}
newFull, err := m.validatePath(newPath)
if err != nil {
return err
}
return os.Rename(oldFull, newFull)
}
// FileGet is an alias for Read.
func (m *Medium) FileGet(p string) (string, error) {
return m.Read(p)
}
// FileSet is an alias for Write.
func (m *Medium) FileSet(p, content string) error {
return m.Write(p, content)
}

local/client_test.go (new file, 541 lines)
package local
import (
"io"
"os"
"path/filepath"
"strings"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestNew(t *testing.T) {
root := t.TempDir()
m, err := New(root)
assert.NoError(t, err)
// New() resolves symlinks (macOS /var → /private/var), so compare resolved paths.
resolved, _ := filepath.EvalSymlinks(root)
assert.Equal(t, resolved, m.root)
}
func TestPath(t *testing.T) {
m := &Medium{root: "/home/user"}
// Normal paths
assert.Equal(t, "/home/user/file.txt", m.path("file.txt"))
assert.Equal(t, "/home/user/dir/file.txt", m.path("dir/file.txt"))
// Empty returns root
assert.Equal(t, "/home/user", m.path(""))
// Traversal attempts get sanitised
assert.Equal(t, "/home/user/file.txt", m.path("../file.txt"))
assert.Equal(t, "/home/user/file.txt", m.path("dir/../file.txt"))
// Absolute paths are constrained to sandbox (no escape)
assert.Equal(t, "/home/user/etc/passwd", m.path("/etc/passwd"))
}
func TestPath_RootFilesystem(t *testing.T) {
m := &Medium{root: "/"}
// When root is "/", absolute paths pass through
assert.Equal(t, "/etc/passwd", m.path("/etc/passwd"))
assert.Equal(t, "/home/user/file.txt", m.path("/home/user/file.txt"))
// Relative paths are relative to CWD when root is "/"
cwd, _ := os.Getwd()
assert.Equal(t, filepath.Join(cwd, "file.txt"), m.path("file.txt"))
}
func TestReadWrite(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
// Write and read back
err := m.Write("test.txt", "hello")
assert.NoError(t, err)
content, err := m.Read("test.txt")
assert.NoError(t, err)
assert.Equal(t, "hello", content)
// Write creates parent dirs
err = m.Write("a/b/c.txt", "nested")
assert.NoError(t, err)
content, err = m.Read("a/b/c.txt")
assert.NoError(t, err)
assert.Equal(t, "nested", content)
// Read nonexistent
_, err = m.Read("nope.txt")
assert.Error(t, err)
}
func TestEnsureDir(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
err := m.EnsureDir("one/two/three")
assert.NoError(t, err)
info, err := os.Stat(filepath.Join(root, "one/two/three"))
assert.NoError(t, err)
assert.True(t, info.IsDir())
}
func TestIsDir(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
_ = os.Mkdir(filepath.Join(root, "mydir"), 0755)
_ = os.WriteFile(filepath.Join(root, "myfile"), []byte("x"), 0644)
assert.True(t, m.IsDir("mydir"))
assert.False(t, m.IsDir("myfile"))
assert.False(t, m.IsDir("nope"))
assert.False(t, m.IsDir(""))
}
func TestIsFile(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
_ = os.Mkdir(filepath.Join(root, "mydir"), 0755)
_ = os.WriteFile(filepath.Join(root, "myfile"), []byte("x"), 0644)
assert.True(t, m.IsFile("myfile"))
assert.False(t, m.IsFile("mydir"))
assert.False(t, m.IsFile("nope"))
assert.False(t, m.IsFile(""))
}
func TestExists(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
_ = os.WriteFile(filepath.Join(root, "exists"), []byte("x"), 0644)
assert.True(t, m.Exists("exists"))
assert.False(t, m.Exists("nope"))
}
func TestList(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
_ = os.WriteFile(filepath.Join(root, "a.txt"), []byte("a"), 0644)
_ = os.WriteFile(filepath.Join(root, "b.txt"), []byte("b"), 0644)
_ = os.Mkdir(filepath.Join(root, "subdir"), 0755)
entries, err := m.List("")
assert.NoError(t, err)
assert.Len(t, entries, 3)
}
func TestStat(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
_ = os.WriteFile(filepath.Join(root, "file"), []byte("content"), 0644)
info, err := m.Stat("file")
assert.NoError(t, err)
assert.Equal(t, int64(7), info.Size())
}
func TestDelete(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
_ = os.WriteFile(filepath.Join(root, "todelete"), []byte("x"), 0644)
assert.True(t, m.Exists("todelete"))
err := m.Delete("todelete")
assert.NoError(t, err)
assert.False(t, m.Exists("todelete"))
}
func TestDeleteAll(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
_ = os.MkdirAll(filepath.Join(root, "dir/sub"), 0755)
_ = os.WriteFile(filepath.Join(root, "dir/sub/file"), []byte("x"), 0644)
err := m.DeleteAll("dir")
assert.NoError(t, err)
assert.False(t, m.Exists("dir"))
}
func TestDelete_ProtectedHomeViaSymlinkEnv(t *testing.T) {
realHome := t.TempDir()
linkParent := t.TempDir()
homeLink := filepath.Join(linkParent, "home-link")
require.NoError(t, os.Symlink(realHome, homeLink))
t.Setenv("HOME", homeLink)
m, err := New("/")
require.NoError(t, err)
err = m.Delete(realHome)
require.Error(t, err)
assert.DirExists(t, realHome)
}
func TestDeleteAll_ProtectedHomeViaEnv(t *testing.T) {
tempHome := t.TempDir()
t.Setenv("HOME", tempHome)
m, err := New("/")
require.NoError(t, err)
err = m.DeleteAll(tempHome)
require.Error(t, err)
assert.DirExists(t, tempHome)
}
func TestRename(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
_ = os.WriteFile(filepath.Join(root, "old"), []byte("x"), 0644)
err := m.Rename("old", "new")
assert.NoError(t, err)
assert.False(t, m.Exists("old"))
assert.True(t, m.Exists("new"))
}
func TestFileGetFileSet(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
err := m.FileSet("data", "value")
assert.NoError(t, err)
val, err := m.FileGet("data")
assert.NoError(t, err)
assert.Equal(t, "value", val)
}
func TestDelete_Good(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_delete_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
// Create and delete a file
err = medium.Write("file.txt", "content")
assert.NoError(t, err)
assert.True(t, medium.IsFile("file.txt"))
err = medium.Delete("file.txt")
assert.NoError(t, err)
assert.False(t, medium.IsFile("file.txt"))
// Create and delete an empty directory
err = medium.EnsureDir("emptydir")
assert.NoError(t, err)
err = medium.Delete("emptydir")
assert.NoError(t, err)
assert.False(t, medium.IsDir("emptydir"))
}
func TestDelete_Bad_NotEmpty(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_delete_notempty_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
// Create a directory with a file
err = medium.Write("mydir/file.txt", "content")
assert.NoError(t, err)
// Try to delete non-empty directory
err = medium.Delete("mydir")
assert.Error(t, err)
}
func TestDeleteAll_Good(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_deleteall_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
// Create nested structure
err = medium.Write("mydir/file1.txt", "content1")
assert.NoError(t, err)
err = medium.Write("mydir/subdir/file2.txt", "content2")
assert.NoError(t, err)
// Delete all
err = medium.DeleteAll("mydir")
assert.NoError(t, err)
assert.False(t, medium.Exists("mydir"))
assert.False(t, medium.Exists("mydir/file1.txt"))
assert.False(t, medium.Exists("mydir/subdir/file2.txt"))
}
func TestRename_Good(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_rename_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
// Rename a file
err = medium.Write("old.txt", "content")
assert.NoError(t, err)
err = medium.Rename("old.txt", "new.txt")
assert.NoError(t, err)
assert.False(t, medium.IsFile("old.txt"))
assert.True(t, medium.IsFile("new.txt"))
content, err := medium.Read("new.txt")
assert.NoError(t, err)
assert.Equal(t, "content", content)
}
func TestRename_Traversal_Sanitised(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_rename_traversal_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
err = medium.Write("file.txt", "content")
assert.NoError(t, err)
// Traversal attempts are sanitised before joining, so "../escaped.txt"
// resolves to "escaped.txt" inside the root rather than escaping it
err = medium.Rename("file.txt", "../escaped.txt")
assert.NoError(t, err)
assert.False(t, medium.Exists("file.txt"))
assert.True(t, medium.Exists("escaped.txt"))
}
func TestList_Good(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_list_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
// Create some files and directories
err = medium.Write("file1.txt", "content1")
assert.NoError(t, err)
err = medium.Write("file2.txt", "content2")
assert.NoError(t, err)
err = medium.EnsureDir("subdir")
assert.NoError(t, err)
// List root
entries, err := medium.List(".")
assert.NoError(t, err)
assert.Len(t, entries, 3)
names := make(map[string]bool)
for _, e := range entries {
names[e.Name()] = true
}
assert.True(t, names["file1.txt"])
assert.True(t, names["file2.txt"])
assert.True(t, names["subdir"])
}
func TestStat_Good(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_stat_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
// Stat a file
err = medium.Write("file.txt", "hello world")
assert.NoError(t, err)
info, err := medium.Stat("file.txt")
assert.NoError(t, err)
assert.Equal(t, "file.txt", info.Name())
assert.Equal(t, int64(11), info.Size())
assert.False(t, info.IsDir())
// Stat a directory
err = medium.EnsureDir("mydir")
assert.NoError(t, err)
info, err = medium.Stat("mydir")
assert.NoError(t, err)
assert.Equal(t, "mydir", info.Name())
assert.True(t, info.IsDir())
}
func TestExists_Good(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_exists_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
assert.False(t, medium.Exists("nonexistent"))
err = medium.Write("file.txt", "content")
assert.NoError(t, err)
assert.True(t, medium.Exists("file.txt"))
err = medium.EnsureDir("mydir")
assert.NoError(t, err)
assert.True(t, medium.Exists("mydir"))
}
func TestIsDir_Good(t *testing.T) {
testRoot, err := os.MkdirTemp("", "local_isdir_test")
assert.NoError(t, err)
defer func() { _ = os.RemoveAll(testRoot) }()
medium, err := New(testRoot)
assert.NoError(t, err)
err = medium.Write("file.txt", "content")
assert.NoError(t, err)
assert.False(t, medium.IsDir("file.txt"))
err = medium.EnsureDir("mydir")
assert.NoError(t, err)
assert.True(t, medium.IsDir("mydir"))
assert.False(t, medium.IsDir("nonexistent"))
}
func TestReadStream(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
content := "streaming content"
err := m.Write("stream.txt", content)
assert.NoError(t, err)
reader, err := m.ReadStream("stream.txt")
assert.NoError(t, err)
defer reader.Close()
// Read only first 9 bytes
limitReader := io.LimitReader(reader, 9)
data, err := io.ReadAll(limitReader)
assert.NoError(t, err)
assert.Equal(t, "streaming", string(data))
}
func TestWriteStream(t *testing.T) {
root := t.TempDir()
m, _ := New(root)
writer, err := m.WriteStream("output.txt")
assert.NoError(t, err)
_, err = io.Copy(writer, strings.NewReader("piped data"))
assert.NoError(t, err)
err = writer.Close()
assert.NoError(t, err)
content, err := m.Read("output.txt")
assert.NoError(t, err)
assert.Equal(t, "piped data", content)
}
func TestPath_Traversal_Advanced(t *testing.T) {
m := &Medium{root: "/sandbox"}
// Multiple levels of traversal
assert.Equal(t, "/sandbox/file.txt", m.path("../../../file.txt"))
assert.Equal(t, "/sandbox/target", m.path("dir/../../target"))
// Traversal with hidden files
assert.Equal(t, "/sandbox/.ssh/id_rsa", m.path(".ssh/id_rsa"))
assert.Equal(t, "/sandbox/id_rsa", m.path(".ssh/../id_rsa"))
// Null bytes pass through path cleaning unchanged; the OS rejects them at open time
assert.Equal(t, "/sandbox/file\x00.txt", m.path("file\x00.txt"))
}
func TestValidatePath_Security(t *testing.T) {
root := t.TempDir()
m, err := New(root)
assert.NoError(t, err)
// Create a directory outside the sandbox
outside := t.TempDir()
outsideFile := filepath.Join(outside, "secret.txt")
err = os.WriteFile(outsideFile, []byte("secret"), 0644)
assert.NoError(t, err)
// Test 1: Simple traversal
_, err = m.validatePath("../outside.txt")
assert.NoError(t, err) // path() sanitises to root, so this shouldn't escape
// Test 2: Symlink escape
// Create a symlink inside the sandbox pointing outside
linkPath := filepath.Join(root, "evil_link")
err = os.Symlink(outside, linkPath)
assert.NoError(t, err)
// Try to access a file through the symlink
_, err = m.validatePath("evil_link/secret.txt")
assert.Error(t, err)
assert.ErrorIs(t, err, os.ErrPermission)
// Test 3: Nested symlink escape
innerDir := filepath.Join(root, "inner")
err = os.Mkdir(innerDir, 0755)
assert.NoError(t, err)
nestedLink := filepath.Join(innerDir, "nested_evil")
err = os.Symlink(outside, nestedLink)
assert.NoError(t, err)
_, err = m.validatePath("inner/nested_evil/secret.txt")
assert.Error(t, err)
assert.ErrorIs(t, err, os.ErrPermission)
}
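The symlink-escape tests above boil down to a containment check after resolution: once every component has been resolved, the result must still sit under the sandbox root. A simplified sketch of that test (the helper `withinRoot` mirrors the shape of the check, not the package's exact code):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// withinRoot reports whether target (already symlink-resolved) stays under root.
// The prefix must end at a separator so "/sandboxer" does not pass for "/sandbox".
func withinRoot(root, target string) bool {
	return target == root || strings.HasPrefix(target, root+string(filepath.Separator))
}

func main() {
	fmt.Println(withinRoot("/sandbox", "/sandbox/inner/file")) // true
	fmt.Println(withinRoot("/sandbox", "/outside/secret.txt")) // false
	fmt.Println(withinRoot("/sandbox", "/sandboxer/file"))     // false
}
```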
func TestEmptyPaths(t *testing.T) {
root := t.TempDir()
m, err := New(root)
assert.NoError(t, err)
// Read empty path (should fail as it's a directory)
_, err = m.Read("")
assert.Error(t, err)
// Write empty path (should fail as it's a directory)
err = m.Write("", "content")
assert.Error(t, err)
// EnsureDir empty path (should be ok, it's just the root)
err = m.EnsureDir("")
assert.NoError(t, err)
// IsDir empty path: the implementation intentionally returns false for ""
assert.False(t, m.IsDir(""))
// Exists empty path (root exists)
assert.True(t, m.Exists(""))
// List empty path (lists root)
entries, err := m.List("")
assert.NoError(t, err)
assert.NotNil(t, entries)
}


@@ -1,482 +0,0 @@
// Example: medium, _ := local.New("/srv/app")
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
// Example: content, _ := medium.Read("config/app.yaml")
package local
import (
"cmp"
goio "io"
"io/fs"
"slices"
"syscall"
core "dappco.re/go/core"
)
// Example: medium, _ := local.New("/srv/app")
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
type Medium struct {
filesystemRoot string
}
var unrestrictedFileSystem = (&core.Fs{}).NewUnrestricted()
// Example: medium, _ := local.New("/srv/app")
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
func New(root string) (*Medium, error) {
absoluteRoot := absolutePath(root)
if resolvedRoot, err := resolveSymlinksPath(absoluteRoot); err == nil {
absoluteRoot = resolvedRoot
}
return &Medium{filesystemRoot: absoluteRoot}, nil
}
func dirSeparator() string {
if separator := core.Env("CORE_PATH_SEPARATOR"); separator != "" {
return separator
}
if separator := core.Env("DS"); separator != "" {
return separator
}
return "/"
}
func normalisePath(path string) string {
separator := dirSeparator()
if separator == "/" {
return core.Replace(path, "\\", separator)
}
return core.Replace(path, "/", separator)
}
func currentWorkingDir() string {
if workingDirectory := core.Env("CORE_WORKING_DIRECTORY"); workingDirectory != "" {
return workingDirectory
}
if workingDirectory := core.Env("DIR_CWD"); workingDirectory != "" {
return workingDirectory
}
return "."
}
func absolutePath(path string) string {
path = normalisePath(path)
if core.PathIsAbs(path) {
return core.Path(path)
}
return core.Path(currentWorkingDir(), path)
}
func cleanSandboxPath(path string) string {
return core.Path(dirSeparator() + normalisePath(path))
}
func splitPathParts(path string) []string {
trimmed := core.TrimPrefix(path, dirSeparator())
if trimmed == "" {
return nil
}
var parts []string
for _, part := range core.Split(trimmed, dirSeparator()) {
if part == "" {
continue
}
parts = append(parts, part)
}
return parts
}
func resolveSymlinksPath(path string) (string, error) {
return resolveSymlinksRecursive(absolutePath(path), map[string]struct{}{})
}
func resolveSymlinksRecursive(path string, seen map[string]struct{}) (string, error) {
path = core.Path(path)
if path == dirSeparator() {
return path, nil
}
current := dirSeparator()
for _, part := range splitPathParts(path) {
next := core.Path(current, part)
info, err := lstat(next)
if err != nil {
if core.Is(err, syscall.ENOENT) {
current = next
continue
}
return "", err
}
if !isSymlink(info.Mode) {
current = next
continue
}
target, err := readlink(next)
if err != nil {
return "", err
}
target = normalisePath(target)
if !core.PathIsAbs(target) {
target = core.Path(current, target)
} else {
target = core.Path(target)
}
if _, ok := seen[target]; ok {
return "", core.E("local.resolveSymlinksPath", core.Concat("symlink cycle: ", target), fs.ErrInvalid)
}
seen[target] = struct{}{}
resolved, err := resolveSymlinksRecursive(target, seen)
delete(seen, target)
if err != nil {
return "", err
}
current = resolved
}
return current, nil
}
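The component-wise resolution above (walk each path segment, expand symlinks, track a `seen` set to refuse cycles) can be sketched against a fake link table instead of the real filesystem. Here the `links` map stands in for `readlink`; this is a behavioural sketch, not the package's implementation:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// resolve walks path component by component, expanding symlinks via the links
// table and refusing cycles via the seen set, mirroring recursive resolution.
func resolve(path string, links map[string]string, seen map[string]bool) (string, error) {
	current := "/"
	for _, part := range strings.Split(strings.TrimPrefix(filepath.Clean(path), "/"), "/") {
		if part == "" {
			continue
		}
		next := filepath.Join(current, part)
		target, isLink := links[next]
		if !isLink {
			current = next
			continue
		}
		if !filepath.IsAbs(target) {
			target = filepath.Join(current, target)
		}
		if seen[target] {
			return "", fmt.Errorf("symlink cycle: %s", target)
		}
		seen[target] = true
		resolved, err := resolve(target, links, seen)
		delete(seen, target)
		if err != nil {
			return "", err
		}
		current = resolved
	}
	return current, nil
}

func main() {
	links := map[string]string{"/a/link": "/b"}
	out, _ := resolve("/a/link/file", links, map[string]bool{})
	fmt.Println(out) // /b/file

	cyc := map[string]string{"/x": "/y", "/y": "/x"}
	_, err := resolve("/x/file", cyc, map[string]bool{})
	fmt.Println(err != nil) // true
}
```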
func isWithinRoot(root, target string) bool {
root = core.Path(root)
target = core.Path(target)
if root == dirSeparator() {
return true
}
return target == root || core.HasPrefix(target, root+dirSeparator())
}
func canonicalPath(path string) string {
if path == "" {
return ""
}
if resolved, err := resolveSymlinksPath(path); err == nil {
return resolved
}
return absolutePath(path)
}
func isProtectedPath(fullPath string) bool {
fullPath = canonicalPath(fullPath)
protected := map[string]struct{}{
canonicalPath(dirSeparator()): {},
}
for _, home := range []string{core.Env("HOME"), core.Env("DIR_HOME")} {
if home == "" {
continue
}
protected[canonicalPath(home)] = struct{}{}
}
_, ok := protected[fullPath]
return ok
}
func logSandboxEscape(root, path, attempted string) {
username := core.Env("USER")
if username == "" {
username = "unknown"
}
core.Security("sandbox escape detected", "root", root, "path", path, "attempted", attempted, "user", username)
}
func (medium *Medium) sandboxedPath(path string) string {
if path == "" {
return medium.filesystemRoot
}
if medium.filesystemRoot == dirSeparator() && !core.PathIsAbs(normalisePath(path)) {
return core.Path(currentWorkingDir(), normalisePath(path))
}
clean := cleanSandboxPath(path)
if medium.filesystemRoot == dirSeparator() {
return clean
}
return core.Path(medium.filesystemRoot, core.TrimPrefix(clean, dirSeparator()))
}
func (medium *Medium) validatePath(path string) (string, error) {
if medium.filesystemRoot == dirSeparator() {
return medium.sandboxedPath(path), nil
}
parts := splitPathParts(cleanSandboxPath(path))
current := medium.filesystemRoot
for _, part := range parts {
next := core.Path(current, part)
realNext, err := resolveSymlinksPath(next)
if err != nil {
if core.Is(err, syscall.ENOENT) {
current = next
continue
}
return "", err
}
if !isWithinRoot(medium.filesystemRoot, realNext) {
logSandboxEscape(medium.filesystemRoot, path, realNext)
return "", fs.ErrPermission
}
current = realNext
}
return current, nil
}
func (medium *Medium) Read(path string) (string, error) {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return "", err
}
return resultString("local.Read", core.Concat("read failed: ", path), unrestrictedFileSystem.Read(resolvedPath))
}
func (medium *Medium) Write(path, content string) error {
return medium.WriteMode(path, content, 0644)
}
func (medium *Medium) WriteMode(path, content string, mode fs.FileMode) error {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return err
}
return resultError("local.WriteMode", core.Concat("write failed: ", path), unrestrictedFileSystem.WriteMode(resolvedPath, content, mode))
}
// Example: _ = medium.EnsureDir("config/app")
func (medium *Medium) EnsureDir(path string) error {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return err
}
return resultError("local.EnsureDir", core.Concat("ensure dir failed: ", path), unrestrictedFileSystem.EnsureDir(resolvedPath))
}
// Example: isDirectory := medium.IsDir("config")
func (medium *Medium) IsDir(path string) bool {
if path == "" {
return false
}
resolvedPath, err := medium.validatePath(path)
if err != nil {
return false
}
return unrestrictedFileSystem.IsDir(resolvedPath)
}
// Example: isFile := medium.IsFile("config/app.yaml")
func (medium *Medium) IsFile(path string) bool {
if path == "" {
return false
}
resolvedPath, err := medium.validatePath(path)
if err != nil {
return false
}
return unrestrictedFileSystem.IsFile(resolvedPath)
}
// Example: exists := medium.Exists("config/app.yaml")
func (medium *Medium) Exists(path string) bool {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return false
}
return unrestrictedFileSystem.Exists(resolvedPath)
}
// Example: entries, _ := medium.List("config")
func (medium *Medium) List(path string) ([]fs.DirEntry, error) {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return nil, err
}
entries, err := resultDirEntries("local.List", core.Concat("list failed: ", path), unrestrictedFileSystem.List(resolvedPath))
if err != nil {
return nil, err
}
slices.SortFunc(entries, func(a, b fs.DirEntry) int {
return cmp.Compare(a.Name(), b.Name())
})
return entries, nil
}
// Example: info, _ := medium.Stat("config/app.yaml")
func (medium *Medium) Stat(path string) (fs.FileInfo, error) {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return nil, err
}
return resultFileInfo("local.Stat", core.Concat("stat failed: ", path), unrestrictedFileSystem.Stat(resolvedPath))
}
// Example: file, _ := medium.Open("config/app.yaml")
func (medium *Medium) Open(path string) (fs.File, error) {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return nil, err
}
return resultFile("local.Open", core.Concat("open failed: ", path), unrestrictedFileSystem.Open(resolvedPath))
}
// Example: writer, _ := medium.Create("logs/app.log")
func (medium *Medium) Create(path string) (goio.WriteCloser, error) {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return nil, err
}
return resultWriteCloser("local.Create", core.Concat("create failed: ", path), unrestrictedFileSystem.Create(resolvedPath))
}
// Example: writer, _ := medium.Append("logs/app.log")
func (medium *Medium) Append(path string) (goio.WriteCloser, error) {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return nil, err
}
return resultWriteCloser("local.Append", core.Concat("append failed: ", path), unrestrictedFileSystem.Append(resolvedPath))
}
// Example: reader, _ := medium.ReadStream("logs/app.log")
func (medium *Medium) ReadStream(path string) (goio.ReadCloser, error) {
return medium.Open(path)
}
// Example: writer, _ := medium.WriteStream("logs/app.log")
func (medium *Medium) WriteStream(path string) (goio.WriteCloser, error) {
return medium.Create(path)
}
// Example: _ = medium.Delete("config/app.yaml")
func (medium *Medium) Delete(path string) error {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return err
}
if isProtectedPath(resolvedPath) {
return core.E("local.Delete", core.Concat("refusing to delete protected path: ", resolvedPath), nil)
}
return resultError("local.Delete", core.Concat("delete failed: ", path), unrestrictedFileSystem.Delete(resolvedPath))
}
// Example: _ = medium.DeleteAll("logs/archive")
func (medium *Medium) DeleteAll(path string) error {
resolvedPath, err := medium.validatePath(path)
if err != nil {
return err
}
if isProtectedPath(resolvedPath) {
return core.E("local.DeleteAll", core.Concat("refusing to delete protected path: ", resolvedPath), nil)
}
return resultError("local.DeleteAll", core.Concat("delete all failed: ", path), unrestrictedFileSystem.DeleteAll(resolvedPath))
}
// Example: _ = medium.Rename("drafts/todo.txt", "archive/todo.txt")
func (medium *Medium) Rename(oldPath, newPath string) error {
oldResolvedPath, err := medium.validatePath(oldPath)
if err != nil {
return err
}
newResolvedPath, err := medium.validatePath(newPath)
if err != nil {
return err
}
return resultError("local.Rename", core.Concat("rename failed: ", oldPath), unrestrictedFileSystem.Rename(oldResolvedPath, newResolvedPath))
}
func lstat(path string) (*syscall.Stat_t, error) {
info := &syscall.Stat_t{}
if err := syscall.Lstat(path, info); err != nil {
return nil, err
}
return info, nil
}
func isSymlink(mode uint32) bool {
return mode&syscall.S_IFMT == syscall.S_IFLNK
}
func readlink(path string) (string, error) {
size := 256
for {
linkBuffer := make([]byte, size)
bytesRead, err := syscall.Readlink(path, linkBuffer)
if err != nil {
return "", err
}
if bytesRead < len(linkBuffer) {
return string(linkBuffer[:bytesRead]), nil
}
size *= 2
}
}
func resultError(operation, message string, result core.Result) error {
if result.OK {
return nil
}
if err, ok := result.Value.(error); ok {
return core.E(operation, message, err)
}
return core.E(operation, message, nil)
}
func resultString(operation, message string, result core.Result) (string, error) {
if !result.OK {
return "", resultError(operation, message, result)
}
value, ok := result.Value.(string)
if !ok {
return "", core.E(operation, "unexpected result type", nil)
}
return value, nil
}
func resultDirEntries(operation, message string, result core.Result) ([]fs.DirEntry, error) {
if !result.OK {
return nil, resultError(operation, message, result)
}
entries, ok := result.Value.([]fs.DirEntry)
if !ok {
return nil, core.E(operation, "unexpected result type", nil)
}
return entries, nil
}
func resultFileInfo(operation, message string, result core.Result) (fs.FileInfo, error) {
if !result.OK {
return nil, resultError(operation, message, result)
}
fileInfo, ok := result.Value.(fs.FileInfo)
if !ok {
return nil, core.E(operation, "unexpected result type", nil)
}
return fileInfo, nil
}
func resultFile(operation, message string, result core.Result) (fs.File, error) {
if !result.OK {
return nil, resultError(operation, message, result)
}
file, ok := result.Value.(fs.File)
if !ok {
return nil, core.E(operation, "unexpected result type", nil)
}
return file, nil
}
func resultWriteCloser(operation, message string, result core.Result) (goio.WriteCloser, error) {
if !result.OK {
return nil, resultError(operation, message, result)
}
writer, ok := result.Value.(goio.WriteCloser)
if !ok {
return nil, core.E(operation, "unexpected result type", nil)
}
return writer, nil
}


@@ -1,473 +0,0 @@
package local
import (
"io"
"io/fs"
"syscall"
"testing"
core "dappco.re/go/core"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestLocal_New_ResolvesRoot_Good(t *testing.T) {
root := t.TempDir()
localMedium, err := New(root)
assert.NoError(t, err)
resolved, err := resolveSymlinksPath(root)
require.NoError(t, err)
assert.Equal(t, resolved, localMedium.filesystemRoot)
}
func TestLocal_Path_Sandboxed_Good(t *testing.T) {
localMedium := &Medium{filesystemRoot: "/home/user"}
assert.Equal(t, "/home/user/file.txt", localMedium.sandboxedPath("file.txt"))
assert.Equal(t, "/home/user/dir/file.txt", localMedium.sandboxedPath("dir/file.txt"))
assert.Equal(t, "/home/user", localMedium.sandboxedPath(""))
assert.Equal(t, "/home/user/file.txt", localMedium.sandboxedPath("../file.txt"))
assert.Equal(t, "/home/user/file.txt", localMedium.sandboxedPath("dir/../file.txt"))
assert.Equal(t, "/home/user/etc/passwd", localMedium.sandboxedPath("/etc/passwd"))
}
func TestLocal_Path_RootFilesystem_Good(t *testing.T) {
localMedium := &Medium{filesystemRoot: "/"}
assert.Equal(t, "/etc/passwd", localMedium.sandboxedPath("/etc/passwd"))
assert.Equal(t, "/home/user/file.txt", localMedium.sandboxedPath("/home/user/file.txt"))
workingDirectory := currentWorkingDir()
assert.Equal(t, core.Path(workingDirectory, "file.txt"), localMedium.sandboxedPath("file.txt"))
}
func TestLocal_ReadWrite_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
err := localMedium.Write("test.txt", "hello")
assert.NoError(t, err)
content, err := localMedium.Read("test.txt")
assert.NoError(t, err)
assert.Equal(t, "hello", content)
err = localMedium.Write("a/b/c.txt", "nested")
assert.NoError(t, err)
content, err = localMedium.Read("a/b/c.txt")
assert.NoError(t, err)
assert.Equal(t, "nested", content)
_, err = localMedium.Read("nope.txt")
assert.Error(t, err)
}
func TestLocal_EnsureDir_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
err := localMedium.EnsureDir("one/two/three")
assert.NoError(t, err)
info, err := localMedium.Stat("one/two/three")
assert.NoError(t, err)
assert.True(t, info.IsDir())
}
func TestLocal_IsDir_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
_ = localMedium.EnsureDir("mydir")
_ = localMedium.Write("myfile", "x")
assert.True(t, localMedium.IsDir("mydir"))
assert.False(t, localMedium.IsDir("myfile"))
assert.False(t, localMedium.IsDir("nope"))
assert.False(t, localMedium.IsDir(""))
}
func TestLocal_IsFile_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
_ = localMedium.EnsureDir("mydir")
_ = localMedium.Write("myfile", "x")
assert.True(t, localMedium.IsFile("myfile"))
assert.False(t, localMedium.IsFile("mydir"))
assert.False(t, localMedium.IsFile("nope"))
assert.False(t, localMedium.IsFile(""))
}
func TestLocal_Exists_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
_ = localMedium.Write("exists", "x")
assert.True(t, localMedium.Exists("exists"))
assert.False(t, localMedium.Exists("nope"))
}
func TestLocal_List_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
_ = localMedium.Write("a.txt", "a")
_ = localMedium.Write("b.txt", "b")
_ = localMedium.EnsureDir("subdir")
entries, err := localMedium.List("")
assert.NoError(t, err)
assert.Len(t, entries, 3)
}
func TestLocal_Stat_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
_ = localMedium.Write("file", "content")
info, err := localMedium.Stat("file")
assert.NoError(t, err)
assert.Equal(t, int64(7), info.Size())
}
func TestLocal_Delete_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
_ = localMedium.Write("todelete", "x")
assert.True(t, localMedium.Exists("todelete"))
err := localMedium.Delete("todelete")
assert.NoError(t, err)
assert.False(t, localMedium.Exists("todelete"))
}
func TestLocal_DeleteAll_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
_ = localMedium.Write("dir/sub/file", "x")
err := localMedium.DeleteAll("dir")
assert.NoError(t, err)
assert.False(t, localMedium.Exists("dir"))
}
func TestLocal_Delete_ProtectedHomeViaSymlinkEnv_Bad(t *testing.T) {
realHome := t.TempDir()
linkParent := t.TempDir()
homeLink := core.Path(linkParent, "home-link")
require.NoError(t, syscall.Symlink(realHome, homeLink))
t.Setenv("HOME", homeLink)
localMedium, err := New("/")
require.NoError(t, err)
err = localMedium.Delete(realHome)
require.Error(t, err)
assert.DirExists(t, realHome)
}
func TestLocal_DeleteAll_ProtectedHomeViaEnv_Bad(t *testing.T) {
tempHome := t.TempDir()
t.Setenv("HOME", tempHome)
localMedium, err := New("/")
require.NoError(t, err)
err = localMedium.DeleteAll(tempHome)
require.Error(t, err)
assert.DirExists(t, tempHome)
}
func TestLocal_Rename_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
_ = localMedium.Write("old", "x")
err := localMedium.Rename("old", "new")
assert.NoError(t, err)
assert.False(t, localMedium.Exists("old"))
assert.True(t, localMedium.Exists("new"))
}
func TestLocal_Delete_Good(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
err = localMedium.Write("file.txt", "content")
assert.NoError(t, err)
assert.True(t, localMedium.IsFile("file.txt"))
err = localMedium.Delete("file.txt")
assert.NoError(t, err)
assert.False(t, localMedium.IsFile("file.txt"))
err = localMedium.EnsureDir("emptydir")
assert.NoError(t, err)
err = localMedium.Delete("emptydir")
assert.NoError(t, err)
assert.False(t, localMedium.IsDir("emptydir"))
}
func TestLocal_Delete_NotEmpty_Bad(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
err = localMedium.Write("mydir/file.txt", "content")
assert.NoError(t, err)
err = localMedium.Delete("mydir")
assert.Error(t, err)
}
func TestLocal_DeleteAll_Good(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
err = localMedium.Write("mydir/file1.txt", "content1")
assert.NoError(t, err)
err = localMedium.Write("mydir/subdir/file2.txt", "content2")
assert.NoError(t, err)
err = localMedium.DeleteAll("mydir")
assert.NoError(t, err)
assert.False(t, localMedium.Exists("mydir"))
assert.False(t, localMedium.Exists("mydir/file1.txt"))
assert.False(t, localMedium.Exists("mydir/subdir/file2.txt"))
}
func TestLocal_Rename_Good(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
err = localMedium.Write("old.txt", "content")
assert.NoError(t, err)
err = localMedium.Rename("old.txt", "new.txt")
assert.NoError(t, err)
assert.False(t, localMedium.IsFile("old.txt"))
assert.True(t, localMedium.IsFile("new.txt"))
content, err := localMedium.Read("new.txt")
assert.NoError(t, err)
assert.Equal(t, "content", content)
}
func TestLocal_Rename_TraversalSanitised_Good(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
err = localMedium.Write("file.txt", "content")
assert.NoError(t, err)
err = localMedium.Rename("file.txt", "../escaped.txt")
assert.NoError(t, err)
assert.False(t, localMedium.Exists("file.txt"))
assert.True(t, localMedium.Exists("escaped.txt"))
}
func TestLocal_List_Good(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
err = localMedium.Write("file1.txt", "content1")
assert.NoError(t, err)
err = localMedium.Write("file2.txt", "content2")
assert.NoError(t, err)
err = localMedium.EnsureDir("subdir")
assert.NoError(t, err)
entries, err := localMedium.List(".")
assert.NoError(t, err)
assert.Len(t, entries, 3)
names := make(map[string]bool)
for _, entry := range entries {
names[entry.Name()] = true
}
assert.True(t, names["file1.txt"])
assert.True(t, names["file2.txt"])
assert.True(t, names["subdir"])
}
func TestLocal_Stat_Good(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
err = localMedium.Write("file.txt", "hello world")
assert.NoError(t, err)
info, err := localMedium.Stat("file.txt")
assert.NoError(t, err)
assert.Equal(t, "file.txt", info.Name())
assert.Equal(t, int64(11), info.Size())
assert.False(t, info.IsDir())
err = localMedium.EnsureDir("mydir")
assert.NoError(t, err)
info, err = localMedium.Stat("mydir")
assert.NoError(t, err)
assert.Equal(t, "mydir", info.Name())
assert.True(t, info.IsDir())
}
func TestLocal_Exists_Good(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
assert.False(t, localMedium.Exists("nonexistent"))
err = localMedium.Write("file.txt", "content")
assert.NoError(t, err)
assert.True(t, localMedium.Exists("file.txt"))
err = localMedium.EnsureDir("mydir")
assert.NoError(t, err)
assert.True(t, localMedium.Exists("mydir"))
}
func TestLocal_IsDir_Good(t *testing.T) {
testRoot := t.TempDir()
localMedium, err := New(testRoot)
assert.NoError(t, err)
err = localMedium.Write("file.txt", "content")
assert.NoError(t, err)
assert.False(t, localMedium.IsDir("file.txt"))
err = localMedium.EnsureDir("mydir")
assert.NoError(t, err)
assert.True(t, localMedium.IsDir("mydir"))
assert.False(t, localMedium.IsDir("nonexistent"))
}
func TestLocal_ReadStream_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
content := "streaming content"
err := localMedium.Write("stream.txt", content)
assert.NoError(t, err)
reader, err := localMedium.ReadStream("stream.txt")
assert.NoError(t, err)
defer reader.Close()
limitReader := io.LimitReader(reader, 9)
data, err := io.ReadAll(limitReader)
assert.NoError(t, err)
assert.Equal(t, "streaming", string(data))
}
func TestLocal_WriteStream_Basic_Good(t *testing.T) {
root := t.TempDir()
localMedium, _ := New(root)
writer, err := localMedium.WriteStream("output.txt")
assert.NoError(t, err)
_, err = io.Copy(writer, core.NewReader("piped data"))
assert.NoError(t, err)
err = writer.Close()
assert.NoError(t, err)
content, err := localMedium.Read("output.txt")
assert.NoError(t, err)
assert.Equal(t, "piped data", content)
}
func TestLocal_Path_TraversalSandbox_Good(t *testing.T) {
localMedium := &Medium{filesystemRoot: "/sandbox"}
assert.Equal(t, "/sandbox/file.txt", localMedium.sandboxedPath("../../../file.txt"))
assert.Equal(t, "/sandbox/target", localMedium.sandboxedPath("dir/../../target"))
assert.Equal(t, "/sandbox/.ssh/id_rsa", localMedium.sandboxedPath(".ssh/id_rsa"))
assert.Equal(t, "/sandbox/id_rsa", localMedium.sandboxedPath(".ssh/../id_rsa"))
assert.Equal(t, "/sandbox/file\x00.txt", localMedium.sandboxedPath("file\x00.txt"))
}
func TestLocal_ValidatePath_SymlinkEscape_Bad(t *testing.T) {
root := t.TempDir()
localMedium, err := New(root)
assert.NoError(t, err)
outside := t.TempDir()
outsideFile := core.Path(outside, "secret.txt")
outsideMedium, err := New("/")
require.NoError(t, err)
err = outsideMedium.Write(outsideFile, "secret")
assert.NoError(t, err)
_, err = localMedium.validatePath("../outside.txt")
assert.NoError(t, err)
linkPath := core.Path(root, "evil_link")
err = syscall.Symlink(outside, linkPath)
assert.NoError(t, err)
_, err = localMedium.validatePath("evil_link/secret.txt")
assert.Error(t, err)
assert.ErrorIs(t, err, fs.ErrPermission)
err = localMedium.EnsureDir("inner")
assert.NoError(t, err)
innerDir := core.Path(root, "inner")
nestedLink := core.Path(innerDir, "nested_evil")
err = syscall.Symlink(outside, nestedLink)
assert.NoError(t, err)
_, err = localMedium.validatePath("inner/nested_evil/secret.txt")
assert.Error(t, err)
assert.ErrorIs(t, err, fs.ErrPermission)
}
func TestLocal_EmptyPaths_Good(t *testing.T) {
root := t.TempDir()
localMedium, err := New(root)
assert.NoError(t, err)
_, err = localMedium.Read("")
assert.Error(t, err)
err = localMedium.Write("", "content")
assert.Error(t, err)
err = localMedium.EnsureDir("")
assert.NoError(t, err)
assert.False(t, localMedium.IsDir(""))
assert.True(t, localMedium.Exists(""))
entries, err := localMedium.List("")
assert.NoError(t, err)
assert.NotNil(t, entries)
}


@ -1,432 +0,0 @@
package io
import (
goio "io"
"io/fs"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestMemoryMedium_NewMemoryMedium_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
assert.NotNil(t, memoryMedium)
assert.NotNil(t, memoryMedium.fileContents)
assert.NotNil(t, memoryMedium.directories)
assert.Empty(t, memoryMedium.fileContents)
assert.Empty(t, memoryMedium.directories)
}
func TestMemoryMedium_NewFileInfo_Good(t *testing.T) {
info := NewFileInfo("app.yaml", 8, 0644, time.Unix(0, 0), false)
assert.Equal(t, "app.yaml", info.Name())
assert.Equal(t, int64(8), info.Size())
assert.Equal(t, fs.FileMode(0644), info.Mode())
assert.True(t, info.ModTime().Equal(time.Unix(0, 0)))
assert.False(t, info.IsDir())
assert.Nil(t, info.Sys())
}
func TestMemoryMedium_NewDirEntry_Good(t *testing.T) {
info := NewFileInfo("app.yaml", 8, 0644, time.Unix(0, 0), false)
entry := NewDirEntry("app.yaml", false, 0644, info)
assert.Equal(t, "app.yaml", entry.Name())
assert.False(t, entry.IsDir())
assert.Equal(t, fs.FileMode(0), entry.Type())
entryInfo, err := entry.Info()
require.NoError(t, err)
assert.Equal(t, "app.yaml", entryInfo.Name())
assert.Equal(t, int64(8), entryInfo.Size())
}
func TestMemoryMedium_Read_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["test.txt"] = "hello world"
content, err := memoryMedium.Read("test.txt")
assert.NoError(t, err)
assert.Equal(t, "hello world", content)
}
func TestMemoryMedium_Read_Bad(t *testing.T) {
memoryMedium := NewMemoryMedium()
_, err := memoryMedium.Read("nonexistent.txt")
assert.Error(t, err)
}
func TestMemoryMedium_Write_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
err := memoryMedium.Write("test.txt", "content")
assert.NoError(t, err)
assert.Equal(t, "content", memoryMedium.fileContents["test.txt"])
err = memoryMedium.Write("test.txt", "new content")
assert.NoError(t, err)
assert.Equal(t, "new content", memoryMedium.fileContents["test.txt"])
}
func TestMemoryMedium_WriteMode_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
err := memoryMedium.WriteMode("secure.txt", "secret", 0600)
require.NoError(t, err)
content, err := memoryMedium.Read("secure.txt")
require.NoError(t, err)
assert.Equal(t, "secret", content)
info, err := memoryMedium.Stat("secure.txt")
require.NoError(t, err)
assert.Equal(t, fs.FileMode(0600), info.Mode())
file, err := memoryMedium.Open("secure.txt")
require.NoError(t, err)
fileInfo, err := file.Stat()
require.NoError(t, err)
assert.Equal(t, fs.FileMode(0600), fileInfo.Mode())
}
func TestMemoryMedium_EnsureDir_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
err := memoryMedium.EnsureDir("/path/to/dir")
assert.NoError(t, err)
assert.True(t, memoryMedium.directories["/path/to/dir"])
}
func TestMemoryMedium_EnsureDir_CreatesParents_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
require.NoError(t, memoryMedium.EnsureDir("alpha/beta/gamma"))
assert.True(t, memoryMedium.IsDir("alpha"))
assert.True(t, memoryMedium.IsDir("alpha/beta"))
assert.True(t, memoryMedium.IsDir("alpha/beta/gamma"))
}
func TestMemoryMedium_IsFile_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["exists.txt"] = "content"
assert.True(t, memoryMedium.IsFile("exists.txt"))
assert.False(t, memoryMedium.IsFile("nonexistent.txt"))
}
func TestMemoryMedium_Write_CreatesParentDirectories_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
require.NoError(t, memoryMedium.Write("nested/path/file.txt", "content"))
assert.True(t, memoryMedium.Exists("nested"))
assert.True(t, memoryMedium.IsDir("nested"))
assert.True(t, memoryMedium.Exists("nested/path"))
assert.True(t, memoryMedium.IsDir("nested/path"))
}
func TestMemoryMedium_Delete_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["test.txt"] = "content"
err := memoryMedium.Delete("test.txt")
assert.NoError(t, err)
assert.False(t, memoryMedium.IsFile("test.txt"))
}
func TestMemoryMedium_Delete_NotFound_Bad(t *testing.T) {
memoryMedium := NewMemoryMedium()
err := memoryMedium.Delete("nonexistent.txt")
assert.Error(t, err)
}
func TestMemoryMedium_Delete_DirNotEmpty_Bad(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.directories["mydir"] = true
memoryMedium.fileContents["mydir/file.txt"] = "content"
err := memoryMedium.Delete("mydir")
assert.Error(t, err)
}
func TestMemoryMedium_Delete_InferredDirNotEmpty_Bad(t *testing.T) {
memoryMedium := NewMemoryMedium()
require.NoError(t, memoryMedium.Write("mydir/file.txt", "content"))
err := memoryMedium.Delete("mydir")
assert.Error(t, err)
}
func TestMemoryMedium_DeleteAll_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.directories["mydir"] = true
memoryMedium.directories["mydir/subdir"] = true
memoryMedium.fileContents["mydir/file.txt"] = "content"
memoryMedium.fileContents["mydir/subdir/nested.txt"] = "nested"
err := memoryMedium.DeleteAll("mydir")
assert.NoError(t, err)
assert.Empty(t, memoryMedium.directories)
assert.Empty(t, memoryMedium.fileContents)
}
func TestMemoryMedium_Rename_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["old.txt"] = "content"
err := memoryMedium.Rename("old.txt", "new.txt")
assert.NoError(t, err)
assert.False(t, memoryMedium.IsFile("old.txt"))
assert.True(t, memoryMedium.IsFile("new.txt"))
assert.Equal(t, "content", memoryMedium.fileContents["new.txt"])
}
func TestMemoryMedium_Rename_Dir_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.directories["olddir"] = true
memoryMedium.fileContents["olddir/file.txt"] = "content"
err := memoryMedium.Rename("olddir", "newdir")
assert.NoError(t, err)
assert.False(t, memoryMedium.directories["olddir"])
assert.True(t, memoryMedium.directories["newdir"])
assert.Equal(t, "content", memoryMedium.fileContents["newdir/file.txt"])
}
func TestMemoryMedium_Rename_InferredDir_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
require.NoError(t, memoryMedium.Write("olddir/file.txt", "content"))
require.NoError(t, memoryMedium.Rename("olddir", "newdir"))
assert.False(t, memoryMedium.Exists("olddir"))
assert.True(t, memoryMedium.Exists("newdir"))
assert.True(t, memoryMedium.IsDir("newdir"))
assert.Equal(t, "content", memoryMedium.fileContents["newdir/file.txt"])
}
func TestMemoryMedium_List_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.directories["mydir"] = true
memoryMedium.fileContents["mydir/file1.txt"] = "content1"
memoryMedium.fileContents["mydir/file2.txt"] = "content2"
memoryMedium.directories["mydir/subdir"] = true
entries, err := memoryMedium.List("mydir")
assert.NoError(t, err)
assert.Len(t, entries, 3)
assert.Equal(t, "file1.txt", entries[0].Name())
assert.Equal(t, "file2.txt", entries[1].Name())
assert.Equal(t, "subdir", entries[2].Name())
names := make(map[string]bool)
for _, entry := range entries {
names[entry.Name()] = true
}
assert.True(t, names["file1.txt"])
assert.True(t, names["file2.txt"])
assert.True(t, names["subdir"])
}
func TestMemoryMedium_Stat_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["test.txt"] = "hello world"
info, err := memoryMedium.Stat("test.txt")
assert.NoError(t, err)
assert.Equal(t, "test.txt", info.Name())
assert.Equal(t, int64(11), info.Size())
assert.False(t, info.IsDir())
}
func TestMemoryMedium_Stat_Dir_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.directories["mydir"] = true
info, err := memoryMedium.Stat("mydir")
assert.NoError(t, err)
assert.Equal(t, "mydir", info.Name())
assert.True(t, info.IsDir())
}
func TestMemoryMedium_Exists_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["file.txt"] = "content"
memoryMedium.directories["mydir"] = true
assert.True(t, memoryMedium.Exists("file.txt"))
assert.True(t, memoryMedium.Exists("mydir"))
assert.False(t, memoryMedium.Exists("nonexistent"))
}
func TestMemoryMedium_IsDir_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["file.txt"] = "content"
memoryMedium.directories["mydir"] = true
assert.False(t, memoryMedium.IsDir("file.txt"))
assert.True(t, memoryMedium.IsDir("mydir"))
assert.False(t, memoryMedium.IsDir("nonexistent"))
}
func TestMemoryMedium_StreamAndFSHelpers_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
require.NoError(t, memoryMedium.EnsureDir("dir"))
require.NoError(t, memoryMedium.Write("dir/file.txt", "alpha"))
statInfo, err := memoryMedium.Stat("dir/file.txt")
require.NoError(t, err)
file, err := memoryMedium.Open("dir/file.txt")
require.NoError(t, err)
info, err := file.Stat()
require.NoError(t, err)
assert.Equal(t, "file.txt", info.Name())
assert.Equal(t, int64(5), info.Size())
assert.Equal(t, fs.FileMode(0644), info.Mode())
assert.Equal(t, statInfo.ModTime(), info.ModTime())
assert.False(t, info.IsDir())
assert.Nil(t, info.Sys())
data, err := goio.ReadAll(file)
require.NoError(t, err)
assert.Equal(t, "alpha", string(data))
require.NoError(t, file.Close())
entries, err := memoryMedium.List("dir")
require.NoError(t, err)
require.Len(t, entries, 1)
assert.Equal(t, "file.txt", entries[0].Name())
assert.False(t, entries[0].IsDir())
assert.Equal(t, fs.FileMode(0), entries[0].Type())
entryInfo, err := entries[0].Info()
require.NoError(t, err)
assert.Equal(t, "file.txt", entryInfo.Name())
assert.Equal(t, int64(5), entryInfo.Size())
assert.Equal(t, fs.FileMode(0644), entryInfo.Mode())
assert.Equal(t, statInfo.ModTime(), entryInfo.ModTime())
writer, err := memoryMedium.Create("created.txt")
require.NoError(t, err)
_, err = writer.Write([]byte("created"))
require.NoError(t, err)
require.NoError(t, writer.Close())
appendWriter, err := memoryMedium.Append("created.txt")
require.NoError(t, err)
_, err = appendWriter.Write([]byte(" later"))
require.NoError(t, err)
require.NoError(t, appendWriter.Close())
reader, err := memoryMedium.ReadStream("created.txt")
require.NoError(t, err)
streamed, err := goio.ReadAll(reader)
require.NoError(t, err)
assert.Equal(t, "created later", string(streamed))
require.NoError(t, reader.Close())
writeStream, err := memoryMedium.WriteStream("streamed.txt")
require.NoError(t, err)
_, err = writeStream.Write([]byte("stream output"))
require.NoError(t, err)
require.NoError(t, writeStream.Close())
assert.Equal(t, "stream output", memoryMedium.fileContents["streamed.txt"])
statInfo, err = memoryMedium.Stat("streamed.txt")
require.NoError(t, err)
assert.Equal(t, fs.FileMode(0644), statInfo.Mode())
assert.False(t, statInfo.ModTime().IsZero())
}
func TestIO_Read_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["test.txt"] = "hello"
content, err := Read(memoryMedium, "test.txt")
assert.NoError(t, err)
assert.Equal(t, "hello", content)
}
func TestIO_Write_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
err := Write(memoryMedium, "test.txt", "hello")
assert.NoError(t, err)
assert.Equal(t, "hello", memoryMedium.fileContents["test.txt"])
}
func TestIO_EnsureDir_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
err := EnsureDir(memoryMedium, "/my/dir")
assert.NoError(t, err)
assert.True(t, memoryMedium.directories["/my/dir"])
}
func TestIO_IsFile_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
memoryMedium.fileContents["exists.txt"] = "content"
assert.True(t, IsFile(memoryMedium, "exists.txt"))
assert.False(t, IsFile(memoryMedium, "nonexistent.txt"))
}
func TestIO_NewSandboxed_Good(t *testing.T) {
root := t.TempDir()
memoryMedium, err := NewSandboxed(root)
require.NoError(t, err)
require.NoError(t, memoryMedium.Write("config/app.yaml", "port: 8080"))
content, err := memoryMedium.Read("config/app.yaml")
require.NoError(t, err)
assert.Equal(t, "port: 8080", content)
assert.True(t, memoryMedium.IsDir("config"))
}
func TestIO_ReadWriteStream_Good(t *testing.T) {
memoryMedium := NewMemoryMedium()
writer, err := WriteStream(memoryMedium, "logs/run.txt")
require.NoError(t, err)
_, err = writer.Write([]byte("started"))
require.NoError(t, err)
require.NoError(t, writer.Close())
reader, err := ReadStream(memoryMedium, "logs/run.txt")
require.NoError(t, err)
data, err := goio.ReadAll(reader)
require.NoError(t, err)
assert.Equal(t, "started", string(data))
require.NoError(t, reader.Close())
}
func TestIO_Copy_Good(t *testing.T) {
source := NewMemoryMedium()
dest := NewMemoryMedium()
source.fileContents["test.txt"] = "hello"
err := Copy(source, "test.txt", dest, "test.txt")
assert.NoError(t, err)
assert.Equal(t, "hello", dest.fileContents["test.txt"])
source.fileContents["original.txt"] = "content"
err = Copy(source, "original.txt", dest, "copied.txt")
assert.NoError(t, err)
assert.Equal(t, "content", dest.fileContents["copied.txt"])
}
func TestIO_Copy_Bad(t *testing.T) {
source := NewMemoryMedium()
dest := NewMemoryMedium()
err := Copy(source, "nonexistent.txt", dest, "dest.txt")
assert.Error(t, err)
}
func TestIO_LocalGlobal_Good(t *testing.T) {
assert.NotNil(t, Local, "io.Local should be initialised")
var memoryMedium = Local
assert.NotNil(t, memoryMedium)
}


@ -1,7 +1,6 @@
// Example: nodeTree := node.New()
// Example: nodeTree.AddData("config/app.yaml", []byte("port: 8080"))
// Example: snapshot, _ := nodeTree.ToTar()
// Example: restored, _ := node.FromTar(snapshot)
// Package node provides an in-memory filesystem implementation of io.Medium
// ported from Borg's DataNode. It stores files in memory with implicit
// directory structure and supports tar serialisation.
package node
import (
@ -10,90 +9,93 @@ import (
"cmp"
goio "io"
"io/fs"
"os"
"path"
"slices"
"strings"
"time"
core "dappco.re/go/core"
coreio "dappco.re/go/core/io"
)
// Example: nodeTree := node.New()
// Example: nodeTree.AddData("config/app.yaml", []byte("port: 8080"))
// Example: snapshot, _ := nodeTree.ToTar()
// Example: restored, _ := node.FromTar(snapshot)
// Node is an in-memory filesystem that implements coreio.Node (and therefore
// coreio.Medium). Directories are implicit -- they exist whenever a file path
// contains a "/".
type Node struct {
files map[string]*dataFile
}
// compile-time interface checks
var _ coreio.Medium = (*Node)(nil)
var _ fs.ReadFileFS = (*Node)(nil)
// Example: nodeTree := node.New()
// Example: _ = nodeTree.Write("config/app.yaml", "port: 8080")
// New creates a new, empty Node.
func New() *Node {
return &Node{files: make(map[string]*dataFile)}
}
// Example: nodeTree.AddData("config/app.yaml", []byte("port: 8080"))
func (node *Node) AddData(name string, content []byte) {
name = core.TrimPrefix(name, "/")
// ---------- Node-specific methods ----------
// AddData stages content in the in-memory filesystem.
func (n *Node) AddData(name string, content []byte) {
name = strings.TrimPrefix(name, "/")
if name == "" {
return
}
if core.HasSuffix(name, "/") {
// Directories are implicit, so we don't store them.
if strings.HasSuffix(name, "/") {
return
}
node.files[name] = &dataFile{
n.files[name] = &dataFile{
name: name,
content: content,
modTime: time.Now(),
}
}
// Example: snapshot, _ := nodeTree.ToTar()
func (node *Node) ToTar() ([]byte, error) {
buffer := new(bytes.Buffer)
tarWriter := tar.NewWriter(buffer)
// ToTar serialises the entire in-memory tree to a tar archive.
func (n *Node) ToTar() ([]byte, error) {
buf := new(bytes.Buffer)
tw := tar.NewWriter(buf)
for _, file := range node.files {
for _, file := range n.files {
hdr := &tar.Header{
Name: file.name,
Mode: 0600,
Size: int64(len(file.content)),
ModTime: file.modTime,
}
if err := tarWriter.WriteHeader(hdr); err != nil {
if err := tw.WriteHeader(hdr); err != nil {
return nil, err
}
if _, err := tarWriter.Write(file.content); err != nil {
if _, err := tw.Write(file.content); err != nil {
return nil, err
}
}
if err := tarWriter.Close(); err != nil {
if err := tw.Close(); err != nil {
return nil, err
}
return buffer.Bytes(), nil
return buf.Bytes(), nil
}
// Example: restored, _ := node.FromTar(snapshot)
// FromTar creates a new Node from a tar archive.
func FromTar(data []byte) (*Node, error) {
restoredNode := New()
if err := restoredNode.LoadTar(data); err != nil {
n := New()
if err := n.LoadTar(data); err != nil {
return nil, err
}
return restoredNode, nil
return n, nil
}
// Example: _ = nodeTree.LoadTar(snapshot)
func (node *Node) LoadTar(data []byte) error {
// LoadTar replaces the in-memory tree with the contents of a tar archive.
func (n *Node) LoadTar(data []byte) error {
newFiles := make(map[string]*dataFile)
tarReader := tar.NewReader(bytes.NewReader(data))
tr := tar.NewReader(bytes.NewReader(data))
for {
header, err := tarReader.Next()
header, err := tr.Next()
if err == goio.EOF {
break
}
@ -102,12 +104,12 @@ func (node *Node) LoadTar(data []byte) error {
}
if header.Typeflag == tar.TypeReg {
content, err := goio.ReadAll(tarReader)
content, err := goio.ReadAll(tr)
if err != nil {
return core.E("node.LoadTar", "read tar entry", err)
return err
}
name := core.TrimPrefix(header.Name, "/")
if name == "" || core.HasSuffix(name, "/") {
name := strings.TrimPrefix(header.Name, "/")
if name == "" || strings.HasSuffix(name, "/") {
continue
}
newFiles[name] = &dataFile{
@ -118,164 +120,188 @@ func (node *Node) LoadTar(data []byte) error {
}
}
node.files = newFiles
n.files = newFiles
return nil
}
// Example: options := node.WalkOptions{MaxDepth: 1, SkipErrors: true}
// WalkNode walks the in-memory tree, calling fn for each entry.
func (n *Node) WalkNode(root string, fn fs.WalkDirFunc) error {
return fs.WalkDir(n, root, fn)
}
// WalkOptions configures the behaviour of Walk.
type WalkOptions struct {
MaxDepth int
Filter func(entryPath string, entry fs.DirEntry) bool
// MaxDepth limits how many directory levels to descend. 0 means unlimited.
MaxDepth int
// Filter, if set, is called for each entry. Return true to include the
// entry (and descend into it if it is a directory).
Filter func(path string, d fs.DirEntry) bool
// SkipErrors suppresses errors (e.g. nonexistent root) instead of
// propagating them through the callback.
SkipErrors bool
}
// Example: _ = nodeTree.Walk(".", func(_ string, _ fs.DirEntry, _ error) error { return nil }, node.WalkOptions{MaxDepth: 1, SkipErrors: true})
func (node *Node) Walk(root string, walkFunc fs.WalkDirFunc, options WalkOptions) error {
if options.SkipErrors {
if _, err := node.Stat(root); err != nil {
// Walk walks the in-memory tree with optional WalkOptions.
func (n *Node) Walk(root string, fn fs.WalkDirFunc, opts ...WalkOptions) error {
var opt WalkOptions
if len(opts) > 0 {
opt = opts[0]
}
if opt.SkipErrors {
// If root doesn't exist, silently return nil.
if _, err := n.Stat(root); err != nil {
return nil
}
}
return fs.WalkDir(node, root, func(entryPath string, entry fs.DirEntry, err error) error {
if options.Filter != nil && err == nil {
if !options.Filter(entryPath, entry) {
if entry != nil && entry.IsDir() {
return fs.WalkDir(n, root, func(p string, d fs.DirEntry, err error) error {
if opt.Filter != nil && err == nil {
if !opt.Filter(p, d) {
if d != nil && d.IsDir() {
return fs.SkipDir
}
return nil
}
}
walkResult := walkFunc(entryPath, entry, err)
// Call the user's function first so the entry is visited.
result := fn(p, d, err)
if walkResult == nil && options.MaxDepth > 0 && entry != nil && entry.IsDir() && entryPath != root {
relativePath := core.TrimPrefix(entryPath, root)
relativePath = core.TrimPrefix(relativePath, "/")
depth := len(core.Split(relativePath, "/"))
if depth >= options.MaxDepth {
// After visiting a directory at MaxDepth, prevent descending further.
if result == nil && opt.MaxDepth > 0 && d != nil && d.IsDir() && p != root {
rel := strings.TrimPrefix(p, root)
rel = strings.TrimPrefix(rel, "/")
depth := strings.Count(rel, "/") + 1
if depth >= opt.MaxDepth {
return fs.SkipDir
}
}
return walkResult
return result
})
}
// Example: content, _ := nodeTree.ReadFile("config/app.yaml")
func (node *Node) ReadFile(name string) ([]byte, error) {
name = core.TrimPrefix(name, "/")
file, ok := node.files[name]
// ReadFile returns the content of the named file as a byte slice.
// Implements fs.ReadFileFS.
func (n *Node) ReadFile(name string) ([]byte, error) {
name = strings.TrimPrefix(name, "/")
f, ok := n.files[name]
if !ok {
return nil, core.E("node.ReadFile", core.Concat("path not found: ", name), fs.ErrNotExist)
return nil, &fs.PathError{Op: "read", Path: name, Err: fs.ErrNotExist}
}
result := make([]byte, len(file.content))
copy(result, file.content)
// Return a copy to prevent callers from mutating internal state.
result := make([]byte, len(f.content))
copy(result, f.content)
return result, nil
}
// Example: _ = nodeTree.CopyFile("config/app.yaml", "backup/app.yaml", 0644)
func (node *Node) CopyFile(sourcePath, destinationPath string, permissions fs.FileMode) error {
sourcePath = core.TrimPrefix(sourcePath, "/")
file, ok := node.files[sourcePath]
// CopyFile copies a file from the in-memory tree to the local filesystem.
func (n *Node) CopyFile(src, dst string, perm fs.FileMode) error {
src = strings.TrimPrefix(src, "/")
f, ok := n.files[src]
if !ok {
info, err := node.Stat(sourcePath)
// Check if it's a directory — can't copy directories this way.
info, err := n.Stat(src)
if err != nil {
return core.E("node.CopyFile", core.Concat("source not found: ", sourcePath), fs.ErrNotExist)
return &fs.PathError{Op: "copyfile", Path: src, Err: fs.ErrNotExist}
}
if info.IsDir() {
return core.E("node.CopyFile", core.Concat("source is a directory: ", sourcePath), fs.ErrInvalid)
return &fs.PathError{Op: "copyfile", Path: src, Err: fs.ErrInvalid}
}
return core.E("node.CopyFile", core.Concat("source not found: ", sourcePath), fs.ErrNotExist)
return &fs.PathError{Op: "copyfile", Path: src, Err: fs.ErrNotExist}
}
parent := core.PathDir(destinationPath)
if parent != "." && parent != "" && parent != destinationPath && !coreio.Local.IsDir(parent) {
return &fs.PathError{Op: "copyfile", Path: destinationPath, Err: fs.ErrNotExist}
}
return coreio.Local.WriteMode(destinationPath, string(file.content), permissions)
return os.WriteFile(dst, f.content, perm)
}
// Example: _ = nodeTree.CopyTo(io.NewMemoryMedium(), "config", "backup/config")
func (node *Node) CopyTo(target coreio.Medium, sourcePath, destinationPath string) error {
sourcePath = core.TrimPrefix(sourcePath, "/")
info, err := node.Stat(sourcePath)
// CopyTo copies a file (or directory tree) from the node to any Medium.
func (n *Node) CopyTo(target coreio.Medium, sourcePath, destPath string) error {
sourcePath = strings.TrimPrefix(sourcePath, "/")
info, err := n.Stat(sourcePath)
if err != nil {
return err
}
if !info.IsDir() {
file, ok := node.files[sourcePath]
// Single file copy
f, ok := n.files[sourcePath]
if !ok {
return core.E("node.CopyTo", core.Concat("path not found: ", sourcePath), fs.ErrNotExist)
return fs.ErrNotExist
}
return target.Write(destinationPath, string(file.content))
return target.Write(destPath, string(f.content))
}
// Directory: walk and copy all files underneath
prefix := sourcePath
if prefix != "" && !core.HasSuffix(prefix, "/") {
if prefix != "" && !strings.HasSuffix(prefix, "/") {
prefix += "/"
}
for filePath, file := range node.files {
if !core.HasPrefix(filePath, prefix) && filePath != sourcePath {
for p, f := range n.files {
if !strings.HasPrefix(p, prefix) && p != sourcePath {
continue
}
relativePath := core.TrimPrefix(filePath, prefix)
copyDestinationPath := destinationPath
if relativePath != "" {
copyDestinationPath = core.Concat(destinationPath, "/", relativePath)
rel := strings.TrimPrefix(p, prefix)
dest := destPath
if rel != "" {
dest = destPath + "/" + rel
}
if err := target.Write(copyDestinationPath, string(file.content)); err != nil {
if err := target.Write(dest, string(f.content)); err != nil {
return err
}
}
return nil
}
// Example: file, _ := nodeTree.Open("config/app.yaml")
func (node *Node) Open(name string) (fs.File, error) {
name = core.TrimPrefix(name, "/")
if dataFile, ok := node.files[name]; ok {
return &dataFileReader{file: dataFile}, nil
// ---------- Medium interface: fs.FS methods ----------
// Open opens a file from the Node. Implements fs.FS.
func (n *Node) Open(name string) (fs.File, error) {
name = strings.TrimPrefix(name, "/")
if file, ok := n.files[name]; ok {
return &dataFileReader{file: file}, nil
}
// Check if it's a directory
prefix := name + "/"
if name == "." || name == "" {
prefix = ""
}
for filePath := range node.files {
if core.HasPrefix(filePath, prefix) {
for p := range n.files {
if strings.HasPrefix(p, prefix) {
return &dirFile{path: name, modTime: time.Now()}, nil
}
}
return nil, core.E("node.Open", core.Concat("path not found: ", name), fs.ErrNotExist)
return nil, fs.ErrNotExist
}
// Example: info, _ := nodeTree.Stat("config/app.yaml")
func (node *Node) Stat(name string) (fs.FileInfo, error) {
name = core.TrimPrefix(name, "/")
if dataFile, ok := node.files[name]; ok {
return dataFile.Stat()
// Stat returns file information for the given path.
func (n *Node) Stat(name string) (fs.FileInfo, error) {
name = strings.TrimPrefix(name, "/")
if file, ok := n.files[name]; ok {
return file.Stat()
}
// Check if it's a directory
prefix := name + "/"
if name == "." || name == "" {
prefix = ""
}
for filePath := range node.files {
if core.HasPrefix(filePath, prefix) {
for p := range n.files {
if strings.HasPrefix(p, prefix) {
return &dirInfo{name: path.Base(name), modTime: time.Now()}, nil
}
}
return nil, core.E("node.Stat", core.Concat("path not found: ", name), fs.ErrNotExist)
return nil, fs.ErrNotExist
}
// Example: entries, _ := nodeTree.ReadDir("config")
func (node *Node) ReadDir(name string) ([]fs.DirEntry, error) {
name = core.TrimPrefix(name, "/")
// ReadDir reads and returns all directory entries for the named directory.
func (n *Node) ReadDir(name string) ([]fs.DirEntry, error) {
name = strings.TrimPrefix(name, "/")
if name == "." {
name = ""
}
if info, err := node.Stat(name); err == nil && !info.IsDir() {
// Disallow reading a file as a directory.
if info, err := n.Stat(name); err == nil && !info.IsDir() {
return nil, &fs.PathError{Op: "readdir", Path: name, Err: fs.ErrInvalid}
}
@ -287,24 +313,24 @@ func (node *Node) ReadDir(name string) ([]fs.DirEntry, error) {
prefix = name + "/"
}
for filePath := range node.files {
if !core.HasPrefix(filePath, prefix) {
for p := range n.files {
if !strings.HasPrefix(p, prefix) {
continue
}
relPath := core.TrimPrefix(filePath, prefix)
firstComponent := core.SplitN(relPath, "/", 2)[0]
relPath := strings.TrimPrefix(p, prefix)
firstComponent := strings.Split(relPath, "/")[0]
if seen[firstComponent] {
continue
}
seen[firstComponent] = true
if core.Contains(relPath, "/") {
directoryInfo := &dirInfo{name: firstComponent, modTime: time.Now()}
entries = append(entries, fs.FileInfoToDirEntry(directoryInfo))
if strings.Contains(relPath, "/") {
dir := &dirInfo{name: firstComponent, modTime: time.Now()}
entries = append(entries, fs.FileInfoToDirEntry(dir))
} else {
file := node.files[filePath]
file := n.files[p]
info, _ := file.Stat()
entries = append(entries, fs.FileInfoToDirEntry(info))
}
@ -317,245 +343,272 @@ func (node *Node) ReadDir(name string) ([]fs.DirEntry, error) {
return entries, nil
}
// Example: content, _ := nodeTree.Read("config/app.yaml")
func (node *Node) Read(filePath string) (string, error) {
filePath = core.TrimPrefix(filePath, "/")
file, ok := node.files[filePath]
// ---------- Medium interface: read/write ----------
// Read retrieves the content of a file as a string.
func (n *Node) Read(p string) (string, error) {
p = strings.TrimPrefix(p, "/")
f, ok := n.files[p]
if !ok {
return "", fs.ErrNotExist
}
return string(f.content), nil
}
// Write saves the given content to a file, overwriting it if it exists.
func (n *Node) Write(p, content string) error {
n.AddData(p, []byte(content))
return nil
}
// WriteMode saves content with explicit permissions (no-op for in-memory node).
func (n *Node) WriteMode(p, content string, mode os.FileMode) error {
return n.Write(p, content)
}
// FileGet is an alias for Read.
func (n *Node) FileGet(p string) (string, error) {
return n.Read(p)
}
// FileSet is an alias for Write.
func (n *Node) FileSet(p, content string) error {
return n.Write(p, content)
}
// EnsureDir is a no-op because directories are implicit in Node.
func (n *Node) EnsureDir(_ string) error {
return nil
}
// ---------- Medium interface: existence checks ----------
// Exists checks if a path exists (file or directory).
func (n *Node) Exists(p string) bool {
_, err := n.Stat(p)
return err == nil
}
// IsFile checks if a path exists and is a regular file.
func (n *Node) IsFile(p string) bool {
p = strings.TrimPrefix(p, "/")
_, ok := n.files[p]
return ok
}
// IsDir checks if a path exists and is a directory.
func (n *Node) IsDir(p string) bool {
info, err := n.Stat(p)
if err != nil {
return false
}
return info.IsDir()
}
// ---------- Medium interface: mutations ----------
// Delete removes a single file.
func (n *Node) Delete(p string) error {
p = strings.TrimPrefix(p, "/")
if _, ok := n.files[p]; ok {
delete(n.files, p)
return nil
}
return fs.ErrNotExist
}
// DeleteAll removes a file or directory and all children.
func (n *Node) DeleteAll(p string) error {
p = strings.TrimPrefix(p, "/")
found := false
if _, ok := n.files[p]; ok {
delete(n.files, p)
found = true
}
prefix := p + "/"
for k := range n.files {
if strings.HasPrefix(k, prefix) {
delete(n.files, k)
found = true
}
}
if !found {
return fs.ErrNotExist
}
return nil
}
// Rename moves a file from oldPath to newPath.
func (n *Node) Rename(oldPath, newPath string) error {
oldPath = strings.TrimPrefix(oldPath, "/")
newPath = strings.TrimPrefix(newPath, "/")
f, ok := n.files[oldPath]
if !ok {
return fs.ErrNotExist
}
f.name = newPath
n.files[newPath] = f
delete(n.files, oldPath)
return nil
}
// List returns directory entries for the given path.
func (n *Node) List(p string) ([]fs.DirEntry, error) {
p = strings.TrimPrefix(p, "/")
if p == "" || p == "." {
return n.ReadDir(".")
}
return n.ReadDir(p)
}
// ---------- Medium interface: streams ----------
// Create creates or truncates the named file, returning a WriteCloser.
// Content is committed to the Node on Close.
func (n *Node) Create(p string) (goio.WriteCloser, error) {
p = strings.TrimPrefix(p, "/")
return &nodeWriter{node: n, path: p}, nil
}
// Append opens the named file for appending, creating it if needed.
// Content is committed to the Node on Close.
func (n *Node) Append(p string) (goio.WriteCloser, error) {
p = strings.TrimPrefix(p, "/")
var existing []byte
if f, ok := n.files[p]; ok {
existing = make([]byte, len(f.content))
copy(existing, f.content)
}
return &nodeWriter{node: n, path: p, buf: existing}, nil
}
// ReadStream returns a ReadCloser for the file content.
func (n *Node) ReadStream(p string) (goio.ReadCloser, error) {
f, err := n.Open(p)
if err != nil {
return nil, err
}
return goio.NopCloser(f), nil
}
// WriteStream returns a WriteCloser for the file content.
func (n *Node) WriteStream(p string) (goio.WriteCloser, error) {
return n.Create(p)
}
// ---------- Internal types ----------
// nodeWriter buffers writes and commits them to the Node on Close.
type nodeWriter struct {
node *Node
path string
buf []byte
}
func (w *nodeWriter) Write(p []byte) (int, error) {
w.buf = append(w.buf, p...)
return len(p), nil
}
func (w *nodeWriter) Close() error {
w.node.files[w.path] = &dataFile{
name: w.path,
content: w.buf,
modTime: time.Now(),
}
return nil
}
// dataFile represents a file in the Node.
type dataFile struct {
name string
content []byte
modTime time.Time
}
func (d *dataFile) Stat() (fs.FileInfo, error) { return &dataFileInfo{file: d}, nil }
func (d *dataFile) Read(_ []byte) (int, error) { return 0, goio.EOF }
func (d *dataFile) Close() error { return nil }
// dataFileInfo implements fs.FileInfo for a dataFile.
type dataFileInfo struct{ file *dataFile }
func (d *dataFileInfo) Name() string { return path.Base(d.file.name) }
func (d *dataFileInfo) Size() int64 { return int64(len(d.file.content)) }
func (d *dataFileInfo) Mode() fs.FileMode { return 0444 }
func (d *dataFileInfo) ModTime() time.Time { return d.file.modTime }
func (d *dataFileInfo) IsDir() bool { return false }
func (d *dataFileInfo) Sys() any { return nil }
// dataFileReader implements fs.File for reading a dataFile.
type dataFileReader struct {
file *dataFile
reader *bytes.Reader
}
func (d *dataFileReader) Stat() (fs.FileInfo, error) { return d.file.Stat() }
func (d *dataFileReader) Read(p []byte) (int, error) {
if d.reader == nil {
d.reader = bytes.NewReader(d.file.content)
}
return d.reader.Read(p)
}
func (d *dataFileReader) Close() error { return nil }
// dirInfo implements fs.FileInfo for an implicit directory.
type dirInfo struct {
name string
modTime time.Time
}
func (d *dirInfo) Name() string { return d.name }
func (d *dirInfo) Size() int64 { return 0 }
func (d *dirInfo) Mode() fs.FileMode { return fs.ModeDir | 0555 }
func (d *dirInfo) ModTime() time.Time { return d.modTime }
func (d *dirInfo) IsDir() bool { return true }
func (d *dirInfo) Sys() any { return nil }
// dirFile implements fs.File for a directory.
type dirFile struct {
path string
modTime time.Time
}
func (d *dirFile) Stat() (fs.FileInfo, error) {
return &dirInfo{name: path.Base(d.path), modTime: d.modTime}, nil
}
func (d *dirFile) Read([]byte) (int, error) {
return 0, &fs.PathError{Op: "read", Path: d.path, Err: fs.ErrInvalid}
}
func (d *dirFile) Close() error { return nil }
// Ensure Node implements fs.FS so WalkDir works.
var _ fs.FS = (*Node)(nil)
// Ensure Node also satisfies fs.StatFS and fs.ReadDirFS for WalkDir.
var _ fs.StatFS = (*Node)(nil)
var _ fs.ReadDirFS = (*Node)(nil)
// Compile-time check: the NopCloser wrapper returned by ReadStream
// satisfies goio.ReadCloser.
var _ goio.ReadCloser = goio.NopCloser(nil)
// Ensure nodeWriter satisfies goio.WriteCloser.
var _ goio.WriteCloser = (*nodeWriter)(nil)
// Ensure dirFile satisfies fs.File.
var _ fs.File = (*dirFile)(nil)
// Ensure dataFileReader satisfies fs.File.
var _ fs.File = (*dataFileReader)(nil)
// ReadDirFile is not needed since fs.WalkDir works via ReadDirFS on the FS itself,
// but we need the Node to satisfy fs.ReadDirFS.


@@ -3,28 +3,38 @@ package node
import (
"archive/tar"
"bytes"
"errors"
"io"
"io/fs"
"os"
"path/filepath"
"sort"
"strings"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// ---------------------------------------------------------------------------
// New
// ---------------------------------------------------------------------------
func TestNew_Good(t *testing.T) {
n := New()
require.NotNil(t, n, "New() must not return nil")
assert.NotNil(t, n.files, "New() must initialise the files map")
}
// ---------------------------------------------------------------------------
// AddData
// ---------------------------------------------------------------------------
func TestAddData_Good(t *testing.T) {
n := New()
n.AddData("foo.txt", []byte("foo"))
file, ok := n.files["foo.txt"]
require.True(t, ok, "file foo.txt should be present")
assert.Equal(t, []byte("foo"), file.content)
@@ -33,251 +43,287 @@ func TestNode_AddData_Good(t *testing.T) {
assert.Equal(t, "foo.txt", info.Name())
}
func TestAddData_Bad(t *testing.T) {
n := New()
// Empty name is silently ignored.
n.AddData("", []byte("data"))
assert.Empty(t, n.files, "empty name must not be stored")
// Directory entry (trailing slash) is silently ignored.
n.AddData("dir/", nil)
assert.Empty(t, n.files, "directory entry must not be stored")
}
func TestAddData_Ugly(t *testing.T) {
t.Run("Overwrite", func(t *testing.T) {
n := New()
n.AddData("foo.txt", []byte("foo"))
n.AddData("foo.txt", []byte("bar"))
file := n.files["foo.txt"]
assert.Equal(t, []byte("bar"), file.content, "second AddData should overwrite")
})
t.Run("LeadingSlash", func(t *testing.T) {
n := New()
n.AddData("/hello.txt", []byte("hi"))
_, ok := n.files["hello.txt"]
assert.True(t, ok, "leading slash should be trimmed")
})
}
// ---------------------------------------------------------------------------
// Open
// ---------------------------------------------------------------------------
func TestOpen_Good(t *testing.T) {
n := New()
n.AddData("foo.txt", []byte("foo"))
file, err := n.Open("foo.txt")
require.NoError(t, err)
defer file.Close()
buf := make([]byte, 10)
nr, err := file.Read(buf)
require.True(t, nr > 0 || err == io.EOF)
assert.Equal(t, "foo", string(buf[:nr]))
}
func TestOpen_Bad(t *testing.T) {
n := New()
_, err := n.Open("nonexistent.txt")
require.Error(t, err)
assert.ErrorIs(t, err, fs.ErrNotExist)
}
func TestOpen_Ugly(t *testing.T) {
n := New()
n.AddData("bar/baz.txt", []byte("baz"))
// Opening a directory should succeed.
file, err := n.Open("bar")
require.NoError(t, err)
defer file.Close()
// Reading from a directory should fail.
_, err = file.Read(make([]byte, 1))
require.Error(t, err)
var pathError *fs.PathError
require.True(t, core.As(err, &pathError))
assert.Equal(t, fs.ErrInvalid, pathError.Err)
var pathErr *fs.PathError
require.True(t, errors.As(err, &pathErr))
assert.Equal(t, fs.ErrInvalid, pathErr.Err)
}
// ---------------------------------------------------------------------------
// Stat
// ---------------------------------------------------------------------------
func TestStat_Good(t *testing.T) {
n := New()
n.AddData("foo.txt", []byte("foo"))
n.AddData("bar/baz.txt", []byte("baz"))
// File stat.
info, err := n.Stat("bar/baz.txt")
require.NoError(t, err)
assert.Equal(t, "baz.txt", info.Name())
assert.Equal(t, int64(3), info.Size())
assert.False(t, info.IsDir())
// Directory stat.
dirInfo, err := n.Stat("bar")
require.NoError(t, err)
assert.True(t, dirInfo.IsDir())
assert.Equal(t, "bar", dirInfo.Name())
}
func TestStat_Bad(t *testing.T) {
n := New()
_, err := n.Stat("nonexistent")
require.Error(t, err)
assert.ErrorIs(t, err, fs.ErrNotExist)
}
func TestStat_Ugly(t *testing.T) {
n := New()
n.AddData("foo.txt", []byte("foo"))
// Root directory.
info, err := n.Stat(".")
require.NoError(t, err)
assert.True(t, info.IsDir())
assert.Equal(t, ".", info.Name())
}
// ---------------------------------------------------------------------------
// ReadFile
// ---------------------------------------------------------------------------
func TestReadFile_Good(t *testing.T) {
n := New()
n.AddData("hello.txt", []byte("hello world"))
data, err := n.ReadFile("hello.txt")
require.NoError(t, err)
assert.Equal(t, []byte("hello world"), data)
}
func TestReadFile_Bad(t *testing.T) {
n := New()
_, err := n.ReadFile("missing.txt")
require.Error(t, err)
assert.ErrorIs(t, err, fs.ErrNotExist)
}
func TestReadFile_Ugly(t *testing.T) {
n := New()
n.AddData("data.bin", []byte("original"))
// Returned slice must be a copy — mutating it must not affect internal state.
data, err := n.ReadFile("data.bin")
require.NoError(t, err)
data[0] = 'X'
data2, err := nodeTree.ReadFile("data.bin")
data2, err := n.ReadFile("data.bin")
require.NoError(t, err)
assert.Equal(t, []byte("original"), data2, "ReadFile must return an independent copy")
}
// ---------------------------------------------------------------------------
// ReadDir
// ---------------------------------------------------------------------------
func TestReadDir_Good(t *testing.T) {
n := New()
n.AddData("foo.txt", []byte("foo"))
n.AddData("bar/baz.txt", []byte("baz"))
n.AddData("bar/qux.txt", []byte("qux"))
// Root.
entries, err := n.ReadDir(".")
require.NoError(t, err)
assert.Equal(t, []string{"bar", "foo.txt"}, sortedNames(entries))
// Subdirectory.
barEntries, err := n.ReadDir("bar")
require.NoError(t, err)
assert.Equal(t, []string{"baz.txt", "qux.txt"}, sortedNames(barEntries))
}
func TestReadDir_Bad(t *testing.T) {
n := New()
n.AddData("foo.txt", []byte("foo"))
// Reading a file as a directory should fail.
_, err := n.ReadDir("foo.txt")
require.Error(t, err)
var pathErr *fs.PathError
require.True(t, errors.As(err, &pathErr))
assert.Equal(t, fs.ErrInvalid, pathErr.Err)
}
func TestReadDir_Ugly(t *testing.T) {
n := New()
n.AddData("bar/baz.txt", []byte("baz"))
n.AddData("empty_dir/", nil) // Ignored by AddData.
entries, err := n.ReadDir(".")
require.NoError(t, err)
assert.Equal(t, []string{"bar"}, sortedNames(entries))
}
// ---------------------------------------------------------------------------
// Exists
// ---------------------------------------------------------------------------
func TestExists_Good(t *testing.T) {
n := New()
n.AddData("foo.txt", []byte("foo"))
n.AddData("bar/baz.txt", []byte("baz"))
assert.True(t, n.Exists("foo.txt"))
assert.True(t, n.Exists("bar"))
}
func TestExists_Bad(t *testing.T) {
n := New()
assert.False(t, n.Exists("nonexistent"))
}
func TestExists_Ugly(t *testing.T) {
n := New()
n.AddData("dummy.txt", []byte("dummy"))
assert.True(t, n.Exists("."), "root '.' must exist")
assert.True(t, n.Exists(""), "empty path (root) must exist")
}
// ---------------------------------------------------------------------------
// Walk
// ---------------------------------------------------------------------------
func TestWalk_Good(t *testing.T) {
n := New()
n.AddData("foo.txt", []byte("foo"))
n.AddData("bar/baz.txt", []byte("baz"))
n.AddData("bar/qux.txt", []byte("qux"))
var paths []string
err := n.Walk(".", func(p string, d fs.DirEntry, err error) error {
paths = append(paths, p)
return nil
})
require.NoError(t, err)
sort.Strings(paths)
assert.Equal(t, []string{".", "bar", "bar/baz.txt", "bar/qux.txt", "foo.txt"}, paths)
}
func TestWalk_Bad(t *testing.T) {
n := New()
var called bool
err := n.Walk("nonexistent", func(p string, d fs.DirEntry, err error) error {
called = true
assert.Error(t, err)
assert.ErrorIs(t, err, fs.ErrNotExist)
return err
})
assert.True(t, called, "walk function must be called for nonexistent root")
assert.ErrorIs(t, err, fs.ErrNotExist)
}
func TestWalk_Ugly(t *testing.T) {
n := New()
n.AddData("a/b.txt", []byte("b"))
n.AddData("a/c.txt", []byte("c"))
// Stop walk early with a custom error.
walkErr := errors.New("stop walking")
var paths []string
err := n.Walk(".", func(p string, d fs.DirEntry, err error) error {
if p == "a/b.txt" {
return walkErr
}
paths = append(paths, p)
return nil
})
assert.Equal(t, walkErr, err, "Walk must propagate the callback error")
}
func TestWalk_Options(t *testing.T) {
n := New()
n.AddData("root.txt", []byte("root"))
n.AddData("a/a1.txt", []byte("a1"))
n.AddData("a/b/b1.txt", []byte("b1"))
n.AddData("c/c1.txt", []byte("c1"))
t.Run("MaxDepth", func(t *testing.T) {
var paths []string
err := nodeTree.Walk(".", func(p string, d fs.DirEntry, err error) error {
err := n.Walk(".", func(p string, d fs.DirEntry, err error) error {
paths = append(paths, p)
return nil
}, WalkOptions{MaxDepth: 1})
@@ -289,11 +335,11 @@ func TestNode_Walk_Good(t *testing.T) {
t.Run("Filter", func(t *testing.T) {
var paths []string
err := n.Walk(".", func(p string, d fs.DirEntry, err error) error {
paths = append(paths, p)
return nil
}, WalkOptions{Filter: func(p string, d fs.DirEntry) bool {
return !strings.HasPrefix(p, "a")
}})
require.NoError(t, err)
@@ -303,7 +349,7 @@ func TestNode_Walk_Good(t *testing.T) {
t.Run("SkipErrors", func(t *testing.T) {
var called bool
err := n.Walk("nonexistent", func(p string, d fs.DirEntry, err error) error {
called = true
return err
}, WalkOptions{SkipErrors: true})
@@ -313,165 +359,70 @@ func TestNode_Walk_Good(t *testing.T) {
})
}
// ---------------------------------------------------------------------------
// CopyFile
// ---------------------------------------------------------------------------
func TestCopyFile_Good(t *testing.T) {
n := New()
n.AddData("foo.txt", []byte("foo"))
tmpfile := filepath.Join(t.TempDir(), "test.txt")
err := n.CopyFile("foo.txt", tmpfile, 0644)
require.NoError(t, err)
content, err := os.ReadFile(tmpfile)
require.NoError(t, err)
assert.Equal(t, "foo", string(content))
}
func TestCopyFile_Bad(t *testing.T) {
n := New()
tmpfile := filepath.Join(t.TempDir(), "test.txt")
// Source does not exist.
err := n.CopyFile("nonexistent.txt", tmpfile, 0644)
assert.Error(t, err)
// Destination not writable.
n.AddData("foo.txt", []byte("foo"))
err = n.CopyFile("foo.txt", "/nonexistent_dir/test.txt", 0644)
assert.Error(t, err)
}
func TestCopyFile_Ugly(t *testing.T) {
n := New()
n.AddData("bar/baz.txt", []byte("baz"))
tmpfile := filepath.Join(t.TempDir(), "test.txt")
// Attempting to copy a directory should fail.
err := n.CopyFile("bar", tmpfile, 0644)
assert.Error(t, err)
}
// ---------------------------------------------------------------------------
// ToTar / FromTar
// ---------------------------------------------------------------------------
func TestToTar_Good(t *testing.T) {
n := New()
n.AddData("foo.txt", []byte("foo"))
n.AddData("bar/baz.txt", []byte("baz"))
tarball, err := n.ToTar()
require.NoError(t, err)
require.NotEmpty(t, tarball)
tarReader := tar.NewReader(bytes.NewReader(tarball))
// Verify tar content.
tr := tar.NewReader(bytes.NewReader(tarball))
files := make(map[string]string)
for {
header, err := tarReader.Next()
header, err := tr.Next()
if err == io.EOF {
break
}
require.NoError(t, err)
content, err := io.ReadAll(tarReader)
content, err := io.ReadAll(tr)
require.NoError(t, err)
files[header.Name] = string(content)
}
@ -480,84 +431,97 @@ func TestNode_ToTar_Good(t *testing.T) {
assert.Equal(t, "baz", files["bar/baz.txt"])
}
func TestFromTar_Good(t *testing.T) {
buf := new(bytes.Buffer)
tw := tar.NewWriter(buf)
for _, f := range []struct{ Name, Body string }{
{"foo.txt", "foo"},
{"bar/baz.txt", "baz"},
} {
hdr := &tar.Header{
Name: f.Name,
Mode: 0600,
Size: int64(len(f.Body)),
Typeflag: tar.TypeReg,
}
require.NoError(t, tw.WriteHeader(hdr))
_, err := tw.Write([]byte(f.Body))
require.NoError(t, err)
}
require.NoError(t, tw.Close())
n, err := FromTar(buf.Bytes())
require.NoError(t, err)
assert.True(t, n.Exists("foo.txt"), "foo.txt should exist")
assert.True(t, n.Exists("bar/baz.txt"), "bar/baz.txt should exist")
}
func TestFromTar_Bad(t *testing.T) {
// Truncated data that cannot be a valid tar.
truncated := make([]byte, 100)
_, err := FromTar(truncated)
assert.Error(t, err, "truncated data should produce an error")
}
func TestTarRoundTrip_Good(t *testing.T) {
n1 := New()
n1.AddData("a.txt", []byte("alpha"))
n1.AddData("b/c.txt", []byte("charlie"))
tarball, err := n1.ToTar()
require.NoError(t, err)
n2, err := FromTar(tarball)
require.NoError(t, err)
// Verify n2 matches n1.
data, err := n2.ReadFile("a.txt")
require.NoError(t, err)
assert.Equal(t, []byte("alpha"), data)
data, err = n2.ReadFile("b/c.txt")
require.NoError(t, err)
assert.Equal(t, []byte("charlie"), data)
}
// ---------------------------------------------------------------------------
// fs.FS interface compliance
// ---------------------------------------------------------------------------
func TestFSInterface_Good(t *testing.T) {
n := New()
n.AddData("hello.txt", []byte("world"))
// fs.FS
var fsys fs.FS = n
file, err := fsys.Open("hello.txt")
require.NoError(t, err)
defer file.Close()
// fs.StatFS
var statFS fs.StatFS = n
info, err := statFS.Stat("hello.txt")
require.NoError(t, err)
assert.Equal(t, "hello.txt", info.Name())
assert.Equal(t, int64(5), info.Size())
// fs.ReadFileFS
var readFS fs.ReadFileFS = n
data, err := readFS.ReadFile("hello.txt")
require.NoError(t, err)
assert.Equal(t, []byte("world"), data)
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
func sortedNames(entries []fs.DirEntry) []string {
var names []string
for _, e := range entries {
names = append(names, e.Name())
}
sort.Strings(names)
return names
}

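The `ToTar`/`FromTar` pairing exercised above is plain `archive/tar` under the hood. A minimal, stdlib-only sketch of the same round trip (no project code assumed — `roundTrip` is a hypothetical helper for illustration):

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
	"io"
)

// roundTrip writes a single-file tarball and reads it back, mirroring
// the ToTar/FromTar pairing exercised by the tests above.
func roundTrip(name, body string) (string, string, error) {
	buf := new(bytes.Buffer)
	tw := tar.NewWriter(buf)
	hdr := &tar.Header{Name: name, Mode: 0600, Size: int64(len(body)), Typeflag: tar.TypeReg}
	if err := tw.WriteHeader(hdr); err != nil {
		return "", "", err
	}
	if _, err := io.WriteString(tw, body); err != nil {
		return "", "", err
	}
	if err := tw.Close(); err != nil {
		return "", "", err
	}

	// Read the archive back with tar.Reader.
	tr := tar.NewReader(bytes.NewReader(buf.Bytes()))
	outHdr, err := tr.Next()
	if err != nil {
		return "", "", err
	}
	data, err := io.ReadAll(tr)
	if err != nil {
		return "", "", err
	}
	return outHdr.Name, string(data), nil
}

func main() {
	name, body, err := roundTrip("a.txt", "hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(name, body) // a.txt hello
}
```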
s3/s3.go
@@ -1,6 +1,4 @@
// Package s3 provides an S3-backed implementation of the io.Medium interface.
package s3
import (
@@ -8,318 +6,332 @@ import (
"context"
goio "io"
"io/fs"
"os"
"path"
"strings"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
coreio "dappco.re/go/core/io"
coreerr "dappco.re/go/core/log"
)
// s3API is the subset of the S3 client API used by this package.
// This allows for interface-based mocking in tests.
type s3API interface {
GetObject(ctx context.Context, params *s3.GetObjectInput, optFns ...func(*s3.Options)) (*s3.GetObjectOutput, error)
PutObject(ctx context.Context, params *s3.PutObjectInput, optFns ...func(*s3.Options)) (*s3.PutObjectOutput, error)
DeleteObject(ctx context.Context, params *s3.DeleteObjectInput, optFns ...func(*s3.Options)) (*s3.DeleteObjectOutput, error)
DeleteObjects(ctx context.Context, params *s3.DeleteObjectsInput, optFns ...func(*s3.Options)) (*s3.DeleteObjectsOutput, error)
HeadObject(ctx context.Context, params *s3.HeadObjectInput, optFns ...func(*s3.Options)) (*s3.HeadObjectOutput, error)
ListObjectsV2(ctx context.Context, params *s3.ListObjectsV2Input, optFns ...func(*s3.Options)) (*s3.ListObjectsV2Output, error)
CopyObject(ctx context.Context, params *s3.CopyObjectInput, optFns ...func(*s3.Options)) (*s3.CopyObjectOutput, error)
}
// Medium is an S3-backed storage backend implementing the io.Medium interface.
type Medium struct {
client s3API
bucket string
prefix string
}
var _ coreio.Medium = (*Medium)(nil)
func deleteObjectsError(prefix string, errs []types.Error) error {
if len(errs) == 0 {
return nil
}
details := make([]string, 0, len(errs))
for _, item := range errs {
key := aws.ToString(item.Key)
code := aws.ToString(item.Code)
msg := aws.ToString(item.Message)
switch {
case code != "" && msg != "":
details = append(details, key+": "+code+" "+msg)
case code != "":
details = append(details, key+": "+code)
case msg != "":
details = append(details, key+": "+msg)
default:
details = append(details, key)
}
}
return coreerr.E("s3.DeleteAll", "partial delete failed under "+prefix+": "+strings.Join(details, "; "), nil)
}
// Option configures a Medium.
type Option func(*Medium)
// WithPrefix sets an optional key prefix for all operations.
func WithPrefix(prefix string) Option {
return func(m *Medium) {
// Ensure prefix ends with "/" if non-empty
if prefix != "" && !strings.HasSuffix(prefix, "/") {
prefix += "/"
}
m.prefix = prefix
}
}
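The trailing-slash normalisation inside `WithPrefix` can be sketched in isolation. `normalise` below is a hypothetical standalone copy of the closure's logic, not part of the package:

```go
package main

import (
	"fmt"
	"strings"
)

// normalise mirrors WithPrefix: a non-empty prefix always gains a
// trailing "/", so prefix+key never glues two path segments together.
func normalise(prefix string) string {
	if prefix != "" && !strings.HasSuffix(prefix, "/") {
		prefix += "/"
	}
	return prefix
}

func main() {
	fmt.Println(normalise("daily"))  // daily/
	fmt.Println(normalise("daily/")) // daily/ (already normalised)
	fmt.Println(normalise(""))       // empty stays empty
}
```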
// WithClient sets the S3 client for dependency injection.
func WithClient(client *s3.Client) Option {
return func(m *Medium) {
m.client = client
}
}
// withAPI sets the s3API interface directly (for testing with mocks).
func withAPI(api s3API) Option {
return func(m *Medium) {
m.client = api
}
}
// New creates a new S3 Medium for the given bucket.
func New(bucket string, opts ...Option) (*Medium, error) {
if bucket == "" {
return nil, coreerr.E("s3.New", "bucket name is required", nil)
}
m := &Medium{bucket: bucket}
for _, opt := range opts {
opt(m)
}
if m.client == nil {
return nil, coreerr.E("s3.New", "S3 client is required (use WithClient option)", nil)
}
return m, nil
}
// key returns the full S3 object key for a given path.
func (m *Medium) key(p string) string {
// Clean the path using a leading "/" to sandbox traversal attempts,
// then strip the "/" prefix. This ensures ".." can't escape.
clean := path.Clean("/" + p)
if clean == "/" {
clean = ""
}
clean = strings.TrimPrefix(clean, "/")
if m.prefix == "" {
return clean
}
if clean == "" {
return m.prefix
}
return m.prefix + clean
}
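The comment in `key` notes that rooting the path at "/" before `path.Clean` keeps ".." segments from escaping. A standalone sketch of just that cleaning step — `sandbox` is a hypothetical helper mirroring `Medium.key` without the prefix handling:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// sandbox mirrors the cleaning step used by Medium.key: rooting the path
// at "/" before Clean means ".." segments cannot climb above the root,
// after which the leading "/" is stripped to form an S3 key.
func sandbox(p string) string {
	clean := path.Clean("/" + p)
	if clean == "/" {
		return ""
	}
	return strings.TrimPrefix(clean, "/")
}

func main() {
	fmt.Println(sandbox("../../etc/passwd")) // etc/passwd — traversal neutralised
	fmt.Println(sandbox("a/../b"))           // b
	fmt.Println(sandbox("/"))                // "" — empty key, rejected by callers
}
```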
// Read retrieves the content of a file as a string.
func (m *Medium) Read(p string) (string, error) {
key := m.key(p)
if key == "" {
return "", coreerr.E("s3.Read", "path is required", os.ErrInvalid)
}
out, err := m.client.GetObject(context.Background(), &s3.GetObjectInput{
Bucket: aws.String(m.bucket),
Key: aws.String(key),
})
if err != nil {
return "", coreerr.E("s3.Read", "failed to get object: "+key, err)
}
defer out.Body.Close()
data, err := goio.ReadAll(out.Body)
if err != nil {
return "", coreerr.E("s3.Read", "failed to read body: "+key, err)
}
return string(data), nil
}
// Write saves the given content to a file, overwriting it if it exists.
func (m *Medium) Write(p, content string) error {
key := m.key(p)
if key == "" {
return coreerr.E("s3.Write", "path is required", os.ErrInvalid)
}
_, err := m.client.PutObject(context.Background(), &s3.PutObjectInput{
Bucket: aws.String(m.bucket),
Key: aws.String(key),
Body: strings.NewReader(content),
})
if err != nil {
return coreerr.E("s3.Write", "failed to put object: "+key, err)
}
return nil
}
// EnsureDir is a no-op for S3 (S3 has no real directories).
func (m *Medium) EnsureDir(_ string) error {
return nil
}
// IsFile checks if a path exists and is a regular file (not a "directory" prefix).
func (m *Medium) IsFile(p string) bool {
key := m.key(p)
if key == "" {
return false
}
// A "file" in S3 is an object whose key does not end with "/"
if strings.HasSuffix(key, "/") {
return false
}
_, err := m.client.HeadObject(context.Background(), &s3.HeadObjectInput{
Bucket: aws.String(m.bucket),
Key: aws.String(key),
})
return err == nil
}
// FileGet is a convenience function that reads a file from the medium.
func (m *Medium) FileGet(p string) (string, error) {
return m.Read(p)
}
// FileSet is a convenience function that writes a file to the medium.
func (m *Medium) FileSet(p, content string) error {
return m.Write(p, content)
}
// Delete removes a single object.
func (m *Medium) Delete(p string) error {
key := m.key(p)
if key == "" {
return coreerr.E("s3.Delete", "path is required", os.ErrInvalid)
}
_, err := m.client.DeleteObject(context.Background(), &s3.DeleteObjectInput{
Bucket: aws.String(m.bucket),
Key: aws.String(key),
})
if err != nil {
return coreerr.E("s3.Delete", "failed to delete object: "+key, err)
}
return nil
}
// DeleteAll removes all objects under the given prefix.
func (m *Medium) DeleteAll(p string) error {
key := m.key(p)
if key == "" {
return coreerr.E("s3.DeleteAll", "path is required", os.ErrInvalid)
}
// First, try deleting the exact key
_, err := m.client.DeleteObject(context.Background(), &s3.DeleteObjectInput{
Bucket: aws.String(m.bucket),
Key: aws.String(key),
})
if err != nil {
return coreerr.E("s3.DeleteAll", "failed to delete object: "+key, err)
}
// Then delete all objects under the prefix
prefix := key
if !strings.HasSuffix(prefix, "/") {
prefix += "/"
}
paginator := true
var continuationToken *string
for paginator {
listOut, err := m.client.ListObjectsV2(context.Background(), &s3.ListObjectsV2Input{
Bucket: aws.String(m.bucket),
Prefix: aws.String(prefix),
ContinuationToken: continuationToken,
})
if err != nil {
return coreerr.E("s3.DeleteAll", "failed to list objects: "+prefix, err)
}
if len(listOut.Contents) == 0 {
break
}
objects := make([]types.ObjectIdentifier, len(listOut.Contents))
for i, obj := range listOut.Contents {
objects[i] = types.ObjectIdentifier{Key: obj.Key}
}
deleteOut, err := m.client.DeleteObjects(context.Background(), &s3.DeleteObjectsInput{
Bucket: aws.String(m.bucket),
Delete: &types.Delete{Objects: objects, Quiet: aws.Bool(true)},
})
if err != nil {
return coreerr.E("s3.DeleteAll", "failed to delete objects", err)
}
if err := deleteObjectsError(prefix, deleteOut.Errors); err != nil {
return err
}
if listOut.IsTruncated != nil && *listOut.IsTruncated {
continuationToken = listOut.NextContinuationToken
} else {
paginator = false
}
}
return nil
}
// Rename moves an object by copying then deleting the original.
func (m *Medium) Rename(oldPath, newPath string) error {
oldKey := m.key(oldPath)
newKey := m.key(newPath)
if oldKey == "" || newKey == "" {
return coreerr.E("s3.Rename", "both old and new paths are required", os.ErrInvalid)
}
copySource := m.bucket + "/" + oldKey
_, err := m.client.CopyObject(context.Background(), &s3.CopyObjectInput{
Bucket: aws.String(m.bucket),
CopySource: aws.String(copySource),
Key: aws.String(newKey),
})
if err != nil {
return coreerr.E("s3.Rename", "failed to copy object: "+oldKey+" -> "+newKey, err)
}
_, err = m.client.DeleteObject(context.Background(), &s3.DeleteObjectInput{
Bucket: aws.String(m.bucket),
Key: aws.String(oldKey),
})
if err != nil {
return coreerr.E("s3.Rename", "failed to delete source object: "+oldKey, err)
}
return nil
}
// List returns directory entries for the given path using ListObjectsV2 with delimiter.
func (m *Medium) List(p string) ([]fs.DirEntry, error) {
prefix := m.key(p)
if prefix != "" && !strings.HasSuffix(prefix, "/") {
prefix += "/"
}
var entries []fs.DirEntry
listOut, err := m.client.ListObjectsV2(context.Background(), &s3.ListObjectsV2Input{
Bucket: aws.String(m.bucket),
Prefix: aws.String(prefix),
Delimiter: aws.String("/"),
})
if err != nil {
return nil, coreerr.E("s3.List", "failed to list objects: "+prefix, err)
}
// Common prefixes are "directories"
for _, cp := range listOut.CommonPrefixes {
if cp.Prefix == nil {
continue
}
name := strings.TrimPrefix(*cp.Prefix, prefix)
name = strings.TrimSuffix(name, "/")
if name == "" {
continue
}
@@ -335,21 +347,22 @@ func (m *Medium) List(p string) ([]fs.DirEntry, error) {
})
}
// Contents are "files" (excluding the prefix itself)
for _, obj := range listOut.Contents {
if obj.Key == nil {
continue
}
name := strings.TrimPrefix(*obj.Key, prefix)
if name == "" || strings.Contains(name, "/") {
continue
}
var size int64
if obj.Size != nil {
size = *obj.Size
}
var modTime time.Time
if obj.LastModified != nil {
modTime = *obj.LastModified
}
entries = append(entries, &dirEntry{
name: name,
@@ -367,19 +380,19 @@ func (m *Medium) List(p string) ([]fs.DirEntry, error) {
return entries, nil
}
// Stat returns file information for the given path using HeadObject.
func (m *Medium) Stat(p string) (fs.FileInfo, error) {
key := m.key(p)
if key == "" {
return nil, coreerr.E("s3.Stat", "path is required", os.ErrInvalid)
}
out, err := m.client.HeadObject(context.Background(), &s3.HeadObjectInput{
Bucket: aws.String(m.bucket),
Key: aws.String(key),
})
if err != nil {
return nil, coreerr.E("s3.Stat", "failed to head object: "+key, err)
}
var size int64
@@ -400,24 +413,25 @@ func (m *Medium) Stat(p string) (fs.FileInfo, error) {
}, nil
}
// Open opens the named file for reading.
func (m *Medium) Open(p string) (fs.File, error) {
key := m.key(p)
if key == "" {
return nil, coreerr.E("s3.Open", "path is required", os.ErrInvalid)
}
out, err := m.client.GetObject(context.Background(), &s3.GetObjectInput{
Bucket: aws.String(m.bucket),
Key: aws.String(key),
})
if err != nil {
return nil, coreerr.E("s3.Open", "failed to get object: "+key, err)
}
data, err := goio.ReadAll(out.Body)
out.Body.Close()
if err != nil {
return nil, coreerr.E("s3.Open", "failed to read body: "+key, err)
}
var size int64
@@ -437,28 +451,30 @@ func (m *Medium) Open(p string) (fs.File, error) {
}, nil
}
// Create creates or truncates the named file. Returns a writer that
// uploads the content on Close.
func (m *Medium) Create(p string) (goio.WriteCloser, error) {
key := m.key(p)
if key == "" {
return nil, coreerr.E("s3.Create", "path is required", os.ErrInvalid)
}
return &s3WriteCloser{
medium: m,
key: key,
}, nil
}
// Append opens the named file for appending. It downloads the existing
// content (if any) and re-uploads the combined content on Close.
func (m *Medium) Append(p string) (goio.WriteCloser, error) {
key := m.key(p)
if key == "" {
return nil, coreerr.E("s3.Append", "path is required", os.ErrInvalid)
}
var existing []byte
out, err := m.client.GetObject(context.Background(), &s3.GetObjectInput{
Bucket: aws.String(m.bucket),
Key: aws.String(key),
})
if err == nil {
@@ -467,87 +483,92 @@ func (m *Medium) Append(p string) (goio.WriteCloser, error) {
}
return &s3WriteCloser{
medium: m,
key: key,
data: existing,
}, nil
}
// ReadStream returns a reader for the file content.
func (m *Medium) ReadStream(p string) (goio.ReadCloser, error) {
key := m.key(p)
if key == "" {
return nil, coreerr.E("s3.ReadStream", "path is required", os.ErrInvalid)
}
out, err := m.client.GetObject(context.Background(), &s3.GetObjectInput{
Bucket: aws.String(m.bucket),
Key: aws.String(key),
})
if err != nil {
return nil, coreerr.E("s3.ReadStream", "failed to get object: "+key, err)
}
return out.Body, nil
}
// WriteStream returns a writer for the file content. Content is uploaded on Close.
func (m *Medium) WriteStream(p string) (goio.WriteCloser, error) {
return m.Create(p)
}
// Exists checks if a path exists (file or directory prefix).
func (m *Medium) Exists(p string) bool {
key := m.key(p)
if key == "" {
return false
}
// Check as an exact object
_, err := m.client.HeadObject(context.Background(), &s3.HeadObjectInput{
Bucket: aws.String(m.bucket),
Key: aws.String(key),
})
if err == nil {
return true
}
// Check as a "directory" prefix
prefix := key
if !strings.HasSuffix(prefix, "/") {
prefix += "/"
}
listOut, err := m.client.ListObjectsV2(context.Background(), &s3.ListObjectsV2Input{
Bucket: aws.String(m.bucket),
Prefix: aws.String(prefix),
MaxKeys: aws.Int32(1),
})
if err != nil {
return false
}
return len(listOut.Contents) > 0 || len(listOut.CommonPrefixes) > 0
}
// IsDir checks if a path exists and is a directory (has objects under it as a prefix).
func (m *Medium) IsDir(p string) bool {
key := m.key(p)
if key == "" {
return false
}
prefix := key
if !strings.HasSuffix(prefix, "/") {
prefix += "/"
}
listOut, err := m.client.ListObjectsV2(context.Background(), &s3.ListObjectsV2Input{
Bucket: aws.String(m.bucket),
Prefix: aws.String(prefix),
MaxKeys: aws.Int32(1),
})
if err != nil {
return false
}
return len(listOut.Contents) > 0 || len(listOut.CommonPrefixes) > 0
}
// fileInfo implements fs.FileInfo for S3 objects.
type fileInfo struct {
name string
size int64
@@ -556,18 +577,14 @@ type fileInfo struct {
isDir bool
}
func (fi *fileInfo) Name() string { return fi.name }
func (fi *fileInfo) Size() int64 { return fi.size }
func (fi *fileInfo) Mode() fs.FileMode { return fi.mode }
func (fi *fileInfo) ModTime() time.Time { return fi.modTime }
func (fi *fileInfo) IsDir() bool { return fi.isDir }
func (fi *fileInfo) Sys() any { return nil }
// dirEntry implements fs.DirEntry for S3 listings.
type dirEntry struct {
name string
isDir bool
@@ -575,14 +592,12 @@ type dirEntry struct {
info fs.FileInfo
}
func (de *dirEntry) Name() string { return de.name }
func (de *dirEntry) IsDir() bool { return de.isDir }
func (de *dirEntry) Type() fs.FileMode { return de.mode.Type() }
func (de *dirEntry) Info() (fs.FileInfo, error) { return de.info, nil }
// s3File implements fs.File for S3 objects.
type s3File struct {
name string
content []byte
@@ -591,47 +606,48 @@ type s3File struct {
modTime time.Time
}
func (f *s3File) Stat() (fs.FileInfo, error) {
return &fileInfo{
name: f.name,
size: int64(len(f.content)),
mode: 0644,
modTime: f.modTime,
}, nil
}
func (f *s3File) Read(b []byte) (int, error) {
if f.offset >= int64(len(f.content)) {
return 0, goio.EOF
}
n := copy(b, f.content[f.offset:])
f.offset += int64(n)
return n, nil
}
func (f *s3File) Close() error {
return nil
}
// s3WriteCloser buffers writes and uploads to S3 on Close.
type s3WriteCloser struct {
medium *Medium
key string
data []byte
}
func (w *s3WriteCloser) Write(p []byte) (int, error) {
w.data = append(w.data, p...)
return len(p), nil
}
func (w *s3WriteCloser) Close() error {
_, err := w.medium.client.PutObject(context.Background(), &s3.PutObjectInput{
Bucket: aws.String(w.medium.bucket),
Key: aws.String(w.key),
Body: bytes.NewReader(w.data),
})
if err != nil {
return coreerr.E("s3.writeCloser.Close", "failed to upload on close", err)
}
return nil
}
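The s3WriteCloser buffers everything and uploads once on Close, since S3 objects are immutable and must be written in one PutObject call. A minimal sketch of that pattern with the upload abstracted behind a hypothetical callback (standing in for the real PutObject call):

```go
package main

import "fmt"

// bufWriteCloser mirrors s3WriteCloser: Write appends to a buffer,
// Close hands the whole payload to an upload callback in one shot.
type bufWriteCloser struct {
	data   []byte
	upload func([]byte) error // stand-in for the PutObject call
}

func (w *bufWriteCloser) Write(p []byte) (int, error) {
	w.data = append(w.data, p...)
	return len(p), nil
}

func (w *bufWriteCloser) Close() error {
	return w.upload(w.data)
}

func main() {
	var uploaded []byte
	w := &bufWriteCloser{upload: func(b []byte) error { uploaded = b; return nil }}
	w.Write([]byte("piped "))
	w.Write([]byte("data"))
	w.Close()
	fmt.Printf("%s\n", uploaded)
}
```

The trade-off of this design is that the full object is held in memory until Close; large uploads would want multipart semantics instead.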


@ -3,22 +3,25 @@ package s3
import (
"bytes"
"context"
"errors"
"fmt"
goio "io"
"io/fs"
"sort"
"strings"
"sync"
"testing"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// mockS3 is an in-memory mock implementing the s3API interface.
type mockS3 struct {
mu sync.RWMutex
objects map[string][]byte
mtimes map[string]time.Time
@ -26,8 +29,8 @@ type testS3Client struct {
deleteObjectsErrs map[string]types.Error
}
func newMockS3() *mockS3 {
return &mockS3{
objects: make(map[string][]byte),
mtimes: make(map[string]time.Time),
deleteObjectErrors: make(map[string]error),
@ -35,86 +38,86 @@ func newTestS3Client() *testS3Client {
}
}
func (m *mockS3) GetObject(_ context.Context, params *s3.GetObjectInput, _ ...func(*s3.Options)) (*s3.GetObjectOutput, error) {
m.mu.RLock()
defer m.mu.RUnlock()
key := aws.ToString(params.Key)
data, ok := m.objects[key]
if !ok {
return nil, fmt.Errorf("NoSuchKey: key %q not found", key)
}
mtime := m.mtimes[key]
return &s3.GetObjectOutput{
Body: goio.NopCloser(bytes.NewReader(data)),
ContentLength: aws.Int64(int64(len(data))),
LastModified: &mtime,
}, nil
}
func (m *mockS3) PutObject(_ context.Context, params *s3.PutObjectInput, _ ...func(*s3.Options)) (*s3.PutObjectOutput, error) {
m.mu.Lock()
defer m.mu.Unlock()
key := aws.ToString(params.Key)
data, err := goio.ReadAll(params.Body)
if err != nil {
return nil, err
}
m.objects[key] = data
m.mtimes[key] = time.Now()
return &s3.PutObjectOutput{}, nil
}
func (m *mockS3) DeleteObject(_ context.Context, params *s3.DeleteObjectInput, _ ...func(*s3.Options)) (*s3.DeleteObjectOutput, error) {
m.mu.Lock()
defer m.mu.Unlock()
key := aws.ToString(params.Key)
if err, ok := m.deleteObjectErrors[key]; ok {
return nil, err
}
delete(m.objects, key)
delete(m.mtimes, key)
return &s3.DeleteObjectOutput{}, nil
}
func (m *mockS3) DeleteObjects(_ context.Context, params *s3.DeleteObjectsInput, _ ...func(*s3.Options)) (*s3.DeleteObjectsOutput, error) {
m.mu.Lock()
defer m.mu.Unlock()
var outErrs []types.Error
for _, obj := range params.Delete.Objects {
key := aws.ToString(obj.Key)
if errInfo, ok := m.deleteObjectsErrs[key]; ok {
outErrs = append(outErrs, errInfo)
continue
}
delete(m.objects, key)
delete(m.mtimes, key)
}
return &s3.DeleteObjectsOutput{Errors: outErrs}, nil
}
func (m *mockS3) HeadObject(_ context.Context, params *s3.HeadObjectInput, _ ...func(*s3.Options)) (*s3.HeadObjectOutput, error) {
m.mu.RLock()
defer m.mu.RUnlock()
key := aws.ToString(params.Key)
data, ok := m.objects[key]
if !ok {
return nil, fmt.Errorf("NotFound: key %q not found", key)
}
mtime := m.mtimes[key]
return &s3.HeadObjectOutput{
ContentLength: aws.Int64(int64(len(data))),
LastModified: &mtime,
}, nil
}
func (m *mockS3) ListObjectsV2(_ context.Context, params *s3.ListObjectsV2Input, _ ...func(*s3.Options)) (*s3.ListObjectsV2Output, error) {
m.mu.RLock()
defer m.mu.RUnlock()
prefix := aws.ToString(params.Prefix)
delimiter := aws.ToString(params.Delimiter)
@ -123,9 +126,10 @@ func (client *testS3Client) ListObjectsV2(operationContext context.Context, para
maxKeys = *params.MaxKeys
}
// Collect all matching keys sorted
var allKeys []string
for k := range m.objects {
if strings.HasPrefix(k, prefix) {
allKeys = append(allKeys, k)
}
}
@ -135,12 +139,12 @@ func (client *testS3Client) ListObjectsV2(operationContext context.Context, para
commonPrefixes := make(map[string]bool)
for _, k := range allKeys {
rest := strings.TrimPrefix(k, prefix)
if delimiter != "" {
if idx := strings.Index(rest, delimiter); idx >= 0 {
// This key has a delimiter after the prefix -> common prefix
cp := prefix + rest[:idx+len(delimiter)]
commonPrefixes[cp] = true
continue
}
@ -150,8 +154,8 @@ func (client *testS3Client) ListObjectsV2(operationContext context.Context, para
break
}
data := m.objects[k]
mtime := m.mtimes[k]
contents = append(contents, types.Object{
Key: aws.String(k),
Size: aws.Int64(int64(len(data))),
@ -160,6 +164,7 @@ func (client *testS3Client) ListObjectsV2(operationContext context.Context, para
}
var cpSlice []types.CommonPrefix
// Sort common prefixes for deterministic output
var cpKeys []string
for cp := range commonPrefixes {
cpKeys = append(cpKeys, cp)
@ -169,248 +174,268 @@ func (client *testS3Client) ListObjectsV2(operationContext context.Context, para
cpSlice = append(cpSlice, types.CommonPrefix{Prefix: aws.String(cp)})
}
return &s3.ListObjectsV2Output{
Contents: contents,
CommonPrefixes: cpSlice,
IsTruncated: aws.Bool(false),
}, nil
}
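The delimiter handling in the mock's ListObjectsV2 is the core of S3-style directory emulation: any key containing the delimiter after the prefix collapses into a single common prefix. A self-contained sketch of that grouping logic (a simplified illustration, not the mock itself):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// listKeys groups keys the way ListObjectsV2 does: a key with the
// delimiter after the prefix becomes a common prefix ("directory");
// everything else is listed as an object.
func listKeys(keys []string, prefix, delimiter string) (contents, commonPrefixes []string) {
	seen := map[string]bool{}
	sort.Strings(keys)
	for _, k := range keys {
		if !strings.HasPrefix(k, prefix) {
			continue
		}
		rest := strings.TrimPrefix(k, prefix)
		if delimiter != "" {
			if idx := strings.Index(rest, delimiter); idx >= 0 {
				cp := prefix + rest[:idx+len(delimiter)]
				if !seen[cp] {
					seen[cp] = true
					commonPrefixes = append(commonPrefixes, cp)
				}
				continue
			}
		}
		contents = append(contents, k)
	}
	return contents, commonPrefixes
}

func main() {
	keys := []string{"dir/file1.txt", "dir/file2.txt", "dir/sub/file3.txt", "other.txt"}
	c, cp := listKeys(keys, "dir/", "/")
	fmt.Println(c)  // [dir/file1.txt dir/file2.txt]
	fmt.Println(cp) // [dir/sub/]
}
```

Sorting first gives the deterministic ordering the mock also takes care to produce for its common prefixes.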
func (m *mockS3) CopyObject(_ context.Context, params *s3.CopyObjectInput, _ ...func(*s3.Options)) (*s3.CopyObjectOutput, error) {
m.mu.Lock()
defer m.mu.Unlock()
// CopySource is "bucket/key"
source := aws.ToString(params.CopySource)
parts := strings.SplitN(source, "/", 2)
if len(parts) != 2 {
return nil, fmt.Errorf("invalid CopySource: %s", source)
}
srcKey := parts[1]
data, ok := m.objects[srcKey]
if !ok {
return nil, fmt.Errorf("NoSuchKey: source key %q not found", srcKey)
}
destKey := aws.ToString(params.Key)
m.objects[destKey] = append([]byte{}, data...)
m.mtimes[destKey] = time.Now()
return &s3.CopyObjectOutput{}, nil
}
// --- Helper ---
func newTestMedium(t *testing.T) (*Medium, *mockS3) {
t.Helper()
mock := newMockS3()
m, err := New("test-bucket", withAPI(mock))
require.NoError(t, err)
return m, mock
}
// --- Tests ---
func TestNew_Good(t *testing.T) {
mock := newMockS3()
m, err := New("my-bucket", withAPI(mock))
require.NoError(t, err)
assert.Equal(t, "my-bucket", m.bucket)
assert.Equal(t, "", m.prefix)
}
func TestNew_Bad_NoBucket(t *testing.T) {
_, err := New("")
assert.Error(t, err)
assert.Contains(t, err.Error(), "bucket name is required")
}
func TestNew_Bad_NoClient(t *testing.T) {
_, err := New("bucket")
assert.Error(t, err)
assert.Contains(t, err.Error(), "S3 client is required")
}
func TestWithPrefix_Good(t *testing.T) {
mock := newMockS3()
m, err := New("bucket", withAPI(mock), WithPrefix("data/"))
require.NoError(t, err)
assert.Equal(t, "data/", m.prefix)
// Prefix without trailing slash gets one added
m2, err := New("bucket", withAPI(mock), WithPrefix("data"))
require.NoError(t, err)
assert.Equal(t, "data/", m2.prefix)
}
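The WithPrefix test above asserts that "data" and "data/" normalize to the same stored prefix. A sketch of the implied normalization rule (inferred from the test's assertions; the real option may differ in detail):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizePrefix: a non-empty prefix always gains a trailing slash,
// so prefixed keys never need ad-hoc separator handling later.
func normalizePrefix(p string) string {
	if p != "" && !strings.HasSuffix(p, "/") {
		p += "/"
	}
	return p
}

func main() {
	fmt.Println(normalizePrefix("data"))  // data/
	fmt.Println(normalizePrefix("data/")) // data/
	fmt.Println(normalizePrefix("") == "") // empty stays empty
}
```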
func TestReadWrite_Good(t *testing.T) {
m, _ := newTestMedium(t)
err := m.Write("hello.txt", "world")
require.NoError(t, err)
content, err := m.Read("hello.txt")
require.NoError(t, err)
assert.Equal(t, "world", content)
}
func TestReadWrite_Bad_NotFound(t *testing.T) {
m, _ := newTestMedium(t)
_, err := m.Read("nonexistent.txt")
assert.Error(t, err)
}
func TestReadWrite_Bad_EmptyPath(t *testing.T) {
m, _ := newTestMedium(t)
_, err := m.Read("")
assert.Error(t, err)
err = m.Write("", "content")
assert.Error(t, err)
}
func TestReadWrite_Good_WithPrefix(t *testing.T) {
mock := newMockS3()
m, err := New("bucket", withAPI(mock), WithPrefix("pfx"))
require.NoError(t, err)
err = m.Write("file.txt", "data")
require.NoError(t, err)
// Verify the key has the prefix
_, ok := mock.objects["pfx/file.txt"]
assert.True(t, ok, "object should be stored with prefix")
content, err := m.Read("file.txt")
require.NoError(t, err)
assert.Equal(t, "data", content)
}
func TestEnsureDir_Good(t *testing.T) {
m, _ := newTestMedium(t)
// EnsureDir is a no-op for S3
err := m.EnsureDir("any/path")
assert.NoError(t, err)
}
func TestIsFile_Good(t *testing.T) {
m, _ := newTestMedium(t)
err := m.Write("file.txt", "content")
require.NoError(t, err)
assert.True(t, m.IsFile("file.txt"))
assert.False(t, m.IsFile("nonexistent.txt"))
assert.False(t, m.IsFile(""))
}
func TestFileGetFileSet_Good(t *testing.T) {
m, _ := newTestMedium(t)
err := m.FileSet("key.txt", "value")
require.NoError(t, err)
val, err := m.FileGet("key.txt")
require.NoError(t, err)
assert.Equal(t, "value", val)
}
func TestDelete_Good(t *testing.T) {
m, _ := newTestMedium(t)
err := m.Write("to-delete.txt", "content")
require.NoError(t, err)
assert.True(t, m.Exists("to-delete.txt"))
err = m.Delete("to-delete.txt")
require.NoError(t, err)
assert.False(t, m.IsFile("to-delete.txt"))
}
func TestDelete_Bad_EmptyPath(t *testing.T) {
m, _ := newTestMedium(t)
err := m.Delete("")
assert.Error(t, err)
}
func TestDeleteAll_Good(t *testing.T) {
m, _ := newTestMedium(t)
// Create nested structure
require.NoError(t, m.Write("dir/file1.txt", "a"))
require.NoError(t, m.Write("dir/sub/file2.txt", "b"))
require.NoError(t, m.Write("other.txt", "c"))
err := m.DeleteAll("dir")
require.NoError(t, err)
assert.False(t, m.IsFile("dir/file1.txt"))
assert.False(t, m.IsFile("dir/sub/file2.txt"))
assert.True(t, m.IsFile("other.txt"))
}
func TestDeleteAll_Bad_EmptyPath(t *testing.T) {
m, _ := newTestMedium(t)
err := m.DeleteAll("")
assert.Error(t, err)
}
func TestDeleteAll_Bad_DeleteObjectError(t *testing.T) {
m, mock := newTestMedium(t)
mock.deleteObjectErrors["dir"] = errors.New("boom")
err := m.DeleteAll("dir")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to delete object: dir")
}
func TestDeleteAll_Bad_PartialDelete(t *testing.T) {
m, mock := newTestMedium(t)
require.NoError(t, m.Write("dir/file1.txt", "a"))
require.NoError(t, m.Write("dir/file2.txt", "b"))
mock.deleteObjectsErrs["dir/file2.txt"] = types.Error{
Key: aws.String("dir/file2.txt"),
Code: aws.String("AccessDenied"),
Message: aws.String("blocked"),
}
err := m.DeleteAll("dir")
require.Error(t, err)
assert.Contains(t, err.Error(), "partial delete failed")
assert.Contains(t, err.Error(), "dir/file2.txt")
assert.True(t, m.IsFile("dir/file2.txt"))
assert.False(t, m.IsFile("dir/file1.txt"))
}
func TestRename_Good(t *testing.T) {
m, _ := newTestMedium(t)
require.NoError(t, m.Write("old.txt", "content"))
assert.True(t, m.IsFile("old.txt"))
err := m.Rename("old.txt", "new.txt")
require.NoError(t, err)
assert.False(t, m.IsFile("old.txt"))
assert.True(t, m.IsFile("new.txt"))
content, err := m.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "content", content)
}
func TestRename_Bad_EmptyPath(t *testing.T) {
m, _ := newTestMedium(t)
err := m.Rename("", "new.txt")
assert.Error(t, err)
err = m.Rename("old.txt", "")
assert.Error(t, err)
}
func TestRename_Bad_SourceNotFound(t *testing.T) {
m, _ := newTestMedium(t)
err := m.Rename("nonexistent.txt", "new.txt")
assert.Error(t, err)
}
func TestList_Good(t *testing.T) {
m, _ := newTestMedium(t)
require.NoError(t, m.Write("dir/file1.txt", "a"))
require.NoError(t, m.Write("dir/file2.txt", "b"))
require.NoError(t, m.Write("dir/sub/file3.txt", "c"))
entries, err := m.List("dir")
require.NoError(t, err)
names := make(map[string]bool)
for _, e := range entries {
names[e.Name()] = true
}
assert.True(t, names["file1.txt"], "should list file1.txt")
@ -418,142 +443,143 @@ func TestS3_List_Good(t *testing.T) {
assert.True(t, names["sub"], "should list sub directory")
assert.Len(t, entries, 3)
// Check that sub is a directory
for _, e := range entries {
if e.Name() == "sub" {
assert.True(t, e.IsDir())
info, err := e.Info()
require.NoError(t, err)
assert.True(t, info.IsDir())
}
}
}
func TestList_Good_Root(t *testing.T) {
m, _ := newTestMedium(t)
require.NoError(t, m.Write("root.txt", "content"))
require.NoError(t, m.Write("dir/nested.txt", "nested"))
entries, err := m.List("")
require.NoError(t, err)
names := make(map[string]bool)
for _, e := range entries {
names[e.Name()] = true
}
assert.True(t, names["root.txt"])
assert.True(t, names["dir"])
}
func TestStat_Good(t *testing.T) {
m, _ := newTestMedium(t)
require.NoError(t, m.Write("file.txt", "hello world"))
info, err := m.Stat("file.txt")
require.NoError(t, err)
assert.Equal(t, "file.txt", info.Name())
assert.Equal(t, int64(11), info.Size())
assert.False(t, info.IsDir())
}
func TestStat_Bad_NotFound(t *testing.T) {
m, _ := newTestMedium(t)
_, err := m.Stat("nonexistent.txt")
assert.Error(t, err)
}
func TestStat_Bad_EmptyPath(t *testing.T) {
m, _ := newTestMedium(t)
_, err := m.Stat("")
assert.Error(t, err)
}
func TestOpen_Good(t *testing.T) {
m, _ := newTestMedium(t)
require.NoError(t, m.Write("file.txt", "open me"))
f, err := m.Open("file.txt")
require.NoError(t, err)
defer f.Close()
data, err := goio.ReadAll(f.(goio.Reader))
require.NoError(t, err)
assert.Equal(t, "open me", string(data))
stat, err := f.Stat()
require.NoError(t, err)
assert.Equal(t, "file.txt", stat.Name())
}
func TestOpen_Bad_NotFound(t *testing.T) {
m, _ := newTestMedium(t)
_, err := m.Open("nonexistent.txt")
assert.Error(t, err)
}
func TestCreate_Good(t *testing.T) {
m, _ := newTestMedium(t)
w, err := m.Create("new.txt")
require.NoError(t, err)
n, err := w.Write([]byte("created"))
require.NoError(t, err)
assert.Equal(t, 7, n)
err = w.Close()
require.NoError(t, err)
content, err := m.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "created", content)
}
func TestAppend_Good(t *testing.T) {
m, _ := newTestMedium(t)
require.NoError(t, m.Write("append.txt", "hello"))
w, err := m.Append("append.txt")
require.NoError(t, err)
_, err = w.Write([]byte(" world"))
require.NoError(t, err)
err = w.Close()
require.NoError(t, err)
content, err := m.Read("append.txt")
require.NoError(t, err)
assert.Equal(t, "hello world", content)
}
func TestAppend_Good_NewFile(t *testing.T) {
m, _ := newTestMedium(t)
w, err := m.Append("new.txt")
require.NoError(t, err)
_, err = w.Write([]byte("fresh"))
require.NoError(t, err)
err = w.Close()
require.NoError(t, err)
content, err := m.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "fresh", content)
}
func TestReadStream_Good(t *testing.T) {
m, _ := newTestMedium(t)
require.NoError(t, m.Write("stream.txt", "streaming content"))
reader, err := m.ReadStream("stream.txt")
require.NoError(t, err)
defer reader.Close()
@ -562,81 +588,89 @@ func TestS3_ReadStream_Good(t *testing.T) {
assert.Equal(t, "streaming content", string(data))
}
func TestReadStream_Bad_NotFound(t *testing.T) {
m, _ := newTestMedium(t)
_, err := m.ReadStream("nonexistent.txt")
assert.Error(t, err)
}
func TestWriteStream_Good(t *testing.T) {
m, _ := newTestMedium(t)
writer, err := m.WriteStream("output.txt")
require.NoError(t, err)
_, err = goio.Copy(writer, strings.NewReader("piped data"))
require.NoError(t, err)
err = writer.Close()
require.NoError(t, err)
content, err := m.Read("output.txt")
require.NoError(t, err)
assert.Equal(t, "piped data", content)
}
func TestExists_Good(t *testing.T) {
m, _ := newTestMedium(t)
assert.False(t, m.Exists("nonexistent.txt"))
require.NoError(t, m.Write("file.txt", "content"))
assert.True(t, m.Exists("file.txt"))
}
func TestExists_Good_DirectoryPrefix(t *testing.T) {
m, _ := newTestMedium(t)
require.NoError(t, m.Write("dir/file.txt", "content"))
// "dir" should exist as a directory prefix
assert.True(t, m.Exists("dir"))
}
func TestIsDir_Good(t *testing.T) {
m, _ := newTestMedium(t)
require.NoError(t, m.Write("dir/file.txt", "content"))
assert.True(t, m.IsDir("dir"))
assert.False(t, m.IsDir("dir/file.txt"))
assert.False(t, m.IsDir("nonexistent"))
assert.False(t, m.IsDir(""))
}
func TestKey_Good(t *testing.T) {
mock := newMockS3()
// No prefix
m, _ := New("bucket", withAPI(mock))
assert.Equal(t, "file.txt", m.key("file.txt"))
assert.Equal(t, "dir/file.txt", m.key("dir/file.txt"))
assert.Equal(t, "", m.key(""))
assert.Equal(t, "file.txt", m.key("/file.txt"))
assert.Equal(t, "file.txt", m.key("../file.txt"))
// With prefix
m2, _ := New("bucket", withAPI(mock), WithPrefix("pfx"))
assert.Equal(t, "pfx/file.txt", m2.key("file.txt"))
assert.Equal(t, "pfx/dir/file.txt", m2.key("dir/file.txt"))
assert.Equal(t, "pfx/", m2.key(""))
}
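The key-mapping test above pins down the sanitization behaviour: leading slashes and parent references are stripped, and the configured prefix is joined on. A hypothetical cleanKey consistent with those assertions (an illustration inferred from the test, not the package's actual key method):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// cleanKey resolves ".." segments against a virtual root, strips the
// leading slash, then joins the prefix (already slash-terminated).
func cleanKey(prefix, name string) string {
	name = path.Clean("/" + name) // "/../file.txt" -> "/file.txt"
	name = strings.TrimPrefix(name, "/")
	return prefix + name
}

func main() {
	fmt.Println(cleanKey("", "../file.txt"))      // file.txt
	fmt.Println(cleanKey("pfx/", "dir/file.txt")) // pfx/dir/file.txt
	fmt.Println(cleanKey("pfx/", ""))             // pfx/
}
```

Anchoring at "/" before path.Clean is what makes "../file.txt" safe: the parent reference cannot escape the bucket prefix.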
// Ugly: verify the Medium interface is satisfied at compile time.
func TestInterfaceCompliance_Ugly(t *testing.T) {
mock := newMockS3()
m, err := New("bucket", withAPI(mock))
require.NoError(t, err)
// Verify all methods exist by calling them in a way that
// proves compile-time satisfaction of the interface.
var _ interface {
Read(string) (string, error)
Write(string, string) error
EnsureDir(string) error
IsFile(string) bool
FileGet(string) (string, error)
FileSet(string, string) error
Delete(string) error
DeleteAll(string) error
Rename(string, string) error
@ -649,5 +683,5 @@ func TestS3_InterfaceCompliance_Good(t *testing.T) {
WriteStream(string) (goio.WriteCloser, error)
Exists(string) bool
IsDir(string) bool
} = m
}


@ -1,78 +1,107 @@
// This file implements the Pre-Obfuscation Layer Protocol with
// XChaCha20-Poly1305 encryption. The protocol applies a reversible transformation
// to plaintext BEFORE it reaches CPU encryption routines, providing defense-in-depth
// against side-channel attacks.
//
// The encryption flow is:
//
// plaintext -> obfuscate(nonce) -> encrypt -> [nonce || ciphertext || tag]
//
// The decryption flow is:
//
// [nonce || ciphertext || tag] -> decrypt -> deobfuscate(nonce) -> plaintext
package sigil
import (
"crypto/rand"
"crypto/sha256"
"encoding/binary"
"errors"
"io"
"golang.org/x/crypto/chacha20poly1305"
)
var (
// ErrInvalidKey is returned when the encryption key is invalid.
ErrInvalidKey = errors.New("sigil: invalid key size, must be 32 bytes")
// ErrCiphertextTooShort is returned when the ciphertext is too short to decrypt.
ErrCiphertextTooShort = errors.New("sigil: ciphertext too short")
// ErrDecryptionFailed is returned when decryption or authentication fails.
ErrDecryptionFailed = errors.New("sigil: decryption failed")
// ErrNoKeyConfigured is returned when no encryption key has been set.
ErrNoKeyConfigured = errors.New("sigil: no encryption key configured")
)
// PreObfuscator applies a reversible transformation to data before encryption.
// This ensures that raw plaintext patterns are never sent directly to CPU
// encryption routines, providing defense against side-channel attacks.
//
// Implementations must be deterministic: given the same entropy, the
// transformation must be perfectly reversible, i.e. Deobfuscate(Obfuscate(x, e), e) == x.
type PreObfuscator interface {
// Obfuscate transforms plaintext before encryption using the provided entropy.
// The entropy is typically the encryption nonce, ensuring the transformation
// is unique per-encryption without additional random generation.
Obfuscate(data []byte, entropy []byte) []byte
// Deobfuscate reverses the transformation after decryption.
// Must be called with the same entropy used during Obfuscate.
Deobfuscate(data []byte, entropy []byte) []byte
}
// XORObfuscator performs XOR-based obfuscation using an entropy-derived key stream.
//
// The key stream is generated using SHA-256 in counter mode:
//
// keyStream[i*32:(i+1)*32] = SHA256(entropy || BigEndian64(i))
//
// This provides a cryptographically uniform key stream that decorrelates
// plaintext patterns from the data seen by the encryption routine.
// XOR is symmetric, so obfuscation and deobfuscation use the same operation.
type XORObfuscator struct{}
// Obfuscate XORs the data with a key stream derived from the entropy.
func (x *XORObfuscator) Obfuscate(data []byte, entropy []byte) []byte {
if len(data) == 0 {
return data
}
return x.transform(data, entropy)
}
// Deobfuscate reverses the XOR transformation (XOR is symmetric).
func (x *XORObfuscator) Deobfuscate(data []byte, entropy []byte) []byte {
if len(data) == 0 {
return data
}
return x.transform(data, entropy)
}
// transform applies XOR with an entropy-derived key stream.
func (x *XORObfuscator) transform(data []byte, entropy []byte) []byte {
result := make([]byte, len(data))
keyStream := x.deriveKeyStream(entropy, len(data))
for i := range data {
result[i] = data[i] ^ keyStream[i]
}
return result
}
// deriveKeyStream creates a deterministic key stream from entropy.
func (x *XORObfuscator) deriveKeyStream(entropy []byte, length int) []byte {
stream := make([]byte, length)
h := sha256.New()
// Generate key stream in 32-byte blocks
blockNum := uint64(0)
offset := 0
for offset < length {
h.Reset()
h.Write(entropy)
var blockBytes [8]byte
binary.BigEndian.PutUint64(blockBytes[:], blockNum)
h.Write(blockBytes[:])
block := h.Sum(nil)
copyLen := min(len(block), length-offset)
copy(stream[offset:], block[:copyLen])
offset += copyLen
blockNum++
}
return stream
}
// Example: obfuscator := &sigil.ShuffleMaskObfuscator{}
// ShuffleMaskObfuscator provides stronger obfuscation through byte shuffling and masking.
//
// The obfuscation process:
// 1. Generate a mask from entropy using SHA-256 in counter mode
// 2. XOR the data with the mask
// 3. Generate a deterministic permutation using Fisher-Yates shuffle
// 4. Reorder bytes according to the permutation
//
// This provides both value transformation (XOR mask) and position transformation
// (shuffle), making pattern analysis more difficult than XOR alone.
type ShuffleMaskObfuscator struct{}
// Obfuscate shuffles bytes and applies a mask derived from entropy.
func (s *ShuffleMaskObfuscator) Obfuscate(data []byte, entropy []byte) []byte {
if len(data) == 0 {
return data
}
result := make([]byte, len(data))
copy(result, data)
// Generate permutation and mask from entropy
perm := s.generatePermutation(entropy, len(data))
mask := s.deriveMask(entropy, len(data))
// Apply mask first, then shuffle
for i := range result {
result[i] ^= mask[i]
}
// Shuffle using Fisher-Yates with deterministic seed
shuffled := make([]byte, len(data))
for i, p := range perm {
shuffled[i] = result[p]
}
return shuffled
}
// Deobfuscate reverses the shuffle and mask operations.
func (s *ShuffleMaskObfuscator) Deobfuscate(data []byte, entropy []byte) []byte {
if len(data) == 0 {
return data
}
result := make([]byte, len(data))
// Generate permutation and mask from entropy
perm := s.generatePermutation(entropy, len(data))
mask := s.deriveMask(entropy, len(data))
// Unshuffle first
for i, p := range perm {
result[p] = data[i]
}
// Remove mask
for i := range result {
result[i] ^= mask[i]
}
return result
}
// generatePermutation creates a deterministic permutation from entropy.
func (s *ShuffleMaskObfuscator) generatePermutation(entropy []byte, length int) []int {
perm := make([]int, length)
for i := range perm {
perm[i] = i
}
// Use entropy to seed a deterministic shuffle
h := sha256.New()
h.Write(entropy)
h.Write([]byte("permutation"))
seed := h.Sum(nil)
// Fisher-Yates shuffle with deterministic randomness
for i := length - 1; i > 0; i-- {
h.Reset()
h.Write(seed)
var iBytes [8]byte
binary.BigEndian.PutUint64(iBytes[:], uint64(i))
h.Write(iBytes[:])
jBytes := h.Sum(nil)
j := int(binary.BigEndian.Uint64(jBytes[:8]) % uint64(i+1))
perm[i], perm[j] = perm[j], perm[i]
}
return perm
}
// deriveMask creates a mask byte array from entropy.
func (s *ShuffleMaskObfuscator) deriveMask(entropy []byte, length int) []byte {
mask := make([]byte, length)
h := sha256.New()
blockNum := uint64(0)
offset := 0
for offset < length {
h.Reset()
h.Write(entropy)
h.Write([]byte("mask"))
var blockBytes [8]byte
binary.BigEndian.PutUint64(blockBytes[:], blockNum)
h.Write(blockBytes[:])
block := h.Sum(nil)
copyLen := min(len(block), length-offset)
copy(mask[offset:], block[:copyLen])
offset += copyLen
blockNum++
}
return mask
}
// Example: cipherSigil, _ := sigil.NewChaChaPolySigilWithObfuscator(
// Example: []byte("0123456789abcdef0123456789abcdef"),
// Example: &sigil.ShuffleMaskObfuscator{},
// Example: )
// ChaChaPolySigil is a Sigil that encrypts/decrypts data using ChaCha20-Poly1305.
// It applies pre-obfuscation before encryption to ensure raw plaintext never
// goes directly to CPU encryption routines.
//
// The output format is:
// [24-byte nonce][encrypted(obfuscated(plaintext))]
//
// Unlike demo implementations, the nonce is ONLY embedded in the ciphertext,
// not exposed separately in headers.
type ChaChaPolySigil struct {
Key []byte
Obfuscator PreObfuscator
randReader io.Reader // for testing injection
}
// Example: cipherSigil, _ := sigil.NewChaChaPolySigil([]byte("0123456789abcdef0123456789abcdef"))
// Example: ciphertext, _ := cipherSigil.In([]byte("payload"))
// Example: plaintext, _ := cipherSigil.Out(ciphertext)
// NewChaChaPolySigil creates a new encryption sigil with the given key.
// The key must be exactly 32 bytes.
func NewChaChaPolySigil(key []byte) (*ChaChaPolySigil, error) {
if len(key) != 32 {
return nil, ErrInvalidKey
}
keyCopy := make([]byte, 32)
copy(keyCopy, key)
return &ChaChaPolySigil{
Key: keyCopy,
Obfuscator: &XORObfuscator{},
randReader: rand.Reader,
}, nil
}
// NewChaChaPolySigilWithObfuscator creates a new encryption sigil with custom obfuscator.
func NewChaChaPolySigilWithObfuscator(key []byte, obfuscator PreObfuscator) (*ChaChaPolySigil, error) {
sigil, err := NewChaChaPolySigil(key)
if err != nil {
return nil, err
}
if obfuscator != nil {
sigil.Obfuscator = obfuscator
}
return sigil, nil
}
// In encrypts the data with pre-obfuscation.
// The flow is: plaintext -> obfuscate -> encrypt
func (s *ChaChaPolySigil) In(data []byte) ([]byte, error) {
if s.Key == nil {
return nil, ErrNoKeyConfigured
}
if data == nil {
return nil, nil
}
aead, err := chacha20poly1305.NewX(s.Key)
if err != nil {
return nil, err
}
// Generate nonce
nonce := make([]byte, aead.NonceSize())
reader := s.randReader
if reader == nil {
reader = rand.Reader
}
if _, err := io.ReadFull(reader, nonce); err != nil {
return nil, err
}
// Pre-obfuscate the plaintext using nonce as entropy
// This ensures CPU encryption routines never see raw plaintext
obfuscated := data
if s.Obfuscator != nil {
obfuscated = s.Obfuscator.Obfuscate(data, nonce)
}
// Encrypt the obfuscated data
// Output: [nonce | ciphertext | auth tag]
ciphertext := aead.Seal(nonce, nonce, obfuscated, nil)
return ciphertext, nil
}
// Out decrypts the data and reverses obfuscation.
// The flow is: decrypt -> deobfuscate -> plaintext
func (s *ChaChaPolySigil) Out(data []byte) ([]byte, error) {
if s.Key == nil {
return nil, ErrNoKeyConfigured
}
if data == nil {
return nil, nil
}
aead, err := chacha20poly1305.NewX(s.Key)
if err != nil {
return nil, err
}
minLen := aead.NonceSize() + aead.Overhead()
if len(data) < minLen {
return nil, ErrCiphertextTooShort
}
// Extract nonce from ciphertext
nonce := data[:aead.NonceSize()]
ciphertext := data[aead.NonceSize():]
// Decrypt
obfuscated, err := aead.Open(nil, nonce, ciphertext, nil)
if err != nil {
return nil, ErrDecryptionFailed
}
// Deobfuscate using the same nonce as entropy
plaintext := obfuscated
if s.Obfuscator != nil {
plaintext = s.Obfuscator.Deobfuscate(obfuscated, nonce)
}
if len(plaintext) == 0 {
return []byte{}, nil
}
return plaintext, nil
}
// Example: nonce, _ := sigil.GetNonceFromCiphertext(ciphertext)
// GetNonceFromCiphertext extracts the nonce from encrypted output.
// This is provided for debugging/logging purposes only.
// The nonce should NOT be stored separately in headers.
func GetNonceFromCiphertext(ciphertext []byte) ([]byte, error) {
nonceSize := chacha20poly1305.NonceSizeX
if len(ciphertext) < nonceSize {
return nil, ErrCiphertextTooShort
}
nonceCopy := make([]byte, nonceSize)
copy(nonceCopy, ciphertext[:nonceSize])
return nonceCopy, nil
}

View file

package sigil
import (
"bytes"
"crypto/rand"
"errors"
"io"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// ── XORObfuscator ──────────────────────────────────────────────────
func TestXORObfuscator_Good_RoundTrip(t *testing.T) {
ob := &XORObfuscator{}
data := []byte("the axioms are in the weights")
entropy := []byte("deterministic-nonce-24bytes!")
@@ -24,7 +26,7 @@ func TestCryptoSigil_XORObfuscator_RoundTrip_Good(t *testing.T) {
assert.Equal(t, data, restored)
}
func TestXORObfuscator_Good_DifferentEntropyDifferentOutput(t *testing.T) {
ob := &XORObfuscator{}
data := []byte("same plaintext")
@@ -33,7 +35,7 @@ func TestCryptoSigil_XORObfuscator_DifferentEntropyDifferentOutput_Good(t *testi
assert.NotEqual(t, out1, out2)
}
func TestXORObfuscator_Good_Deterministic(t *testing.T) {
ob := &XORObfuscator{}
data := []byte("reproducible")
entropy := []byte("fixed-seed")
@@ -43,8 +45,9 @@ func TestCryptoSigil_XORObfuscator_Deterministic_Good(t *testing.T) {
assert.Equal(t, out1, out2)
}
func TestXORObfuscator_Good_LargeData(t *testing.T) {
ob := &XORObfuscator{}
// Larger than one SHA-256 block (32 bytes) to test multi-block key stream.
data := make([]byte, 256)
for i := range data {
data[i] = byte(i)
@@ -56,7 +59,7 @@ func TestCryptoSigil_XORObfuscator_LargeData_Good(t *testing.T) {
assert.Equal(t, data, restored)
}
func TestXORObfuscator_Good_EmptyData(t *testing.T) {
ob := &XORObfuscator{}
result := ob.Obfuscate([]byte{}, []byte("entropy"))
assert.Equal(t, []byte{}, result)
@@ -65,16 +68,19 @@ func TestCryptoSigil_XORObfuscator_EmptyData_Good(t *testing.T) {
assert.Equal(t, []byte{}, result)
}
func TestXORObfuscator_Good_SymmetricProperty(t *testing.T) {
ob := &XORObfuscator{}
data := []byte("XOR is its own inverse")
entropy := []byte("nonce")
// XOR is symmetric: Obfuscate(Obfuscate(x)) == x
double := ob.Obfuscate(ob.Obfuscate(data, entropy), entropy)
assert.Equal(t, data, double)
}
// ── ShuffleMaskObfuscator ──────────────────────────────────────────
func TestShuffleMaskObfuscator_Good_RoundTrip(t *testing.T) {
ob := &ShuffleMaskObfuscator{}
data := []byte("shuffle and mask protect patterns")
entropy := []byte("deterministic-entropy")
@@ -87,7 +93,7 @@ func TestCryptoSigil_ShuffleMaskObfuscator_RoundTrip_Good(t *testing.T) {
assert.Equal(t, data, restored)
}
func TestShuffleMaskObfuscator_Good_DifferentEntropy(t *testing.T) {
ob := &ShuffleMaskObfuscator{}
data := []byte("same data")
@@ -96,7 +102,7 @@ func TestCryptoSigil_ShuffleMaskObfuscator_DifferentEntropy_Good(t *testing.T) {
assert.NotEqual(t, out1, out2)
}
func TestShuffleMaskObfuscator_Good_Deterministic(t *testing.T) {
ob := &ShuffleMaskObfuscator{}
data := []byte("reproducible shuffle")
entropy := []byte("fixed")
@@ -106,7 +112,7 @@ func TestCryptoSigil_ShuffleMaskObfuscator_Deterministic_Good(t *testing.T) {
assert.Equal(t, out1, out2)
}
func TestShuffleMaskObfuscator_Good_LargeData(t *testing.T) {
ob := &ShuffleMaskObfuscator{}
data := make([]byte, 512)
for i := range data {
@@ -119,7 +125,7 @@ func TestCryptoSigil_ShuffleMaskObfuscator_LargeData_Good(t *testing.T) {
assert.Equal(t, data, restored)
}
func TestShuffleMaskObfuscator_Good_EmptyData(t *testing.T) {
ob := &ShuffleMaskObfuscator{}
result := ob.Obfuscate([]byte{}, []byte("entropy"))
assert.Equal(t, []byte{}, result)
@@ -128,7 +134,7 @@ func TestCryptoSigil_ShuffleMaskObfuscator_EmptyData_Good(t *testing.T) {
assert.Equal(t, []byte{}, result)
}
func TestShuffleMaskObfuscator_Good_SingleByte(t *testing.T) {
ob := &ShuffleMaskObfuscator{}
data := []byte{0x42}
entropy := []byte("single")
@@ -138,282 +144,302 @@ func TestCryptoSigil_ShuffleMaskObfuscator_SingleByte_Good(t *testing.T) {
assert.Equal(t, data, restored)
}
// ── NewChaChaPolySigil ─────────────────────────────────────────────
func TestNewChaChaPolySigil_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
s, err := NewChaChaPolySigil(key)
require.NoError(t, err)
assert.NotNil(t, s)
assert.Equal(t, key, s.Key)
assert.NotNil(t, s.Obfuscator)
}
func TestNewChaChaPolySigil_Good_KeyIsCopied(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
original := make([]byte, 32)
copy(original, key)
s, err := NewChaChaPolySigil(key)
require.NoError(t, err)
// Mutating the original key should not affect the sigil.
key[0] ^= 0xFF
assert.Equal(t, original, s.Key)
}
func TestNewChaChaPolySigil_Bad_ShortKey(t *testing.T) {
_, err := NewChaChaPolySigil([]byte("too short"))
assert.ErrorIs(t, err, ErrInvalidKey)
}
func TestNewChaChaPolySigil_Bad_LongKey(t *testing.T) {
_, err := NewChaChaPolySigil(make([]byte, 64))
assert.ErrorIs(t, err, ErrInvalidKey)
}
func TestNewChaChaPolySigil_Bad_EmptyKey(t *testing.T) {
_, err := NewChaChaPolySigil(nil)
assert.ErrorIs(t, err, ErrInvalidKey)
}
// ── NewChaChaPolySigilWithObfuscator ───────────────────────────────
func TestNewChaChaPolySigilWithObfuscator_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
ob := &ShuffleMaskObfuscator{}
s, err := NewChaChaPolySigilWithObfuscator(key, ob)
require.NoError(t, err)
assert.Equal(t, ob, s.Obfuscator)
}
func TestNewChaChaPolySigilWithObfuscator_Good_NilObfuscator(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
s, err := NewChaChaPolySigilWithObfuscator(key, nil)
require.NoError(t, err)
// Falls back to default XORObfuscator.
assert.IsType(t, &XORObfuscator{}, s.Obfuscator)
}
func TestNewChaChaPolySigilWithObfuscator_Bad_InvalidKey(t *testing.T) {
_, err := NewChaChaPolySigilWithObfuscator([]byte("bad"), &XORObfuscator{})
assert.ErrorIs(t, err, ErrInvalidKey)
}
// ── ChaChaPolySigil In/Out (encrypt/decrypt) ───────────────────────
func TestChaChaPolySigil_Good_RoundTrip(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
s, err := NewChaChaPolySigil(key)
require.NoError(t, err)
plaintext := []byte("consciousness does not merely avoid causing harm")
ciphertext, err := s.In(plaintext)
require.NoError(t, err)
assert.NotEqual(t, plaintext, ciphertext)
assert.Greater(t, len(ciphertext), len(plaintext)) // nonce + tag overhead
decrypted, err := s.Out(ciphertext)
require.NoError(t, err)
assert.Equal(t, plaintext, decrypted)
}
func TestChaChaPolySigil_Good_WithShuffleMask(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
s, err := NewChaChaPolySigilWithObfuscator(key, &ShuffleMaskObfuscator{})
require.NoError(t, err)
plaintext := []byte("shuffle mask pre-obfuscation layer")
ciphertext, err := s.In(plaintext)
require.NoError(t, err)
decrypted, err := s.Out(ciphertext)
require.NoError(t, err)
assert.Equal(t, plaintext, decrypted)
}
func TestChaChaPolySigil_Good_NilData(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
s, err := NewChaChaPolySigil(key)
require.NoError(t, err)
enc, err := s.In(nil)
require.NoError(t, err)
assert.Nil(t, enc)
dec, err := s.Out(nil)
require.NoError(t, err)
assert.Nil(t, dec)
}
func TestChaChaPolySigil_Good_EmptyPlaintext(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
s, err := NewChaChaPolySigil(key)
require.NoError(t, err)
ciphertext, err := s.In([]byte{})
require.NoError(t, err)
assert.NotEmpty(t, ciphertext) // Has nonce + tag even for empty plaintext.
decrypted, err := s.Out(ciphertext)
require.NoError(t, err)
assert.Equal(t, []byte{}, decrypted)
}
func TestChaChaPolySigil_Good_DifferentCiphertextsPerCall(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
s, err := NewChaChaPolySigil(key)
require.NoError(t, err)
plaintext := []byte("same input")
ct1, _ := s.In(plaintext)
ct2, _ := s.In(plaintext)
// Different nonces → different ciphertexts.
assert.NotEqual(t, ct1, ct2)
}
func TestChaChaPolySigil_Bad_NoKey(t *testing.T) {
s := &ChaChaPolySigil{}
_, err := s.In([]byte("data"))
assert.ErrorIs(t, err, ErrNoKeyConfigured)
_, err = s.Out([]byte("data"))
assert.ErrorIs(t, err, ErrNoKeyConfigured)
}
func TestChaChaPolySigil_Bad_WrongKey(t *testing.T) {
key1 := make([]byte, 32)
key2 := make([]byte, 32)
_, _ = rand.Read(key1)
_, _ = rand.Read(key2)
s1, _ := NewChaChaPolySigil(key1)
s2, _ := NewChaChaPolySigil(key2)
ciphertext, err := s1.In([]byte("secret"))
require.NoError(t, err)
_, err = s2.Out(ciphertext)
assert.ErrorIs(t, err, ErrDecryptionFailed)
}
func TestChaChaPolySigil_Bad_TruncatedCiphertext(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
s, _ := NewChaChaPolySigil(key)
_, err := s.Out([]byte("too short"))
assert.ErrorIs(t, err, ErrCiphertextTooShort)
}
func TestChaChaPolySigil_Bad_TamperedCiphertext(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
s, _ := NewChaChaPolySigil(key)
ciphertext, _ := s.In([]byte("authentic data"))
// Flip a bit in the ciphertext body (after nonce).
ciphertext[30] ^= 0xFF
_, err := s.Out(ciphertext)
assert.ErrorIs(t, err, ErrDecryptionFailed)
}
// failReader returns an error on read — for testing nonce generation failure.
type failReader struct{}
func (f *failReader) Read([]byte) (int, error) {
return 0, errors.New("entropy source failed")
}
func TestChaChaPolySigil_Bad_RandReaderFailure(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
s, _ := NewChaChaPolySigil(key)
s.randReader = &failReader{}
_, err := s.In([]byte("data"))
assert.Error(t, err)
}
// ── ChaChaPolySigil without obfuscator ─────────────────────────────
func TestChaChaPolySigil_Good_NoObfuscator(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
s, _ := NewChaChaPolySigil(key)
s.Obfuscator = nil // Disable pre-obfuscation.
plaintext := []byte("raw encryption without pre-obfuscation")
ciphertext, err := s.In(plaintext)
require.NoError(t, err)
decrypted, err := s.Out(ciphertext)
require.NoError(t, err)
assert.Equal(t, plaintext, decrypted)
}
// ── GetNonceFromCiphertext ─────────────────────────────────────────
func TestGetNonceFromCiphertext_Good(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
s, _ := NewChaChaPolySigil(key)
ciphertext, _ := s.In([]byte("nonce extraction test"))
nonce, err := GetNonceFromCiphertext(ciphertext)
require.NoError(t, err)
assert.Len(t, nonce, 24) // XChaCha20 nonce is 24 bytes.
// Nonce should match the prefix of the ciphertext.
assert.Equal(t, ciphertext[:24], nonce)
}
func TestGetNonceFromCiphertext_Good_NonceCopied(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
s, _ := NewChaChaPolySigil(key)
ciphertext, _ := s.In([]byte("data"))
nonce, _ := GetNonceFromCiphertext(ciphertext)
original := make([]byte, len(nonce))
copy(original, nonce)
// Mutating the nonce should not affect the ciphertext.
nonce[0] ^= 0xFF
assert.Equal(t, original, ciphertext[:24])
}
func TestGetNonceFromCiphertext_Bad_TooShort(t *testing.T) {
_, err := GetNonceFromCiphertext([]byte("short"))
assert.ErrorIs(t, err, ErrCiphertextTooShort)
}
func TestGetNonceFromCiphertext_Bad_Empty(t *testing.T) {
_, err := GetNonceFromCiphertext(nil)
assert.ErrorIs(t, err, ErrCiphertextTooShort)
}
// ── ChaChaPolySigil in Transmute pipeline ──────────────────────────
func TestChaChaPolySigil_Good_InTransmutePipeline(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
s, _ := NewChaChaPolySigil(key)
hexSigil, _ := NewSigil("hex")
chain := []Sigil{s, hexSigil}
plaintext := []byte("encrypt then hex encode")
encoded, err := Transmute(plaintext, chain)
require.NoError(t, err)
// Result should be hex-encoded ciphertext.
assert.True(t, isHex(encoded))
decoded, err := Untransmute(encoded, chain)
@@ -430,35 +456,43 @@ func isHex(data []byte) bool {
return len(data) > 0
}
// ── Transmute error propagation ────────────────────────────────────
type failSigil struct{}
func (f *failSigil) In([]byte) ([]byte, error) { return nil, errors.New("fail in") }
func (f *failSigil) Out([]byte) ([]byte, error) { return nil, errors.New("fail out") }
func TestTransmute_Bad_ErrorPropagation(t *testing.T) {
_, err := Transmute([]byte("data"), []Sigil{&failSigil{}})
assert.Error(t, err)
assert.Contains(t, err.Error(), "fail in")
}
func TestUntransmute_Bad_ErrorPropagation(t *testing.T) {
_, err := Untransmute([]byte("data"), []Sigil{&failSigil{}})
assert.Error(t, err)
assert.Contains(t, err.Error(), "fail out")
}
// ── GzipSigil with custom writer (edge case) ──────────────────────
func TestGzipSigil_Good_CustomWriter(t *testing.T) {
var buf bytes.Buffer
s := &GzipSigil{writer: &buf}
// With custom writer, compressed data goes to buf, returned bytes will be empty
// because the internal buffer 'b' is unused when s.writer is set.
_, err := s.In([]byte("test data"))
require.NoError(t, err)
assert.Greater(t, buf.Len(), 0)
}
// ── deriveKeyStream edge: exactly 32 bytes ─────────────────────────
func TestDeriveKeyStream_Good_ExactBlockSize(t *testing.T) {
ob := &XORObfuscator{}
data := make([]byte, 32) // Exactly one SHA-256 block.
for i := range data {
data[i] = byte(i)
}
@@ -469,21 +503,24 @@ func TestCryptoSigil_DeriveKeyStream_ExactBlockSize_Good(t *testing.T) {
assert.Equal(t, data, restored)
}
// ── io.Reader fallback in In ───────────────────────────────────────
func TestChaChaPolySigil_Good_NilRandReader(t *testing.T) {
key := make([]byte, 32)
_, _ = rand.Read(key)
s, _ := NewChaChaPolySigil(key)
s.randReader = nil // Should fall back to crypto/rand.Reader.
ciphertext, err := s.In([]byte("fallback reader"))
require.NoError(t, err)
decrypted, err := s.Out(ciphertext)
require.NoError(t, err)
assert.Equal(t, []byte("fallback reader"), decrypted)
}
// limitReader returns exactly N bytes then EOF — for deterministic tests.
type limitReader struct {
data []byte
pos int
}
func (l *limitReader) Read(p []byte) (int, error) {
if l.pos >= len(l.data) {
return 0, goio.EOF
return 0, io.EOF
}
bytesCopied := copy(p, l.data[l.pos:])
l.pos += bytesCopied
return bytesCopied, nil
n := copy(p, l.data[l.pos:])
l.pos += n
return n, nil
}


@@ -1,39 +1,70 @@
// Example: hexSigil, _ := sigil.NewSigil("hex")
// Example: gzipSigil, _ := sigil.NewSigil("gzip")
// Example: encoded, _ := sigil.Transmute([]byte("payload"), []sigil.Sigil{hexSigil, gzipSigil})
// Example: decoded, _ := sigil.Untransmute(encoded, []sigil.Sigil{hexSigil, gzipSigil})
// Package sigil provides the Sigil transformation framework for composable,
// reversible data transformations.
//
// Sigils are the core abstraction: each sigil implements a specific transformation
// (encoding, compression, hashing, encryption) with a uniform interface. Sigils can
// be chained together to create transformation pipelines.
//
// Example usage:
//
// hexSigil, _ := sigil.NewSigil("hex")
// base64Sigil, _ := sigil.NewSigil("base64")
// result, _ := sigil.Transmute(data, []sigil.Sigil{hexSigil, base64Sigil})
package sigil
import core "dappco.re/go/core"
// Example: var transform sigil.Sigil = &sigil.HexSigil{}
// Sigil defines the interface for a data transformer.
//
// A Sigil represents a single transformation unit that can be applied to byte data.
// Sigils may be reversible (encoding, compression, encryption) or irreversible (hashing).
//
// For reversible sigils: Out(In(x)) == x for all valid x
// For irreversible sigils: Out returns the input unchanged
// For symmetric sigils: In(x) == Out(x)
//
// Implementations must handle nil input by returning nil without error,
// and empty input by returning an empty slice without error.
type Sigil interface {
// Example: encoded, _ := hexSigil.In([]byte("payload"))
// In applies the forward transformation to the data.
// For encoding sigils, this encodes the data.
// For compression sigils, this compresses the data.
// For hash sigils, this computes the digest.
In(data []byte) ([]byte, error)
// Example: decoded, _ := hexSigil.Out(encoded)
// Out applies the reverse transformation to the data.
// For reversible sigils, this recovers the original data.
// For irreversible sigils (e.g., hashing), this returns the input unchanged.
Out(data []byte) ([]byte, error)
}
// Example: encoded, _ := sigil.Transmute([]byte("payload"), []sigil.Sigil{hexSigil, gzipSigil})
// Transmute applies a series of sigils to data in sequence.
//
// Each sigil's In method is called in order, with the output of one sigil
// becoming the input of the next. If any sigil returns an error, Transmute
// stops immediately and returns nil with that error.
//
// To reverse a transmutation, call each sigil's Out method in reverse order.
func Transmute(data []byte, sigils []Sigil) ([]byte, error) {
var err error
for _, sigilValue := range sigils {
data, err = sigilValue.In(data)
for _, s := range sigils {
data, err = s.In(data)
if err != nil {
return nil, core.E("sigil.Transmute", "sigil in failed", err)
return nil, err
}
}
return data, nil
}
// Example: decoded, _ := sigil.Untransmute(encoded, []sigil.Sigil{hexSigil, gzipSigil})
// Untransmute reverses a transmutation by applying Out in reverse order.
//
// Each sigil's Out method is called in reverse order, with the output of one sigil
// becoming the input of the next. If any sigil returns an error, Untransmute
// stops immediately and returns nil with that error.
func Untransmute(data []byte, sigils []Sigil) ([]byte, error) {
var err error
for i := len(sigils) - 1; i >= 0; i-- {
data, err = sigils[i].Out(data)
if err != nil {
return nil, core.E("sigil.Untransmute", "sigil out failed", err)
return nil, err
}
}
return data, nil
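The Transmute/Untransmute contract above (forward in declaration order, reverse in reverse order, stop on the first error) can be sketched with plain stdlib codecs standing in for real Sigil values:

```go
package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

// step is a stand-in for the Sigil interface: paired forward/reverse
// transformations over byte slices.
type step struct {
	in  func([]byte) ([]byte, error)
	out func([]byte) ([]byte, error)
}

var hexStep = step{
	in:  func(d []byte) ([]byte, error) { return []byte(hex.EncodeToString(d)), nil },
	out: func(d []byte) ([]byte, error) { return hex.DecodeString(string(d)) },
}

var b64Step = step{
	in:  func(d []byte) ([]byte, error) { return []byte(base64.StdEncoding.EncodeToString(d)), nil },
	out: func(d []byte) ([]byte, error) { return base64.StdEncoding.DecodeString(string(d)) },
}

// transmute applies each step's In in order, stopping on first error.
func transmute(data []byte, steps []step) ([]byte, error) {
	var err error
	for _, s := range steps {
		if data, err = s.in(data); err != nil {
			return nil, err
		}
	}
	return data, nil
}

// untransmute applies each step's Out in reverse order.
func untransmute(data []byte, steps []step) ([]byte, error) {
	var err error
	for i := len(steps) - 1; i >= 0; i-- {
		if data, err = steps[i].out(data); err != nil {
			return nil, err
		}
	}
	return data, nil
}

func main() {
	steps := []step{hexStep, b64Step}
	encoded, _ := transmute([]byte("round trip"), steps)
	decoded, _ := untransmute(encoded, steps)
	fmt.Println(string(decoded))
}
```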


@@ -13,193 +13,229 @@ import (
"github.com/stretchr/testify/require"
)
func TestSigil_ReverseSigil_Good(t *testing.T) {
reverseSigil := &ReverseSigil{}
// ---------------------------------------------------------------------------
// ReverseSigil
// ---------------------------------------------------------------------------
out, err := reverseSigil.In([]byte("hello"))
func TestReverseSigil_Good(t *testing.T) {
s := &ReverseSigil{}
out, err := s.In([]byte("hello"))
require.NoError(t, err)
assert.Equal(t, []byte("olleh"), out)
restored, err := reverseSigil.Out(out)
// Symmetric: Out does the same thing.
restored, err := s.Out(out)
require.NoError(t, err)
assert.Equal(t, []byte("hello"), restored)
}
func TestSigil_ReverseSigil_Bad(t *testing.T) {
reverseSigil := &ReverseSigil{}
func TestReverseSigil_Bad(t *testing.T) {
s := &ReverseSigil{}
out, err := reverseSigil.In([]byte{})
// Empty input returns empty.
out, err := s.In([]byte{})
require.NoError(t, err)
assert.Equal(t, []byte{}, out)
}
func TestSigil_ReverseSigil_NilInput_Good(t *testing.T) {
reverseSigil := &ReverseSigil{}
func TestReverseSigil_Ugly(t *testing.T) {
s := &ReverseSigil{}
out, err := reverseSigil.In(nil)
// Nil input returns nil.
out, err := s.In(nil)
require.NoError(t, err)
assert.Nil(t, out)
out, err = reverseSigil.Out(nil)
out, err = s.Out(nil)
require.NoError(t, err)
assert.Nil(t, out)
}
func TestSigil_HexSigil_Good(t *testing.T) {
hexSigil := &HexSigil{}
// ---------------------------------------------------------------------------
// HexSigil
// ---------------------------------------------------------------------------
func TestHexSigil_Good(t *testing.T) {
s := &HexSigil{}
data := []byte("hello world")
encoded, err := hexSigil.In(data)
encoded, err := s.In(data)
require.NoError(t, err)
assert.Equal(t, []byte(hex.EncodeToString(data)), encoded)
decoded, err := hexSigil.Out(encoded)
decoded, err := s.Out(encoded)
require.NoError(t, err)
assert.Equal(t, data, decoded)
}
func TestSigil_HexSigil_Bad(t *testing.T) {
hexSigil := &HexSigil{}
func TestHexSigil_Bad(t *testing.T) {
s := &HexSigil{}
_, err := hexSigil.Out([]byte("zzzz"))
// Invalid hex input.
_, err := s.Out([]byte("zzzz"))
assert.Error(t, err)
out, err := hexSigil.In([]byte{})
// Empty input.
out, err := s.In([]byte{})
require.NoError(t, err)
assert.Equal(t, []byte{}, out)
}
func TestSigil_HexSigil_NilInput_Good(t *testing.T) {
hexSigil := &HexSigil{}
func TestHexSigil_Ugly(t *testing.T) {
s := &HexSigil{}
out, err := hexSigil.In(nil)
out, err := s.In(nil)
require.NoError(t, err)
assert.Nil(t, out)
out, err = hexSigil.Out(nil)
out, err = s.Out(nil)
require.NoError(t, err)
assert.Nil(t, out)
}
func TestSigil_Base64Sigil_Good(t *testing.T) {
base64Sigil := &Base64Sigil{}
// ---------------------------------------------------------------------------
// Base64Sigil
// ---------------------------------------------------------------------------
func TestBase64Sigil_Good(t *testing.T) {
s := &Base64Sigil{}
data := []byte("composable transforms")
encoded, err := base64Sigil.In(data)
encoded, err := s.In(data)
require.NoError(t, err)
assert.Equal(t, []byte(base64.StdEncoding.EncodeToString(data)), encoded)
decoded, err := base64Sigil.Out(encoded)
decoded, err := s.Out(encoded)
require.NoError(t, err)
assert.Equal(t, data, decoded)
}
func TestSigil_Base64Sigil_Bad(t *testing.T) {
base64Sigil := &Base64Sigil{}
func TestBase64Sigil_Bad(t *testing.T) {
s := &Base64Sigil{}
_, err := base64Sigil.Out([]byte("!!!"))
// Invalid base64 (wrong padding).
_, err := s.Out([]byte("!!!"))
assert.Error(t, err)
out, err := base64Sigil.In([]byte{})
// Empty input.
out, err := s.In([]byte{})
require.NoError(t, err)
assert.Equal(t, []byte{}, out)
}
func TestSigil_Base64Sigil_NilInput_Good(t *testing.T) {
base64Sigil := &Base64Sigil{}
func TestBase64Sigil_Ugly(t *testing.T) {
s := &Base64Sigil{}
out, err := base64Sigil.In(nil)
out, err := s.In(nil)
require.NoError(t, err)
assert.Nil(t, out)
out, err = base64Sigil.Out(nil)
out, err = s.Out(nil)
require.NoError(t, err)
assert.Nil(t, out)
}
func TestSigil_GzipSigil_Good(t *testing.T) {
gzipSigil := &GzipSigil{}
// ---------------------------------------------------------------------------
// GzipSigil
// ---------------------------------------------------------------------------
func TestGzipSigil_Good(t *testing.T) {
s := &GzipSigil{}
data := []byte("the quick brown fox jumps over the lazy dog")
compressed, err := gzipSigil.In(data)
compressed, err := s.In(data)
require.NoError(t, err)
assert.NotEqual(t, data, compressed)
decompressed, err := gzipSigil.Out(compressed)
decompressed, err := s.Out(compressed)
require.NoError(t, err)
assert.Equal(t, data, decompressed)
}
func TestSigil_GzipSigil_Bad(t *testing.T) {
gzipSigil := &GzipSigil{}
func TestGzipSigil_Bad(t *testing.T) {
s := &GzipSigil{}
_, err := gzipSigil.Out([]byte("not gzip"))
// Invalid gzip data.
_, err := s.Out([]byte("not gzip"))
assert.Error(t, err)
compressed, err := gzipSigil.In([]byte{})
// Empty input compresses to a valid gzip stream.
compressed, err := s.In([]byte{})
require.NoError(t, err)
assert.NotEmpty(t, compressed)
assert.NotEmpty(t, compressed) // gzip header is always present
decompressed, err := gzipSigil.Out(compressed)
decompressed, err := s.Out(compressed)
require.NoError(t, err)
assert.Equal(t, []byte{}, decompressed)
}
func TestSigil_GzipSigil_NilInput_Good(t *testing.T) {
gzipSigil := &GzipSigil{}
func TestGzipSigil_Ugly(t *testing.T) {
s := &GzipSigil{}
out, err := gzipSigil.In(nil)
out, err := s.In(nil)
require.NoError(t, err)
assert.Nil(t, out)
out, err = gzipSigil.Out(nil)
out, err = s.Out(nil)
require.NoError(t, err)
assert.Nil(t, out)
}
func TestSigil_JSONSigil_Good(t *testing.T) {
jsonSigil := &JSONSigil{Indent: false}
// ---------------------------------------------------------------------------
// JSONSigil
// ---------------------------------------------------------------------------
func TestJSONSigil_Good(t *testing.T) {
s := &JSONSigil{Indent: false}
data := []byte(`{ "key" : "value" }`)
compacted, err := jsonSigil.In(data)
compacted, err := s.In(data)
require.NoError(t, err)
assert.Equal(t, []byte(`{"key":"value"}`), compacted)
passthrough, err := jsonSigil.Out(compacted)
// Out is passthrough.
passthrough, err := s.Out(compacted)
require.NoError(t, err)
assert.Equal(t, compacted, passthrough)
}
func TestSigil_JSONSigil_Indent_Good(t *testing.T) {
jsonSigil := &JSONSigil{Indent: true}
func TestJSONSigil_Good_Indent(t *testing.T) {
s := &JSONSigil{Indent: true}
data := []byte(`{"key":"value"}`)
indented, err := jsonSigil.In(data)
indented, err := s.In(data)
require.NoError(t, err)
assert.Contains(t, string(indented), "\n")
assert.Contains(t, string(indented), " ")
}
func TestSigil_JSONSigil_Bad(t *testing.T) {
jsonSigil := &JSONSigil{Indent: false}
func TestJSONSigil_Bad(t *testing.T) {
s := &JSONSigil{Indent: false}
_, err := jsonSigil.In([]byte("not json"))
// Invalid JSON.
_, err := s.In([]byte("not json"))
assert.Error(t, err)
}
func TestSigil_JSONSigil_NilInput_Good(t *testing.T) {
jsonSigil := &JSONSigil{Indent: false}
func TestJSONSigil_Ugly(t *testing.T) {
s := &JSONSigil{Indent: false}
out, err := jsonSigil.In(nil)
require.NoError(t, err)
assert.Nil(t, out)
// json.Compact on nil/empty will produce an error (invalid JSON).
_, err := s.In(nil)
assert.Error(t, err)
out, err = jsonSigil.Out(nil)
// Out with nil is passthrough.
out, err := s.Out(nil)
require.NoError(t, err)
assert.Nil(t, out)
}
func TestSigil_HashSigil_Good(t *testing.T) {
// ---------------------------------------------------------------------------
// HashSigil
// ---------------------------------------------------------------------------
func TestHashSigil_Good(t *testing.T) {
data := []byte("hash me")
tests := []struct {
@@ -229,37 +265,44 @@ func TestSigil_HashSigil_Good(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
sigilValue, err := NewSigil(tt.sigilName)
s, err := NewSigil(tt.sigilName)
require.NoError(t, err)
hashed, err := sigilValue.In(data)
hashed, err := s.In(data)
require.NoError(t, err)
assert.Len(t, hashed, tt.size)
passthrough, err := sigilValue.Out(hashed)
// Out is passthrough.
passthrough, err := s.Out(hashed)
require.NoError(t, err)
assert.Equal(t, hashed, passthrough)
})
}
}
func TestSigil_HashSigil_Bad(t *testing.T) {
hashSigil := &HashSigil{Hash: 0}
_, err := hashSigil.In([]byte("data"))
func TestHashSigil_Bad(t *testing.T) {
// Unsupported hash constant.
s := &HashSigil{Hash: 0}
_, err := s.In([]byte("data"))
assert.Error(t, err)
assert.Contains(t, err.Error(), "not available")
}
func TestSigil_HashSigil_EmptyInput_Good(t *testing.T) {
sigilValue, err := NewSigil("sha256")
func TestHashSigil_Ugly(t *testing.T) {
// Hashing empty data should still produce a valid digest.
s, err := NewSigil("sha256")
require.NoError(t, err)
hashed, err := sigilValue.In([]byte{})
hashed, err := s.In([]byte{})
require.NoError(t, err)
assert.Len(t, hashed, sha256.Size)
}
func TestSigil_NewSigil_Good(t *testing.T) {
// ---------------------------------------------------------------------------
// NewSigil factory
// ---------------------------------------------------------------------------
func TestNewSigil_Good(t *testing.T) {
names := []string{
"reverse", "hex", "base64", "gzip", "json", "json-indent",
"md4", "md5", "sha1", "sha224", "sha256", "sha384", "sha512",
@@ -271,25 +314,29 @@ func TestSigil_NewSigil_Good(t *testing.T) {
for _, name := range names {
t.Run(name, func(t *testing.T) {
sigilValue, err := NewSigil(name)
s, err := NewSigil(name)
require.NoError(t, err)
assert.NotNil(t, sigilValue)
assert.NotNil(t, s)
})
}
}
func TestSigil_NewSigil_Bad(t *testing.T) {
func TestNewSigil_Bad(t *testing.T) {
_, err := NewSigil("nonexistent")
assert.Error(t, err)
assert.Contains(t, err.Error(), "unknown sigil name")
}
func TestSigil_NewSigil_EmptyName_Bad(t *testing.T) {
func TestNewSigil_Ugly(t *testing.T) {
_, err := NewSigil("")
assert.Error(t, err)
}
func TestSigil_Transmute_Good(t *testing.T) {
// ---------------------------------------------------------------------------
// Transmute / Untransmute
// ---------------------------------------------------------------------------
func TestTransmute_Good(t *testing.T) {
data := []byte("round trip")
hexSigil, err := NewSigil("hex")
@@ -308,7 +355,7 @@ func TestSigil_Transmute_Good(t *testing.T) {
assert.Equal(t, data, decoded)
}
func TestSigil_Transmute_MultiSigil_Good(t *testing.T) {
func TestTransmute_Good_MultiSigil(t *testing.T) {
data := []byte("multi sigil pipeline test data")
reverseSigil, err := NewSigil("reverse")
@@ -328,7 +375,7 @@ func TestSigil_Transmute_MultiSigil_Good(t *testing.T) {
assert.Equal(t, data, decoded)
}
func TestSigil_Transmute_GzipRoundTrip_Good(t *testing.T) {
func TestTransmute_Good_GzipRoundTrip(t *testing.T) {
data := []byte("compress then encode then decode then decompress")
gzipSigil, err := NewSigil("gzip")
@@ -346,14 +393,17 @@ func TestSigil_Transmute_GzipRoundTrip_Good(t *testing.T) {
assert.Equal(t, data, decoded)
}
func TestSigil_Transmute_Bad(t *testing.T) {
func TestTransmute_Bad(t *testing.T) {
// Transmute with a sigil that will fail: hex decode on non-hex input.
hexSigil := &HexSigil{}
// Calling Out (decode) with invalid input via manual chain.
_, err := Untransmute([]byte("not-hex!!"), []Sigil{hexSigil})
assert.Error(t, err)
}
func TestSigil_Transmute_NilAndEmptyInput_Good(t *testing.T) {
func TestTransmute_Ugly(t *testing.T) {
// Empty sigil chain is a no-op.
data := []byte("unchanged")
result, err := Transmute(data, nil)
@@ -364,6 +414,7 @@ func TestSigil_Transmute_NilAndEmptyInput_Good(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, data, result)
// Nil data through a chain.
hexSigil, _ := NewSigil("hex")
result, err = Transmute(nil, []Sigil{hexSigil})
require.NoError(t, err)


@@ -10,10 +10,10 @@ import (
"crypto/sha512"
"encoding/base64"
"encoding/hex"
goio "io"
"io/fs"
"encoding/json"
"io"
core "dappco.re/go/core"
coreerr "dappco.re/go/core/log"
"golang.org/x/crypto/blake2b"
"golang.org/x/crypto/blake2s"
"golang.org/x/crypto/md4"
@@ -21,10 +21,12 @@ import (
"golang.org/x/crypto/sha3"
)
// Example: reverseSigil, _ := sigil.NewSigil("reverse")
// ReverseSigil is a Sigil that reverses the bytes of the payload.
// It is a symmetrical Sigil, meaning that the In and Out methods perform the same operation.
type ReverseSigil struct{}
func (sigil *ReverseSigil) In(data []byte) ([]byte, error) {
// In reverses the bytes of the data.
func (s *ReverseSigil) In(data []byte) ([]byte, error) {
if data == nil {
return nil, nil
}
@@ -35,187 +37,189 @@ func (sigil *ReverseSigil) In(data []byte) ([]byte, error) {
return reversed, nil
}
func (sigil *ReverseSigil) Out(data []byte) ([]byte, error) {
return sigil.In(data)
// Out reverses the bytes of the data.
func (s *ReverseSigil) Out(data []byte) ([]byte, error) {
return s.In(data)
}
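ReverseSigil's symmetry (the same code serves as In and Out) can be verified in isolation; a standalone sketch of the same transformation:

```go
package main

import "fmt"

// reverse copies and reverses a byte slice. Applying it twice restores
// the input, which is why ReverseSigil reuses In for Out.
func reverse(data []byte) []byte {
	if data == nil {
		return nil // nil in, nil out, per the Sigil contract
	}
	out := make([]byte, len(data))
	for i, b := range data {
		out[len(data)-1-i] = b
	}
	return out
}

func main() {
	fmt.Println(string(reverse([]byte("hello"))))          // reversed once
	fmt.Println(string(reverse(reverse([]byte("hello"))))) // restored
}
```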
// Example: hexSigil, _ := sigil.NewSigil("hex")
// HexSigil is a Sigil that encodes/decodes data to/from hexadecimal.
// The In method encodes the data, and the Out method decodes it.
type HexSigil struct{}
func (sigil *HexSigil) In(data []byte) ([]byte, error) {
// In encodes the data to hexadecimal.
func (s *HexSigil) In(data []byte) ([]byte, error) {
if data == nil {
return nil, nil
}
encodedBytes := make([]byte, hex.EncodedLen(len(data)))
hex.Encode(encodedBytes, data)
return encodedBytes, nil
dst := make([]byte, hex.EncodedLen(len(data)))
hex.Encode(dst, data)
return dst, nil
}
func (sigil *HexSigil) Out(data []byte) ([]byte, error) {
// Out decodes the data from hexadecimal.
func (s *HexSigil) Out(data []byte) ([]byte, error) {
if data == nil {
return nil, nil
}
decodedBytes := make([]byte, hex.DecodedLen(len(data)))
_, err := hex.Decode(decodedBytes, data)
return decodedBytes, err
dst := make([]byte, hex.DecodedLen(len(data)))
_, err := hex.Decode(dst, data)
return dst, err
}
// Example: base64Sigil, _ := sigil.NewSigil("base64")
// Base64Sigil is a Sigil that encodes/decodes data to/from base64.
// The In method encodes the data, and the Out method decodes it.
type Base64Sigil struct{}
func (sigil *Base64Sigil) In(data []byte) ([]byte, error) {
// In encodes the data to base64.
func (s *Base64Sigil) In(data []byte) ([]byte, error) {
if data == nil {
return nil, nil
}
encodedBytes := make([]byte, base64.StdEncoding.EncodedLen(len(data)))
base64.StdEncoding.Encode(encodedBytes, data)
return encodedBytes, nil
dst := make([]byte, base64.StdEncoding.EncodedLen(len(data)))
base64.StdEncoding.Encode(dst, data)
return dst, nil
}
func (sigil *Base64Sigil) Out(data []byte) ([]byte, error) {
// Out decodes the data from base64.
func (s *Base64Sigil) Out(data []byte) ([]byte, error) {
if data == nil {
return nil, nil
}
decodedBytes := make([]byte, base64.StdEncoding.DecodedLen(len(data)))
decodedCount, err := base64.StdEncoding.Decode(decodedBytes, data)
return decodedBytes[:decodedCount], err
dst := make([]byte, base64.StdEncoding.DecodedLen(len(data)))
n, err := base64.StdEncoding.Decode(dst, data)
return dst[:n], err
}
// Example: gzipSigil, _ := sigil.NewSigil("gzip")
// GzipSigil is a Sigil that compresses/decompresses data using gzip.
// The In method compresses the data, and the Out method decompresses it.
type GzipSigil struct {
outputWriter goio.Writer
writer io.Writer
}
func (sigil *GzipSigil) In(data []byte) ([]byte, error) {
// In compresses the data using gzip.
func (s *GzipSigil) In(data []byte) ([]byte, error) {
if data == nil {
return nil, nil
}
var buffer bytes.Buffer
outputWriter := sigil.outputWriter
if outputWriter == nil {
outputWriter = &buffer
var b bytes.Buffer
w := s.writer
if w == nil {
w = &b
}
gzipWriter := gzip.NewWriter(outputWriter)
if _, err := gzipWriter.Write(data); err != nil {
return nil, core.E("sigil.GzipSigil.In", "write gzip payload", err)
gz := gzip.NewWriter(w)
if _, err := gz.Write(data); err != nil {
return nil, err
}
if err := gzipWriter.Close(); err != nil {
return nil, core.E("sigil.GzipSigil.In", "close gzip writer", err)
if err := gz.Close(); err != nil {
return nil, err
}
return buffer.Bytes(), nil
return b.Bytes(), nil
}
func (sigil *GzipSigil) Out(data []byte) ([]byte, error) {
// Out decompresses the data using gzip.
func (s *GzipSigil) Out(data []byte) ([]byte, error) {
if data == nil {
return nil, nil
}
gzipReader, err := gzip.NewReader(bytes.NewReader(data))
r, err := gzip.NewReader(bytes.NewReader(data))
if err != nil {
return nil, core.E("sigil.GzipSigil.Out", "open gzip reader", err)
return nil, err
}
defer gzipReader.Close()
out, err := goio.ReadAll(gzipReader)
if err != nil {
return nil, core.E("sigil.GzipSigil.Out", "read gzip payload", err)
}
return out, nil
defer r.Close()
return io.ReadAll(r)
}
// Example: jsonSigil := &sigil.JSONSigil{Indent: true}
// JSONSigil is a Sigil that compacts or indents JSON data.
// The Out method is a no-op.
type JSONSigil struct{ Indent bool }
func (sigil *JSONSigil) In(data []byte) ([]byte, error) {
if data == nil {
return nil, nil
// In compacts or indents the JSON data.
func (s *JSONSigil) In(data []byte) ([]byte, error) {
if s.Indent {
var out bytes.Buffer
err := json.Indent(&out, data, "", " ")
return out.Bytes(), err
}
var decoded any
result := core.JSONUnmarshal(data, &decoded)
if !result.OK {
if err, ok := result.Value.(error); ok {
return nil, core.E("sigil.JSONSigil.In", "decode json", err)
}
return nil, core.E("sigil.JSONSigil.In", "decode json", fs.ErrInvalid)
}
compact := core.JSONMarshalString(decoded)
if sigil.Indent {
return []byte(indentJSON(compact)), nil
}
return []byte(compact), nil
var out bytes.Buffer
err := json.Compact(&out, data)
return out.Bytes(), err
}
func (sigil *JSONSigil) Out(data []byte) ([]byte, error) {
// Out is a no-op for JSONSigil.
func (s *JSONSigil) Out(data []byte) ([]byte, error) {
// For simplicity, Out is a no-op. The primary use is formatting.
return data, nil
}
// Example: hashSigil := sigil.NewHashSigil(crypto.SHA256)
// HashSigil is a Sigil that hashes the data using a specified algorithm.
// The In method hashes the data, and the Out method is a no-op.
type HashSigil struct {
Hash crypto.Hash
}
// Example: hashSigil := sigil.NewHashSigil(crypto.SHA256)
// Example: digest, _ := hashSigil.In([]byte("payload"))
func NewHashSigil(hashAlgorithm crypto.Hash) *HashSigil {
return &HashSigil{Hash: hashAlgorithm}
// NewHashSigil creates a new HashSigil.
func NewHashSigil(h crypto.Hash) *HashSigil {
return &HashSigil{Hash: h}
}
func (sigil *HashSigil) In(data []byte) ([]byte, error) {
var hasher goio.Writer
switch sigil.Hash {
// In hashes the data.
func (s *HashSigil) In(data []byte) ([]byte, error) {
var h io.Writer
switch s.Hash {
case crypto.MD4:
hasher = md4.New()
h = md4.New()
case crypto.MD5:
hasher = md5.New()
h = md5.New()
case crypto.SHA1:
hasher = sha1.New()
h = sha1.New()
case crypto.SHA224:
hasher = sha256.New224()
h = sha256.New224()
case crypto.SHA256:
hasher = sha256.New()
h = sha256.New()
case crypto.SHA384:
hasher = sha512.New384()
h = sha512.New384()
case crypto.SHA512:
hasher = sha512.New()
h = sha512.New()
case crypto.RIPEMD160:
hasher = ripemd160.New()
h = ripemd160.New()
case crypto.SHA3_224:
hasher = sha3.New224()
h = sha3.New224()
case crypto.SHA3_256:
hasher = sha3.New256()
h = sha3.New256()
case crypto.SHA3_384:
hasher = sha3.New384()
h = sha3.New384()
case crypto.SHA3_512:
hasher = sha3.New512()
h = sha3.New512()
case crypto.SHA512_224:
hasher = sha512.New512_224()
h = sha512.New512_224()
case crypto.SHA512_256:
hasher = sha512.New512_256()
h = sha512.New512_256()
case crypto.BLAKE2s_256:
hasher, _ = blake2s.New256(nil)
h, _ = blake2s.New256(nil)
case crypto.BLAKE2b_256:
hasher, _ = blake2b.New256(nil)
h, _ = blake2b.New256(nil)
case crypto.BLAKE2b_384:
hasher, _ = blake2b.New384(nil)
h, _ = blake2b.New384(nil)
case crypto.BLAKE2b_512:
hasher, _ = blake2b.New512(nil)
h, _ = blake2b.New512(nil)
default:
return nil, core.E("sigil.HashSigil.In", "hash algorithm not available", fs.ErrInvalid)
// MD5SHA1 is not supported as a direct hash
return nil, coreerr.E("sigil.HashSigil.In", "hash algorithm not available", nil)
}
hasher.Write(data)
return hasher.(interface{ Sum([]byte) []byte }).Sum(nil), nil
h.Write(data)
return h.(interface{ Sum([]byte) []byte }).Sum(nil), nil
}
func (sigil *HashSigil) Out(data []byte) ([]byte, error) {
// Out is a no-op for HashSigil.
func (s *HashSigil) Out(data []byte) ([]byte, error) {
return data, nil
}
// Example: hexSigil, _ := sigil.NewSigil("hex")
// Example: gzipSigil, _ := sigil.NewSigil("gzip")
// Example: transformed, _ := sigil.Transmute([]byte("payload"), []sigil.Sigil{hexSigil, gzipSigil})
func NewSigil(sigilName string) (Sigil, error) {
switch sigilName {
// NewSigil is a factory function that returns a Sigil based on a string name.
// It is the primary way to create Sigil instances.
func NewSigil(name string) (Sigil, error) {
switch name {
case "reverse":
return &ReverseSigil{}, nil
case "hex":
@@ -265,72 +269,6 @@ func NewSigil(sigilName string) (Sigil, error) {
case "blake2b-512":
return NewHashSigil(crypto.BLAKE2b_512), nil
default:
return nil, core.E("sigil.NewSigil", core.Concat("unknown sigil name: ", sigilName), fs.ErrInvalid)
return nil, coreerr.E("sigil.NewSigil", "unknown sigil name: "+name, nil)
}
}
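The factory reduces to a name switch; a minimal sketch of its contract (known names succeed, unknown names return an error naming the bad input — the names and error text here are illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// nopSigil is a placeholder transformer standing in for real sigils.
type nopSigil struct{ name string }

func (n nopSigil) In(d []byte) ([]byte, error)  { return d, nil }
func (n nopSigil) Out(d []byte) ([]byte, error) { return d, nil }

// newSigil sketches the factory shape of NewSigil.
func newSigil(name string) (nopSigil, error) {
	switch name {
	case "reverse", "hex", "base64":
		return nopSigil{name: name}, nil
	default:
		return nopSigil{}, errors.New("unknown sigil name: " + name)
	}
}

func main() {
	_, err := newSigil("hex")
	fmt.Println(err == nil)
	_, err = newSigil("nonexistent")
	fmt.Println(err)
}
```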
func indentJSON(compact string) string {
if compact == "" {
return ""
}
builder := core.NewBuilder()
indent := 0
inString := false
escaped := false
writeIndent := func(level int) {
for i := 0; i < level; i++ {
builder.WriteString(" ")
}
}
for i := 0; i < len(compact); i++ {
ch := compact[i]
if inString {
builder.WriteByte(ch)
if escaped {
escaped = false
continue
}
if ch == '\\' {
escaped = true
continue
}
if ch == '"' {
inString = false
}
continue
}
switch ch {
case '"':
inString = true
builder.WriteByte(ch)
case '{', '[':
builder.WriteByte(ch)
if i+1 < len(compact) && compact[i+1] != '}' && compact[i+1] != ']' {
indent++
builder.WriteByte('\n')
writeIndent(indent)
}
case '}', ']':
if i > 0 && compact[i-1] != '{' && compact[i-1] != '[' {
indent--
builder.WriteByte('\n')
writeIndent(indent)
}
builder.WriteByte(ch)
case ',':
builder.WriteByte(ch)
builder.WriteByte('\n')
writeIndent(indent)
case ':':
builder.WriteString(": ")
default:
builder.WriteByte(ch)
}
}
return builder.String()
}


@@ -1,5 +1,4 @@
// Example: medium, _ := sqlite.New(sqlite.Options{Path: ":memory:"})
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
// Package sqlite provides a SQLite-backed implementation of the io.Medium interface.
package sqlite
import (
@@ -7,163 +6,161 @@ import (
"database/sql"
goio "io"
"io/fs"
"os"
"path"
"strings"
"time"
core "dappco.re/go/core"
coreio "dappco.re/go/core/io"
coreerr "dappco.re/go/core/log"
_ "modernc.org/sqlite"
_ "modernc.org/sqlite" // Pure Go SQLite driver
)
// Example: medium, _ := sqlite.New(sqlite.Options{Path: ":memory:"})
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
// Medium is a SQLite-backed storage backend implementing the io.Medium interface.
type Medium struct {
database *sql.DB
table string
db *sql.DB
table string
}
var _ coreio.Medium = (*Medium)(nil)
// Option configures a Medium.
type Option func(*Medium)
// Example: medium, _ := sqlite.New(sqlite.Options{Path: ":memory:", Table: "files"})
type Options struct {
Path string
Table string
}
func normaliseTableName(table string) string {
if table == "" {
return "files"
// WithTable sets the table name (default: "files").
func WithTable(table string) Option {
return func(m *Medium) {
m.table = table
}
return table
}
// Example: medium, _ := sqlite.New(sqlite.Options{Path: ":memory:", Table: "files"})
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
func New(options Options) (*Medium, error) {
if options.Path == "" {
return nil, core.E("sqlite.New", "database path is required", fs.ErrInvalid)
// New creates a new SQLite Medium at the given database path.
// Use ":memory:" for an in-memory database.
func New(dbPath string, opts ...Option) (*Medium, error) {
if dbPath == "" {
return nil, coreerr.E("sqlite.New", "database path is required", nil)
}
medium := &Medium{table: normaliseTableName(options.Table)}
m := &Medium{table: "files"}
for _, opt := range opts {
opt(m)
}
database, err := sql.Open("sqlite", options.Path)
db, err := sql.Open("sqlite", dbPath)
if err != nil {
return nil, core.E("sqlite.New", "failed to open database", err)
return nil, coreerr.E("sqlite.New", "failed to open database", err)
}
if _, err := database.Exec("PRAGMA journal_mode=WAL"); err != nil {
database.Close()
return nil, core.E("sqlite.New", "failed to set WAL mode", err)
// Enable WAL mode for better concurrency
if _, err := db.Exec("PRAGMA journal_mode=WAL"); err != nil {
db.Close()
return nil, coreerr.E("sqlite.New", "failed to set WAL mode", err)
}
createSQL := `CREATE TABLE IF NOT EXISTS ` + medium.table + ` (
// Create the schema
createSQL := `CREATE TABLE IF NOT EXISTS ` + m.table + ` (
path TEXT PRIMARY KEY,
content BLOB NOT NULL,
mode INTEGER DEFAULT 420,
is_dir BOOLEAN DEFAULT FALSE,
mtime DATETIME DEFAULT CURRENT_TIMESTAMP
)`
if _, err := database.Exec(createSQL); err != nil {
database.Close()
return nil, core.E("sqlite.New", "failed to create table", err)
if _, err := db.Exec(createSQL); err != nil {
db.Close()
return nil, coreerr.E("sqlite.New", "failed to create table", err)
}
medium.database = database
return medium, nil
m.db = db
return m, nil
}
// Example: _ = medium.Close()
func (medium *Medium) Close() error {
if medium.database != nil {
return medium.database.Close()
// Close closes the underlying database connection.
func (m *Medium) Close() error {
if m.db != nil {
return m.db.Close()
}
return nil
}
func normaliseEntryPath(filePath string) string {
clean := path.Clean("/" + filePath)
// cleanPath normalises a path for consistent storage.
// Uses a leading "/" before Clean to sandbox traversal attempts.
func cleanPath(p string) string {
clean := path.Clean("/" + p)
if clean == "/" {
return ""
}
return core.TrimPrefix(clean, "/")
return strings.TrimPrefix(clean, "/")
}
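cleanPath's sandboxing relies on path.Clean seeing a rooted path, so `..` segments cannot climb above the root before the prefix is trimmed. Standalone:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// cleanPath mirrors the normalisation above: prefixing "/" before
// Clean collapses traversal attempts inside the virtual root.
func cleanPath(p string) string {
	clean := path.Clean("/" + p)
	if clean == "/" {
		return ""
	}
	return strings.TrimPrefix(clean, "/")
}

func main() {
	fmt.Println(cleanPath("config/app.yaml"))    // unchanged
	fmt.Println(cleanPath("./config//app.yaml")) // duplicates collapsed
	fmt.Println(cleanPath("../../etc/passwd"))   // traversal sandboxed
	fmt.Println(cleanPath("") == "")             // root maps to ""
}
```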
// Example: content, _ := medium.Read("config/app.yaml")
func (medium *Medium) Read(filePath string) (string, error) {
key := normaliseEntryPath(filePath)
// Read retrieves the content of a file as a string.
func (m *Medium) Read(p string) (string, error) {
key := cleanPath(p)
if key == "" {
return "", core.E("sqlite.Read", "path is required", fs.ErrInvalid)
return "", coreerr.E("sqlite.Read", "path is required", os.ErrInvalid)
}
var content []byte
var isDir bool
err := medium.database.QueryRow(
`SELECT content, is_dir FROM `+medium.table+` WHERE path = ?`, key,
err := m.db.QueryRow(
`SELECT content, is_dir FROM `+m.table+` WHERE path = ?`, key,
).Scan(&content, &isDir)
if err == sql.ErrNoRows {
return "", core.E("sqlite.Read", core.Concat("file not found: ", key), fs.ErrNotExist)
return "", coreerr.E("sqlite.Read", "file not found: "+key, os.ErrNotExist)
}
if err != nil {
return "", core.E("sqlite.Read", core.Concat("query failed: ", key), err)
return "", coreerr.E("sqlite.Read", "query failed: "+key, err)
}
if isDir {
return "", core.E("sqlite.Read", core.Concat("path is a directory: ", key), fs.ErrInvalid)
return "", coreerr.E("sqlite.Read", "path is a directory: "+key, os.ErrInvalid)
}
return string(content), nil
}
// Example: _ = medium.Write("config/app.yaml", "port: 8080")
func (medium *Medium) Write(filePath, content string) error {
return medium.WriteMode(filePath, content, 0644)
}
// Example: _ = medium.WriteMode("keys/private.key", key, 0600)
func (medium *Medium) WriteMode(filePath, content string, mode fs.FileMode) error {
key := normaliseEntryPath(filePath)
// Write saves the given content to a file, overwriting it if it exists.
func (m *Medium) Write(p, content string) error {
key := cleanPath(p)
if key == "" {
return core.E("sqlite.WriteMode", "path is required", fs.ErrInvalid)
return coreerr.E("sqlite.Write", "path is required", os.ErrInvalid)
}
_, err := medium.database.Exec(
`INSERT INTO `+medium.table+` (path, content, mode, is_dir, mtime) VALUES (?, ?, ?, FALSE, ?)
ON CONFLICT(path) DO UPDATE SET content = excluded.content, mode = excluded.mode, is_dir = FALSE, mtime = excluded.mtime`,
key, []byte(content), int(mode), time.Now().UTC(),
_, err := m.db.Exec(
`INSERT INTO `+m.table+` (path, content, mode, is_dir, mtime) VALUES (?, ?, 420, FALSE, ?)
ON CONFLICT(path) DO UPDATE SET content = excluded.content, is_dir = FALSE, mtime = excluded.mtime`,
key, []byte(content), time.Now().UTC(),
)
if err != nil {
return coreerr.E("sqlite.Write", "insert failed: "+key, err)
}
return nil
}
// EnsureDir makes sure a directory exists, creating it if necessary.
func (m *Medium) EnsureDir(p string) error {
key := cleanPath(p)
if key == "" {
// Root always "exists"
return nil
}
_, err := m.db.Exec(
`INSERT INTO `+m.table+` (path, content, mode, is_dir, mtime) VALUES (?, '', 493, TRUE, ?)
ON CONFLICT(path) DO NOTHING`,
key, time.Now().UTC(),
)
if err != nil {
return coreerr.E("sqlite.EnsureDir", "insert failed: "+key, err)
}
return nil
}
// IsFile checks if a path exists and is a regular file.
func (m *Medium) IsFile(p string) bool {
key := cleanPath(p)
if key == "" {
return false
}
var isDir bool
err := m.db.QueryRow(
`SELECT is_dir FROM `+m.table+` WHERE path = ?`, key,
).Scan(&isDir)
if err != nil {
return false
@@ -171,124 +168,141 @@ func (medium *Medium) IsFile(filePath string) bool {
return !isDir
}
// FileGet is a convenience function that reads a file from the medium.
func (m *Medium) FileGet(p string) (string, error) {
return m.Read(p)
}
// FileSet is a convenience function that writes a file to the medium.
func (m *Medium) FileSet(p, content string) error {
return m.Write(p, content)
}
// Delete removes a file or empty directory.
func (m *Medium) Delete(p string) error {
key := cleanPath(p)
if key == "" {
return coreerr.E("sqlite.Delete", "path is required", os.ErrInvalid)
}
// Check if it's a directory with children
var isDir bool
err := m.db.QueryRow(
`SELECT is_dir FROM `+m.table+` WHERE path = ?`, key,
).Scan(&isDir)
if err == sql.ErrNoRows {
return coreerr.E("sqlite.Delete", "path not found: "+key, os.ErrNotExist)
}
if err != nil {
return coreerr.E("sqlite.Delete", "query failed: "+key, err)
}
if isDir {
// Check for children
prefix := key + "/"
var count int
err := m.db.QueryRow(
`SELECT COUNT(*) FROM `+m.table+` WHERE path LIKE ? AND path != ?`, prefix+"%", key,
).Scan(&count)
if err != nil {
return coreerr.E("sqlite.Delete", "count failed: "+key, err)
}
if count > 0 {
return coreerr.E("sqlite.Delete", "directory not empty: "+key, os.ErrExist)
}
}
res, err := m.db.Exec(`DELETE FROM `+m.table+` WHERE path = ?`, key)
if err != nil {
return coreerr.E("sqlite.Delete", "delete failed: "+key, err)
}
n, _ := res.RowsAffected()
if n == 0 {
return coreerr.E("sqlite.Delete", "path not found: "+key, os.ErrNotExist)
}
return nil
}
// DeleteAll removes a file or directory and all its contents recursively.
func (m *Medium) DeleteAll(p string) error {
key := cleanPath(p)
if key == "" {
return coreerr.E("sqlite.DeleteAll", "path is required", os.ErrInvalid)
}
prefix := key + "/"
// Delete the exact path and all children
res, err := m.db.Exec(
`DELETE FROM `+m.table+` WHERE path = ? OR path LIKE ?`,
key, prefix+"%",
)
if err != nil {
return coreerr.E("sqlite.DeleteAll", "delete failed: "+key, err)
}
n, _ := res.RowsAffected()
if n == 0 {
return coreerr.E("sqlite.DeleteAll", "path not found: "+key, os.ErrNotExist)
}
return nil
}
// Rename moves a file or directory from oldPath to newPath.
func (m *Medium) Rename(oldPath, newPath string) error {
oldKey := cleanPath(oldPath)
newKey := cleanPath(newPath)
if oldKey == "" || newKey == "" {
return coreerr.E("sqlite.Rename", "both old and new paths are required", os.ErrInvalid)
}
tx, err := m.db.Begin()
if err != nil {
return coreerr.E("sqlite.Rename", "begin tx failed", err)
}
defer tx.Rollback()
// Check if source exists
var content []byte
var mode int
var isDir bool
var mtime time.Time
err = tx.QueryRow(
`SELECT content, mode, is_dir, mtime FROM `+m.table+` WHERE path = ?`, oldKey,
).Scan(&content, &mode, &isDir, &mtime)
if err == sql.ErrNoRows {
return coreerr.E("sqlite.Rename", "source not found: "+oldKey, os.ErrNotExist)
}
if err != nil {
return coreerr.E("sqlite.Rename", "query failed: "+oldKey, err)
}
// Insert or replace at new path
_, err = tx.Exec(
`INSERT INTO `+m.table+` (path, content, mode, is_dir, mtime) VALUES (?, ?, ?, ?, ?)
ON CONFLICT(path) DO UPDATE SET content = excluded.content, mode = excluded.mode, is_dir = excluded.is_dir, mtime = excluded.mtime`,
newKey, content, mode, isDir, mtime,
)
if err != nil {
return coreerr.E("sqlite.Rename", "insert at new path failed: "+newKey, err)
}
// Delete old path
_, err = tx.Exec(`DELETE FROM `+m.table+` WHERE path = ?`, oldKey)
if err != nil {
return coreerr.E("sqlite.Rename", "delete old path failed: "+oldKey, err)
}
// If it's a directory, move all children
if isDir {
oldPrefix := oldKey + "/"
newPrefix := newKey + "/"
rows, err := tx.Query(
`SELECT path, content, mode, is_dir, mtime FROM `+m.table+` WHERE path LIKE ?`,
oldPrefix+"%",
)
if err != nil {
return coreerr.E("sqlite.Rename", "query children failed", err)
}
type child struct {
@@ -299,50 +313,52 @@ func (medium *Medium) Rename(oldPath, newPath string) error {
mtime time.Time
}
var children []child
for rows.Next() {
var c child
if err := rows.Scan(&c.path, &c.content, &c.mode, &c.isDir, &c.mtime); err != nil {
rows.Close()
return coreerr.E("sqlite.Rename", "scan child failed", err)
}
children = append(children, c)
}
rows.Close()
for _, c := range children {
newChildPath := newPrefix + strings.TrimPrefix(c.path, oldPrefix)
_, err = tx.Exec(
`INSERT INTO `+m.table+` (path, content, mode, is_dir, mtime) VALUES (?, ?, ?, ?, ?)
ON CONFLICT(path) DO UPDATE SET content = excluded.content, mode = excluded.mode, is_dir = excluded.is_dir, mtime = excluded.mtime`,
newChildPath, c.content, c.mode, c.isDir, c.mtime,
)
if err != nil {
return coreerr.E("sqlite.Rename", "insert child failed", err)
}
}
// Delete old children
_, err = tx.Exec(`DELETE FROM `+m.table+` WHERE path LIKE ?`, oldPrefix+"%")
if err != nil {
return coreerr.E("sqlite.Rename", "delete old children failed", err)
}
}
return tx.Commit()
}
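The child-move loop in Rename reduces to a prefix rewrite on stored keys. A minimal standalone sketch of that step, with a hypothetical remapChild helper (not part of this package):

```go
package main

import (
	"fmt"
	"strings"
)

// remapChild rewrites a child key from under oldDir to under newDir,
// mirroring the newPrefix + strings.TrimPrefix(...) step in Rename.
func remapChild(oldDir, newDir, childPath string) string {
	oldPrefix := oldDir + "/"
	newPrefix := newDir + "/"
	return newPrefix + strings.TrimPrefix(childPath, oldPrefix)
}

func main() {
	fmt.Println(remapChild("olddir", "newdir", "olddir/sub/file.txt"))
	// → newdir/sub/file.txt
}
```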
// List returns the directory entries for the given path.
func (m *Medium) List(p string) ([]fs.DirEntry, error) {
prefix := cleanPath(p)
if prefix != "" {
prefix += "/"
}
// Query all paths under the prefix
rows, err := m.db.Query(
`SELECT path, content, mode, is_dir, mtime FROM `+m.table+` WHERE path LIKE ? OR path LIKE ?`,
prefix+"%", prefix+"%",
)
if err != nil {
return nil, coreerr.E("sqlite.List", "query failed", err)
}
defer rows.Close()
@@ -356,17 +372,18 @@ func (medium *Medium) List(filePath string) ([]fs.DirEntry, error) {
var isDir bool
var mtime time.Time
if err := rows.Scan(&rowPath, &content, &mode, &isDir, &mtime); err != nil {
return nil, coreerr.E("sqlite.List", "scan failed", err)
}
rest := strings.TrimPrefix(rowPath, prefix)
if rest == "" {
continue
}
// Check if this is a direct child or nested
if idx := strings.Index(rest, "/"); idx >= 0 {
// Nested - register as a directory
dirName := rest[:idx]
if !seen[dirName] {
seen[dirName] = true
entries = append(entries, &dirEntry{
@@ -381,6 +398,7 @@
})
}
} else {
// Direct child
if !seen[rest] {
seen[rest] = true
entries = append(entries, &dirEntry{
@@ -399,31 +417,28 @@
}
}
return entries, rows.Err()
}
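List's grouping logic can be isolated: after stripping the directory prefix, a remaining "/" means the row is nested, and only its first path segment is reported, as a child directory. A standalone sketch with a hypothetical firstSegment helper (not part of this package):

```go
package main

import (
	"fmt"
	"strings"
)

// firstSegment mirrors List's grouping: strip the directory prefix, then
// report either the direct child name or the first nested segment as a dir.
func firstSegment(prefix, rowPath string) (name string, isDir bool) {
	rest := strings.TrimPrefix(rowPath, prefix)
	if idx := strings.Index(rest, "/"); idx >= 0 {
		return rest[:idx], true
	}
	return rest, false
}

func main() {
	fmt.Println(firstSegment("config/", "config/app.yaml"))    // → app.yaml false
	fmt.Println(firstSegment("config/", "config/sub/deep.txt")) // → sub true
}
```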
// Stat returns file information for the given path.
func (m *Medium) Stat(p string) (fs.FileInfo, error) {
key := cleanPath(p)
if key == "" {
return nil, coreerr.E("sqlite.Stat", "path is required", os.ErrInvalid)
}
var content []byte
var mode int
var isDir bool
var mtime time.Time
err := m.db.QueryRow(
`SELECT content, mode, is_dir, mtime FROM `+m.table+` WHERE path = ?`, key,
).Scan(&content, &mode, &isDir, &mtime)
if err == sql.ErrNoRows {
return nil, coreerr.E("sqlite.Stat", "path not found: "+key, os.ErrNotExist)
}
if err != nil {
return nil, coreerr.E("sqlite.Stat", "query failed: "+key, err)
}
name := path.Base(key)
@@ -436,28 +451,28 @@
}, nil
}
// Open opens the named file for reading.
func (m *Medium) Open(p string) (fs.File, error) {
key := cleanPath(p)
if key == "" {
return nil, coreerr.E("sqlite.Open", "path is required", os.ErrInvalid)
}
var content []byte
var mode int
var isDir bool
var mtime time.Time
err := m.db.QueryRow(
`SELECT content, mode, is_dir, mtime FROM `+m.table+` WHERE path = ?`, key,
).Scan(&content, &mode, &isDir, &mtime)
if err == sql.ErrNoRows {
return nil, coreerr.E("sqlite.Open", "file not found: "+key, os.ErrNotExist)
}
if err != nil {
return nil, coreerr.E("sqlite.Open", "query failed: "+key, err)
}
if isDir {
return nil, coreerr.E("sqlite.Open", "path is a directory: "+key, os.ErrInvalid)
}
return &sqliteFile{
@@ -468,80 +483,81 @@
}, nil
}
// Create creates or truncates the named file.
func (m *Medium) Create(p string) (goio.WriteCloser, error) {
key := cleanPath(p)
if key == "" {
return nil, coreerr.E("sqlite.Create", "path is required", os.ErrInvalid)
}
return &sqliteWriteCloser{
medium: m,
path: key,
}, nil
}
// Append opens the named file for appending, creating it if it doesn't exist.
func (m *Medium) Append(p string) (goio.WriteCloser, error) {
key := cleanPath(p)
if key == "" {
return nil, coreerr.E("sqlite.Append", "path is required", os.ErrInvalid)
}
var existing []byte
err := m.db.QueryRow(
`SELECT content FROM `+m.table+` WHERE path = ? AND is_dir = FALSE`, key,
).Scan(&existing)
if err != nil && err != sql.ErrNoRows {
return nil, coreerr.E("sqlite.Append", "query failed: "+key, err)
}
return &sqliteWriteCloser{
medium: m,
path: key,
data: existing,
}, nil
}
// ReadStream returns a reader for the file content.
func (m *Medium) ReadStream(p string) (goio.ReadCloser, error) {
key := cleanPath(p)
if key == "" {
return nil, coreerr.E("sqlite.ReadStream", "path is required", os.ErrInvalid)
}
var content []byte
var isDir bool
err := m.db.QueryRow(
`SELECT content, is_dir FROM `+m.table+` WHERE path = ?`, key,
).Scan(&content, &isDir)
if err == sql.ErrNoRows {
return nil, coreerr.E("sqlite.ReadStream", "file not found: "+key, os.ErrNotExist)
}
if err != nil {
return nil, coreerr.E("sqlite.ReadStream", "query failed: "+key, err)
}
if isDir {
return nil, coreerr.E("sqlite.ReadStream", "path is a directory: "+key, os.ErrInvalid)
}
return goio.NopCloser(bytes.NewReader(content)), nil
}
// WriteStream returns a writer for the file content. Content is stored on Close.
func (m *Medium) WriteStream(p string) (goio.WriteCloser, error) {
return m.Create(p)
}
// Exists checks if a path exists (file or directory).
func (m *Medium) Exists(p string) bool {
key := cleanPath(p)
if key == "" {
// Root always exists
return true
}
var count int
err := m.db.QueryRow(
`SELECT COUNT(*) FROM `+m.table+` WHERE path = ?`, key,
).Scan(&count)
if err != nil {
return false
@@ -549,16 +565,16 @@ func (medium *Medium) Exists(filePath string) bool {
return count > 0
}
// IsDir checks if a path exists and is a directory.
func (m *Medium) IsDir(p string) bool {
key := cleanPath(p)
if key == "" {
return false
}
var isDir bool
err := m.db.QueryRow(
`SELECT is_dir FROM `+m.table+` WHERE path = ?`, key,
).Scan(&isDir)
if err != nil {
return false
@@ -566,6 +582,9 @@ func (medium *Medium) IsDir(filePath string) bool {
return isDir
}
// fileInfo implements fs.FileInfo for SQLite entries.
type fileInfo struct {
name string
size int64
@@ -574,18 +593,14 @@ type fileInfo struct {
isDir bool
}
func (fi *fileInfo) Name() string { return fi.name }
func (fi *fileInfo) Size() int64 { return fi.size }
func (fi *fileInfo) Mode() fs.FileMode { return fi.mode }
func (fi *fileInfo) ModTime() time.Time { return fi.modTime }
func (fi *fileInfo) IsDir() bool { return fi.isDir }
func (fi *fileInfo) Sys() any { return nil }
// dirEntry implements fs.DirEntry for SQLite listings.
type dirEntry struct {
name string
isDir bool
@@ -593,14 +608,12 @@ type dirEntry struct {
info fs.FileInfo
}
func (de *dirEntry) Name() string { return de.name }
func (de *dirEntry) IsDir() bool { return de.isDir }
func (de *dirEntry) Type() fs.FileMode { return de.mode.Type() }
func (de *dirEntry) Info() (fs.FileInfo, error) { return de.info, nil }
// sqliteFile implements fs.File for SQLite entries.
type sqliteFile struct {
name string
content []byte
@@ -609,47 +622,48 @@ type sqliteFile struct {
modTime time.Time
}
func (f *sqliteFile) Stat() (fs.FileInfo, error) {
return &fileInfo{
name: f.name,
size: int64(len(f.content)),
mode: f.mode,
modTime: f.modTime,
}, nil
}
func (f *sqliteFile) Read(b []byte) (int, error) {
if f.offset >= int64(len(f.content)) {
return 0, goio.EOF
}
n := copy(b, f.content[f.offset:])
f.offset += int64(n)
return n, nil
}
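The offset-tracking pattern in sqliteFile.Read can be exercised on its own: each call copies from the in-memory content at the current offset, then advances it until the content is drained. A standalone sketch with a hypothetical readAll driver (not part of this package):

```go
package main

import "fmt"

// readAll drains content the way sqliteFile.Read does: each step copies
// into a fixed-size buffer at the current offset, then advances the offset.
func readAll(content []byte, bufSize int) string {
	var offset int64
	buf := make([]byte, bufSize)
	var out []byte
	for offset < int64(len(content)) {
		n := copy(buf, content[offset:])
		offset += int64(n)
		out = append(out, buf[:n]...)
	}
	return string(out)
}

func main() {
	fmt.Println(readAll([]byte("hello world"), 4))
	// → hello world
}
```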
func (f *sqliteFile) Close() error {
return nil
}
// sqliteWriteCloser buffers writes and stores to SQLite on Close.
type sqliteWriteCloser struct {
medium *Medium
path string
data []byte
}
func (w *sqliteWriteCloser) Write(p []byte) (int, error) {
w.data = append(w.data, p...)
return len(p), nil
}
func (w *sqliteWriteCloser) Close() error {
_, err := w.medium.db.Exec(
`INSERT INTO `+w.medium.table+` (path, content, mode, is_dir, mtime) VALUES (?, ?, 420, FALSE, ?)
ON CONFLICT(path) DO UPDATE SET content = excluded.content, is_dir = FALSE, mtime = excluded.mtime`,
w.path, w.data, time.Now().UTC(),
)
if err != nil {
return coreerr.E("sqlite.WriteCloser.Close", "store failed: "+w.path, err)
}
return nil
}


@@ -3,287 +3,317 @@ package sqlite
import (
goio "io"
"io/fs"
"strings"
"testing"
core "dappco.re/go/core"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func newTestMedium(t *testing.T) *Medium {
t.Helper()
m, err := New(":memory:")
require.NoError(t, err)
t.Cleanup(func() { m.Close() })
return m
}
// --- Constructor Tests ---
func TestNew_Good(t *testing.T) {
m, err := New(":memory:")
require.NoError(t, err)
defer m.Close()
assert.Equal(t, "files", m.table)
}
func TestNew_Good_WithTable(t *testing.T) {
m, err := New(":memory:", WithTable("custom"))
require.NoError(t, err)
defer m.Close()
assert.Equal(t, "custom", m.table)
}
func TestNew_Bad_EmptyPath(t *testing.T) {
_, err := New("")
assert.Error(t, err)
assert.Contains(t, err.Error(), "database path is required")
}
// --- Read/Write Tests ---
func TestReadWrite_Good(t *testing.T) {
m := newTestMedium(t)
err := m.Write("hello.txt", "world")
require.NoError(t, err)
content, err := m.Read("hello.txt")
require.NoError(t, err)
assert.Equal(t, "world", content)
}
func TestReadWrite_Good_Overwrite(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.Write("file.txt", "first"))
require.NoError(t, m.Write("file.txt", "second"))
content, err := m.Read("file.txt")
require.NoError(t, err)
assert.Equal(t, "second", content)
}
func TestReadWrite_Good_NestedPath(t *testing.T) {
m := newTestMedium(t)
err := m.Write("a/b/c.txt", "nested")
require.NoError(t, err)
content, err := m.Read("a/b/c.txt")
require.NoError(t, err)
assert.Equal(t, "nested", content)
}
func TestRead_Bad_NotFound(t *testing.T) {
m := newTestMedium(t)
_, err := m.Read("nonexistent.txt")
assert.Error(t, err)
}
func TestRead_Bad_EmptyPath(t *testing.T) {
m := newTestMedium(t)
_, err := m.Read("")
assert.Error(t, err)
}
func TestWrite_Bad_EmptyPath(t *testing.T) {
m := newTestMedium(t)
err := m.Write("", "content")
assert.Error(t, err)
}
func TestRead_Bad_IsDirectory(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.EnsureDir("mydir"))
_, err := m.Read("mydir")
assert.Error(t, err)
}
// --- EnsureDir Tests ---
func TestEnsureDir_Good(t *testing.T) {
m := newTestMedium(t)
err := m.EnsureDir("mydir")
require.NoError(t, err)
assert.True(t, m.IsDir("mydir"))
}
func TestEnsureDir_Good_EmptyPath(t *testing.T) {
m := newTestMedium(t)
// Root always exists, no-op
err := m.EnsureDir("")
assert.NoError(t, err)
}
func TestEnsureDir_Good_Idempotent(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.EnsureDir("mydir"))
require.NoError(t, m.EnsureDir("mydir"))
assert.True(t, m.IsDir("mydir"))
}
// --- IsFile Tests ---
func TestIsFile_Good(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.Write("file.txt", "content"))
require.NoError(t, m.EnsureDir("mydir"))
assert.True(t, m.IsFile("file.txt"))
assert.False(t, m.IsFile("mydir"))
assert.False(t, m.IsFile("nonexistent"))
assert.False(t, m.IsFile(""))
}
// --- FileGet/FileSet Tests ---
func TestFileGetFileSet_Good(t *testing.T) {
m := newTestMedium(t)
err := m.FileSet("key.txt", "value")
require.NoError(t, err)
val, err := m.FileGet("key.txt")
require.NoError(t, err)
assert.Equal(t, "value", val)
}
// --- Delete Tests ---
func TestDelete_Good(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.Write("to-delete.txt", "content"))
assert.True(t, m.Exists("to-delete.txt"))
err := m.Delete("to-delete.txt")
require.NoError(t, err)
assert.False(t, m.Exists("to-delete.txt"))
}
func TestDelete_Good_EmptyDir(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.EnsureDir("emptydir"))
assert.True(t, m.IsDir("emptydir"))
err := m.Delete("emptydir")
require.NoError(t, err)
assert.False(t, m.IsDir("emptydir"))
}
func TestDelete_Bad_NotFound(t *testing.T) {
m := newTestMedium(t)
err := m.Delete("nonexistent")
assert.Error(t, err)
}
func TestDelete_Bad_EmptyPath(t *testing.T) {
m := newTestMedium(t)
err := m.Delete("")
assert.Error(t, err)
}
func TestDelete_Bad_NotEmpty(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.EnsureDir("mydir"))
require.NoError(t, m.Write("mydir/file.txt", "content"))
err := m.Delete("mydir")
assert.Error(t, err)
}
// --- DeleteAll Tests ---
func TestDeleteAll_Good(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.Write("dir/file1.txt", "a"))
require.NoError(t, m.Write("dir/sub/file2.txt", "b"))
require.NoError(t, m.Write("other.txt", "c"))
err := m.DeleteAll("dir")
require.NoError(t, err)
assert.False(t, m.Exists("dir/file1.txt"))
assert.False(t, m.Exists("dir/sub/file2.txt"))
assert.True(t, m.Exists("other.txt"))
}
func TestDeleteAll_Good_SingleFile(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.Write("file.txt", "content"))
err := m.DeleteAll("file.txt")
require.NoError(t, err)
assert.False(t, m.Exists("file.txt"))
}
func TestDeleteAll_Bad_NotFound(t *testing.T) {
m := newTestMedium(t)
err := m.DeleteAll("nonexistent")
assert.Error(t, err)
}
func TestDeleteAll_Bad_EmptyPath(t *testing.T) {
m := newTestMedium(t)
err := m.DeleteAll("")
assert.Error(t, err)
}
// --- Rename Tests ---
func TestRename_Good(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.Write("old.txt", "content"))
err := m.Rename("old.txt", "new.txt")
require.NoError(t, err)
assert.False(t, m.Exists("old.txt"))
assert.True(t, m.IsFile("new.txt"))
content, err := m.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "content", content)
}
func TestRename_Good_Directory(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.EnsureDir("olddir"))
require.NoError(t, m.Write("olddir/file.txt", "content"))
err := m.Rename("olddir", "newdir")
require.NoError(t, err)
assert.False(t, m.Exists("olddir"))
assert.False(t, m.Exists("olddir/file.txt"))
assert.True(t, m.IsDir("newdir"))
assert.True(t, m.IsFile("newdir/file.txt"))
content, err := m.Read("newdir/file.txt")
require.NoError(t, err)
assert.Equal(t, "content", content)
}
func TestRename_Bad_SourceNotFound(t *testing.T) {
m := newTestMedium(t)
err := m.Rename("nonexistent", "new")
assert.Error(t, err)
}
func TestRename_Bad_EmptyPath(t *testing.T) {
m := newTestMedium(t)
err := m.Rename("", "new")
assert.Error(t, err)
err = m.Rename("old", "")
assert.Error(t, err)
}
// --- List Tests ---
func TestList_Good(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.Write("dir/file1.txt", "a"))
require.NoError(t, m.Write("dir/file2.txt", "b"))
require.NoError(t, m.Write("dir/sub/file3.txt", "c"))
entries, err := m.List("dir")
require.NoError(t, err)
names := make(map[string]bool)
for _, e := range entries {
names[e.Name()] = true
}
assert.True(t, names["file1.txt"])
@@ -292,30 +322,30 @@ func TestSqlite_List_Good(t *testing.T) {
assert.Len(t, entries, 3)
}
func TestList_Good_Root(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.Write("root.txt", "content"))
require.NoError(t, m.Write("dir/nested.txt", "nested"))
entries, err := m.List("")
require.NoError(t, err)
names := make(map[string]bool)
for _, e := range entries {
names[e.Name()] = true
}
assert.True(t, names["root.txt"])
assert.True(t, names["dir"])
}
func TestList_Good_DirectoryEntry(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.Write("dir/sub/file.txt", "content"))
entries, err := m.List("dir")
require.NoError(t, err)
require.Len(t, entries, 1)
@@ -327,162 +357,172 @@ func TestSqlite_List_DirectoryEntry_Good(t *testing.T) {
assert.True(t, info.IsDir())
}
// --- Stat Tests ---
func TestStat_Good(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.Write("file.txt", "hello world"))
info, err := m.Stat("file.txt")
require.NoError(t, err)
assert.Equal(t, "file.txt", info.Name())
assert.Equal(t, int64(11), info.Size())
assert.False(t, info.IsDir())
}
func TestStat_Good_Directory(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.EnsureDir("mydir"))
info, err := m.Stat("mydir")
require.NoError(t, err)
assert.Equal(t, "mydir", info.Name())
assert.True(t, info.IsDir())
}
func TestStat_Bad_NotFound(t *testing.T) {
m := newTestMedium(t)
_, err := m.Stat("nonexistent")
assert.Error(t, err)
}
func TestStat_Bad_EmptyPath(t *testing.T) {
m := newTestMedium(t)
_, err := m.Stat("")
assert.Error(t, err)
}
// --- Open Tests ---
func TestOpen_Good(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.Write("file.txt", "open me"))
f, err := m.Open("file.txt")
require.NoError(t, err)
defer f.Close()
data, err := goio.ReadAll(f.(goio.Reader))
require.NoError(t, err)
assert.Equal(t, "open me", string(data))
stat, err := f.Stat()
require.NoError(t, err)
assert.Equal(t, "file.txt", stat.Name())
}
func TestOpen_Bad_NotFound(t *testing.T) {
m := newTestMedium(t)
_, err := m.Open("nonexistent.txt")
assert.Error(t, err)
}
func TestOpen_Bad_IsDirectory(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.EnsureDir("mydir"))
_, err := m.Open("mydir")
assert.Error(t, err)
}
// --- Create Tests ---
func TestCreate_Good(t *testing.T) {
m := newTestMedium(t)
w, err := m.Create("new.txt")
require.NoError(t, err)
n, err := w.Write([]byte("created"))
require.NoError(t, err)
assert.Equal(t, 7, n)
err = w.Close()
require.NoError(t, err)
content, err := m.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "created", content)
}
func TestCreate_Good_Overwrite(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.Write("file.txt", "old content"))
w, err := m.Create("file.txt")
require.NoError(t, err)
_, err = w.Write([]byte("new"))
require.NoError(t, err)
require.NoError(t, w.Close())
content, err := m.Read("file.txt")
require.NoError(t, err)
assert.Equal(t, "new", content)
}
func TestCreate_Bad_EmptyPath(t *testing.T) {
m := newTestMedium(t)
_, err := m.Create("")
assert.Error(t, err)
}
// --- Append Tests ---
func TestAppend_Good(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.Write("append.txt", "hello"))
w, err := m.Append("append.txt")
require.NoError(t, err)
_, err = w.Write([]byte(" world"))
require.NoError(t, err)
require.NoError(t, w.Close())
content, err := m.Read("append.txt")
require.NoError(t, err)
assert.Equal(t, "hello world", content)
}
func TestAppend_Good_NewFile(t *testing.T) {
m := newTestMedium(t)
w, err := m.Append("new.txt")
require.NoError(t, err)
_, err = w.Write([]byte("fresh"))
require.NoError(t, err)
require.NoError(t, w.Close())
content, err := m.Read("new.txt")
require.NoError(t, err)
assert.Equal(t, "fresh", content)
}
func TestAppend_Bad_EmptyPath(t *testing.T) {
m := newTestMedium(t)
_, err := m.Append("")
assert.Error(t, err)
}
// --- ReadStream Tests ---
func TestReadStream_Good(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.Write("stream.txt", "streaming content"))
reader, err := m.ReadStream("stream.txt")
require.NoError(t, err)
defer reader.Close()
@@ -491,84 +531,98 @@ func TestSqlite_ReadStream_Good(t *testing.T) {
assert.Equal(t, "streaming content", string(data))
}
func TestReadStream_Bad_NotFound(t *testing.T) {
m := newTestMedium(t)
_, err := m.ReadStream("nonexistent.txt")
assert.Error(t, err)
}
func TestReadStream_Bad_IsDirectory(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.EnsureDir("mydir"))
_, err := m.ReadStream("mydir")
assert.Error(t, err)
}
// --- WriteStream Tests ---
func TestWriteStream_Good(t *testing.T) {
m := newTestMedium(t)
writer, err := m.WriteStream("output.txt")
require.NoError(t, err)
_, err = goio.Copy(writer, strings.NewReader("piped data"))
require.NoError(t, err)
require.NoError(t, writer.Close())
content, err := m.Read("output.txt")
require.NoError(t, err)
assert.Equal(t, "piped data", content)
}
// --- Exists Tests ---
func TestExists_Good(t *testing.T) {
m := newTestMedium(t)
assert.False(t, m.Exists("nonexistent"))
require.NoError(t, m.Write("file.txt", "content"))
assert.True(t, m.Exists("file.txt"))
require.NoError(t, m.EnsureDir("mydir"))
assert.True(t, m.Exists("mydir"))
}
func TestExists_Good_EmptyPath(t *testing.T) {
m := newTestMedium(t)
// Root always exists
assert.True(t, m.Exists(""))
}
// --- IsDir Tests ---
func TestIsDir_Good(t *testing.T) {
m := newTestMedium(t)
require.NoError(t, m.Write("file.txt", "content"))
require.NoError(t, m.EnsureDir("mydir"))
assert.True(t, m.IsDir("mydir"))
assert.False(t, m.IsDir("file.txt"))
assert.False(t, m.IsDir("nonexistent"))
assert.False(t, m.IsDir(""))
}
// --- cleanPath Tests ---
func TestCleanPath_Good(t *testing.T) {
assert.Equal(t, "file.txt", cleanPath("file.txt"))
assert.Equal(t, "dir/file.txt", cleanPath("dir/file.txt"))
assert.Equal(t, "file.txt", cleanPath("/file.txt"))
assert.Equal(t, "file.txt", cleanPath("../file.txt"))
assert.Equal(t, "file.txt", cleanPath("dir/../file.txt"))
assert.Equal(t, "", cleanPath(""))
assert.Equal(t, "", cleanPath("."))
assert.Equal(t, "", cleanPath("/"))
}
// --- Interface Compliance ---
func TestInterfaceCompliance_Ugly(t *testing.T) {
m := newTestMedium(t)
// Verify all methods exist by asserting the interface shape.
var _ interface {
Read(string) (string, error)
Write(string, string) error
EnsureDir(string) error
IsFile(string) bool
FileGet(string) (string, error)
FileSet(string, string) error
Delete(string) error
DeleteAll(string) error
Rename(string, string) error
@@ -581,17 +635,19 @@ func TestSqlite_InterfaceCompliance_Good(t *testing.T) {
WriteStream(string) (goio.WriteCloser, error)
Exists(string) bool
IsDir(string) bool
} = m
}
// --- Custom Table ---
func TestCustomTable_Good(t *testing.T) {
m, err := New(":memory:", WithTable("my_files"))
require.NoError(t, err)
defer m.Close()
require.NoError(t, m.Write("file.txt", "content"))
content, err := m.Read("file.txt")
require.NoError(t, err)
assert.Equal(t, "content", content)
}

View file

@@ -1,5 +0,0 @@
// Example: keyValueStore, _ := store.New(store.Options{Path: ":memory:"})
// Example: _ = keyValueStore.Set("app", "theme", "midnight")
// Example: medium := keyValueStore.AsMedium()
// Example: _ = medium.Write("app/theme", "midnight")
package store

View file

@@ -3,348 +3,348 @@ package store
import (
goio "io"
"io/fs"
"os"
"path"
"strings"
"time"
coreio "dappco.re/go/core/io"
coreerr "dappco.re/go/core/log"
)
// Medium wraps a Store to satisfy the io.Medium interface.
// Paths are mapped as group/key — first segment is the group,
// the rest is the key. List("") returns groups as directories,
// List("group") returns keys as files.
type Medium struct {
s *Store
}
var _ coreio.Medium = (*Medium)(nil)
// NewMedium creates an io.Medium backed by a KV store at the given SQLite path.
func NewMedium(dbPath string) (*Medium, error) {
s, err := New(dbPath)
if err != nil {
return nil, err
}
return &Medium{s: s}, nil
}
// AsMedium returns a Medium adapter for an existing Store.
func (s *Store) AsMedium() *Medium {
return &Medium{s: s}
}
// Store returns the underlying KV store for direct access.
func (m *Medium) Store() *Store {
return m.s
}
// Close closes the underlying store.
func (m *Medium) Close() error {
return m.s.Close()
}
// splitPath splits a medium-style path into group and key.
// First segment = group, remainder = key.
func splitPath(p string) (group, key string) {
clean := path.Clean(p)
clean = strings.TrimPrefix(clean, "/")
if clean == "" || clean == "." {
return "", ""
}
parts := strings.SplitN(clean, "/", 2)
if len(parts) == 1 {
return parts[0], ""
}
return parts[0], parts[1]
}
// Read retrieves the value at group/key.
func (m *Medium) Read(p string) (string, error) {
group, key := splitPath(p)
if key == "" {
return "", coreerr.E("store.Read", "path must include group/key", os.ErrInvalid)
}
return m.s.Get(group, key)
}
// Write stores a value at group/key.
func (m *Medium) Write(p, content string) error {
group, key := splitPath(p)
if key == "" {
return coreerr.E("store.Write", "path must include group/key", os.ErrInvalid)
}
return m.s.Set(group, key, content)
}
// EnsureDir is a no-op — groups are created implicitly on Set.
func (m *Medium) EnsureDir(_ string) error {
return nil
}
// IsFile returns true if a group/key pair exists.
func (m *Medium) IsFile(p string) bool {
group, key := splitPath(p)
if key == "" {
return false
}
_, err := m.s.Get(group, key)
return err == nil
}
// FileGet is an alias for Read.
func (m *Medium) FileGet(p string) (string, error) {
return m.Read(p)
}
// FileSet is an alias for Write.
func (m *Medium) FileSet(p, content string) error {
return m.Write(p, content)
}
// Delete removes a key, or checks that a group is empty.
func (m *Medium) Delete(p string) error {
group, key := splitPath(p)
if group == "" {
return coreerr.E("store.Delete", "path is required", os.ErrInvalid)
}
if key == "" {
n, err := m.s.Count(group)
if err != nil {
return err
}
if n > 0 {
return coreerr.E("store.Delete", "group not empty: "+group, os.ErrExist)
}
return nil
}
return m.s.Delete(group, key)
}
// DeleteAll removes a key, or all keys in a group.
func (m *Medium) DeleteAll(p string) error {
group, key := splitPath(p)
if group == "" {
return coreerr.E("store.DeleteAll", "path is required", os.ErrInvalid)
}
if key == "" {
return m.s.DeleteGroup(group)
}
return m.s.Delete(group, key)
}
// Rename moves a key from one path to another.
func (m *Medium) Rename(oldPath, newPath string) error {
og, ok := splitPath(oldPath)
ng, nk := splitPath(newPath)
if ok == "" || nk == "" {
return coreerr.E("store.Rename", "both paths must include group/key", os.ErrInvalid)
}
val, err := m.s.Get(og, ok)
if err != nil {
return err
}
if err := m.s.Set(ng, nk, val); err != nil {
return err
}
return m.s.Delete(og, ok)
}
// List returns directory entries. Empty path returns groups.
// A group path returns keys in that group.
func (m *Medium) List(p string) ([]fs.DirEntry, error) {
group, key := splitPath(p)
if group == "" {
rows, err := m.s.db.Query("SELECT DISTINCT grp FROM kv ORDER BY grp")
if err != nil {
return nil, coreerr.E("store.List", "query groups", err)
}
defer rows.Close()
var entries []fs.DirEntry
for rows.Next() {
var g string
if err := rows.Scan(&g); err != nil {
return nil, coreerr.E("store.List", "scan", err)
}
entries = append(entries, &kvDirEntry{name: g, isDir: true})
}
return entries, rows.Err()
}
if key != "" {
return nil, nil // leaf node, nothing beneath
}
all, err := m.s.GetAll(group)
if err != nil {
return nil, err
}
var entries []fs.DirEntry
for k, v := range all {
entries = append(entries, &kvDirEntry{name: k, size: int64(len(v))})
}
return entries, nil
}
// Stat returns file info for a group (dir) or key (file).
func (m *Medium) Stat(p string) (fs.FileInfo, error) {
group, key := splitPath(p)
if group == "" {
return nil, coreerr.E("store.Stat", "path is required", os.ErrInvalid)
}
if key == "" {
n, err := m.s.Count(group)
if err != nil {
return nil, err
}
if n == 0 {
return nil, coreerr.E("store.Stat", "group not found: "+group, os.ErrNotExist)
}
return &kvFileInfo{name: group, isDir: true}, nil
}
val, err := m.s.Get(group, key)
if err != nil {
return nil, err
}
return &kvFileInfo{name: key, size: int64(len(val))}, nil
}
// Open opens a key for reading.
func (m *Medium) Open(p string) (fs.File, error) {
group, key := splitPath(p)
if key == "" {
return nil, coreerr.E("store.Open", "path must include group/key", os.ErrInvalid)
}
val, err := m.s.Get(group, key)
if err != nil {
return nil, err
}
return &kvFile{name: key, content: []byte(val)}, nil
}
// Create creates or truncates a key. Content is stored on Close.
func (m *Medium) Create(p string) (goio.WriteCloser, error) {
group, key := splitPath(p)
if key == "" {
return nil, coreerr.E("store.Create", "path must include group/key", os.ErrInvalid)
}
return &kvWriteCloser{s: m.s, group: group, key: key}, nil
}
// Append opens a key for appending. Content is stored on Close.
func (m *Medium) Append(p string) (goio.WriteCloser, error) {
group, key := splitPath(p)
if key == "" {
return nil, coreerr.E("store.Append", "path must include group/key", os.ErrInvalid)
}
existing, _ := m.s.Get(group, key)
return &kvWriteCloser{s: m.s, group: group, key: key, data: []byte(existing)}, nil
}
// ReadStream returns a reader for the value.
func (m *Medium) ReadStream(p string) (goio.ReadCloser, error) {
group, key := splitPath(p)
if key == "" {
return nil, coreerr.E("store.ReadStream", "path must include group/key", os.ErrInvalid)
}
val, err := m.s.Get(group, key)
if err != nil {
return nil, err
}
return goio.NopCloser(strings.NewReader(val)), nil
}
// WriteStream returns a writer. Content is stored on Close.
func (m *Medium) WriteStream(p string) (goio.WriteCloser, error) {
return m.Create(p)
}
// Exists returns true if a group or key exists.
func (m *Medium) Exists(p string) bool {
group, key := splitPath(p)
if group == "" {
return false
}
if key == "" {
n, err := m.s.Count(group)
return err == nil && n > 0
}
_, err := m.s.Get(group, key)
return err == nil
}
// IsDir returns true if the path is a group with entries.
func (m *Medium) IsDir(p string) bool {
group, key := splitPath(p)
if key != "" || group == "" {
return false
}
n, err := m.s.Count(group)
return err == nil && n > 0
}
// --- fs helper types ---
type kvFileInfo struct {
name string
size int64
isDir bool
}
func (fi *kvFileInfo) Name() string { return fi.name }
func (fi *kvFileInfo) Size() int64 { return fi.size }
func (fi *kvFileInfo) Mode() fs.FileMode { if fi.isDir { return fs.ModeDir | 0755 }; return 0644 }
func (fi *kvFileInfo) ModTime() time.Time { return time.Time{} }
func (fi *kvFileInfo) IsDir() bool { return fi.isDir }
func (fi *kvFileInfo) Sys() any { return nil }
type kvDirEntry struct {
name string
isDir bool
size int64
}
func (de *kvDirEntry) Name() string { return de.name }
func (de *kvDirEntry) IsDir() bool { return de.isDir }
func (de *kvDirEntry) Type() fs.FileMode { if de.isDir { return fs.ModeDir }; return 0 }
func (de *kvDirEntry) Info() (fs.FileInfo, error) {
return &kvFileInfo{name: de.name, size: de.size, isDir: de.isDir}, nil
}
type kvFile struct {
name string
content []byte
offset int64
}
func (f *kvFile) Stat() (fs.FileInfo, error) {
return &kvFileInfo{name: f.name, size: int64(len(f.content))}, nil
}
func (f *kvFile) Read(b []byte) (int, error) {
if f.offset >= int64(len(f.content)) {
return 0, goio.EOF
}
readCount := copy(buffer, file.content[file.offset:])
file.offset += int64(readCount)
return readCount, nil
n := copy(b, f.content[f.offset:])
f.offset += int64(n)
return n, nil
}
func (file *keyValueFile) Close() error { return nil }
func (f *kvFile) Close() error { return nil }
type keyValueWriteCloser struct {
keyValueStore *KeyValueStore
type kvWriteCloser struct {
s *Store
group string
key string
data []byte
}
func (writer *keyValueWriteCloser) Write(data []byte) (int, error) {
writer.data = append(writer.data, data...)
return len(data), nil
func (w *kvWriteCloser) Write(p []byte) (int, error) {
w.data = append(w.data, p...)
return len(p), nil
}
func (writer *keyValueWriteCloser) Close() error {
return writer.keyValueStore.Set(writer.group, writer.key, string(writer.data))
func (w *kvWriteCloser) Close() error {
return w.s.Set(w.group, w.key, string(w.data))
}


@@ -2,256 +2,201 @@ package store
import (
	"io"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func newTestMedium(t *testing.T) *Medium {
	t.Helper()
	m, err := NewMedium(":memory:")
	require.NoError(t, err)
	t.Cleanup(func() { m.Close() })
	return m
}

func TestMedium_ReadWrite_Good(t *testing.T) {
	m := newTestMedium(t)
	err := m.Write("config/theme", "dark")
	require.NoError(t, err)
	val, err := m.Read("config/theme")
	require.NoError(t, err)
	assert.Equal(t, "dark", val)
}

func TestMedium_Read_Bad_NoKey(t *testing.T) {
	m := newTestMedium(t)
	_, err := m.Read("config")
	assert.Error(t, err)
}

func TestMedium_Read_Bad_NotFound(t *testing.T) {
	m := newTestMedium(t)
	_, err := m.Read("config/missing")
	assert.Error(t, err)
}

func TestMedium_IsFile_Good(t *testing.T) {
	m := newTestMedium(t)
	_ = m.Write("grp/key", "val")
	assert.True(t, m.IsFile("grp/key"))
	assert.False(t, m.IsFile("grp/nope"))
	assert.False(t, m.IsFile("grp"))
}

func TestMedium_Delete_Good(t *testing.T) {
	m := newTestMedium(t)
	_ = m.Write("grp/key", "val")
	err := m.Delete("grp/key")
	require.NoError(t, err)
	assert.False(t, m.IsFile("grp/key"))
}

func TestMedium_Delete_Bad_NonEmptyGroup(t *testing.T) {
	m := newTestMedium(t)
	_ = m.Write("grp/key", "val")
	err := m.Delete("grp")
	assert.Error(t, err)
}

func TestMedium_DeleteAll_Good(t *testing.T) {
	m := newTestMedium(t)
	_ = m.Write("grp/a", "1")
	_ = m.Write("grp/b", "2")
	err := m.DeleteAll("grp")
	require.NoError(t, err)
	assert.False(t, m.Exists("grp"))
}

func TestMedium_Rename_Good(t *testing.T) {
	m := newTestMedium(t)
	_ = m.Write("old/key", "val")
	err := m.Rename("old/key", "new/key")
	require.NoError(t, err)
	val, err := m.Read("new/key")
	require.NoError(t, err)
	assert.Equal(t, "val", val)
	assert.False(t, m.IsFile("old/key"))
}

func TestMedium_List_Good_Groups(t *testing.T) {
	m := newTestMedium(t)
	_ = m.Write("alpha/a", "1")
	_ = m.Write("beta/b", "2")
	entries, err := m.List("")
	require.NoError(t, err)
	assert.Len(t, entries, 2)
	names := make(map[string]bool)
	for _, e := range entries {
		names[e.Name()] = true
		assert.True(t, e.IsDir())
	}
	assert.True(t, names["alpha"])
	assert.True(t, names["beta"])
}

func TestMedium_List_Good_Keys(t *testing.T) {
	m := newTestMedium(t)
	_ = m.Write("grp/a", "1")
	_ = m.Write("grp/b", "22")
	entries, err := m.List("grp")
	require.NoError(t, err)
	assert.Len(t, entries, 2)
}

func TestMedium_Stat_Good(t *testing.T) {
	m := newTestMedium(t)
	_ = m.Write("grp/key", "hello")
	// Stat group
	info, err := m.Stat("grp")
	require.NoError(t, err)
	assert.True(t, info.IsDir())
	// Stat key
	info, err = m.Stat("grp/key")
	require.NoError(t, err)
	assert.Equal(t, int64(5), info.Size())
	assert.False(t, info.IsDir())
}

func TestMedium_Exists_IsDir_Good(t *testing.T) {
	m := newTestMedium(t)
	_ = m.Write("grp/key", "val")
	assert.True(t, m.Exists("grp"))
	assert.True(t, m.Exists("grp/key"))
	assert.True(t, m.IsDir("grp"))
	assert.False(t, m.IsDir("grp/key"))
	assert.False(t, m.Exists("nope"))
}

func TestMedium_Open_Read_Good(t *testing.T) {
	m := newTestMedium(t)
	_ = m.Write("grp/key", "hello world")
	f, err := m.Open("grp/key")
	require.NoError(t, err)
	defer f.Close()
	data, err := io.ReadAll(f)
	require.NoError(t, err)
	assert.Equal(t, "hello world", string(data))
}

func TestMedium_CreateClose_Good(t *testing.T) {
	m := newTestMedium(t)
	w, err := m.Create("grp/key")
	require.NoError(t, err)
	_, _ = w.Write([]byte("streamed"))
	require.NoError(t, w.Close())
	val, err := m.Read("grp/key")
	require.NoError(t, err)
	assert.Equal(t, "streamed", val)
}

func TestMedium_Append_Good(t *testing.T) {
	m := newTestMedium(t)
	_ = m.Write("grp/key", "hello")
	w, err := m.Append("grp/key")
	require.NoError(t, err)
	_, _ = w.Write([]byte(" world"))
	require.NoError(t, w.Close())
	val, err := m.Read("grp/key")
	require.NoError(t, err)
	assert.Equal(t, "hello world", val)
}

func TestMedium_AsMedium_Good(t *testing.T) {
	s, err := New(":memory:")
	require.NoError(t, err)
	defer s.Close()
	m := s.AsMedium()
	require.NoError(t, m.Write("grp/key", "val"))
	// Accessible through both APIs
	val, err := s.Get("grp", "key")
	require.NoError(t, err)
	assert.Equal(t, "val", val)
	val, err = m.Read("grp/key")
	require.NoError(t, err)
	assert.Equal(t, "val", val)
}


@@ -3,163 +3,151 @@ package store
import (
	"database/sql"
	"errors"
	"strings"
	"text/template"

	coreerr "dappco.re/go/core/log"
	_ "modernc.org/sqlite"
)

// ErrNotFound is returned when a key does not exist in the store.
var ErrNotFound = errors.New("store: not found")

// Store is a group-namespaced key-value store backed by SQLite.
type Store struct {
	db *sql.DB
}

// New creates a Store at the given SQLite path. Use ":memory:" for tests.
func New(dbPath string) (*Store, error) {
	db, err := sql.Open("sqlite", dbPath)
	if err != nil {
		return nil, coreerr.E("store.New", "open db", err)
	}
	if _, err := db.Exec("PRAGMA journal_mode=WAL"); err != nil {
		db.Close()
		return nil, coreerr.E("store.New", "WAL mode", err)
	}
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS kv (
		grp TEXT NOT NULL,
		key TEXT NOT NULL,
		value TEXT NOT NULL,
		PRIMARY KEY (grp, key)
	)`); err != nil {
		db.Close()
		return nil, coreerr.E("store.New", "create schema", err)
	}
	return &Store{db: db}, nil
}

// Close closes the underlying database.
func (s *Store) Close() error {
	return s.db.Close()
}

// Get retrieves a value by group and key.
func (s *Store) Get(group, key string) (string, error) {
	var val string
	err := s.db.QueryRow("SELECT value FROM kv WHERE grp = ? AND key = ?", group, key).Scan(&val)
	if err == sql.ErrNoRows {
		return "", coreerr.E("store.Get", "not found: "+group+"/"+key, ErrNotFound)
	}
	if err != nil {
		return "", coreerr.E("store.Get", "query", err)
	}
	return val, nil
}

// Set stores a value by group and key, overwriting if it exists.
func (s *Store) Set(group, key, value string) error {
	_, err := s.db.Exec(
		`INSERT INTO kv (grp, key, value) VALUES (?, ?, ?)
		ON CONFLICT(grp, key) DO UPDATE SET value = excluded.value`,
		group, key, value,
	)
	if err != nil {
		return coreerr.E("store.Set", "exec", err)
	}
	return nil
}

// Delete removes a single key from a group.
func (s *Store) Delete(group, key string) error {
	_, err := s.db.Exec("DELETE FROM kv WHERE grp = ? AND key = ?", group, key)
	if err != nil {
		return coreerr.E("store.Delete", "exec", err)
	}
	return nil
}

// Count returns the number of keys in a group.
func (s *Store) Count(group string) (int, error) {
	var n int
	err := s.db.QueryRow("SELECT COUNT(*) FROM kv WHERE grp = ?", group).Scan(&n)
	if err != nil {
		return 0, coreerr.E("store.Count", "query", err)
	}
	return n, nil
}

// DeleteGroup removes all keys in a group.
func (s *Store) DeleteGroup(group string) error {
	_, err := s.db.Exec("DELETE FROM kv WHERE grp = ?", group)
	if err != nil {
		return coreerr.E("store.DeleteGroup", "exec", err)
	}
	return nil
}

// GetAll returns all key-value pairs in a group.
func (s *Store) GetAll(group string) (map[string]string, error) {
	rows, err := s.db.Query("SELECT key, value FROM kv WHERE grp = ?", group)
	if err != nil {
		return nil, coreerr.E("store.GetAll", "query", err)
	}
	defer rows.Close()
	result := make(map[string]string)
	for rows.Next() {
		var k, v string
		if err := rows.Scan(&k, &v); err != nil {
			return nil, coreerr.E("store.GetAll", "scan", err)
		}
		result[k] = v
	}
	if err := rows.Err(); err != nil {
		return nil, coreerr.E("store.GetAll", "rows", err)
	}
	return result, nil
}

// Render loads all key-value pairs from a group and renders a Go template.
func (s *Store) Render(tmplStr, group string) (string, error) {
	rows, err := s.db.Query("SELECT key, value FROM kv WHERE grp = ?", group)
	if err != nil {
		return "", coreerr.E("store.Render", "query", err)
	}
	defer rows.Close()
	vars := make(map[string]string)
	for rows.Next() {
		var k, v string
		if err := rows.Scan(&k, &v); err != nil {
			return "", coreerr.E("store.Render", "scan", err)
		}
		vars[k] = v
	}
	if err := rows.Err(); err != nil {
		return "", coreerr.E("store.Render", "rows", err)
	}
	tmpl, err := template.New("render").Parse(tmplStr)
	if err != nil {
		return "", coreerr.E("store.Render", "parse template", err)
	}
	var b strings.Builder
	if err := tmpl.Execute(&b, vars); err != nil {
		return "", coreerr.E("store.Render", "execute template", err)
	}
	return b.String(), nil
}
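The tail of `Render` is plain stdlib: parse a `text/template`, execute it into a `strings.Builder` with the group's key/value map as data. That flow can be sketched without the database at all (`renderVars` is an illustrative helper, not part of the store package):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// renderVars mirrors the template half of Store.Render: parse the template
// string, execute it against the collected key/value map, return the output.
func renderVars(tmplStr string, vars map[string]string) (string, error) {
	tmpl, err := template.New("render").Parse(tmplStr)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	if err := tmpl.Execute(&b, vars); err != nil {
		return "", err
	}
	return b.String(), nil
}

func main() {
	// The map plays the role of one group's rows from the kv table.
	vars := map[string]string{"pool": "pool.lthn.io:3333", "wallet": "iz..."}
	out, err := renderVars(`{"pool":"{{ .pool }}","wallet":"{{ .wallet }}"}`, vars)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // {"pool":"pool.lthn.io:3333","wallet":"iz..."}
}
```

Using `strings.Builder` here (rather than the old `core.NewBuilder()`) keeps the function dependent only on the standard library.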


@@ -7,109 +7,97 @@ import (
"github.com/stretchr/testify/require"
)
func TestSetGet_Good(t *testing.T) {
	s, err := New(":memory:")
	require.NoError(t, err)
	defer s.Close()
	err = s.Set("config", "theme", "dark")
	require.NoError(t, err)
	val, err := s.Get("config", "theme")
	require.NoError(t, err)
	assert.Equal(t, "dark", val)
}

func TestGet_Bad_NotFound(t *testing.T) {
	s, _ := New(":memory:")
	defer s.Close()
	_, err := s.Get("config", "missing")
	assert.Error(t, err)
}

func TestDelete_Good(t *testing.T) {
	s, _ := New(":memory:")
	defer s.Close()
	_ = s.Set("config", "key", "val")
	err := s.Delete("config", "key")
	require.NoError(t, err)
	_, err = s.Get("config", "key")
	assert.Error(t, err)
}

func TestCount_Good(t *testing.T) {
	s, _ := New(":memory:")
	defer s.Close()
	_ = s.Set("grp", "a", "1")
	_ = s.Set("grp", "b", "2")
	_ = s.Set("other", "c", "3")
	n, err := s.Count("grp")
	require.NoError(t, err)
	assert.Equal(t, 2, n)
}

func TestDeleteGroup_Good(t *testing.T) {
	s, _ := New(":memory:")
	defer s.Close()
	_ = s.Set("grp", "a", "1")
	_ = s.Set("grp", "b", "2")
	err := s.DeleteGroup("grp")
	require.NoError(t, err)
	n, _ := s.Count("grp")
	assert.Equal(t, 0, n)
}

func TestGetAll_Good(t *testing.T) {
	s, _ := New(":memory:")
	defer s.Close()
	_ = s.Set("grp", "a", "1")
	_ = s.Set("grp", "b", "2")
	_ = s.Set("other", "c", "3")
	all, err := s.GetAll("grp")
	require.NoError(t, err)
	assert.Equal(t, map[string]string{"a": "1", "b": "2"}, all)
}

func TestGetAll_Good_Empty(t *testing.T) {
	s, _ := New(":memory:")
	defer s.Close()
	all, err := s.GetAll("empty")
	require.NoError(t, err)
	assert.Empty(t, all)
}

func TestRender_Good(t *testing.T) {
	s, _ := New(":memory:")
	defer s.Close()
	_ = s.Set("user", "pool", "pool.lthn.io:3333")
	_ = s.Set("user", "wallet", "iz...")
	tmpl := `{"pool":"{{ .pool }}","wallet":"{{ .wallet }}"}`
	out, err := s.Render(tmpl, "user")
	require.NoError(t, err)
	assert.Contains(t, out, "pool.lthn.io:3333")
	assert.Contains(t, out, "iz...")
}


@@ -1,9 +0,0 @@
// Example: service, _ := workspace.New(workspace.Options{
// Example: KeyPairProvider: keyPairProvider,
// Example: RootPath: "/srv/workspaces",
// Example: Medium: io.NewMemoryMedium(),
// Example: })
// Example: workspaceID, _ := service.CreateWorkspace("alice", "pass123")
// Example: _ = service.SwitchWorkspace(workspaceID)
// Example: _ = service.WriteWorkspaceFile("notes/todo.txt", "ship it")
package workspace


@@ -3,272 +3,189 @@ package workspace
import (
"crypto/sha256"
"encoding/hex"
"io/fs"
"os"
"strings"
"sync"
core "dappco.re/go/core"
coreerr "dappco.re/go/core/log"
"dappco.re/go/core/io"
"dappco.re/go/core/io/sigil"
)
// Example: service, _ := workspace.New(workspace.Options{KeyPairProvider: keyPairProvider})
// Workspace provides management for encrypted user workspaces.
type Workspace interface {
CreateWorkspace(identifier, passphrase string) (string, error)
SwitchWorkspace(workspaceID string) error
ReadWorkspaceFile(workspaceFilePath string) (string, error)
WriteWorkspaceFile(workspaceFilePath, content string) error
CreateWorkspace(identifier, password string) (string, error)
SwitchWorkspace(name string) error
WorkspaceFileGet(filename string) (string, error)
WorkspaceFileSet(filename, content string) error
}
// Example: key, _ := keyPairProvider.CreateKeyPair("alice", "pass123")
type KeyPairProvider interface {
CreateKeyPair(identifier, passphrase string) (string, error)
// cryptProvider is the interface for PGP key generation.
type cryptProvider interface {
CreateKeyPair(name, passphrase string) (string, error)
}
const (
WorkspaceCreateAction = "workspace.create"
WorkspaceSwitchAction = "workspace.switch"
)
// Example: command := WorkspaceCommand{Action: WorkspaceCreateAction, Identifier: "alice", Password: "pass123"}
type WorkspaceCommand struct {
Action string
Identifier string
Password string
WorkspaceID string
}
// Example: service, _ := workspace.New(workspace.Options{
// Example: KeyPairProvider: keyPairProvider,
// Example: RootPath: "/srv/workspaces",
// Example: Medium: io.NewMemoryMedium(),
// Example: Core: c,
// Example: })
type Options struct {
KeyPairProvider KeyPairProvider
RootPath string
Medium io.Medium
// Example: service, _ := workspace.New(workspace.Options{Core: core.New()})
Core *core.Core
}
// Example: service, _ := workspace.New(workspace.Options{KeyPairProvider: keyPairProvider})
// Service implements the Workspace interface.
type Service struct {
keyPairProvider KeyPairProvider
activeWorkspaceID string
rootPath string
medium io.Medium
stateLock sync.RWMutex
core *core.Core
crypt cryptProvider
activeWorkspace string
rootPath string
medium io.Medium
mu sync.RWMutex
}
var _ Workspace = (*Service)(nil)
// New creates a new Workspace service instance.
// An optional cryptProvider can be passed to supply PGP key generation.
func New(c *core.Core, crypt ...cryptProvider) (any, error) {
home := workspaceHome()
if home == "" {
return nil, coreerr.E("workspace.New", "failed to determine home directory", os.ErrNotExist)
}
rootPath := core.Path(home, ".core", "workspaces")
// Example: service, _ := workspace.New(workspace.Options{
// Example: KeyPairProvider: keyPairProvider,
// Example: RootPath: "/srv/workspaces",
// Example: Medium: io.NewMemoryMedium(),
// Example: })
// Example: workspaceID, _ := service.CreateWorkspace("alice", "pass123")
func New(options Options) (*Service, error) {
rootPath := options.RootPath
if rootPath == "" {
home := resolveWorkspaceHomeDirectory()
if home == "" {
return nil, core.E("workspace.New", "failed to determine home directory", fs.ErrNotExist)
}
rootPath = core.Path(home, ".core", "workspaces")
s := &Service{
core: c,
rootPath: rootPath,
medium: io.Local,
}
if options.KeyPairProvider == nil {
return nil, core.E("workspace.New", "key pair provider is required", fs.ErrInvalid)
if len(crypt) > 0 && crypt[0] != nil {
s.crypt = crypt[0]
}
medium := options.Medium
if medium == nil {
medium = io.Local
}
if medium == nil {
return nil, core.E("workspace.New", "storage medium is required", fs.ErrInvalid)
if err := s.medium.EnsureDir(rootPath); err != nil {
return nil, coreerr.E("workspace.New", "failed to ensure root directory", err)
}
service := &Service{
keyPairProvider: options.KeyPairProvider,
rootPath: rootPath,
medium: medium,
}
if err := service.medium.EnsureDir(rootPath); err != nil {
return nil, core.E("workspace.New", "failed to ensure root directory", err)
}
if options.Core != nil {
options.Core.RegisterAction(service.HandleWorkspaceMessage)
}
return service, nil
return s, nil
}
// Example: workspaceID, _ := service.CreateWorkspace("alice", "pass123")
func (service *Service) CreateWorkspace(identifier, passphrase string) (string, error) {
service.stateLock.Lock()
defer service.stateLock.Unlock()
// CreateWorkspace creates a new encrypted workspace.
// Identifier is hashed (SHA-256) to create the directory name.
// A PGP keypair is generated using the password.
func (s *Service) CreateWorkspace(identifier, password string) (string, error) {
s.mu.Lock()
defer s.mu.Unlock()
if service.keyPairProvider == nil {
return "", core.E("workspace.CreateWorkspace", "key pair provider not available", fs.ErrInvalid)
if s.crypt == nil {
return "", coreerr.E("workspace.CreateWorkspace", "crypt service not available", nil)
}
hash := sha256.Sum256([]byte(identifier))
workspaceID := hex.EncodeToString(hash[:])
workspaceDirectory, err := service.resolveWorkspaceDirectory("workspace.CreateWorkspace", workspaceID)
wsID := hex.EncodeToString(hash[:])
wsPath, err := s.workspacePath("workspace.CreateWorkspace", wsID)
if err != nil {
return "", err
}
if service.medium.Exists(workspaceDirectory) {
return "", core.E("workspace.CreateWorkspace", "workspace already exists", fs.ErrExist)
if s.medium.Exists(wsPath) {
return "", coreerr.E("workspace.CreateWorkspace", "workspace already exists", nil)
}
for _, directoryName := range []string{"config", "log", "data", "files", "keys"} {
if err := service.medium.EnsureDir(core.Path(workspaceDirectory, directoryName)); err != nil {
return "", core.E("workspace.CreateWorkspace", core.Concat("failed to create directory: ", directoryName), err)
for _, d := range []string{"config", "log", "data", "files", "keys"} {
if err := s.medium.EnsureDir(core.Path(wsPath, d)); err != nil {
return "", coreerr.E("workspace.CreateWorkspace", "failed to create directory: "+d, err)
}
}
privateKey, err := service.keyPairProvider.CreateKeyPair(identifier, passphrase)
privKey, err := s.crypt.CreateKeyPair(identifier, password)
if err != nil {
return "", core.E("workspace.CreateWorkspace", "failed to generate keys", err)
return "", coreerr.E("workspace.CreateWorkspace", "failed to generate keys", err)
}
if err := service.medium.WriteMode(core.Path(workspaceDirectory, "keys", "private.key"), privateKey, 0600); err != nil {
return "", core.E("workspace.CreateWorkspace", "failed to save private key", err)
if err := s.medium.WriteMode(core.Path(wsPath, "keys", "private.key"), privKey, 0600); err != nil {
return "", coreerr.E("workspace.CreateWorkspace", "failed to save private key", err)
}
return workspaceID, nil
return wsID, nil
}
// Example: _ = service.SwitchWorkspace(workspaceID)
func (service *Service) SwitchWorkspace(workspaceID string) error {
service.stateLock.Lock()
defer service.stateLock.Unlock()
// SwitchWorkspace changes the active workspace.
func (s *Service) SwitchWorkspace(name string) error {
s.mu.Lock()
defer s.mu.Unlock()
workspaceDirectory, err := service.resolveWorkspaceDirectory("workspace.SwitchWorkspace", workspaceID)
wsPath, err := s.workspacePath("workspace.SwitchWorkspace", name)
if err != nil {
return err
}
	if !s.medium.IsDir(wsPath) {
		return coreerr.E("workspace.SwitchWorkspace", "workspace not found: "+name, nil)
	}
	s.activeWorkspace = core.PathBase(wsPath)
return nil
}
// activeFilePath returns the full path to a file in the active workspace,
// or an error if no workspace is active.
func (s *Service) activeFilePath(op, filename string) (string, error) {
	if s.activeWorkspace == "" {
		return "", coreerr.E(op, "no active workspace", nil)
	}
	filesRoot := core.Path(s.rootPath, s.activeWorkspace, "files")
	path, err := joinWithinRoot(filesRoot, filename)
if err != nil {
		return "", coreerr.E(op, "file path escapes workspace files", os.ErrPermission)
}
	if path == filesRoot {
		return "", coreerr.E(op, "filename is required", os.ErrInvalid)
}
	return path, nil
}
// WorkspaceFileGet retrieves the content of a file from the active workspace.
func (s *Service) WorkspaceFileGet(filename string) (string, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	path, err := s.activeFilePath("workspace.WorkspaceFileGet", filename)
	if err != nil {
		return "", err
	}
	return s.medium.Read(path)
}
// WorkspaceFileSet saves content to a file in the active workspace.
func (s *Service) WorkspaceFileSet(filename, content string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	path, err := s.activeFilePath("workspace.WorkspaceFileSet", filename)
	if err != nil {
		return err
	}
	return s.medium.Write(path, content)
}
// HandleIPCEvents handles workspace-related IPC messages.
func (s *Service) HandleIPCEvents(c *core.Core, msg core.Message) core.Result {
	switch m := msg.(type) {
	case map[string]any:
		action, _ := m["action"].(string)
		switch action {
		case "workspace.create":
			id, _ := m["identifier"].(string)
			pass, _ := m["password"].(string)
			wsID, err := s.CreateWorkspace(id, pass)
			if err != nil {
				return core.Result{}
			}
			return core.Result{Value: wsID, OK: true}
		case "workspace.switch":
			name, _ := m["name"].(string)
			if err := s.SwitchWorkspace(name); err != nil {
				return core.Result{}
			}
			return core.Result{OK: true}
		}
	}
	return core.Result{OK: true}
}
func workspaceHome() string {
if home := core.Env("CORE_HOME"); home != "" {
return home
}
@ -278,31 +195,28 @@ func resolveWorkspaceHomeDirectory() string {
return core.Env("DIR_HOME")
}
func joinWithinRoot(root string, parts ...string) (string, error) {
candidate := core.Path(append([]string{root}, parts...)...)
	sep := core.Env("DS")
	if candidate == root || strings.HasPrefix(candidate, root+sep) {
return candidate, nil
}
	return "", os.ErrPermission
}
func (s *Service) workspacePath(op, name string) (string, error) {
	if name == "" {
		return "", coreerr.E(op, "workspace name is required", os.ErrInvalid)
}
	path, err := joinWithinRoot(s.rootPath, name)
	if err != nil {
		return "", coreerr.E(op, "workspace path escapes root", err)
}
	if core.PathDir(path) != s.rootPath {
		return "", coreerr.E(op, "invalid workspace name: "+name, os.ErrPermission)
}
	return path, nil
}
// Ensure Service implements Workspace.
var _ Workspace = (*Service)(nil)

View file

@ -1,214 +1,90 @@
package workspace
import (
	"os"
	"testing"

	core "dappco.re/go/core"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)
type stubCrypt struct {
	key string
	err error
}
func (s stubCrypt) CreateKeyPair(_, _ string) (string, error) {
	if s.err != nil {
		return "", s.err
	}
	return s.key, nil
}
func newTestService(t *testing.T) (*Service, string) {
	t.Helper()
	tempHome := t.TempDir()
	t.Setenv("HOME", tempHome)
	svc, err := New(core.New(), stubCrypt{key: "private-key"})
	require.NoError(t, err)
	return svc.(*Service), tempHome
}
func TestWorkspace(t *testing.T) {
	s, tempHome := newTestService(t)

	id, err := s.CreateWorkspace("test-user", "pass123")
	require.NoError(t, err)
	assert.NotEmpty(t, id)

	wsPath := core.Path(tempHome, ".core", "workspaces", id)
	assert.DirExists(t, wsPath)
	assert.DirExists(t, core.Path(wsPath, "keys"))
	assert.FileExists(t, core.Path(wsPath, "keys", "private.key"))

	err = s.SwitchWorkspace(id)
	require.NoError(t, err)
	assert.Equal(t, id, s.activeWorkspace)

	err = s.WorkspaceFileSet("secret.txt", "top secret info")
	require.NoError(t, err)

	got, err := s.WorkspaceFileGet("secret.txt")
	require.NoError(t, err)
	assert.Equal(t, "top secret info", got)
}
func TestSwitchWorkspace_TraversalBlocked(t *testing.T) {
	s, tempHome := newTestService(t)

	outside := core.Path(tempHome, ".core", "escaped")
	require.NoError(t, os.MkdirAll(outside, 0755))

	err := s.SwitchWorkspace("../escaped")
	require.Error(t, err)
	assert.Empty(t, s.activeWorkspace)
}
func TestWorkspaceFileSet_TraversalBlocked(t *testing.T) {
	s, tempHome := newTestService(t)

	id, err := s.CreateWorkspace("test-user", "pass123")
	require.NoError(t, err)
	require.NoError(t, s.SwitchWorkspace(id))

	keyPath := core.Path(tempHome, ".core", "workspaces", id, "keys", "private.key")
	before, err := os.ReadFile(keyPath)
	require.NoError(t, err)

	err = s.WorkspaceFileSet("../keys/private.key", "hijack")
	require.Error(t, err)

	after, err := os.ReadFile(keyPath)
	require.NoError(t, err)
	assert.Equal(t, string(before), string(after))

	_, err = s.WorkspaceFileGet("../keys/private.key")
	require.Error(t, err)
}