Compare commits

400 commits
main ... dev

Author SHA1 Message Date
Snider
d6344290bc fix(store): r5 — scoped quota race + swallowed Close + event timeout on PR #4
Some checks failed
Security Scan / security (push) Has been cancelled
Test / test (push) Has been cancelled
Round 5 follow-up to 4aeddac.

Code:
- scope.go: scoped quota race fixed — Set/SetIn/SetWithTTL route
  through ScopedStore.Transaction (atomic). Non-transactional quota
  helper removed.
- coverage_test.go: corruption setup tests no longer swallow Close()
  errors
- store_test.go: duplicate unbounded event receive replaced with
  bounded select + timeout (was a hang risk)
- import.go: queryRowScan errors wrapped with method context
- import.go: documented strings-import ban via inline comment

Verification: gofmt clean, golangci-lint v2 0 issues, go vet + go
test -count=1 ./... pass.

Closes residual r5 findings on https://github.com/dAppCore/go-store/pull/4

Co-authored-by: Codex <noreply@openai.com>
2026-04-27 19:13:59 +01:00
Snider
4aeddacb3f fix(store): r4 — EnsureDir errors + scoped readiness names + purge event timeout on PR #4
Round 4 follow-up to fc77445.

Code:
- import.go: benchmark directory/subdirectory creation now checked
  with contextual errors (was silently failing on EnsureDir)
- import.go: terse loop variables expanded in reviewed loops
- scope.go: nil Namespace() guarded
- scope.go: scoped readiness uses 'store.ScopedStore.*' operation
  names across wrappers (matching test updated)

Tests:
- publish_test.go: HOME fallback test clears DIR_HOME (was flaky
  due to env leakage)
- store_test.go: purge event wait now uses bounded select with
  timeout (was hanging on missing event)
- store_test.go: Exists/GroupExists tests no longer swallow fixture
  setup errors

Verification: gofmt clean, golangci-lint v2 0 issues, GOWORK=off
go vet + go test -count=1 ./... pass with explicit cache paths.

Closes residual r4 findings on https://github.com/dAppCore/go-store/pull/4

Co-authored-by: Codex <noreply@openai.com>
2026-04-27 18:55:55 +01:00
Snider
fc77445de0 fix(store): r3 — transactional import + DELETE RETURNING + token home order on PR #4
Round 3 follow-up to ebe5377. Closes residual CodeRabbit findings.

Code:
- import.go: ImportAll DB mutations wrapped in transaction with
  rollback-on-error
- import.go: malformed JSONL returns file/line parse errors in all
  three import helpers (was silently swallowing per-line errors)
- import.go: walkDir returns + propagates traversal/list/type errors
- medium.go: JSON export uses aggregateFields() + propagates
  workspace failures
- publish.go: dataset_card.md excluded from Parquet split count
- store.go: medium-backed Close() remains retryable after sync
  failure; operations see closing state as closed
- store.go + scope.go + transaction.go: purge uses
  DELETE ... RETURNING so notifications come from rows actually
  deleted (was reading first then deleting separately)
- publish.go: token lookup uses Core's DIR_HOME (populated via
  os.UserHomeDir) then falls back to HOME — preserves direct-os
  import ban while picking up real home

Tests:
- import_test.go (new): coverage of transactional import +
  malformed-JSONL error path

Doc:
- README.md: footer licence link targets LICENCE.md (UK English)

Verification: gofmt clean, golangci-lint v2 0 issues, GOWORK=off
go vet + go test -count=1 ./... pass with explicit cache paths.

Closes residual findings on https://github.com/dAppCore/go-store/pull/4

Co-authored-by: Codex <noreply@openai.com>
2026-04-27 18:29:59 +01:00
Snider
ebe5377871 fix(store): r2 — address residual CodeRabbit findings on PR #4
Round 2 follow-up to 6c90af8. CodeRabbit re-reviewed and surfaced
14 residual issues; all dispositioned.

Code:
- compact.go: staged archive preserved after successful DB commit
  (was being deleted prematurely)
- workspace.go: commit idempotency — recovery skips/removes leftover
  files when durable summary marker exists; cleanup failure no longer
  fails Commit() after durable write
- medium.go: StoreConfig public example; JSON import fails fast on
  unsupported/non-object records; CSV parser switched from hand-roll
  to encoding/csv with multiline + malformed handling
- import.go: removed /tmp seed fallbacks (deterministic dirs); read +
  JSON parse failures now return contextual errors
- publish.go: HuggingFace token uses real HOME via core.Env (not
  DIR_HOME); empty Repo validated before dry-run; upload uses
  caller-configurable PublishConfig.Context (no fixed http timeout)
- store.go: Close() backfills db/sqliteDatabase aliases before
  closing/syncing
- test_asserts_test.go: errIs delegates to core.Is (AX import rules)

CI / docs:
- .github/workflows/ci.yml: CGO_ENABLED=1 explicit (DuckDB requires CGO)
- DEPENDENCIES.md: required toolchain documented for DuckDB context
- README.md: Licence badge UK English + LICENCE.md link
- LICENCE.md (new file)
- publish_test.go (new) — covers HOME / dry-run / config-context paths

Disposition replies:
- Testify reintroduction suggestion: RESOLVED-COMMENT — AX-6 bans testify
- SonarCloud: no PR comments/check annotations exposed; RESOLVED-COMMENT

Verification: gofmt clean, golangci-lint run 0 issues, GOWORK=off
go vet + go test -count=1 ./... pass with explicit cache paths.

Closes residual findings on https://github.com/dAppCore/go-store/pull/4

Co-authored-by: Codex <noreply@openai.com>
2026-04-27 17:00:07 +01:00
Snider
6c90af807d fix(store): address all CodeRabbit findings on PR #4
30+ findings dispositioned across compact / publish / events / import
/ workspace / config / docs / test surfaces.

Major code fixes:
- compact.go: stage archive, DB commit first, publish after commit
  (durability ordering bug)
- compact_test.go: replaced medium reread O(n²) with bytes.Buffer
- coverage_test.go: EnsureScoringTables now wraps + returns DDL errors
- events.go: lifecycle lock-order reordered (deadlock risk)
- import.go: paths join against cfg.DataDir; SQL write errors propagate;
  counts increment only on success
- bench_test.go: ScpDir + /benchmarks/ paths corrected
- json.go: formatter write errors propagate (no silent drops)
- parquet.go: removed runtime parquet-go dep + in-core writer (was
  buffering whole rows in memory — OOM risk on large datasets)
- publish.go: PublishConfig.Public uses private: !public
- HuggingFace upload streams file with content length (was buffering)
- store.go: ScopedStore.Transaction nil-store panic guard
- recover_test.go: handle .duckdb.wal sidecars; skip SQLite -shm for DuckDB
- coverage_test.go: WriteScoringResult wraps insert failures
- events_test.go: restored EventDeleteGroup wire value 'delete_group'
- transaction.go: NewScopedConfigured docs include parent store arg
- store_test.go: scope_test keyName uses non-wrapping integer names
- import_export_test.go: testify → stdlib helpers (AX-6 conformance)
- store.go: collapsed duplicate workspace DB fields to canonical 'db'

Doc / config:
- README.md: 'Licence' UK English on badge
- docs/architecture.md: clearer event ordering + lifecycle docs
- .golangci.yml: migrated to golangci-lint v2 schema
- Taskfile: default includes vet task
- JSON: terse 'value' param + concrete examples

Disposition replies (RESOLVED-COMMENT, no code change):
- conventions_test.go testify suggestion: AX-6 banned testify; stdlib helpers are convention
- DuckDB CGO/MIT critical: retained as documented exception in
  DEPENDENCIES.md (load-bearing existing dependency for the workspace
  store; replacement is its own engineering ticket)

Verification: GOWORK=off go vet + go test -count=1 ./... pass.
golangci-lint run ./... reports 0 issues. gofmt -l clean. git diff
--check clean.

Closes findings on https://github.com/dAppCore/go-store/pull/4

Co-authored-by: Codex <noreply@openai.com>
2026-04-27 14:51:06 +01:00
Snider
85ab185b90 Merge remote-tracking branch 'github/dev' into dev
# Conflicts:
#	docs/RFC-STORE.md
2026-04-27 12:19:28 +01:00
Snider
c180cd2a8c fix(store): AX-6 sweep on medium.go + publish.go
medium.go: removed the bytes import, replacing bytes.Buffer with core.NewBuffer in
the CSV parser. publish.go: removed the bytes import, replacing bytes.NewReader
with core.NewBuffer.

Co-authored-by: Codex <noreply@openai.com>
2026-04-25 11:09:10 +01:00
Snider
651a966723 fix(store): AX-6 sweep on json.go + parquet.go
json.go: bytes.Buffer → core.NewBuilder, removed the bytes import. parquet.go:
removed the io import by introducing local narrow readCloser/writeCloser
interfaces for the existing stream assertions.

Co-authored-by: Codex <noreply@openai.com>
2026-04-25 10:53:59 +01:00
Snider
32413aab88 feat(store): RecoverOrphans quarantines corrupt files per RFC §8.6 (#261)
RecoverOrphans existed but silently skipped corrupt or unreadable orphan
files. Corrupt files (including -wal and -shm sidecars) are now quarantined
under <state>/quarantine/ instead of being dropped.

Tests in recover_test.go: _Good (orphan recovered), _Bad (corrupt file
quarantined), _Ugly (no orphans, no-op). Race PASS.

Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=261
2026-04-25 08:36:34 +01:00
Snider
4a2d84b07a docs(store): annotate sync as AX-6 structural exception per RFC §4 (#257)
2026-04-25 08:34:33 +01:00
Snider
cfc93d4814 fix(store): remove banned io import from compact.go, route through Medium (#259)
Replaced io.Writer/Reader usage with coreio.Medium write/read calls per
RFC §9.4. gzip/zstd archive output now writes to Medium-backed buffer.
io import removed (compress/gzip retained — not banned).

Race PASS.

Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=259
2026-04-25 08:28:23 +01:00
Snider
903af6cf47 docs(store): confirm Import/Export already present (#260, NOTABUG)
Audit confirmed RFC §9.3 Import + Export functions exist (under canonical
names). Added regression test coverage in import_export_test.go to lock
in the contract:
- Good: CSV/JSON/JSONL ingestion + export round-trip
- Bad: malformed payloads
- Ugly: empty payloads

Race PASS.

Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=260
2026-04-25 08:27:35 +01:00
Snider
856e88b2f6 feat(ax-10): bring go-store to v0.8.0-alpha.1 + CLI test scaffold
- Bump dappco.re/go/* deps to v0.8.0-alpha.1 in go.mod (any forge.lthn.ai/core/* paths migrated to canonical dappco.re/go/* form)

Co-Authored-By: Athena <athena@lthn.ai>
2026-04-24 23:47:31 +01:00
Codex
51c9d1edae docs(go-store): clean up RFC §4 — deduplicate Event + complete Store struct
- Removed duplicate `Event` struct definition in §4; kept single canonical
- Added Type + Timestamp fields to Event; added EventType constants
- Expanded Store struct to match store.go: SQLite aliases, purge lifecycle
  fields, journal client/config, medium state, watcher/callback locks,
  orphan workspace cache

Normalized diff vs store.go now has zero differences.

Closes tasks.lthn.sh/view.php?id=599

Co-authored-by: Codex <noreply@openai.com>
2026-04-24 22:36:28 +01:00
Codex
92a206e313 docs(go-store): document Transaction API in RFC §7 per spec gap
Added §7 Transaction API + §7.1 ScopedStoreTransaction covering:
- Store.Transaction(fn) commit/rollback semantics
- error/panic propagation behavior
- post-commit event dispatch
- Scope isolation during tx, scoped prefixing, local group names,
  namespace-local delete/purge/count, quota checks, event localization
- Added transaction.go to architecture table; renumbered later sections.

Closes tasks.lthn.sh/view.php?id=598

Co-authored-by: Codex <noreply@openai.com>
2026-04-24 22:33:56 +01:00
Codex
ae6861e036 chore(go-store): migrate dappco.re/go/core/io → dappco.re/go/io (AX-6)
Updated stale cross-module dep path:
- go.mod: dappco.re/go/core/io v0.1.7 → dappco.re/go/io
- medium.go: import + doc comment rewritten

No stale path remains in .go or go.mod.

Closes tasks.lthn.sh/view.php?id=777

Co-authored-by: Codex <noreply@openai.com>
2026-04-24 21:44:50 +01:00
Codex
57d5af9458 chore(go-store): annotate external storage deps in go.mod per AX-6
Added `// Note:` trailers to 5 direct external deps:
- InfluxDB client: time-series storage backend
- klauspost compression: gzip/zstd for cold archive compaction
- modernc.org/sqlite: pure-Go SQLite driver
- DuckDB: workspace buffer analytical queries
- parquet-go: columnar format for journal archives

No core.* equivalents for any.

Closes tasks.lthn.sh/view.php?id=778

Co-authored-by: Codex <noreply@openai.com>
2026-04-24 21:08:48 +01:00
Codex
39526ddafe fix(go-store): remove banned strconv from journal.go (AX-6)
journal.go used strconv.{Atoi,ParseInt,ParseFloat} for parsing numeric
values from journal query results. core/go has no Atoi/ParseInt/
ParseFloat primitives yet, so the strconv import was dropped by inlining
parseJournalInt64 and parseJournalFloat64 helpers that return
core.E()-wrapped errors to fit the codebase style.

Follow-up note: a proper core.ParseInt / core.ParseFloat addition
would let these helpers be removed — separate ticket.
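A sketch of what such an inlined integer parser might look like; the real helper returns core.E()-wrapped errors, and this version omits overflow handling for brevity:

```go
package main

import "fmt"

// parseJournalInt64 parses a base-10 integer with an optional leading
// sign, returning an error on any non-digit input. No overflow checks:
// this is a sketch, not the repo's implementation.
func parseJournalInt64(s string) (int64, error) {
	if s == "" {
		return 0, fmt.Errorf("parse int64: empty input")
	}
	neg := false
	i := 0
	if s[0] == '+' || s[0] == '-' {
		neg = s[0] == '-'
		i++
		if i == len(s) {
			return 0, fmt.Errorf("parse int64: sign only: %q", s)
		}
	}
	var n int64
	for ; i < len(s); i++ {
		c := s[i]
		if c < '0' || c > '9' {
			return 0, fmt.Errorf("parse int64: bad digit in %q", s)
		}
		n = n*10 + int64(c-'0')
	}
	if neg {
		n = -n
	}
	return n, nil
}

func main() {
	v, err := parseJournalInt64("-42")
	fmt.Println(v, err) // → -42 <nil>
}
```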

Closes tasks.lthn.sh/view.php?id=258

Co-authored-by: Codex <noreply@openai.com>
2026-04-24 19:38:03 +01:00
Codex
608f0df0e3 fix(go-store): replace testify with stdlib testing patterns (AX-6)
Removes testify + indirect deps from go.mod/go.sum; rewrites
assert/require calls in *_test.go to stdlib t.Fatalf patterns.
go vet + go test all clean.

Closes tasks.lthn.sh/view.php?id=779

Co-authored-by: Codex <noreply@openai.com>
Via-codex-lane: Cyclops-779 dispatch
2026-04-24 18:51:17 +01:00
Snider
8c8766a806 feat(go-store): add default task aggregator to tests/cli Taskfile (AX-10 polish)
Adds `default: deps: [build, test]` to the existing CLI test Taskfile so bare
`task -d tests/cli/<pkg>` runs the full suite per the Wave 2 convention.

Closes tasks.lthn.sh/view.php?id=600

Co-Authored-By: Cladius <cladius@lthn.ai>
2026-04-24 15:56:19 +01:00
Codex
7069a66763 feat(go-store): add CLI test Taskfile for build and test validation (AX-10)
Closes tasks.lthn.sh/view.php?id=262

Co-authored-by: Codex <noreply@openai.com>
Via-codex-lane: supervised by Cerberus on Athena #262 request
2026-04-24 12:02:08 +01:00
Snider
2dd91b6aca chore: go mod tidy (module path migration)
2026-04-24 08:25:42 +01:00
Snider
702fd12cf3 sync store RFC
2026-04-15 11:37:24 +01:00
Snider
a2eb005dea Verify RFC-aligned go-store implementation
2026-04-15 11:34:44 +01:00
Snider
7eba9e937f Verify go-store RFC implementation
2026-04-15 11:32:55 +01:00
Snider
403f8612f0 Align medium API with upstream interface
2026-04-15 11:30:38 +01:00
Snider
303ff4e385 Use DuckDB for workspace buffers
2026-04-15 11:28:08 +01:00
Snider
caaba5d70a feat(scope): add scoped quota constructor
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-15 11:24:55 +01:00
Snider
9763ef7946 Align module path docs
2026-04-15 11:22:20 +01:00
Snider
48643a7b90 docs(api): align package overview with primary constructors
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-15 11:20:30 +01:00
Snider
e5a0f66e08 Emit TTL purge events
2026-04-15 11:17:37 +01:00
Snider
9610dd1ff2 Support medium-backed SQLite persistence
2026-04-15 11:15:01 +01:00
Snider
9df2291d28 Align store options with RFC
2026-04-15 11:11:46 +01:00
Snider
a8cab201b8 Align store internals with RFC
2026-04-15 11:09:36 +01:00
Snider
a69d150883 Align store API with RFC
2026-04-15 11:06:45 +01:00
Snider
b6daafe952 feat(store): DuckDB.Conn() accessor for streaming row iteration
Adds a Conn() *sql.DB accessor on store.DuckDB. The higher-level helpers
(Exec, QueryRowScan, QueryRows) don't cover the streaming row iteration
patterns that go-ml needs for its training/eval pipelines.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-14 16:49:59 +01:00
Snider
2d7fb951db feat(store): io.Medium-backed storage per RFC §9
Add WithMedium option so Store archives and Import/Export helpers can
route through any io.Medium implementation (local, memory, S3, cube,
sftp) instead of the raw filesystem. The Medium transport is optional —
when unset, existing filesystem behaviour is preserved.

- medium.go exposes WithMedium, Import, and Export helpers plus a small
  Medium interface that any io.Medium satisfies structurally
- Compact honours the installed Medium for archive writes, falling back
  to the local filesystem when nil
- StoreConfig.Medium round-trips through Config()/WithMedium so callers
  can inspect and override the transport
- medium_test.go covers the happy-path JSONL/CSV/JSON imports, JSON and
  JSONL exports, nil-argument validation, missing-file errors, and the
  Compact medium route
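"Satisfies structurally" above means any io.Medium implementation matches a locally declared interface without importing the defining package. A sketch with a hypothetical two-method subset; the real Medium interface and its method set are the repo's own:

```go
package main

import "fmt"

// Medium is a narrow, locally declared interface. Any type with these
// methods satisfies it structurally; the method set here is illustrative.
type Medium interface {
	Write(path string, data []byte) error
	Read(path string) ([]byte, error)
}

// memMedium is an in-memory stand-in, like a memory transport.
type memMedium struct{ files map[string][]byte }

func (m *memMedium) Write(path string, data []byte) error {
	if m.files == nil {
		m.files = map[string][]byte{}
	}
	m.files[path] = append([]byte(nil), data...)
	return nil
}

func (m *memMedium) Read(path string) ([]byte, error) {
	b, ok := m.files[path]
	if !ok {
		return nil, fmt.Errorf("read %s: not found", path)
	}
	return b, nil
}

// archive writes through whichever Medium is installed; the real Compact
// falls back to the local filesystem when no Medium is set.
func archive(m Medium, path string, data []byte) error {
	return m.Write(path, data)
}

func main() {
	m := &memMedium{}
	if err := archive(m, "archives/2026-04.jsonl.gz", []byte("rows")); err != nil {
		panic(err)
	}
	b, _ := m.Read("archives/2026-04.jsonl.gz")
	fmt.Println(string(b)) // → rows
}
```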

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-14 12:16:53 +01:00
Snider
eef4e737aa refactor(store): replace banned stdlib imports with core/go primitives
- fmt → core.Sprintf, core.E
- strings → core.Contains, core.HasPrefix, core.Split, core.Join, core.Trim
- os → core.Fs operations
- path/filepath → core.JoinPath, core.PathBase
- encoding/json → core.JSONMarshal, core.JSONUnmarshal
- Add usage example comments to all exported struct fields

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-08 16:43:49 +01:00
Snider
79815048c3 chore: refresh go.sum
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-07 11:43:20 +01:00
Snider
345fa26062 feat(store): add Exists, GroupExists, and Workspace.Count methods
Add public existence-check methods across all store layers (Store,
ScopedStore, StoreTransaction, ScopedStoreTransaction) so callers can
test key/group presence declaratively without Get+error-type checking.
Add Workspace.Count for total entry count. Full test coverage with
Good/Bad/Ugly naming, race-clean.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-05 08:58:26 +01:00
Virgil
72eff0d164 refactor: tighten store AX documentation
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 21:29:27 +00:00
Virgil
cdf3124a40 fix(store): make scoped store nil-safe
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 21:19:59 +00:00
Virgil
69452ef43f docs(ax): tighten usage examples
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 21:13:50 +00:00
Virgil
e1341ff2d5 refactor(store): align internal lifecycle naming with AX
Use more descriptive private lifecycle, watcher, and orphan cache field names so the implementation reads more directly for agent consumers while preserving the exported API and behaviour.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 21:09:20 +00:00
Virgil
7fa9449778 chore(store): confirm RFC parity
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 21:04:41 +00:00
Virgil
fcb178fee1 feat(scope): expose scoped config snapshot
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 20:54:45 +00:00
Virgil
c8504ab708 docs(store): clarify declarative constructors
Prefer the struct-literal constructors in package docs and namespace helpers.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 20:46:40 +00:00
Virgil
ea3f434082 feat: add scoped store watcher wrappers
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 20:34:25 +00:00
Virgil
8a117a361d refactor(store): clarify compaction lifecycle names
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 20:29:21 +00:00
Virgil
c6840745b5 docs(store): tighten AX-facing package docs
Fix the stale scoped-store test literal while aligning the package comment around concrete struct-literal usage.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 20:24:21 +00:00
Virgil
466f4ba578 refactor: align workspace and scoped store names
Use the repo's primary store noun for internal references so the implementation matches the RFC vocabulary more closely.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 20:11:54 +00:00
Virgil
fb39b74087 refactor(scope): centralise quota enforcement
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 20:07:33 +00:00
Virgil
1c004d4d8a refactor(store): remove redundant scoped quota constructor
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 20:03:22 +00:00
Virgil
d854e1c98e refactor(scope): prefer scoped-store config literals
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 19:53:53 +00:00
Virgil
257bd520f6 docs(ax): prefer declarative config literals in examples
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 19:45:52 +00:00
Virgil
8b186449f9 fix(compact): normalise whitespace archive formats
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 19:41:58 +00:00
Virgil
4726b73ba6 feat(scope): restore scoped quota constructor
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 19:38:06 +00:00
Virgil
e5c63ee510 refactor(scope): remove redundant scoped quota constructor
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 19:34:32 +00:00
Virgil
649edea551 docs(ax): align package guidance with declarative config
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 19:28:47 +00:00
Virgil
4c6f2d6047 feat(scope): add scoped on-change helper
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 19:24:47 +00:00
Virgil
731a3ae333 fix(scope): make quota checks non-mutating
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 19:12:24 +00:00
Virgil
75f8702b74 feat: normalise declarative store config defaults
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 19:07:18 +00:00
Virgil
529333c033 fix(workspace): close partial workspaces without filesystem
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 19:02:35 +00:00
Virgil
7ad4dab749 refactor(store): clarify config guidance and naming
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 18:52:33 +00:00
Virgil
ecafc84e10 fix(store): require compact cutoff time
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 18:48:41 +00:00
Virgil
39fddb8043 refactor(scope): reuse shared prefix helper
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 18:42:19 +00:00
Virgil
efd40dd278 docs(store): reinforce AX config literal guidance
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 18:38:09 +00:00
Virgil
00650fd51e feat(store): add transaction purge helpers
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 18:20:52 +00:00
Virgil
f30fb8c20b refactor(test): expand AX naming in coverage stubs
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 18:15:44 +00:00
Virgil
23fb573b5d refactor(store): rename transaction internals
Use more descriptive internal field names in StoreTransaction to better match the AX naming guidance without changing behaviour.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 17:56:54 +00:00
Virgil
aa49cdab4e feat(scope): add scoped pagination helpers
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 17:53:21 +00:00
Virgil
8e46ab9fdd docs(store): align RFC examples with AX
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 17:48:35 +00:00
Virgil
ba997f7e6b docs(store): align public comments with AX
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 17:43:55 +00:00
Virgil
d8183f26b6 fix: support scalar Flux journal filters
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 17:39:45 +00:00
Virgil
c2ba21342a docs(ax): prefer scoped config literals
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 17:34:15 +00:00
Virgil
08e896ad4d docs(store): clarify journal metadata
Align the RFC text and store comments with the SQLite-backed journal implementation.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 17:27:10 +00:00
Virgil
1905ce51ae fix(store): normalise compact archive formats
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 17:10:25 +00:00
Virgil
fd6f1fe80a docs(store): sharpen agent-facing examples
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 17:05:51 +00:00
Virgil
5527c5bf6b docs(store): prefer config literals in examples
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 16:49:48 +00:00
Virgil
b20870178c refactor(store): unify scoped prefix helper naming
Align the scoped helper name with the rest of the package and fix the RFC reference paths so the docs point at real local sources.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 16:47:13 +00:00
Virgil
9dc0b9bfcf refactor(scope): make scoped group access explicit
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 16:37:56 +00:00
Virgil
d682dcd5dc docs(scope): prefer explicit scoped examples
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 16:31:38 +00:00
Virgil
a2a99f6e9b docs(store): clarify package surface
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 16:28:07 +00:00
Virgil
06f6229eaf feat(store): expose workspace state directory config
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 16:24:27 +00:00
Virgil
dfbdace985 feat: add scoped store transactions
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 16:09:14 +00:00
Virgil
1fb8295713 feat(store): add scoped store config constructor
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 16:06:43 +00:00
Virgil
cae3c32d51 refactor: add AX config validation helpers
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 15:47:56 +00:00
Snider
f3018a582b docs: add core/go RFC primitives for agent reference
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 15:49:04 +01:00
84a27855a1 Merge pull request '[agent/codex:gpt-5.4] Read docs/RFC-STORE.md fully. Find features described in the...' (#153) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 14:44:02 +00:00
Virgil
32e7413bf4 feat(store): make workspace state paths declarative
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 14:43:42 +00:00
c48606179b Merge pull request '[agent/codex:gpt-5.4-mini] Read docs/RFC-STORE.md fully. Find features described in the...' (#152) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 14:36:53 +00:00
Virgil
caacbbd1c1 docs(store): clarify SQLite journal implementation
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 14:36:40 +00:00
a5b0f54d75 Merge pull request '[agent/codex:gpt-5.4-mini] Read docs/RFC-STORE.md fully. Find features described in the...' (#151) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 14:23:39 +00:00
Virgil
b43eb4e57a docs(store): make public comments example-driven
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 14:23:29 +00:00
223cf49e7c Merge pull request '[agent/codex:gpt-5.4-mini] Read docs/RFC-STORE.md fully. Find features described in the...' (#150) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 14:19:41 +00:00
Virgil
c2c5cecd7d refactor(store): unify journal configuration storage
Keep the exported JournalConfiguration type as the single in-memory representation.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 14:19:28 +00:00
07704d01ab Merge pull request '[agent/codex:gpt-5.4-mini] Read docs/RFC-STORE.md fully. Find features described in the...' (#149) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 14:16:06 +00:00
Virgil
5587e301bd refactor(store): copy journal result maps
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 14:15:53 +00:00
30a3c5fce6 Merge pull request '[agent/codex:gpt-5.4-mini] Read docs/RFC-STORE.md fully. Find features described in the...' (#148) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 13:37:38 +00:00
Virgil
c8261c5eb2 docs(store): prefer config literals in examples
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 13:37:27 +00:00
bfa1ec05da Merge pull request '[agent/codex:gpt-5.4-mini] Read docs/RFC-STORE.md fully. Find features described in the...' (#147) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 13:25:33 +00:00
Virgil
019a72d152 docs(store): clarify AX examples for options
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 13:25:19 +00:00
c5bc6f3fdf Merge pull request '[agent/codex:gpt-5.4-mini] Read docs/RFC-STORE.md fully. Find features described in the...' (#146) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 13:16:38 +00:00
Virgil
168c94d525 refactor(scope): centralise namespace quota checks
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 13:16:24 +00:00
3b7ff52449 Merge pull request '[agent/codex:gpt-5.4-mini] Read docs/RFC-STORE.md fully. Find features described in the...' (#145) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 13:10:44 +00:00
Virgil
772a78357c fix(store): avoid compact archive filename collisions
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 13:10:33 +00:00
c6d3e060ea Merge pull request '[agent/codex:gpt-5.4-mini] Read docs/RFC-STORE.md fully. Find features described in the...' (#144) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 13:07:47 +00:00
Virgil
85bef185e8 refactor(store): clone cached orphan slice
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 13:07:36 +00:00
209b5b6b2d Merge pull request '[agent/codex:gpt-5.4-mini] Read docs/RFC-STORE.md fully. Find features described in the...' (#143) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 12:13:20 +00:00
Virgil
dd4c9a2585 refactor(store): clarify workspace summary prefix naming
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 12:13:08 +00:00
4b927ff624 Merge pull request '[agent/codex:gpt-5.4-mini] Read docs/RFC-STORE.md fully. Find features described in the...' (#142) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 12:08:32 +00:00
Virgil
f6a602f064 docs(store): improve agent-facing examples
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 12:08:14 +00:00
66b75948b2 Merge pull request '[agent/codex:gpt-5.4-mini] Read docs/RFC-STORE.md fully. Find features described in the...' (#141) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 12:05:11 +00:00
Virgil
11b23b99c0 docs(store): refine AX-oriented comments
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 12:05:00 +00:00
ae87ea5801 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#140) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 12:01:53 +00:00
Virgil
a2ddacb27b fix(store): wrap orphan cleanup errors
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 12:01:39 +00:00
Snider
38e0cb3b2a docs: update store RFC and AX principles reference
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 12:58:44 +01:00
cc91496c60 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#139) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 11:57:49 +00:00
Virgil
92db4b72ff docs(store): clarify shared journal flow
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 11:57:37 +00:00
b33f8941e2 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#138) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 11:54:16 +00:00
Virgil
9450a293cf refactor(store): rename journal write helper
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 11:54:03 +00:00
c281ab2b93 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#137) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 11:48:04 +00:00
Virgil
ed51aa021d refactor(store): rename parent store fields
Use parentStore in the scoped and workspace wrappers so ownership reads more clearly for agents.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 11:47:49 +00:00
c6b35d2a9c Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#136) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 11:44:12 +00:00
Virgil
3bd0ee531b refactor(store): align workspace naming with AX
Rename the workspace database field for clearer agent-facing semantics and improve one public usage example.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 11:43:54 +00:00
3781c533ed Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#135) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 11:40:40 +00:00
Virgil
a1ceea8eea refactor(store): clarify compact archive entry names
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 11:40:29 +00:00
95d79068e0 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#134) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 11:37:18 +00:00
Virgil
2f23e8ef0d chore(store): clarify AX config guidance
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 11:37:06 +00:00
0fc0b1032b Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#133) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 11:33:47 +00:00
Virgil
c6c359e1c7 docs(store): add ax-oriented usage examples
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 11:33:32 +00:00
d2fa2cd68c Merge pull request '[agent/codex:gpt-5.4] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#132) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 11:30:28 +00:00
Virgil
57da334a1d docs(store): tighten AX API guidance
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 11:30:13 +00:00
e79b923b67 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#131) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 11:26:32 +00:00
Virgil
0ea38777d7 refactor(store): use descriptive configuration names
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 11:26:16 +00:00
a6f2a16400 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#130) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 11:23:01 +00:00
Virgil
2ff98991a1 fix(store): require explicit database path in config
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 11:22:50 +00:00
733b73a4aa Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#129) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 11:19:36 +00:00
Virgil
edf9162c21 docs(store): clarify workspace query support
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 11:19:22 +00:00
8b853d5e96 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#128) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 11:13:51 +00:00
Virgil
4031b6719f docs(ax): prefer declarative store configuration
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 11:13:36 +00:00
0acac61a9e Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#127) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 11:09:48 +00:00
Virgil
38638268c7 refactor(transaction): use descriptive example names
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 11:09:35 +00:00
f54cf370cb Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#126) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 11:05:59 +00:00
Virgil
f2e456be46 refactor(store): clarify journal helper names
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 11:05:48 +00:00
c2c9891942 Merge pull request '[agent/codex:gpt-5.4] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#125) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 10:47:03 +00:00
Virgil
6ba5701955 refactor(config): validate declarative store options
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 10:46:46 +00:00
23e67585de Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#124) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 10:40:50 +00:00
Virgil
80bd9b59a4 docs(journal): sharpen query usage examples
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 10:40:37 +00:00
aaff717d0e Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#123) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 10:37:12 +00:00
Virgil
aa83e59b69 fix(scope): enforce quotas in scoped transactions
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 10:36:57 +00:00
45c2b11fc1 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#122) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 10:33:39 +00:00
Virgil
05c34585db refactor(store): clarify helper naming
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 10:33:25 +00:00
0124e24ef4 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#121) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 10:30:09 +00:00
Virgil
4e6a97c119 docs(store): clarify declarative constructor guidance
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 10:29:56 +00:00
e688e3cf9a Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#120) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 10:25:49 +00:00
Virgil
089f80c087 refactor(store): clarify key-value terminology
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 10:25:34 +00:00
0b91addbc9 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#119) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 10:23:11 +00:00
Virgil
21ce2938c8 refactor(workspace): clarify shared cleanup error context
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 10:23:00 +00:00
58c83f6018 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#118) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 10:17:42 +00:00
Virgil
7238871a3a refactor(scope): use groups helper for quota counting
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 10:17:28 +00:00
3dad84309b Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#117) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 10:13:39 +00:00
Virgil
7c59f9d011 fix(store): allow discard after workspace close
Make workspace cleanup idempotent so a closed workspace can still be discarded and removed from disk later. Also clarify the configuration comments for AX-oriented usage.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 10:13:27 +00:00
a971ff7d9b Merge pull request '[agent/codex:gpt-5.4] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#116) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 10:09:49 +00:00
Virgil
9d6420d37f test(events): cover re-entrant callback subscriptions
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 10:09:25 +00:00
fb973d81bf Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#115) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 10:03:57 +00:00
Virgil
7874f6eda8 feat(scope): add scoped transaction wrapper
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 10:03:41 +00:00
e9527e4b76 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#114) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 09:54:16 +00:00
Virgil
4415c35846 fix(journal): stabilise journal ordering
Order journal queries and archive compaction by committed_at, entry_id so rows with identical timestamps are returned predictably.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 09:54:03 +00:00
8b00de4489 Merge pull request '[agent/codex:gpt-5.4] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#113) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 09:48:16 +00:00
Virgil
039260fcf6 test(ax): enforce exported field usage examples
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 09:48:02 +00:00
bdaf9c09cd Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#112) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 09:43:44 +00:00
Virgil
09c78e13f4 fix(store): require complete journal configuration
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 09:43:27 +00:00
80ba2afd44 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#111) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 09:35:10 +00:00
Virgil
50d368f1ae feat(scope): add declarative scoped constructor
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 09:34:54 +00:00
c6be4d58ef Merge pull request '[agent/codex:gpt-5.4] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#110) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 09:23:21 +00:00
Virgil
4bd6b41d78 fix(workspace): preserve orphan aggregates during recovery
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 09:23:05 +00:00
74b5c3dacc Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#109) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 09:17:04 +00:00
Virgil
b10f4771bd fix(store): guarantee notification timestamps
Clarify the workspace comments and ensure notify always stamps events before dispatch.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 09:16:52 +00:00
9baee34452 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find ONE featur...' (#108) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 09:09:25 +00:00
Virgil
a3f49539f4 feat(store): add transaction read helpers
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 09:09:12 +00:00
c68a435606 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find ONE featur...' (#107) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 09:05:08 +00:00
Virgil
37500c56ae refactor(store): clarify SQLite handle names
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 09:04:56 +00:00
b2c1575577 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find ONE featur...' (#106) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 09:01:36 +00:00
Virgil
4581c09631 fix(store): normalise default directory paths
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 09:01:23 +00:00
9daa44edb3 Merge pull request '[agent/codex:gpt-5.4] Read ~/spec/code/core/go/store/RFC.md fully. Find ONE featur...' (#105) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 08:57:23 +00:00
Virgil
a662498891 docs(ax): fix misleading store usage examples
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 08:57:06 +00:00
16fe77feb2 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find ONE featur...' (#104) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 08:50:28 +00:00
Virgil
e73d55d5ca refactor(store): rename transaction helper
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 08:50:17 +00:00
363b384296 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find ONE featur...' (#103) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 08:47:14 +00:00
Virgil
e55a8a8457 feat(store): add transaction api
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 08:46:59 +00:00
7ee8e754fa Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find ONE featur...' (#102) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 08:41:43 +00:00
Virgil
cc8bebb8e0 refactor(store): clarify journal configuration and workspace errors
Add a declarative journal configuration check and wrap workspace database errors with package context.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 08:41:28 +00:00
c422c2e15c Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find ONE featur...' (#101) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 08:36:58 +00:00
Virgil
3742da144e fix(store): support Flux bucket filters in QueryJournal
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 08:36:42 +00:00
7a571f1958 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find ONE featur...' (#100) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 08:33:13 +00:00
Virgil
2a28b5a71b feat(store): add closed-state accessor
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 08:32:55 +00:00
978bc84e79 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find ONE featur...' (#99) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 08:28:22 +00:00
Virgil
28ea397282 fix(workspace): normalise orphan recovery paths
Handle the documented .core/state/ form the same as the default cache key and add a regression test for cached orphan recovery.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 08:28:07 +00:00
fe098b1260 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find ONE featur...' (#98) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 08:23:33 +00:00
Virgil
5116662f41 feat(store): add database path accessor
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 08:23:23 +00:00
8f288744c9 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find ONE featur...' (#97) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 08:17:32 +00:00
Virgil
7a4997edd9 feat(workspace): add explicit orphan-preserving close
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 08:17:20 +00:00
f6bb8d9588 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find ONE featur...' (#96) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 08:12:44 +00:00
Virgil
e1cb275578 fix(store): preserve orphan files for recovery
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 08:12:29 +00:00
e7974717de Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find ONE featur...' (#95) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 08:08:31 +00:00
Virgil
7d3b62086d feat(journal): accept PRAGMA queries
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 08:08:21 +00:00
3d1010cedb Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find ONE featur...' (#94) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-04 07:57:40 +00:00
Virgil
1c92e47b24 fix(store): cache orphan workspaces during startup
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 07:57:24 +00:00
2b50246e2e Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#93) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 09:17:10 +00:00
Virgil
1b5f59ebc5 refactor(store): align workspace docs with AX
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 09:16:59 +00:00
ddc3e23198 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#92) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 09:13:21 +00:00
Virgil
af0e677d65 refactor(store): clarify journal query naming
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 09:13:07 +00:00
ecde0858e7 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#91) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 09:10:01 +00:00
Virgil
d3a97bc506 fix(journal): accept raw SQL queries
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 09:09:48 +00:00
1d97d189a8 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#90) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 09:06:40 +00:00
Virgil
13db0508e0 docs(store): clarify workspace database comment
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 09:06:29 +00:00
747825fb9c Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#89) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 09:02:15 +00:00
Virgil
28b917cad6 refactor(scope): clarify watcher error contexts
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 09:02:04 +00:00
0b9ff95960 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#88) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 08:58:27 +00:00
Virgil
214b024d12 refactor(scope): clarify scoped error contexts
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 08:58:15 +00:00
d6bab73a47 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#87) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 08:54:32 +00:00
Virgil
828b55960b refactor(store): clarify constructor naming and docs
Prefer struct-literal configuration in package docs and rename internal constructor helpers for clarity.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 08:54:19 +00:00
ff29823f37 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#86) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 08:49:57 +00:00
Virgil
a616a21c04 refactor(store): expose workspace database path
Add a semantic accessor for the workspace backing file and cover it with a test.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 08:49:45 +00:00
c86fba62d2 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#85) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 08:46:01 +00:00
Virgil
ee984818d2 feat(store): expose active configuration
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 08:45:44 +00:00
6ba41748de Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#84) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 08:41:34 +00:00
Virgil
799d79d4e2 docs(store): clarify pagination and orphan scan wording
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 08:41:20 +00:00
442c498cb4 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#83) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 08:36:50 +00:00
Virgil
7fa9843083 refactor(scope): clarify scoped store lookup
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 08:36:40 +00:00
95a27f97d5 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#82) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 08:32:50 +00:00
Virgil
bf7b616fe1 feat(store): add paginated group reads
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 08:32:35 +00:00
55a804bd08 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#81) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 08:28:03 +00:00
Virgil
013a72753b fix(store): scan workspace orphans at startup
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 08:27:52 +00:00
9977e51daa Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#80) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 08:23:46 +00:00
Virgil
5af3f90e2d fix(workspace): leave orphaned workspaces recoverable
Stop New() from eagerly discarding orphaned workspace files so callers can recover them explicitly through RecoverOrphans().

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 08:23:31 +00:00
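The flow commit 5af3f90e2d describes, with New() leaving orphaned workspace files in place and RecoverOrphans() reclaiming them explicitly, could be sketched like this. All names besides RecoverOrphans are assumptions, and the real signatures will differ.

```go
package main

import (
	"fmt"
	"sort"
)

// Workspace is a hypothetical stand-in for a workspace backing file.
type Workspace struct{ Name string }

// Store tracks open workspaces plus the orphans New found on disk.
type Store struct {
	open    map[string]Workspace
	orphans []string // present on disk but not registered
}

// New records orphans instead of eagerly discarding them, so a
// caller can still recover their contents later.
func New(onDisk, registered []string) *Store {
	s := &Store{open: map[string]Workspace{}}
	known := map[string]bool{}
	for _, name := range registered {
		known[name] = true
	}
	for _, name := range onDisk {
		if !known[name] {
			s.orphans = append(s.orphans, name)
		}
	}
	return s
}

// RecoverOrphans re-registers orphaned workspaces and returns their
// names sorted, for deterministic output.
func (s *Store) RecoverOrphans() []string {
	recovered := append([]string(nil), s.orphans...)
	sort.Strings(recovered)
	for _, name := range recovered {
		s.open[name] = Workspace{Name: name}
	}
	s.orphans = nil
	return recovered
}

func main() {
	s := New([]string{"b", "a", "kept"}, []string{"kept"})
	fmt.Println(s.RecoverOrphans()) // orphans, sorted
}
```

Splitting detection (New) from recovery (RecoverOrphans) keeps the destructive decision in the caller's hands, which is the point of the fix.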
935e88eba5 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#79) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 08:20:11 +00:00
Virgil
41eaa7c96c feat(journal): support Flux equality filters
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 08:20:00 +00:00
170071b7aa Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#78) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 08:15:15 +00:00
Virgil
757e973097 feat(workspace): restore startup orphan cleanup
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 08:15:01 +00:00
7141ff69ed Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#77) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 08:11:38 +00:00
Virgil
aae444ac3b fix(store): harden event shutdown
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 08:11:24 +00:00
b2d2e0fd9c Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#76) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 08:07:43 +00:00
Virgil
3e450fdc35 feat(store): expose workspace names
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 08:07:32 +00:00
1133a5c759 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#75) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 08:03:05 +00:00
Virgil
4c33a53b67 refactor(store): remove no-op startup orphan scan
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 08:02:53 +00:00
7e4c965c4a Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#74) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 07:59:04 +00:00
Virgil
f05b205c06 docs(store): prefer declarative package example
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 07:58:52 +00:00
5465c3d946 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#73) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 07:53:33 +00:00
Virgil
a2294650b4 feat(workspace): restore startup orphan scan
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 07:53:20 +00:00
c9ae5ad347 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#72) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 07:48:57 +00:00
Virgil
27c945d9d4 docs(store): prefer declarative quick start
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 07:48:47 +00:00
6bd2197dfe Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#71) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 07:44:43 +00:00
Virgil
f8b7b23da6 docs(store): prefer declarative configuration examples
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 07:44:28 +00:00
2abf7e416d Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#70) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 07:40:33 +00:00
Virgil
66d05a1822 refactor(store): remove no-op startup orphan scan
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 07:40:19 +00:00
972e3eba72 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#69) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 07:35:32 +00:00
Virgil
841e7b8936 fix(store): harden background purge interval
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 07:35:20 +00:00
bdbaa37412 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#68) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 07:30:59 +00:00
Virgil
acad59664d fix(store): close watcher channels on shutdown
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 07:30:50 +00:00
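One way to close watcher channels exactly once on shutdown, in the spirit of commit acad59664d (the `Watchers` type and its methods are illustrative, not the store's real internals):

```go
package main

import (
	"fmt"
	"sync"
)

// Watchers tracks subscriber channels; Close closes every channel
// exactly once so blocked receivers unblock on shutdown.
type Watchers struct {
	mu     sync.Mutex
	chans  []chan string
	closed bool
}

// Subscribe registers a new buffered watcher channel.
func (w *Watchers) Subscribe() <-chan string {
	w.mu.Lock()
	defer w.mu.Unlock()
	ch := make(chan string, 1)
	w.chans = append(w.chans, ch)
	return ch
}

// Close is idempotent: the closed flag guards against a double
// close panic if shutdown runs twice.
func (w *Watchers) Close() {
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.closed {
		return
	}
	w.closed = true
	for _, ch := range w.chans {
		close(ch)
	}
}

func main() {
	w := &Watchers{}
	ch := w.Subscribe()
	w.Close()
	_, open := <-ch
	fmt.Println(open) // false: receivers observe shutdown
}
```

Receivers then detect shutdown through the standard two-value receive rather than hanging forever on a channel nobody will write to.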
ac04c4d3a6 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#67) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 07:27:18 +00:00
Virgil
a9ab1fd2ee refactor(store): move options onto config
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 07:27:04 +00:00
b4fa274ccc Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#66) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 07:22:23 +00:00
Virgil
a2067baa5a feat(store): scan orphan workspaces on startup
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 07:22:12 +00:00
fb2ef6fde0 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#65) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 07:17:55 +00:00
Virgil
79581e9824 docs(store): tighten workspace AX comments
Align workspace public comments with the AX guidance by using more concrete examples and clearer phrasing.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 07:17:46 +00:00
e943efb679 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#64) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 07:14:38 +00:00
Virgil
3a8cfcedf9 feat(store): add prefix cleanup helpers
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 07:14:22 +00:00
50b4eb8838 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#63) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 07:08:45 +00:00
Virgil
2ef3c95fd5 docs(store): clarify workspace recovery flow
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 07:08:33 +00:00
c96b1699ac Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#62) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 07:04:48 +00:00
Virgil
abf8fc20af refactor(workspace): keep orphan recovery explicit
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 07:04:33 +00:00
117bc2d414 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#61) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 07:00:52 +00:00
Virgil
c12aba4145 feat(scope): support scoped wildcard watchers
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 07:00:35 +00:00
d10c66c6ec Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#60) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 06:55:48 +00:00
Virgil
303b75444d feat(scope): add scoped event delegation
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 06:55:37 +00:00
addcd482a7 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#59) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 06:52:02 +00:00
Virgil
406825917b refactor(store): tighten AX workspace naming
Align workspace and journal comments with the current contract while keeping the API stable.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 06:51:49 +00:00
74d4629c5d Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#58) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 06:47:54 +00:00
Virgil
aad8dded6b feat(store): add declarative store config
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 06:47:39 +00:00
fcae6ae750 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#57) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 06:39:45 +00:00
Virgil
f9a7e542bf docs(store): clarify workspace lifecycle comments
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 06:39:35 +00:00
4e339f15d2 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#56) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 06:35:39 +00:00
Virgil
2d9c5b2b49 refactor(store): clarify backing store names
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 06:35:23 +00:00
a6c0645853 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#55) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 06:31:47 +00:00
Virgil
bff79c31ca fix(store): add nil-safe guards
Add nil/closed checks across the store, scoped store, workspace, journal, event, and compact entry points so agent callers get wrapped errors instead of panics.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 06:31:35 +00:00
87e4043848 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#54) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 06:24:02 +00:00
Virgil
a2adbf7ba6 refactor(store): tighten AX doc comments
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 06:23:51 +00:00
e8d7bd8230 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#53) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 06:20:16 +00:00
Virgil
0ce6014836 docs(store): align agent-facing file layout docs
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 06:20:04 +00:00
cf671930f4 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#52) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 06:16:00 +00:00
Virgil
fd3266306a feat(store): expose journal configuration snapshot
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 06:15:45 +00:00
1d704ec7bc Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#51) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 06:11:48 +00:00
Virgil
12809c8d64 refactor(store): clarify journal configuration names
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 06:11:38 +00:00
91a88075fa Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#50) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 06:08:29 +00:00
Virgil
f4492b1861 feat(store): surface journal configuration on store
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 06:08:16 +00:00
0df03def5e Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#49) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 06:04:51 +00:00
Virgil
1f4d1914c8 refactor(workspace): mirror orphan aggregation in cleanup
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 06:04:39 +00:00
58f55c8cf5 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#48) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 06:01:14 +00:00
Virgil
4e8f0a0016 refactor(store): clarify workspace and archive naming
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 06:01:00 +00:00
212b4df81c Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#47) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 05:57:52 +00:00
Virgil
07500561fd docs(store): sharpen AX usage comments
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 05:57:40 +00:00
4d2d4e5220 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#46) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 05:54:10 +00:00
Virgil
0c1b51413f feat(store): broaden journal query handling
Improve Flux measurement parsing, make orphan cleanup flow explicit, and align the CLAUDE watch example with the current API.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 05:53:56 +00:00
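Flux measurement parsing of the sort commit 0c1b51413f improves could, in a much simplified form, pull the measurement name out of an equality filter. This is a sketch, not the journal's actual parser, and the helper name is an assumption.

```go
package main

import (
	"fmt"
	"regexp"
)

// measurementRe matches the _measurement equality clause of a
// Flux filter, e.g. r._measurement == "events".
var measurementRe = regexp.MustCompile(`r\._measurement\s*==\s*"([^"]+)"`)

// measurement reports the measurement named in the query, if any.
func measurement(query string) (string, bool) {
	m := measurementRe.FindStringSubmatch(query)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	q := `from(bucket:"journal") |> range(start: -1h) |> filter(fn: (r) => r._measurement == "events")`
	name, ok := measurement(q)
	fmt.Println(name, ok)
}
```

A real parser would tokenize the pipeline rather than pattern-match one clause, but the shape of the problem is the same: recover structure from query text before hitting storage.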
3e441685d8 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#45) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 05:49:41 +00:00
Virgil
bbbcb1becf docs(store): restore AX usage examples
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 05:49:31 +00:00
89c6a642ef Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#44) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 05:45:44 +00:00
Virgil
bc578265a8 refactor(scope): make scoped helpers explicit
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 05:45:24 +00:00
b8b7568d08 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#43) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 05:41:06 +00:00
Virgil
619e82a459 refactor(store): rename slice helper for clarity
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 05:40:53 +00:00
2c75fc250e Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#42) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 05:35:35 +00:00
Virgil
b6b29b50ce feat(store): add configurable purge interval
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 05:35:22 +00:00
1b61833c64 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#41) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 05:33:05 +00:00
Virgil
33571be892 refactor(store): align wrapper naming with RFC
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 05:32:46 +00:00
5782584a99 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#40) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 05:29:09 +00:00
Virgil
294a998282 refactor(store): rename journal config fields
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 05:28:57 +00:00
a3dd245d8a Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#39) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 05:25:56 +00:00
Virgil
07bd25816e refactor(store): simplify OnChange callback API
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 05:25:42 +00:00
17be565026 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#38) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 05:21:45 +00:00
Virgil
69cf03e69d docs(store): expand package examples for workspace flow
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 05:21:35 +00:00
e9c4c3f35a Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#37) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 05:18:16 +00:00
Virgil
0accc6e85e feat(store): clean up orphaned workspaces on startup
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 05:18:05 +00:00
1b2aaffc5d Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#36) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 05:14:00 +00:00
Virgil
387d1463fb feat(store): add RFC-scoped helpers and callbacks
Add the RFC-named scoped helpers, AllSeq iteration, and group-filtered change callbacks.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 05:13:46 +00:00
e262b5513f Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#35) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 05:06:08 +00:00
Virgil
016e2c3777 refactor(store): remove legacy alias entry points
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 05:05:53 +00:00
ee056096e4 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#34) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 05:01:12 +00:00
Virgil
d7f03d5db0 fix(store): make Close idempotent
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 05:00:56 +00:00
9c422f9568 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#33) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 04:56:24 +00:00
Virgil
c2f7fc26ff refactor(store): make orphan recovery deterministic
Align the watcher examples with the current API and sort recovered workspaces for predictable output.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 04:56:08 +00:00
53c765b1f1 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#32) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 04:50:41 +00:00
Virgil
2353bdf2f7 fix(store): honour Flux range stop bound
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 04:50:26 +00:00
9a86bfa5c1 Merge pull request '[agent/codex:gpt-5.4-mini] Read ~/spec/code/core/go/store/RFC.md fully. Find features d...' (#31) from agent/read---spec-code-core-go-store-rfc-md-fu into dev
2026-04-03 04:44:59 +00:00
Virgil
5c7e243fc0 feat(store): align public API with RFC
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 04:44:45 +00:00
e6d3001f76 Merge pull request '[agent/codex:gpt-5.4] Implement docs/RFC-STORE.md using docs/RFC-CORE-008-AGENT-EX...' (#29) from agent/implement-docs-rfc-store-md-using-docs-r into dev
2026-03-30 21:08:06 +00:00
Virgil
4c44cfa336 fix(store): harden RFC journal and workspace flows
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 21:07:30 +00:00
d4c6400701 Merge pull request '[agent/codex:gpt-5.4] Implement docs/RFC-STORE.md using docs/RFC-CORE-008-AGENT-EX...' (#28) from agent/implement-docs-rfc-store-md-using-docs-r into dev
All checks were successful
Security Scan / security (push) Successful in 8s
Test / test (push) Successful in 1m44s
2026-03-30 20:59:34 +00:00
Virgil
4ab2d26b74 feat(store): align workspace query with RFC
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 20:59:09 +00:00
165aecabf8 Merge pull request '[agent/codex:gpt-5.4] Implement docs/RFC-STORE.md using docs/RFC-CORE-008-AGENT-EX...' (#27) from agent/implement-docs-rfc-store-md-using-docs-r into dev
All checks were successful
Security Scan / security (push) Successful in 10s
Test / test (push) Successful in 2m38s
2026-03-30 20:53:38 +00:00
Virgil
d983760445 feat(store): add zstd archive support
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 20:53:12 +00:00
af40581e5e Merge pull request '[agent/codex:gpt-5.4] Implement docs/RFC-STORE.md using docs/RFC-CORE-008-AGENT-EX...' (#26) from agent/implement-docs-rfc-store-md-using-docs-r into dev
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m34s
2026-03-30 20:47:17 +00:00
Virgil
d9fad2d6be feat(store): implement RFC workspace and journal surfaces
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 20:46:43 +00:00
a0b81cc0d0 Merge pull request '[agent/codex:gpt-5.4] Implement docs/RFC-STORE.md using docs/RFC-CORE-008-AGENT-EX...' (#25) from agent/implement-docs-rfc-store-md-using-docs-r into dev
All checks were successful
Security Scan / security (push) Successful in 16s
Test / test (push) Successful in 2m29s
2026-03-30 20:03:59 +00:00
Virgil
134853e6df fix(store): tighten scoped purge and delete events
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 20:03:19 +00:00
Snider
4f257cee6f feat(store): 5.4 AX implementation pass — scope, events, coverage
All checks were successful
Security Scan / security (push) Successful in 8s
Test / test (push) Successful in 1m44s
Code-only changes from 5.4 RFC implementation. Scope helpers,
event test coverage, store test coverage, doc.go improvements.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 20:41:22 +01:00
Virgil
9cd7b9b1a7 fix(store): stabilise AX iteration behaviour
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 19:39:54 +00:00
Virgil
68e7d1e53a fix(store): tighten AX docs and scoped constructor validation
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 19:29:48 +00:00
Virgil
a0ee621ba4 refactor(store): align AX terminology in docs and comments
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 19:18:20 +00:00
Snider
94559e4f37 docs: add AX + store RFCs for agent dispatch
All checks were successful
Security Scan / security (push) Successful in 10s
Test / test (push) Successful in 1m44s
Temporary — specs needed in-repo until core-agent mount bug is fixed.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 20:14:42 +01:00
Virgil
77345036ad feat(scope): add namespace-local helper methods
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 20:14:42 +01:00
Virgil
6813cd0308 feat(scope): add namespace-local helper methods
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 19:14:09 +00:00
842658c4c7 Merge pull request '[agent/codex:gpt-5.4-mini] Update the code against the AX design principles in docs/RFC...' (#23) from agent/update-the-code-against-the-ax-design-pr into dev
All checks were successful
Security Scan / security (push) Successful in 10s
Test / test (push) Successful in 1m44s
2026-03-30 19:06:16 +00:00
Virgil
05410a9498 refactor(store): clarify public AX surface
Add descriptive public type comments and rename watcher pattern fields so the package reads more directly for agent consumers.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 19:06:00 +00:00
c1f6939651 Merge pull request '[agent/codex:gpt-5.4-mini] Update the code against the AX design principles in docs/RFC...' (#22) from agent/update-the-code-against-the-ax-design-pr into dev
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m38s
2026-03-30 19:01:11 +00:00
Virgil
c54dfd7a96 refactor(store): use descriptive event registration identifiers
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 19:00:51 +00:00
9ce34a9723 Merge pull request '[agent/codex:gpt-5.4-mini] Update the code against the AX design principles in docs/RFC...' (#21) from agent/update-the-code-against-the-ax-design-pr into dev
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m43s
2026-03-30 18:57:34 +00:00
Virgil
eb53521d50 docs(architecture): spell out GetSplit separator name
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 18:57:21 +00:00
Snider
868320c734 docs: add AX + store RFCs for agent dispatch
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m38s
Temporary — specs needed in-repo until core-agent mount bug is fixed.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 19:52:14 +01:00
1b650099ec Merge pull request '[agent/codex:gpt-5.4-mini] Update the code against the AX design principles in docs/RFC...' (#20) from agent/update-the-code-against-the-ax-design-pr into dev
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m43s
2026-03-30 18:49:32 +00:00
Virgil
4edd7dd0ca refactor(store): align AX examples and terminology
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 18:49:17 +00:00
a80e352b43 Merge pull request '[agent/codex:gpt-5.4-mini] Update the code against the AX design principles in docs/RFC...' (#19) from agent/update-the-code-against-the-ax-design-pr into dev
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m43s
2026-03-30 18:37:21 +00:00
Virgil
23f207db3f refactor(store): tighten AX naming and error contexts
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 18:37:07 +00:00
Virgil
f144c5eb01 refactor(test): tighten AX naming in tests
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m43s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 18:23:56 +00:00
Virgil
2eedf1e937 refactor(store): tighten AX naming and examples
All checks were successful
Security Scan / security (push) Successful in 10s
Test / test (push) Successful in 1m41s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 18:17:07 +00:00
Virgil
5b944410e7 docs(repo): link the AX RFC from the main entry points
All checks were successful
Security Scan / security (push) Successful in 11s
Test / test (push) Successful in 1m42s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 18:12:28 +00:00
Virgil
bf3db41d9f test(store): improve AX coverage and error paths
All checks were successful
Security Scan / security (push) Successful in 8s
Test / test (push) Successful in 1m39s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 18:08:33 +00:00
Virgil
6261ea2afb refactor(store): clarify AX terminology in code and docs
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m42s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 17:45:39 +00:00
Virgil
05af917e17 refactor(store): clarify AX helper names and examples
All checks were successful
Security Scan / security (push) Successful in 10s
Test / test (push) Successful in 1m42s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 17:37:50 +00:00
Virgil
fdadc24579 docs(store): align remaining AX examples
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m40s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 17:32:09 +00:00
Virgil
15892136e8 docs(store): align package examples with AX
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m40s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 17:02:21 +00:00
Virgil
da29c712b4 docs(store): align RFC-STORE with AX
All checks were successful
Security Scan / security (push) Successful in 10s
Test / test (push) Successful in 1m38s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 16:57:26 +00:00
Virgil
d6cd9fd818 docs(store): align quota config comments with AX
All checks were successful
Security Scan / security (push) Successful in 8s
Test / test (push) Successful in 1m40s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 16:54:02 +00:00
Virgil
a458464876 docs(store): add field usage examples
All checks were successful
Security Scan / security (push) Successful in 10s
Test / test (push) Successful in 1m42s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 16:50:04 +00:00
Virgil
a82b0d379b docs(ax): align CLAUDE with AX examples
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m39s
Update the repository guidance to match the current AX conventions, including explicit error handling and the actual file layout.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 16:46:18 +00:00
Virgil
5df38516cc refactor(store): tighten AX examples and error handling
All checks were successful
Security Scan / security (push) Successful in 8s
Test / test (push) Successful in 1m26s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 16:41:56 +00:00
Virgil
c15862a81d refactor(store): tighten AX docs and helpers
All checks were successful
Security Scan / security (push) Successful in 8s
Test / test (push) Successful in 1m40s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 16:33:07 +00:00
Virgil
0fb0d16149 refactor(store): tighten AX error context and examples
All checks were successful
Security Scan / security (push) Successful in 8s
Test / test (push) Successful in 1m39s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 16:27:54 +00:00
Virgil
0bda91f0bd refactor(store): tighten AX public comments
Some checks are pending
Security Scan / security (push) Waiting to run
Test / test (push) Waiting to run
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 16:20:35 +00:00
Virgil
cdc4d5a11d refactor(store): sharpen AX examples and comments
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m39s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 16:13:55 +00:00
Virgil
ead99906de refactor(store): tighten AX event naming and examples
All checks were successful
Security Scan / security (push) Successful in 10s
Test / test (push) Successful in 1m41s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 16:07:53 +00:00
Virgil
289d864b0d docs(store): tighten AX examples and comments
All checks were successful
Security Scan / security (push) Successful in 8s
Test / test (push) Successful in 1m38s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 15:55:04 +00:00
Virgil
30db60c77f refactor(store): tighten AX naming and examples
All checks were successful
Security Scan / security (push) Successful in 8s
Test / test (push) Successful in 1m39s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 15:48:33 +00:00
Virgil
d54609b974 docs(ax): tighten usage examples in public comments
All checks were successful
Security Scan / security (push) Successful in 10s
Test / test (push) Successful in 1m40s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 15:43:42 +00:00
Virgil
25eb05e68d refactor(store): rename sqlite schema for AX clarity
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m38s
Migrate legacy kv databases to the descriptive entries schema and cover the new iterator branches.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 15:37:49 +00:00
Virgil
57e061f742 fix(events): make callback dispatch re-entrant safe
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m34s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 15:22:33 +00:00
Virgil
adc463ba75 docs(ax): add Codex conventions bridge
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m36s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 15:16:16 +00:00
Virgil
2c55d220fa refactor(store): tighten scoped AX names
All checks were successful
Security Scan / security (push) Successful in 10s
Test / test (push) Successful in 1m36s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 15:06:20 +00:00
Virgil
36a8d89677 refactor(store): tighten AX naming
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m40s
Replace the remaining shorthand variable names in the implementation, examples, and supporting docs with explicit names.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 15:02:28 +00:00
Virgil
2bfb5af5e2 refactor(store): apply AX naming cleanup
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m35s
Rename terse locals and callback internals, and update the user-facing examples to use explicit names.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 14:54:34 +00:00
Virgil
37740a8bd9 refactor(store): remove legacy AX aliases
All checks were successful
Security Scan / security (push) Successful in 15s
Test / test (push) Successful in 1m41s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 14:38:07 +00:00
Virgil
335c6460c9 refactor(store): adopt AX primary names
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m35s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 14:22:49 +00:00
Snider
f845bc4368 docs: add store RFC and AX RFC to repo docs for agent access
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 2m39s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-30 14:25:56 +01:00
Virgil
083bc1b232 chore(deps): remove stale core/log dependency
All checks were successful
Security Scan / security (push) Successful in 8s
Test / test (push) Successful in 2m32s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-29 23:27:16 +00:00
Virgil
380f2b9157 fix(store): finish ax v0.8.0 polish
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-29 23:26:44 +00:00
Claude
c3de82b207 feat: upgrade to core v0.8.0-alpha.1, replace banned stdlib imports
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 2m33s
Replace fmt, errors, strings, path/filepath with Core primitives
across 8 files. Keep strings for SplitSeq/FieldsSeq/Builder/Repeat.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 13:58:50 +00:00
e2678f5cde Merge pull request '[agent/codex] AX review' (#13) from agent/ax-review--banned-imports--test-naming into dev
All checks were successful
Security Scan / security (push) Successful in 16s
Test / test (push) Successful in 1m10s
2026-03-26 11:30:13 +00:00
Virgil
f82b1e9dcb test(conventions): enforce AX review rules
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 11:29:56 +00:00
ce189c69e2 Merge pull request '[agent/codex] AX review: banned imports, test naming, usage comments.' (#12) from agent/ax-review--banned-imports--test-naming into dev
All checks were successful
Security Scan / security (push) Successful in 2m30s
Test / test (push) Successful in 4m56s
2026-03-26 11:17:38 +00:00
Virgil
ec500b86d2 chore(repo): enforce AX review conventions
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 11:17:06 +00:00
51 changed files with 16104 additions and 2097 deletions


@@ -1,5 +1,8 @@
name: CI
env:
CGO_ENABLED: "1"
on:
push:
branches: [main, dev]


@@ -1,22 +1,37 @@
version: "2"
run:
timeout: 5m
go: "1.26"
linters:
enable:
- govet
- errcheck
- staticcheck
- unused
- gosimple
- ineffassign
- typecheck
- depguard
- gocritic
- gofmt
disable:
- exhaustive
- wrapcheck
settings:
depguard:
rules:
legacy-module-paths:
list-mode: lax
files:
- $all
deny:
- pkg: forge.lthn.ai/
desc: use dappco.re/ module paths instead
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$
issues:
exclude-use-default: false
max-same-issues: 0
formatters:
enable:
- gofmt
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$

117
CLAUDE.md

@@ -4,11 +4,22 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
## What This Is
SQLite key-value store with TTL, namespace isolation, and reactive events. Pure Go (no CGO). Module: `dappco.re/go/core/store`
SQLite key-value store with TTL, namespace isolation, and reactive events. Pure Go (no CGO). Module: `dappco.re/go/store`
## AX Notes
- Prefer descriptive names over abbreviations.
- Public comments should show real usage with concrete values.
- Keep examples in UK English.
- Prefer `StoreConfig` and `ScopedStoreConfig` literals over option chains when the configuration is already known.
- Do not add compatibility aliases; the primary API names are the contract.
- Preserve the single-connection SQLite design.
- Verify with `go test ./...`, `go test -race ./...`, and `go vet ./...` before committing.
- Use conventional commits and include the `Co-Authored-By: Virgil <virgil@lethean.io>` trailer.
## Getting Started
Part of the Go workspace at `~/Code/go.work`—run `go work sync` after cloning. Single Go package with `store.go` (core) and `scope.go` (scoping/quota).
Part of the Go workspace at `~/Code/go.work`—run `go work sync` after cloning. Single Go package with `store.go` (core store API), `events.go` (watchers/callbacks), `scope.go` (scoping/quota), `journal.go` (journal persistence/query), `workspace.go` (workspace buffering), and `compact.go` (archive generation).
```bash
go test ./... -count=1
@@ -18,7 +29,7 @@ go test ./... -count=1
```bash
go test ./... # Run all tests
go test -v -run TestWatch_Good ./... # Run single test
go test -v -run TestEvents_Watch_Good_SpecificKey ./... # Run single test
go test -race ./... # Race detector (must pass before commit)
go test -cover ./... # Coverage (target: 95%+)
go test -bench=. -benchmem ./... # Benchmarks
@@ -31,9 +42,12 @@ go vet ./... # Vet
**Single-connection SQLite.** `store.go` pins `MaxOpenConns(1)` because SQLite pragmas (WAL, busy_timeout) are per-connection — a pool would hand out unpragma'd connections causing SQLITE_BUSY. This is the most important architectural decision; don't change it.
**Three-layer design:**
- `store.go` — Core `Store` type: CRUD on a `(grp, key)` compound-PK table, TTL via `expires_at` (Unix ms), background purge goroutine (60s interval), `text/template` rendering, `iter.Seq2` iterators
- `events.go` — Event system: `Watch`/`Unwatch` (buffered chan, cap 16, non-blocking sends drop events) and `OnChange` callbacks (synchronous in writer goroutine). `notify()` holds `s.mu` read-lock; **calling Watch/Unwatch/OnChange from inside a callback will deadlock** (they need write-lock)
- `store.go` — Core `Store` type: CRUD on an `entries` table keyed by `(group_name, entry_key)`, TTL via `expires_at` (Unix ms), background purge goroutine (60s interval), `text/template` rendering, `iter.Seq2` iterators
- `events.go` — Event system: `Watch`/`Unwatch` (buffered chan, cap 16, non-blocking sends drop events) and `OnChange` callbacks (synchronous in writer goroutine). Watcher and callback registries use separate locks, so callbacks can register or unregister subscriptions without deadlocking.
- `scope.go``ScopedStore` wraps `*Store`, prefixes groups with `namespace:`. Quota enforcement (`MaxKeys`/`MaxGroups`) checked before writes; upserts bypass quota. Namespace regex: `^[a-zA-Z0-9-]+$`
- `journal.go` — Journal persistence and query helpers layered on SQLite.
- `workspace.go` — Workspace buffers, commit flow, and orphan recovery.
- `compact.go` — Cold archive generation for completed journal entries.
**TTL enforcement is triple-layered:** lazy delete on `Get`, query-time `WHERE` filtering on bulk reads, and background purge goroutine.
@@ -42,36 +56,73 @@ go vet ./... # Vet
## Key API
```go
st, _ := store.New(":memory:") // or store.New("/path/to/db")
defer st.Close()
package main
st.Set("group", "key", "value") // no expiry
st.SetWithTTL("group", "key", "value", 5*time.Minute) // expires after TTL
val, _ := st.Get("group", "key") // lazy-deletes expired
st.Delete("group", "key")
st.DeleteGroup("group")
all, _ := st.GetAll("group") // excludes expired
n, _ := st.Count("group") // excludes expired
out, _ := st.Render(tmpl, "group") // excludes expired
removed, _ := st.PurgeExpired() // manual purge
total, _ := st.CountAll("prefix:") // count keys matching prefix (excludes expired)
groups, _ := st.Groups("prefix:") // distinct group names matching prefix
import (
"fmt"
"time"
// Namespace isolation (auto-prefixes groups with "tenant:")
sc, _ := store.NewScoped(st, "tenant")
sc.Set("config", "key", "val") // stored as "tenant:config" in underlying store
"dappco.re/go/store"
)
// With quota enforcement
sq, _ := store.NewScopedWithQuota(st, "tenant", store.QuotaConfig{MaxKeys: 100, MaxGroups: 10})
sq.Set("g", "k", "v") // returns ErrQuotaExceeded if limits hit
func main() {
storeInstance, err := store.New(":memory:")
if err != nil {
return
}
defer storeInstance.Close()
// Event hooks
w := st.Watch("group", "*") // wildcard: all keys in group ("*","*" for all)
defer st.Unwatch(w)
e := <-w.Ch // buffered chan, cap 16
configuredStore, err := store.NewConfigured(store.StoreConfig{
DatabasePath: ":memory:",
Journal: store.JournalConfiguration{
EndpointURL: "http://127.0.0.1:8086",
Organisation: "core",
BucketName: "events",
},
PurgeInterval: 30 * time.Second,
})
if err != nil {
return
}
defer configuredStore.Close()
unreg := st.OnChange(func(e store.Event) { /* synchronous in writer goroutine */ })
defer unreg()
if err := configuredStore.Set("group", "key", "value"); err != nil {
return
}
value, err := configuredStore.Get("group", "key")
if err != nil {
return
}
fmt.Println(value)
if err := configuredStore.SetWithTTL("session", "token", "abc123", 5*time.Minute); err != nil {
return
}
scopedStore, err := store.NewScopedConfigured(configuredStore, store.ScopedStoreConfig{
Namespace: "tenant",
Quota: store.QuotaConfig{MaxKeys: 100, MaxGroups: 10},
})
if err != nil {
return
}
if err := scopedStore.SetIn("config", "theme", "dark"); err != nil {
return
}
events := configuredStore.Watch("group")
defer configuredStore.Unwatch("group", events)
go func() {
for event := range events {
fmt.Println(event.Type, event.Group, event.Key, event.Value)
}
}()
unregister := configuredStore.OnChange(func(event store.Event) {
fmt.Println("changed", event.Group, event.Key, event.Value)
})
defer unregister()
}
```
## Coding Standards
@@ -85,7 +136,7 @@ defer unreg()
## Test Conventions
- Suffix convention: `_Good` (happy path), `_Bad` (expected errors), `_Ugly` (panics/edge)
- Test names follow `Test<File>_<Function>_<Good|Bad|Ugly>`, for example `TestEvents_Watch_Good_SpecificKey`
- Use `New(":memory:")` unless testing persistence; use `t.TempDir()` for file-backed
- TTL tests: 1ms TTL + 5ms sleep; use `sync.WaitGroup` not sleeps for goroutine sync
- `require` for preconditions, `assert` for verifications (`testify`)
@@ -93,10 +144,10 @@ defer unreg()
## Adding a New Method
1. Implement on `*Store` in `store.go`
2. If mutating, call `s.notify(Event{...})` after successful DB write
2. If mutating, call `storeInstance.notify(Event{...})` after successful database write
3. Add delegation method on `ScopedStore` in `scope.go` (prefix the group)
4. Update `checkQuota` in `scope.go` if it affects key/group counts
5. Write `_Good`/`_Bad` tests
5. Write `Test<File>_<Function>_<Good|Bad|Ugly>` tests
6. Run `go test -race ./...` and `go vet ./...`
## Docs

25
CODEX.md Normal file

@@ -0,0 +1,25 @@
# CODEX.md
This repository uses the same working conventions described in [`CLAUDE.md`](CLAUDE.md).
Keep the two files aligned.
## AX Notes
- Prefer descriptive names over abbreviations.
- Public comments should show real usage with concrete values.
- Keep examples in UK English.
- Prefer `StoreConfig` and `ScopedStoreConfig` literals over option chains when the configuration is already known.
- Do not add compatibility aliases; the primary API names are the contract.
- Preserve the single-connection SQLite design.
- Verify with `go test ./...`, `go test -race ./...`, and `go vet ./...` before committing.
- Use conventional commits and include the `Co-Authored-By: Virgil <virgil@lethean.io>` trailer.
## Repository Shape
- `store.go` contains the core store API and SQLite lifecycle.
- `events.go` contains mutation events, watchers, and callbacks.
- `scope.go` contains namespace isolation and quota enforcement.
- `journal.go` contains journal persistence and query helpers.
- `workspace.go` contains workspace buffering and orphan recovery.
- `compact.go` contains cold archive generation.
- `docs/` contains the package docs, architecture notes, and history.

21
DEPENDENCIES.md Normal file

@@ -0,0 +1,21 @@
# Dependency Exceptions
This repository is pure Go by default and permits `modernc.org/sqlite` as the
normal runtime database dependency. The following exception is documented
because the current PR contains load-bearing analytical workspace code that
cannot be replaced by a pure-Go DuckDB-compatible driver.
## `github.com/marcboeker/go-duckdb`
`github.com/marcboeker/go-duckdb` is retained only for DuckDB-backed workspace
buffers and LEM analytical import helpers. DuckDB files are produced and
consumed by existing data pipelines, and no pure-Go DuckDB implementation with
compatible SQL semantics is currently available. Replacing it with
`modernc.org/sqlite` would remove DuckDB JSON import, analytical table, and
workspace recovery behaviour rather than preserving the feature.
This is a CGO and MIT-licensed dependency exception. It must not be used for the
primary SQLite store path, and new runtime storage features should continue to
use pure-Go dependencies compatible with EUPL-1.2. Builds and CI that include
workspace, import, inventory, or scoring behaviour must run with
`CGO_ENABLED=1` and a C/C++ toolchain available.

6
LICENCE.md Normal file

@@ -0,0 +1,6 @@
# Licence
This project is licensed under the European Union Public Licence, version 1.2
(EUPL-1.2).
Full licence text: https://joinup.ec.europa.eu/collection/eupl/eupl-text-eupl-12


@@ -1,42 +1,87 @@
[![Go Reference](https://pkg.go.dev/badge/dappco.re/go/core/store.svg)](https://pkg.go.dev/dappco.re/go/core/store)
[![License: EUPL-1.2](https://img.shields.io/badge/License-EUPL--1.2-blue.svg)](LICENSE.md)
[![Go Reference](https://pkg.go.dev/badge/dappco.re/go/store.svg)](https://pkg.go.dev/dappco.re/go/store)
[![Licence: EUPL-1.2](https://img.shields.io/badge/Licence-EUPL--1.2-blue.svg)](LICENCE.md)
[![Go Version](https://img.shields.io/badge/Go-1.26-00ADD8?style=flat&logo=go)](go.mod)
# go-store
Group-namespaced SQLite key-value store with TTL expiry, namespace isolation, quota enforcement, and a reactive event system. Backed by a pure-Go SQLite driver (no CGO), uses WAL mode for concurrent reads, and enforces a single connection to ensure pragma consistency. Supports scoped stores for multi-tenant use, Watch/Unwatch subscriptions, and OnChange callbacks — the designed integration point for go-ws real-time streaming.
Group-namespaced SQLite key-value store with TTL expiry, namespace isolation, quota enforcement, and a reactive event system. Backed by a pure-Go SQLite driver (no CGO), uses WAL mode for concurrent reads, and enforces a single connection to keep pragma settings consistent. Supports scoped stores for multi-tenant use, Watch/Unwatch subscriptions, and OnChange callbacks for downstream event consumers.
**Module**: `dappco.re/go/core/store`
**Module**: `dappco.re/go/store`
**Licence**: EUPL-1.2
**Language**: Go 1.25
**Language**: Go 1.26
## Quick Start
```go
import "dappco.re/go/core/store"
package main
st, err := store.New("/path/to/store.db") // or store.New(":memory:")
defer st.Close()
import (
"fmt"
"time"
st.Set("config", "theme", "dark")
st.SetWithTTL("session", "token", "abc123", 24*time.Hour)
val, err := st.Get("config", "theme")
"dappco.re/go/store"
)
// Watch for mutations
w := st.Watch("config", "*")
defer st.Unwatch(w)
for e := range w.Ch { fmt.Println(e.Type, e.Key) }
func main() {
// Configure a persistent store with "/tmp/go-store.db", or use ":memory:" for ephemeral data.
storeInstance, err := store.NewConfigured(store.StoreConfig{
DatabasePath: "/tmp/go-store.db",
Journal: store.JournalConfiguration{
EndpointURL: "http://127.0.0.1:8086",
Organisation: "core",
BucketName: "events",
},
PurgeInterval: 30 * time.Second,
WorkspaceStateDirectory: "/tmp/core-state",
})
if err != nil {
return
}
defer storeInstance.Close()
// Scoped store for tenant isolation
sc, _ := store.NewScoped(st, "tenant-42")
sc.Set("prefs", "locale", "en-GB")
if err := storeInstance.Set("config", "colour", "blue"); err != nil {
return
}
if err := storeInstance.SetWithTTL("session", "token", "abc123", 24*time.Hour); err != nil {
return
}
colourValue, err := storeInstance.Get("config", "colour")
if err != nil {
return
}
fmt.Println(colourValue)
// Watch "config" mutations and print each event as it arrives.
events := storeInstance.Watch("config")
defer storeInstance.Unwatch("config", events)
go func() {
for event := range events {
fmt.Println(event.Type, event.Group, event.Key, event.Value)
}
}()
// Store tenant-42 preferences under the "tenant-42:" prefix.
scopedStore, err := store.NewScopedConfigured(storeInstance, store.ScopedStoreConfig{
Namespace: "tenant-42",
Quota: store.QuotaConfig{MaxKeys: 100, MaxGroups: 10},
})
if err != nil {
return
}
if err := scopedStore.SetIn("preferences", "locale", "en-GB"); err != nil {
return
}
}
```
## Documentation
- [Agent Conventions](CODEX.md) - Codex-facing repo rules and AX notes
- [AX RFC](docs/RFC-CORE-008-AGENT-EXPERIENCE.md) - naming, comment, and path conventions for agent consumers
- [Architecture](docs/architecture.md) — storage layer, group/key model, TTL expiry, event system, namespace isolation
- [Development Guide](docs/development.md) — prerequisites, test patterns, benchmarks, adding methods
- [Project History](docs/history.md) — completed phases, known limitations, future considerations
- [Dependency Exceptions](DEPENDENCIES.md) — documented runtime dependency exceptions
## Build & Test
@@ -49,4 +94,4 @@ go build ./...
## Licence
European Union Public Licence 1.2 — see [LICENCE](LICENCE) for details.
European Union Public Licence 1.2 — see [LICENCE.md](LICENCE.md) for details.


@@ -1,9 +1,10 @@
// SPDX-Licence-Identifier: EUPL-1.2
// SPDX-License-Identifier: EUPL-1.2
package store
import (
"dappco.re/go/core"
"testing"
core "dappco.re/go/core"
)
// Supplemental benchmarks beyond the core Set/Get/GetAll/FileBacked benchmarks
@@ -15,32 +16,32 @@ func BenchmarkGetAll_VaryingSize(b *testing.B) {
for _, size := range sizes {
b.Run(core.Sprintf("size=%d", size), func(b *testing.B) {
s, err := New(":memory:")
storeInstance, err := New(":memory:")
if err != nil {
b.Fatal(err)
}
defer s.Close()
defer func() { _ = storeInstance.Close() }()
for i := range size {
_ = s.Set("bench", core.Sprintf("key-%d", i), "value")
_ = storeInstance.Set("bench", core.Sprintf("key-%d", i), "value")
}
b.ReportAllocs()
b.ResetTimer()
for range b.N {
_, _ = s.GetAll("bench")
_, _ = storeInstance.GetAll("bench")
}
})
}
}
func BenchmarkSetGet_Parallel(b *testing.B) {
s, err := New(":memory:")
storeInstance, err := New(":memory:")
if err != nil {
b.Fatal(err)
}
defer s.Close()
defer func() { _ = storeInstance.Close() }()
b.ReportAllocs()
b.ResetTimer()
@@ -49,84 +50,84 @@ func BenchmarkSetGet_Parallel(b *testing.B) {
i := 0
for pb.Next() {
key := core.Sprintf("key-%d", i)
_ = s.Set("parallel", key, "value")
_, _ = s.Get("parallel", key)
_ = storeInstance.Set("parallel", key, "value")
_, _ = storeInstance.Get("parallel", key)
i++
}
})
}
func BenchmarkCount_10K(b *testing.B) {
s, err := New(":memory:")
storeInstance, err := New(":memory:")
if err != nil {
b.Fatal(err)
}
defer s.Close()
defer func() { _ = storeInstance.Close() }()
for i := range 10_000 {
_ = s.Set("bench", core.Sprintf("key-%d", i), "value")
_ = storeInstance.Set("bench", core.Sprintf("key-%d", i), "value")
}
b.ReportAllocs()
b.ResetTimer()
for range b.N {
_, _ = s.Count("bench")
_, _ = storeInstance.Count("bench")
}
}
func BenchmarkDelete(b *testing.B) {
s, err := New(":memory:")
storeInstance, err := New(":memory:")
if err != nil {
b.Fatal(err)
}
defer s.Close()
defer func() { _ = storeInstance.Close() }()
// Pre-populate keys that will be deleted.
for i := range b.N {
_ = s.Set("bench", core.Sprintf("key-%d", i), "value")
_ = storeInstance.Set("bench", core.Sprintf("key-%d", i), "value")
}
b.ReportAllocs()
b.ResetTimer()
for i := range b.N {
_ = s.Delete("bench", core.Sprintf("key-%d", i))
_ = storeInstance.Delete("bench", core.Sprintf("key-%d", i))
}
}
func BenchmarkSetWithTTL(b *testing.B) {
s, err := New(":memory:")
storeInstance, err := New(":memory:")
if err != nil {
b.Fatal(err)
}
defer s.Close()
defer func() { _ = storeInstance.Close() }()
b.ReportAllocs()
b.ResetTimer()
for i := range b.N {
_ = s.SetWithTTL("bench", core.Sprintf("key-%d", i), "value", 60_000_000_000) // 60s
_ = storeInstance.SetWithTTL("bench", core.Sprintf("key-%d", i), "value", 60_000_000_000) // 60s
}
}
func BenchmarkRender(b *testing.B) {
s, err := New(":memory:")
storeInstance, err := New(":memory:")
if err != nil {
b.Fatal(err)
}
defer s.Close()
defer func() { _ = storeInstance.Close() }()
for i := range 50 {
_ = s.Set("bench", core.Sprintf("key%d", i), core.Sprintf("val%d", i))
_ = storeInstance.Set("bench", core.Sprintf("key%d", i), core.Sprintf("val%d", i))
}
tmpl := `{{ .key0 }} {{ .key25 }} {{ .key49 }}`
templateSource := `{{ .key0 }} {{ .key25 }} {{ .key49 }}`
b.ReportAllocs()
b.ResetTimer()
for range b.N {
_, _ = s.Render(tmpl, "bench")
_, _ = storeInstance.Render(templateSource, "bench")
}
}

303
compact.go Normal file

@@ -0,0 +1,303 @@
package store
import (
"bytes"
"compress/gzip"
"time"
"unicode"
core "dappco.re/go/core"
coreio "dappco.re/go/core/io"
"github.com/klauspost/compress/zstd"
)
var defaultArchiveOutputDirectory = ".core/archive/"
// Usage example: `options := store.CompactOptions{Before: time.Date(2026, 3, 30, 0, 0, 0, 0, time.UTC), Output: "/tmp/archive", Format: "gzip"}`
// Usage example: `result := storeInstance.Compact(store.CompactOptions{Before: time.Now().Add(-90 * 24 * time.Hour)})`
// Leave `Output` empty to write gzip JSONL archives under `.core/archive/`, or
// set `Format` to `zstd` when downstream tooling expects `.jsonl.zst`.
type CompactOptions struct {
// Usage example: `options := store.CompactOptions{Before: time.Now().Add(-90 * 24 * time.Hour)}`
Before time.Time
// Usage example: `options := store.CompactOptions{Output: "/tmp/archive"}`
Output string
// Usage example: `options := store.CompactOptions{Format: "zstd"}`
Format string
// Usage example: `medium, _ := s3.New(s3.Options{Bucket: "archive"}); options := store.CompactOptions{Before: time.Now().Add(-90 * 24 * time.Hour), Medium: medium}`
// Medium routes the archive write through a coreio.Medium instead of the raw
// filesystem. When set, Output is the path inside the medium; leave empty
// to use `.core/archive/`. When nil, Compact falls back to the store-level
// medium (if configured via WithMedium), then to the local filesystem.
Medium Medium
}
// Usage example: `normalisedOptions := (store.CompactOptions{Before: time.Date(2026, 3, 30, 0, 0, 0, 0, time.UTC)}).Normalised()`
func (compactOptions CompactOptions) Normalised() CompactOptions {
if compactOptions.Output == "" {
compactOptions.Output = defaultArchiveOutputDirectory
}
compactOptions.Format = lowercaseText(core.Trim(compactOptions.Format))
if compactOptions.Format == "" {
compactOptions.Format = "gzip"
}
return compactOptions
}
// Usage example: `if err := (store.CompactOptions{Before: time.Date(2026, 3, 30, 0, 0, 0, 0, time.UTC), Format: "gzip"}).Validate(); err != nil { return }`
func (compactOptions CompactOptions) Validate() error {
if compactOptions.Before.IsZero() {
return core.E(
"store.CompactOptions.Validate",
"before cutoff time is empty; use a value like time.Now().Add(-24 * time.Hour)",
nil,
)
}
switch lowercaseText(core.Trim(compactOptions.Format)) {
case "", "gzip", "zstd":
return nil
default:
return core.E(
"store.CompactOptions.Validate",
core.Concat(`format must be "gzip" or "zstd"; got `, compactOptions.Format),
nil,
)
}
}
func lowercaseText(text string) string {
builder := core.NewBuilder()
for _, r := range text {
builder.WriteRune(unicode.ToLower(r))
}
return builder.String()
}
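lowercaseText exists because the stdlib strings package is banned in this repository (see the conventions test below); lower-casing rune by rune with unicode.ToLower is behaviour-equivalent to strings.ToLower. A standalone sketch (hypothetical name `lowerRunes`) that checks the equivalence:

```go
package main

import (
	"strings"
	"unicode"
)

// lowerRunes mirrors lowercaseText: lower-case each rune without strings.
func lowerRunes(text string) string {
	out := make([]rune, 0, len(text))
	for _, r := range text {
		out = append(out, unicode.ToLower(r))
	}
	return string(out)
}

func main() {
	// strings.ToLower also maps rune-by-rune, so results must agree.
	for _, s := range []string{"GZIP", "ZsTd", "ÀBÇ"} {
		if lowerRunes(s) != strings.ToLower(s) {
			panic(s)
		}
	}
}
```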
type compactArchiveEntry struct {
journalEntryID int64
journalBucketName string
journalMeasurementName string
journalFieldsJSON string
journalTagsJSON string
journalCommittedAtUnixMilli int64
}
// Usage example: `result := storeInstance.Compact(store.CompactOptions{Before: time.Now().Add(-30 * 24 * time.Hour), Output: "/tmp/archive", Format: "gzip"})`
func (storeInstance *Store) Compact(options CompactOptions) core.Result {
if err := storeInstance.ensureReady("store.Compact"); err != nil {
return core.Result{Value: err, OK: false}
}
if err := ensureJournalSchema(storeInstance.sqliteDatabase); err != nil {
return core.Result{Value: core.E("store.Compact", "ensure journal schema", err), OK: false}
}
options = options.Normalised()
if err := options.Validate(); err != nil {
return core.Result{Value: core.E("store.Compact", "validate options", err), OK: false}
}
medium := options.Medium
if medium == nil {
medium = storeInstance.medium
}
if medium == nil {
medium = coreio.Local
}
if medium == nil {
return core.Result{Value: core.E("store.Compact", "local medium is unavailable", nil), OK: false}
}
if err := ensureMediumDir(medium, options.Output); err != nil {
return core.Result{Value: core.E("store.Compact", "ensure medium archive directory", err), OK: false}
}
rows, queryErr := storeInstance.sqliteDatabase.Query(
"SELECT entry_id, bucket_name, measurement, fields_json, tags_json, committed_at FROM "+journalEntriesTableName+" WHERE archived_at IS NULL AND committed_at < ? ORDER BY committed_at, entry_id",
options.Before.UnixMilli(),
)
if queryErr != nil {
return core.Result{Value: core.E("store.Compact", "query journal rows", queryErr), OK: false}
}
defer func() {
_ = rows.Close()
}()
var archiveEntries []compactArchiveEntry
for rows.Next() {
var entry compactArchiveEntry
if err := rows.Scan(
&entry.journalEntryID,
&entry.journalBucketName,
&entry.journalMeasurementName,
&entry.journalFieldsJSON,
&entry.journalTagsJSON,
&entry.journalCommittedAtUnixMilli,
); err != nil {
return core.Result{Value: core.E("store.Compact", "scan journal row", err), OK: false}
}
archiveEntries = append(archiveEntries, entry)
}
if err := rows.Err(); err != nil {
return core.Result{Value: core.E("store.Compact", "iterate journal rows", err), OK: false}
}
if len(archiveEntries) == 0 {
return core.Result{Value: "", OK: true}
}
outputPath := compactOutputPath(options.Output, options.Format)
archiveContent, err := newCompactArchiveBuffer()
if err != nil {
return core.Result{Value: core.E("store.Compact", "create archive buffer", err), OK: false}
}
writer, err := archiveWriter(archiveContent, options.Format)
if err != nil {
return core.Result{Value: err, OK: false}
}
archiveWriteFinished := false
defer func() {
if !archiveWriteFinished {
_ = writer.Close()
}
}()
for _, entry := range archiveEntries {
lineMap, err := archiveEntryLine(entry)
if err != nil {
return core.Result{Value: err, OK: false}
}
lineJSON, err := marshalJSONText(lineMap, "store.Compact", "marshal archive line")
if err != nil {
return core.Result{Value: err, OK: false}
}
if _, err := writer.Write([]byte(lineJSON + "\n")); err != nil {
return core.Result{Value: core.E("store.Compact", "write archive line", err), OK: false}
}
}
if err := writer.Close(); err != nil {
return core.Result{Value: core.E("store.Compact", "close archive writer", err), OK: false}
}
archiveWriteFinished = true
compressedArchive, err := archiveContent.content()
if err != nil {
return core.Result{Value: core.E("store.Compact", "read archive buffer", err), OK: false}
}
stagedOutputPath := core.Concat(outputPath, ".tmp")
stagedOutputPublished := false
if err := medium.Write(stagedOutputPath, compressedArchive); err != nil {
return core.Result{Value: core.E("store.Compact", "write staged archive via medium", err), OK: false}
}
defer func() {
if !stagedOutputPublished && medium.Exists(stagedOutputPath) {
_ = medium.Delete(stagedOutputPath)
}
}()
transaction, err := storeInstance.sqliteDatabase.Begin()
if err != nil {
return core.Result{Value: core.E("store.Compact", "begin archive transaction", err), OK: false}
}
committed := false
defer func() {
if !committed {
_ = transaction.Rollback()
}
}()
archivedAt := time.Now().UnixMilli()
for _, entry := range archiveEntries {
if _, err := transaction.Exec(
"UPDATE "+journalEntriesTableName+" SET archived_at = ? WHERE entry_id = ?",
archivedAt,
entry.journalEntryID,
); err != nil {
return core.Result{Value: core.E("store.Compact", "mark journal row archived", err), OK: false}
}
}
if err := transaction.Commit(); err != nil {
return core.Result{Value: core.E("store.Compact", "commit archive transaction", err), OK: false}
}
committed = true
// Rows are now marked archived, so keep the staged file even if the rename
// below fails: the .tmp copy is the only archive of those rows.
stagedOutputPublished = true
if err := medium.Rename(stagedOutputPath, outputPath); err != nil {
return core.Result{Value: core.E("store.Compact", "publish staged archive", err), OK: false}
}
return core.Result{Value: outputPath, OK: true}
}
func archiveEntryLine(entry compactArchiveEntry) (map[string]any, error) {
fields := make(map[string]any)
fieldsResult := core.JSONUnmarshalString(entry.journalFieldsJSON, &fields)
if !fieldsResult.OK {
return nil, core.E("store.Compact", "unmarshal fields", fieldsResult.Value.(error))
}
tags := make(map[string]string)
tagsResult := core.JSONUnmarshalString(entry.journalTagsJSON, &tags)
if !tagsResult.OK {
return nil, core.E("store.Compact", "unmarshal tags", tagsResult.Value.(error))
}
return map[string]any{
"bucket": entry.journalBucketName,
"measurement": entry.journalMeasurementName,
"fields": fields,
"tags": tags,
"committed_at": entry.journalCommittedAtUnixMilli,
}, nil
}
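Each archive line is a flat JSON object with the five keys built above. A stdlib sketch of that line shape, using a hypothetical `encodeLine` helper and encoding/json (which the repository itself bans in favour of core's JSON helpers):

```go
package main

import "encoding/json"

// encodeLine mirrors the archive-line map built by archiveEntryLine.
func encodeLine(bucket, measurement string, fields map[string]any, tags map[string]string, committedAt int64) (string, error) {
	b, err := json.Marshal(map[string]any{
		"bucket":       bucket,
		"measurement":  measurement,
		"fields":       fields,
		"tags":         tags,
		"committed_at": committedAt,
	})
	return string(b), err
}

func main() {
	line, err := encodeLine("events", "session-a",
		map[string]any{"like": 1},
		map[string]string{"workspace": "session-a"},
		1767225600000)
	if err != nil || line == "" {
		panic(err)
	}
}
```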
type compactArchiveWriter interface {
Write([]byte) (int, error)
Close() error
}
type compactArchiveWriteTarget interface {
Write([]byte) (int, error)
}
type compactArchiveBuffer struct {
buffer bytes.Buffer
}
func newCompactArchiveBuffer() (*compactArchiveBuffer, error) {
return &compactArchiveBuffer{}, nil
}
// Usage example: `buffer, _ := newCompactArchiveBuffer(); _, _ = buffer.Write([]byte("archive"))`
func (buffer *compactArchiveBuffer) Write(data []byte) (int, error) {
return buffer.buffer.Write(data)
}
func (buffer *compactArchiveBuffer) content() (string, error) {
return buffer.buffer.String(), nil
}
func archiveWriter(writer compactArchiveWriteTarget, format string) (compactArchiveWriter, error) {
switch format {
case "gzip":
return gzip.NewWriter(writer), nil
case "zstd":
zstdWriter, err := zstd.NewWriter(writer)
if err != nil {
return nil, core.E("store.Compact", "create zstd writer", err)
}
return zstdWriter, nil
default:
return nil, core.E("store.Compact", core.Concat("unsupported archive format: ", format), nil)
}
}
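The format switch above returns a streaming compressor; both the gzip and zstd writers expose Write/Close, so the JSONL lines are written identically regardless of format, and Close must run before the buffer is read back. A gzip-only round trip with the stdlib (the zstd branch uses github.com/klauspost/compress/zstd the same way); `gzipRoundTrip` is a name invented for this sketch:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"io"
)

// gzipRoundTrip writes each line newline-terminated through a gzip
// writer, then decompresses the stream and returns the recovered text.
func gzipRoundTrip(lines []string) (string, error) {
	var buf bytes.Buffer
	w := gzip.NewWriter(&buf)
	for _, line := range lines {
		if _, err := w.Write([]byte(line + "\n")); err != nil {
			return "", err
		}
	}
	// Close flushes the compressed stream; reading before this loses data.
	if err := w.Close(); err != nil {
		return "", err
	}
	r, err := gzip.NewReader(&buf)
	if err != nil {
		return "", err
	}
	defer func() { _ = r.Close() }()
	out, err := io.ReadAll(r)
	return string(out), err
}

func main() {
	got, err := gzipRoundTrip([]string{`{"measurement":"session-a"}`})
	if err != nil || got != "{\"measurement\":\"session-a\"}\n" {
		panic(got)
	}
}
```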
func compactOutputPath(outputDirectory, format string) string {
extension := ".jsonl"
if format == "gzip" {
extension = ".jsonl.gz"
}
if format == "zstd" {
extension = ".jsonl.zst"
}
// Include nanoseconds so two compactions in the same second never collide.
filename := core.Concat("journal-", time.Now().UTC().Format("20060102-150405.000000000"), extension)
return joinPath(outputDirectory, filename)
}

compact_test.go Normal file

@@ -0,0 +1,225 @@
package store
import (
"bytes"
"compress/gzip"
"io"
"testing"
"time"
core "dappco.re/go/core"
"github.com/klauspost/compress/zstd"
)
func TestCompact_Compact_Good_GzipArchive(t *testing.T) {
outputDirectory := useArchiveOutputDirectory(t)
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 1}, map[string]string{"workspace": "session-a"}).OK)
assertTrue(t, storeInstance.CommitToJournal("session-b", map[string]any{"like": 2}, map[string]string{"workspace": "session-b"}).OK)
_, err = storeInstance.sqliteDatabase.Exec(
"UPDATE "+journalEntriesTableName+" SET committed_at = ? WHERE measurement = ?",
time.Now().Add(-48*time.Hour).UnixMilli(),
"session-a",
)
assertNoError(t, err)
result := storeInstance.Compact(CompactOptions{
Before: time.Now().Add(-24 * time.Hour),
Output: outputDirectory,
Format: "gzip",
})
assertTruef(t, result.OK, "compact failed: %v", result.Value)
archivePath, ok := result.Value.(string)
assertTruef(t, ok, "unexpected archive path type: %T", result.Value)
assertTrue(t, testFilesystem().Exists(archivePath))
archiveData := requireCoreReadBytes(t, archivePath)
reader, err := gzip.NewReader(bytes.NewReader(archiveData))
assertNoError(t, err)
defer func() {
_ = reader.Close()
}()
decompressedData, err := io.ReadAll(reader)
assertNoError(t, err)
lines := core.Split(core.Trim(string(decompressedData)), "\n")
assertLen(t, lines, 1)
archivedRow := make(map[string]any)
unmarshalResult := core.JSONUnmarshalString(lines[0], &archivedRow)
assertTruef(t, unmarshalResult.OK, "archive line unmarshal failed: %v", unmarshalResult.Value)
assertEqual(t, "session-a", archivedRow["measurement"])
remainingRows := requireResultRows(t, storeInstance.QueryJournal(""))
assertLen(t, remainingRows, 1)
assertEqual(t, "session-b", remainingRows[0]["measurement"])
}
func TestCompact_Compact_Good_ZstdArchive(t *testing.T) {
outputDirectory := useArchiveOutputDirectory(t)
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 1}, map[string]string{"workspace": "session-a"}).OK)
_, err = storeInstance.sqliteDatabase.Exec(
"UPDATE "+journalEntriesTableName+" SET committed_at = ? WHERE measurement = ?",
time.Now().Add(-48*time.Hour).UnixMilli(),
"session-a",
)
assertNoError(t, err)
result := storeInstance.Compact(CompactOptions{
Before: time.Now().Add(-24 * time.Hour),
Output: outputDirectory,
Format: "zstd",
})
assertTruef(t, result.OK, "compact failed: %v", result.Value)
archivePath, ok := result.Value.(string)
assertTruef(t, ok, "unexpected archive path type: %T", result.Value)
assertTrue(t, testFilesystem().Exists(archivePath))
assertContainsString(t, archivePath, ".jsonl.zst")
archiveData := requireCoreReadBytes(t, archivePath)
reader, err := zstd.NewReader(bytes.NewReader(archiveData))
assertNoError(t, err)
defer reader.Close()
decompressedData, err := io.ReadAll(reader)
assertNoError(t, err)
lines := core.Split(core.Trim(string(decompressedData)), "\n")
assertLen(t, lines, 1)
archivedRow := make(map[string]any)
unmarshalResult := core.JSONUnmarshalString(lines[0], &archivedRow)
assertTruef(t, unmarshalResult.OK, "archive line unmarshal failed: %v", unmarshalResult.Value)
assertEqual(t, "session-a", archivedRow["measurement"])
}
func TestCompact_Compact_Good_NoRows(t *testing.T) {
outputDirectory := useArchiveOutputDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
result := storeInstance.Compact(CompactOptions{
Before: time.Now(),
Output: outputDirectory,
Format: "gzip",
})
assertTruef(t, result.OK, "compact failed: %v", result.Value)
assertEqual(t, "", result.Value)
}
func TestCompact_Compact_Good_DeterministicOrderingForSameTimestamp(t *testing.T) {
outputDirectory := useArchiveOutputDirectory(t)
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertNoError(t, ensureJournalSchema(storeInstance.sqliteDatabase))
committedAt := time.Now().Add(-48 * time.Hour).UnixMilli()
assertNoError(t, commitJournalEntry(storeInstance.sqliteDatabase, "events", "session-b", `{"like":2}`, `{"workspace":"session-b"}`, committedAt))
assertNoError(t, commitJournalEntry(storeInstance.sqliteDatabase, "events", "session-a", `{"like":1}`, `{"workspace":"session-a"}`, committedAt))
result := storeInstance.Compact(CompactOptions{
Before: time.Now().Add(-24 * time.Hour),
Output: outputDirectory,
Format: "gzip",
})
assertTruef(t, result.OK, "compact failed: %v", result.Value)
archivePath, ok := result.Value.(string)
assertTruef(t, ok, "unexpected archive path type: %T", result.Value)
archiveData := requireCoreReadBytes(t, archivePath)
reader, err := gzip.NewReader(bytes.NewReader(archiveData))
assertNoError(t, err)
defer func() {
_ = reader.Close()
}()
decompressedData, err := io.ReadAll(reader)
assertNoError(t, err)
lines := core.Split(core.Trim(string(decompressedData)), "\n")
assertLen(t, lines, 2)
firstArchivedRow := make(map[string]any)
unmarshalResult := core.JSONUnmarshalString(lines[0], &firstArchivedRow)
assertTruef(t, unmarshalResult.OK, "archive line unmarshal failed: %v", unmarshalResult.Value)
assertEqual(t, "session-b", firstArchivedRow["measurement"])
secondArchivedRow := make(map[string]any)
unmarshalResult = core.JSONUnmarshalString(lines[1], &secondArchivedRow)
assertTruef(t, unmarshalResult.OK, "archive line unmarshal failed: %v", unmarshalResult.Value)
assertEqual(t, "session-a", secondArchivedRow["measurement"])
}
func TestCompact_CompactOptions_Good_Normalised(t *testing.T) {
options := (CompactOptions{
Before: time.Now().Add(-24 * time.Hour),
}).Normalised()
assertEqual(t, defaultArchiveOutputDirectory, options.Output)
assertEqual(t, "gzip", options.Format)
}
func TestCompact_CompactOptions_Good_Validate(t *testing.T) {
err := (CompactOptions{
Before: time.Now().Add(-24 * time.Hour),
Format: "zstd",
}).Validate()
assertNoError(t, err)
}
func TestCompact_CompactOptions_Bad_ValidateMissingCutoff(t *testing.T) {
err := (CompactOptions{
Format: "gzip",
}).Validate()
assertError(t, err)
assertContainsString(t, err.Error(), "before cutoff time is empty")
}
func TestCompact_CompactOptions_Good_ValidateNormalisesFormatCase(t *testing.T) {
err := (CompactOptions{
Before: time.Now().Add(-24 * time.Hour),
Format: " GZIP ",
}).Validate()
assertNoError(t, err)
options := (CompactOptions{
Before: time.Now().Add(-24 * time.Hour),
Format: " ZsTd ",
}).Normalised()
assertEqual(t, "zstd", options.Format)
}
func TestCompact_CompactOptions_Good_ValidateWhitespaceFormatDefaultsToGzip(t *testing.T) {
options := (CompactOptions{
Before: time.Now().Add(-24 * time.Hour),
Format: " ",
}).Normalised()
assertEqual(t, "gzip", options.Format)
assertNoError(t, options.Validate())
}
func TestCompact_CompactOptions_Bad_ValidateUnsupportedFormat(t *testing.T) {
err := (CompactOptions{
Before: time.Now().Add(-24 * time.Hour),
Format: "zip",
}).Validate()
assertError(t, err)
assertContainsString(t, err.Error(), `format must be "gzip" or "zstd"`)
}

conventions_test.go Normal file

@@ -0,0 +1,287 @@
package store
import (
"go/ast"
"go/parser"
"go/token"
"io/fs"
"slices"
"testing"
"unicode"
core "dappco.re/go/core"
)
func TestConventions_Imports_Good_Banned(t *testing.T) {
files := repoGoFiles(t, func(name string) bool {
return core.HasSuffix(name, ".go")
})
bannedImports := []string{
"encoding/json",
"errors",
"fmt",
"os",
"os/exec",
"path/filepath",
"strings",
}
var banned []string
for _, path := range files {
file := parseGoFile(t, path)
for _, spec := range file.Imports {
importPath := trimImportPath(spec.Path.Value)
if core.HasPrefix(importPath, "forge.lthn.ai/") || slices.Contains(bannedImports, importPath) {
banned = append(banned, core.Concat(path, ": ", importPath))
}
}
}
slices.Sort(banned)
assertEmptyf(t, banned, "banned imports should not appear in repository Go files")
}
func TestConventions_TestNaming_Good_StrictPattern(t *testing.T) {
files := repoGoFiles(t, func(name string) bool {
return core.HasSuffix(name, "_test.go")
})
allowedClasses := []string{"Good", "Bad", "Ugly"}
var invalid []string
for _, path := range files {
expectedPrefix := testNamePrefix(path)
file := parseGoFile(t, path)
for _, decl := range file.Decls {
fn, ok := decl.(*ast.FuncDecl)
if !ok || fn.Recv != nil {
continue
}
name := fn.Name.Name
if !core.HasPrefix(name, "Test") || name == "TestMain" {
continue
}
if !core.HasPrefix(name, expectedPrefix) {
invalid = append(invalid, core.Concat(path, ": ", name))
continue
}
parts := core.Split(core.TrimPrefix(name, expectedPrefix), "_")
if len(parts) < 2 || parts[0] == "" || !slices.Contains(allowedClasses, parts[1]) {
invalid = append(invalid, core.Concat(path, ": ", name))
}
}
}
slices.Sort(invalid)
assertEmptyf(t, invalid, "top-level tests must follow Test<File>_<Function>_<Good|Bad|Ugly>")
}
func TestConventions_Exports_Good_UsageExamples(t *testing.T) {
files := repoGoFiles(t, func(name string) bool {
return core.HasSuffix(name, ".go") && !core.HasSuffix(name, "_test.go")
})
var missing []string
for _, path := range files {
file := parseGoFile(t, path)
for _, decl := range file.Decls {
switch node := decl.(type) {
case *ast.FuncDecl:
if !node.Name.IsExported() {
continue
}
if !core.Contains(commentText(node.Doc), "Usage example:") {
missing = append(missing, core.Concat(path, ": ", node.Name.Name))
}
case *ast.GenDecl:
for _, spec := range node.Specs {
switch item := spec.(type) {
case *ast.TypeSpec:
if !item.Name.IsExported() {
continue
}
if !core.Contains(commentText(item.Doc, node.Doc), "Usage example:") {
missing = append(missing, core.Concat(path, ": ", item.Name.Name))
}
case *ast.ValueSpec:
for _, name := range item.Names {
if !name.IsExported() {
continue
}
if !core.Contains(commentText(item.Doc, node.Doc), "Usage example:") {
missing = append(missing, core.Concat(path, ": ", name.Name))
}
}
}
}
}
}
}
slices.Sort(missing)
assertEmptyf(t, missing, "exported declarations must include a usage example in their doc comment")
}
func TestConventions_Exports_Good_FieldUsageExamples(t *testing.T) {
files := repoGoFiles(t, func(name string) bool {
return core.HasSuffix(name, ".go") && !core.HasSuffix(name, "_test.go")
})
var missing []string
for _, path := range files {
file := parseGoFile(t, path)
for _, decl := range file.Decls {
node, ok := decl.(*ast.GenDecl)
if !ok {
continue
}
for _, spec := range node.Specs {
typeSpec, ok := spec.(*ast.TypeSpec)
if !ok || !typeSpec.Name.IsExported() {
continue
}
structType, ok := typeSpec.Type.(*ast.StructType)
if !ok {
continue
}
for _, field := range structType.Fields.List {
for _, fieldName := range field.Names {
if !fieldName.IsExported() {
continue
}
if !core.Contains(commentText(field.Doc), "Usage example:") {
missing = append(missing, core.Concat(path, ": ", typeSpec.Name.Name, ".", fieldName.Name))
}
}
}
}
}
}
slices.Sort(missing)
assertEmptyf(t, missing, "exported struct fields must include a usage example in their doc comment")
}
func TestConventions_Exports_Good_NoCompatibilityAliases(t *testing.T) {
files := repoGoFiles(t, func(name string) bool {
return core.HasSuffix(name, ".go") && !core.HasSuffix(name, "_test.go")
})
var invalid []string
for _, path := range files {
file := parseGoFile(t, path)
for _, decl := range file.Decls {
node, ok := decl.(*ast.GenDecl)
if !ok {
continue
}
for _, spec := range node.Specs {
switch item := spec.(type) {
case *ast.TypeSpec:
if item.Name.Name == "KV" {
invalid = append(invalid, core.Concat(path, ": ", item.Name.Name))
}
if item.Name.Name != "Watcher" {
continue
}
structType, ok := item.Type.(*ast.StructType)
if !ok {
continue
}
for _, field := range structType.Fields.List {
for _, name := range field.Names {
if name.Name == "Ch" {
invalid = append(invalid, core.Concat(path, ": Watcher.Ch"))
}
}
}
case *ast.ValueSpec:
for _, name := range item.Names {
if name.Name == "ErrNotFound" || name.Name == "ErrQuotaExceeded" {
invalid = append(invalid, core.Concat(path, ": ", name.Name))
}
}
}
}
}
}
slices.Sort(invalid)
assertEmptyf(t, invalid, "legacy compatibility aliases should not appear in the public Go API")
}
func repoGoFiles(t *testing.T, keep func(name string) bool) []string {
t.Helper()
result := testFilesystem().List(".")
requireCoreOK(t, result)
entries, ok := result.Value.([]fs.DirEntry)
assertTruef(t, ok, "unexpected directory entry type: %T", result.Value)
var files []string
for _, entry := range entries {
if entry.IsDir() || !keep(entry.Name()) {
continue
}
files = append(files, entry.Name())
}
slices.Sort(files)
return files
}
func parseGoFile(t *testing.T, path string) *ast.File {
t.Helper()
file, err := parser.ParseFile(token.NewFileSet(), path, nil, parser.ParseComments)
assertNoError(t, err)
return file
}
func trimImportPath(value string) string {
return core.TrimSuffix(core.TrimPrefix(value, `"`), `"`)
}
func testNamePrefix(path string) string {
return core.Concat("Test", camelCase(core.TrimSuffix(path, "_test.go")), "_")
}
func camelCase(value string) string {
parts := core.Split(value, "_")
builder := core.NewBuilder()
for _, part := range parts {
if part == "" {
continue
}
builder.WriteString(upperFirst(part))
}
return builder.String()
}
func upperFirst(value string) string {
runes := []rune(value)
if len(runes) == 0 {
return ""
}
runes[0] = unicode.ToUpper(runes[0])
return string(runes)
}
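testNamePrefix turns a test file name into the mandated prefix, e.g. compact_test.go becomes TestCompact_. A standalone sketch of that derivation (a hypothetical `prefix` helper; stdlib strings is used here for brevity, while the repository code goes through core helpers):

```go
package main

import (
	"strings"
	"unicode"
)

// upperFirst capitalises the first rune, matching the convention helper.
func upperFirst(s string) string {
	r := []rune(s)
	if len(r) == 0 {
		return ""
	}
	r[0] = unicode.ToUpper(r[0])
	return string(r)
}

// prefix derives "Test<CamelCasedFileStem>_" from "<stem>_test.go".
func prefix(file string) string {
	base := strings.TrimSuffix(file, "_test.go")
	var b strings.Builder
	b.WriteString("Test")
	for _, part := range strings.Split(base, "_") {
		b.WriteString(upperFirst(part))
	}
	b.WriteString("_")
	return b.String()
}

func main() {
	if prefix("compact_test.go") != "TestCompact_" {
		panic("unexpected prefix")
	}
}
```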
func commentText(groups ...*ast.CommentGroup) string {
builder := core.NewBuilder()
for _, group := range groups {
if group == nil {
continue
}
text := core.Trim(group.Text())
if text == "" {
continue
}
if builder.Len() > 0 {
builder.WriteString("\n")
}
builder.WriteString(text)
}
return builder.String()
}


@@ -1,224 +1,658 @@
package store
import (
"dappco.re/go/core"
"context"
"database/sql"
"os"
"database/sql/driver"
"io"
"sync"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
core "dappco.re/go/core"
)
// ---------------------------------------------------------------------------
// New — schema error path
// ---------------------------------------------------------------------------
func TestNew_Bad_SchemaConflict(t *testing.T) {
// Pre-create a database with an INDEX named "kv". When New() runs
// CREATE TABLE IF NOT EXISTS kv, SQLite returns an error because the
// name "kv" is already taken by the index.
dir := t.TempDir()
dbPath := core.Path(dir, "conflict.db")
func TestCoverage_New_Bad_SchemaConflict(t *testing.T) {
// Pre-create a database with an INDEX named "entries". When New() runs
// CREATE TABLE IF NOT EXISTS entries, SQLite returns an error because the
// name "entries" is already taken by the index.
databasePath := testPath(t, "conflict.db")
db, err := sql.Open("sqlite", dbPath)
require.NoError(t, err)
db.SetMaxOpenConns(1)
_, err = db.Exec("PRAGMA journal_mode=WAL")
require.NoError(t, err)
_, err = db.Exec("CREATE TABLE dummy (id INTEGER)")
require.NoError(t, err)
_, err = db.Exec("CREATE INDEX kv ON dummy(id)")
require.NoError(t, err)
require.NoError(t, db.Close())
database, err := sql.Open("sqlite", databasePath)
assertNoError(t, err)
database.SetMaxOpenConns(1)
_, err = database.Exec("PRAGMA journal_mode=WAL")
assertNoError(t, err)
_, err = database.Exec("CREATE TABLE dummy (id INTEGER)")
assertNoError(t, err)
_, err = database.Exec("CREATE INDEX entries ON dummy(id)")
assertNoError(t, err)
assertNoError(t, database.Close())
_, err = New(dbPath)
require.Error(t, err, "New should fail when an index named kv already exists")
assert.Contains(t, err.Error(), "store.New: schema")
_, err = New(databasePath)
assertError(t, err)
assertContainsString(t, err.Error(), "store.New: ensure schema")
}
// ---------------------------------------------------------------------------
// GetAll — scan error path
// ---------------------------------------------------------------------------
func TestGetAll_Bad_ScanError(t *testing.T) {
func TestCoverage_GetAll_Bad_ScanError(t *testing.T) {
// Trigger a scan error by inserting a row with a NULL key. The production
// code scans into plain strings, which cannot represent NULL.
s, err := New(":memory:")
require.NoError(t, err)
defer s.Close()
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
// Insert a normal row first so the query returns results.
require.NoError(t, s.Set("g", "good", "value"))
assertNoError(t, storeInstance.Set("g", "good", "value"))
// Restructure the table to allow NULLs, then insert a NULL-key row.
_, err = s.db.Exec("ALTER TABLE kv RENAME TO kv_backup")
require.NoError(t, err)
_, err = s.db.Exec(`CREATE TABLE kv (
grp TEXT,
key TEXT,
value TEXT,
expires_at INTEGER
_, err = storeInstance.sqliteDatabase.Exec("ALTER TABLE entries RENAME TO entries_backup")
assertNoError(t, err)
_, err = storeInstance.sqliteDatabase.Exec(`CREATE TABLE entries (
group_name TEXT,
entry_key TEXT,
entry_value TEXT,
expires_at INTEGER
)`)
require.NoError(t, err)
_, err = s.db.Exec("INSERT INTO kv SELECT * FROM kv_backup")
require.NoError(t, err)
_, err = s.db.Exec("INSERT INTO kv (grp, key, value) VALUES ('g', NULL, 'null-key-val')")
require.NoError(t, err)
_, err = s.db.Exec("DROP TABLE kv_backup")
require.NoError(t, err)
assertNoError(t, err)
_, err = storeInstance.sqliteDatabase.Exec("INSERT INTO entries SELECT * FROM entries_backup")
assertNoError(t, err)
_, err = storeInstance.sqliteDatabase.Exec("INSERT INTO entries (group_name, entry_key, entry_value) VALUES ('g', NULL, 'null-key-val')")
assertNoError(t, err)
_, err = storeInstance.sqliteDatabase.Exec("DROP TABLE entries_backup")
assertNoError(t, err)
_, err = s.GetAll("g")
require.Error(t, err, "GetAll should fail when a row contains a NULL key")
assert.Contains(t, err.Error(), "store.All: scan")
_, err = storeInstance.GetAll("g")
assertError(t, err)
assertContainsString(t, err.Error(), "store.All: scan")
}
// ---------------------------------------------------------------------------
// GetAll — rows iteration error path
// ---------------------------------------------------------------------------
func TestGetAll_Bad_RowsError(t *testing.T) {
func TestCoverage_GetAll_Bad_RowsError(t *testing.T) {
// Trigger rows.Err() by corrupting the database file so that iteration
// starts successfully but encounters a malformed page mid-scan.
dir := t.TempDir()
dbPath := core.Path(dir, "corrupt-getall.db")
databasePath := testPath(t, "corrupt-getall.db")
s, err := New(dbPath)
require.NoError(t, err)
storeInstance, err := New(databasePath)
assertNoError(t, err)
// Insert enough rows to span multiple database pages.
const rows = 5000
for i := range rows {
require.NoError(t, s.Set("g",
core.Sprintf("key-%06d", i),
core.Sprintf("value-with-padding-%06d-xxxxxxxxxxxxxxxxxxxxxxxx", i)))
assertNoError(t, storeInstance.Set("g", core.Sprintf("key-%06d", i), core.Sprintf("value-with-padding-%06d-xxxxxxxxxxxxxxxxxxxxxxxx", i)))
}
s.Close()
assertNoError(t, storeInstance.Close())
// Force a WAL checkpoint so all data is in the main database file.
raw, err := sql.Open("sqlite", dbPath)
require.NoError(t, err)
raw.SetMaxOpenConns(1)
_, err = raw.Exec("PRAGMA wal_checkpoint(TRUNCATE)")
require.NoError(t, err)
require.NoError(t, raw.Close())
rawDatabase, err := sql.Open("sqlite", databasePath)
assertNoError(t, err)
rawDatabase.SetMaxOpenConns(1)
_, err = rawDatabase.Exec("PRAGMA wal_checkpoint(TRUNCATE)")
assertNoError(t, err)
assertNoError(t, rawDatabase.Close())
// Corrupt data pages in the latter portion of the file (skip the first
// pages which hold the schema).
info, err := os.Stat(dbPath)
require.NoError(t, err)
require.Greater(t, info.Size(), int64(16384), "DB should be large enough to corrupt")
f, err := os.OpenFile(dbPath, os.O_RDWR, 0644)
require.NoError(t, err)
data := requireCoreReadBytes(t, databasePath)
garbage := make([]byte, 4096)
for i := range garbage {
garbage[i] = 0xFF
}
offset := info.Size() * 3 / 4
_, err = f.WriteAt(garbage, offset)
require.NoError(t, err)
_, err = f.WriteAt(garbage, offset+4096)
require.NoError(t, err)
require.NoError(t, f.Close())
assertGreaterf(t, len(data), len(garbage)*2, "database file should be large enough to corrupt")
offset := len(data) * 3 / 4
maxOffset := len(data) - (len(garbage) * 2)
if offset > maxOffset {
offset = maxOffset
}
copy(data[offset:offset+len(garbage)], garbage)
copy(data[offset+len(garbage):offset+(len(garbage)*2)], garbage)
requireCoreWriteBytes(t, databasePath, data)
// Remove WAL/SHM so the reopened connection reads from the main file.
os.Remove(dbPath + "-wal")
os.Remove(dbPath + "-shm")
_ = testFilesystem().Delete(databasePath + "-wal")
_ = testFilesystem().Delete(databasePath + "-shm")
s2, err := New(dbPath)
require.NoError(t, err)
defer s2.Close()
reopenedStore, err := New(databasePath)
assertNoError(t, err)
defer func() { _ = reopenedStore.Close() }()
_, err = s2.GetAll("g")
require.Error(t, err, "GetAll should fail on corrupted database pages")
assert.Contains(t, err.Error(), "store.All: rows")
_, err = reopenedStore.GetAll("g")
assertError(t, err)
assertContainsString(t, err.Error(), "store.All: rows")
}
// ---------------------------------------------------------------------------
// Render — scan error path
// ---------------------------------------------------------------------------
func TestRender_Bad_ScanError(t *testing.T) {
// Same NULL-key technique as TestGetAll_Bad_ScanError.
s, err := New(":memory:")
require.NoError(t, err)
defer s.Close()
func TestCoverage_Render_Bad_ScanError(t *testing.T) {
// Same NULL-key technique as TestCoverage_GetAll_Bad_ScanError.
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
require.NoError(t, s.Set("g", "good", "value"))
assertNoError(t, storeInstance.Set("g", "good", "value"))
_, err = s.db.Exec("ALTER TABLE kv RENAME TO kv_backup")
require.NoError(t, err)
_, err = s.db.Exec(`CREATE TABLE kv (
grp TEXT,
key TEXT,
value TEXT,
expires_at INTEGER
_, err = storeInstance.sqliteDatabase.Exec("ALTER TABLE entries RENAME TO entries_backup")
assertNoError(t, err)
_, err = storeInstance.sqliteDatabase.Exec(`CREATE TABLE entries (
group_name TEXT,
entry_key TEXT,
entry_value TEXT,
expires_at INTEGER
)`)
require.NoError(t, err)
_, err = s.db.Exec("INSERT INTO kv SELECT * FROM kv_backup")
require.NoError(t, err)
_, err = s.db.Exec("INSERT INTO kv (grp, key, value) VALUES ('g', NULL, 'null-key-val')")
require.NoError(t, err)
_, err = s.db.Exec("DROP TABLE kv_backup")
require.NoError(t, err)
assertNoError(t, err)
_, err = storeInstance.sqliteDatabase.Exec("INSERT INTO entries SELECT * FROM entries_backup")
assertNoError(t, err)
_, err = storeInstance.sqliteDatabase.Exec("INSERT INTO entries (group_name, entry_key, entry_value) VALUES ('g', NULL, 'null-key-val')")
assertNoError(t, err)
_, err = storeInstance.sqliteDatabase.Exec("DROP TABLE entries_backup")
assertNoError(t, err)
_, err = s.Render("{{ .good }}", "g")
require.Error(t, err, "Render should fail when a row contains a NULL key")
assert.Contains(t, err.Error(), "store.All: scan")
_, err = storeInstance.Render("{{ .good }}", "g")
assertError(t, err)
assertContainsString(t, err.Error(), "store.All: scan")
}
// ---------------------------------------------------------------------------
// Render — rows iteration error path
// ---------------------------------------------------------------------------
func TestRender_Bad_RowsError(t *testing.T) {
// Same corruption technique as TestGetAll_Bad_RowsError.
dir := t.TempDir()
dbPath := core.Path(dir, "corrupt-render.db")
func TestCoverage_Render_Bad_RowsError(t *testing.T) {
// Same corruption technique as TestCoverage_GetAll_Bad_RowsError.
databasePath := testPath(t, "corrupt-render.db")
s, err := New(dbPath)
require.NoError(t, err)
storeInstance, err := New(databasePath)
assertNoError(t, err)
const rows = 5000
for i := range rows {
require.NoError(t, s.Set("g",
core.Sprintf("key-%06d", i),
core.Sprintf("value-with-padding-%06d-xxxxxxxxxxxxxxxxxxxxxxxx", i)))
assertNoError(t, storeInstance.Set("g", core.Sprintf("key-%06d", i), core.Sprintf("value-with-padding-%06d-xxxxxxxxxxxxxxxxxxxxxxxx", i)))
}
s.Close()
assertNoError(t, storeInstance.Close())
rawDatabase, err := sql.Open("sqlite", databasePath)
assertNoError(t, err)
rawDatabase.SetMaxOpenConns(1)
_, err = rawDatabase.Exec("PRAGMA wal_checkpoint(TRUNCATE)")
assertNoError(t, err)
assertNoError(t, rawDatabase.Close())
data := requireCoreReadBytes(t, databasePath)
garbage := make([]byte, 4096)
for i := range garbage {
garbage[i] = 0xFF
}
assertGreaterf(t, len(data), len(garbage)*2, "database file should be large enough to corrupt")
offset := len(data) * 3 / 4
maxOffset := len(data) - (len(garbage) * 2)
if offset > maxOffset {
offset = maxOffset
}
copy(data[offset:offset+len(garbage)], garbage)
copy(data[offset+len(garbage):offset+(len(garbage)*2)], garbage)
requireCoreWriteBytes(t, databasePath, data)
_ = testFilesystem().Delete(databasePath + "-wal")
_ = testFilesystem().Delete(databasePath + "-shm")
reopenedStore, err := New(databasePath)
assertNoError(t, err)
defer func() { _ = reopenedStore.Close() }()
_, err = reopenedStore.Render("{{ . }}", "g")
assertError(t, err)
assertContainsString(t, err.Error(), "store.All: rows")
}
// ---------------------------------------------------------------------------
// GroupsSeq — defensive error paths
// ---------------------------------------------------------------------------
func TestCoverage_GroupsSeq_Bad_ScanError(t *testing.T) {
// Trigger a scan error by inserting a row with a NULL group name. The
// production code scans into a plain string, which cannot represent NULL.
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
_, err = storeInstance.sqliteDatabase.Exec("ALTER TABLE entries RENAME TO entries_backup")
assertNoError(t, err)
_, err = storeInstance.sqliteDatabase.Exec(`CREATE TABLE entries (
group_name TEXT,
entry_key TEXT,
entry_value TEXT,
expires_at INTEGER
)`)
assertNoError(t, err)
_, err = storeInstance.sqliteDatabase.Exec("INSERT INTO entries SELECT * FROM entries_backup")
assertNoError(t, err)
_, err = storeInstance.sqliteDatabase.Exec("INSERT INTO entries (group_name, entry_key, entry_value) VALUES (NULL, 'k', 'v')")
assertNoError(t, err)
_, err = storeInstance.sqliteDatabase.Exec("DROP TABLE entries_backup")
assertNoError(t, err)
for groupName, iterationErr := range storeInstance.GroupsSeq("") {
assertError(t, iterationErr)
assertEmpty(t, groupName)
break
}
}
func TestCoverage_GroupsSeq_Bad_RowsError(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
groupRows: [][]driver.Value{
{"group-a"},
},
groupRowsErr: core.E("stubSQLiteScenario", "rows iteration failed", nil),
groupRowsErrIndex: 0,
})
defer func() { _ = database.Close() }()
storeInstance := &Store{
sqliteDatabase: database,
cancelPurge: func() {},
}
for groupName, iterationErr := range storeInstance.GroupsSeq("") {
assertError(t, iterationErr)
assertEmpty(t, groupName)
break
}
}
// ---------------------------------------------------------------------------
// ScopedStore bulk helpers — defensive error paths
// ---------------------------------------------------------------------------
func TestCoverage_ScopedStore_Bad_GroupsClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
assertNoError(t, storeInstance.Close())
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNotNil(t, scopedStore)
_, err := scopedStore.Groups("")
assertError(t, err)
assertContainsString(t, err.Error(), "store.ScopedStore.Groups")
}
func TestCoverage_ScopedStore_Bad_GroupsSeqRowsError(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
groupRows: [][]driver.Value{
{"tenant-a:config"},
},
groupRowsErr: core.E("stubSQLiteScenario", "rows iteration failed", nil),
groupRowsErrIndex: 1,
})
defer func() { _ = database.Close() }()
scopedStore := &ScopedStore{
store: &Store{
sqliteDatabase: database,
cancelPurge: func() {},
},
namespace: "tenant-a",
}
var seen []string
for groupName, iterationErr := range scopedStore.GroupsSeq("") {
if iterationErr != nil {
assertError(t, iterationErr)
assertEmpty(t, groupName)
break
}
seen = append(seen, groupName)
}
assertEqual(t, []string{"config"}, seen)
}
// ---------------------------------------------------------------------------
// Stubbed SQLite driver coverage
// ---------------------------------------------------------------------------
func TestCoverage_EnsureSchema_Bad_TableExistsQueryError(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
tableExistsErr: core.E("stubSQLiteScenario", "sqlite master query failed", nil),
})
defer func() { _ = database.Close() }()
err := ensureSchema(database)
assertError(t, err)
assertContainsString(t, err.Error(), "sqlite master query failed")
}
func TestCoverage_EnsureSchema_Good_ExistingEntriesAndLegacyMigration(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
tableExistsFound: true,
tableInfoRows: [][]driver.Value{
{0, "expires_at", "INTEGER", 0, nil, 0},
},
})
defer func() { _ = database.Close() }()
assertNoError(t, ensureSchema(database))
}
func TestCoverage_EnsureSchema_Bad_ExpiryColumnQueryError(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
tableExistsFound: true,
tableInfoErr: core.E("stubSQLiteScenario", "table_info query failed", nil),
})
defer func() { _ = database.Close() }()
err := ensureSchema(database)
assertError(t, err)
assertContainsString(t, err.Error(), "table_info query failed")
}
func TestCoverage_EnsureSchema_Bad_MigrationError(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
tableExistsFound: true,
tableInfoRows: [][]driver.Value{
{0, "expires_at", "INTEGER", 0, nil, 0},
},
insertErr: core.E("stubSQLiteScenario", "insert failed", nil),
})
defer func() { _ = database.Close() }()
err := ensureSchema(database)
assertError(t, err)
assertContainsString(t, err.Error(), "insert failed")
}
func TestCoverage_EnsureSchema_Bad_MigrationCommitError(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
tableExistsFound: true,
tableInfoRows: [][]driver.Value{
{0, "expires_at", "INTEGER", 0, nil, 0},
},
commitErr: core.E("stubSQLiteScenario", "commit failed", nil),
})
defer func() { _ = database.Close() }()
err := ensureSchema(database)
assertError(t, err)
assertContainsString(t, err.Error(), "commit failed")
}
func TestCoverage_TableHasColumn_Bad_QueryError(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
tableInfoErr: core.E("stubSQLiteScenario", "table_info query failed", nil),
})
defer func() { _ = database.Close() }()
_, err := tableHasColumn(database, "entries", "expires_at")
assertError(t, err)
assertContainsString(t, err.Error(), "table_info query failed")
}
func TestCoverage_EnsureExpiryColumn_Good_DuplicateColumn(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
tableInfoRows: [][]driver.Value{
{0, "entry_key", "TEXT", 1, nil, 0},
},
alterTableErr: core.E("stubSQLiteScenario", "duplicate column name: expires_at", nil),
})
defer func() { _ = database.Close() }()
assertNoError(t, ensureExpiryColumn(database))
}
func TestCoverage_EnsureExpiryColumn_Bad_AlterTableError(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
tableInfoRows: [][]driver.Value{
{0, "entry_key", "TEXT", 1, nil, 0},
},
alterTableErr: core.E("stubSQLiteScenario", "permission denied", nil),
})
defer func() { _ = database.Close() }()
err := ensureExpiryColumn(database)
assertError(t, err)
assertContainsString(t, err.Error(), "permission denied")
}
func TestCoverage_MigrateLegacyEntriesTable_Bad_InsertError(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
tableInfoRows: [][]driver.Value{
{0, "grp", "TEXT", 1, nil, 0},
},
insertErr: core.E("stubSQLiteScenario", "insert failed", nil),
})
defer func() { _ = database.Close() }()
err := migrateLegacyEntriesTable(database)
assertError(t, err)
assertContainsString(t, err.Error(), "insert failed")
}
func TestCoverage_MigrateLegacyEntriesTable_Bad_BeginError(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
beginErr: core.E("stubSQLiteScenario", "begin failed", nil),
})
defer func() { _ = database.Close() }()
err := migrateLegacyEntriesTable(database)
assertError(t, err)
assertContainsString(t, err.Error(), "begin failed")
}
func TestCoverage_MigrateLegacyEntriesTable_Good_CreatesAndMigratesLegacyRows(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
tableInfoRows: [][]driver.Value{
{0, "grp", "TEXT", 1, nil, 0},
},
})
defer func() { _ = database.Close() }()
assertNoError(t, migrateLegacyEntriesTable(database))
}
func TestCoverage_MigrateLegacyEntriesTable_Bad_TableInfoError(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
tableInfoErr: core.E("stubSQLiteScenario", "table_info query failed", nil),
})
defer func() { _ = database.Close() }()
err := migrateLegacyEntriesTable(database)
assertError(t, err)
assertContainsString(t, err.Error(), "table_info query failed")
}
type stubSQLiteScenario struct {
tableExistsErr error
tableExistsFound bool
tableInfoErr error
tableInfoRows [][]driver.Value
groupRows [][]driver.Value
groupRowsErr error
groupRowsErrIndex int
alterTableErr error
createTableErr error
insertErr error
dropTableErr error
beginErr error
commitErr error
rollbackErr error
}
type stubSQLiteDriver struct{}
type stubSQLiteConn struct {
scenario *stubSQLiteScenario
}
type stubSQLiteTx struct {
scenario *stubSQLiteScenario
}
type stubSQLiteRows struct {
columns []string
rows [][]driver.Value
index int
nextErr error
nextErrIndex int
}
type stubSQLiteResult struct{}
var (
stubSQLiteDriverOnce sync.Once
stubSQLiteScenarios sync.Map
)
const stubSQLiteDriverName = "stub-sqlite"
func openStubSQLiteDatabase(t *testing.T, scenario stubSQLiteScenario) (*sql.DB, string) {
t.Helper()
stubSQLiteDriverOnce.Do(func() {
sql.Register(stubSQLiteDriverName, stubSQLiteDriver{})
})
databasePath := t.Name()
stubSQLiteScenarios.Store(databasePath, &scenario)
t.Cleanup(func() {
stubSQLiteScenarios.Delete(databasePath)
})
database, err := sql.Open(stubSQLiteDriverName, databasePath)
assertNoError(t, err)
return database, databasePath
}
func (stubSQLiteDriver) Open(databasePath string) (driver.Conn, error) {
scenarioValue, ok := stubSQLiteScenarios.Load(databasePath)
if !ok {
return nil, core.E("stubSQLiteDriver.Open", "missing scenario", nil)
}
return &stubSQLiteConn{scenario: scenarioValue.(*stubSQLiteScenario)}, nil
}
func (conn *stubSQLiteConn) Prepare(query string) (driver.Stmt, error) {
return nil, core.E("stubSQLiteConn.Prepare", "not implemented", nil)
}
func (conn *stubSQLiteConn) Close() error {
return nil
}
func (conn *stubSQLiteConn) Begin() (driver.Tx, error) {
return conn.BeginTx(context.Background(), driver.TxOptions{})
}
func (conn *stubSQLiteConn) BeginTx(ctx context.Context, options driver.TxOptions) (driver.Tx, error) {
if conn.scenario.beginErr != nil {
return nil, conn.scenario.beginErr
}
return &stubSQLiteTx{scenario: conn.scenario}, nil
}
func (conn *stubSQLiteConn) ExecContext(ctx context.Context, query string, args []driver.NamedValue) (driver.Result, error) {
switch {
case core.Contains(query, "ALTER TABLE entries ADD COLUMN expires_at INTEGER"):
if conn.scenario.alterTableErr != nil {
return nil, conn.scenario.alterTableErr
}
case core.Contains(query, "CREATE TABLE IF NOT EXISTS entries"):
if conn.scenario.createTableErr != nil {
return nil, conn.scenario.createTableErr
}
case core.Contains(query, "INSERT OR IGNORE INTO entries"):
if conn.scenario.insertErr != nil {
return nil, conn.scenario.insertErr
}
case core.Contains(query, "DROP TABLE kv"):
if conn.scenario.dropTableErr != nil {
return nil, conn.scenario.dropTableErr
}
}
return stubSQLiteResult{}, nil
}
func (conn *stubSQLiteConn) QueryContext(ctx context.Context, query string, args []driver.NamedValue) (driver.Rows, error) {
switch {
case core.Contains(query, "sqlite_master"):
if conn.scenario.tableExistsErr != nil {
return nil, conn.scenario.tableExistsErr
}
if conn.scenario.tableExistsFound {
return &stubSQLiteRows{
columns: []string{"name"},
rows: [][]driver.Value{{"entries"}},
}, nil
}
return &stubSQLiteRows{columns: []string{"name"}}, nil
case core.Contains(query, "SELECT DISTINCT "+entryGroupColumn):
return &stubSQLiteRows{
columns: []string{entryGroupColumn},
rows: conn.scenario.groupRows,
nextErr: conn.scenario.groupRowsErr,
nextErrIndex: conn.scenario.groupRowsErrIndex,
}, nil
case core.HasPrefix(query, "PRAGMA table_info("):
if conn.scenario.tableInfoErr != nil {
return nil, conn.scenario.tableInfoErr
}
return &stubSQLiteRows{
columns: []string{"cid", "name", "type", "notnull", "dflt_value", "pk"},
rows: conn.scenario.tableInfoRows,
}, nil
}
return nil, core.E("stubSQLiteConn.QueryContext", "unexpected query", nil)
}
func (transaction *stubSQLiteTx) Commit() error {
if transaction.scenario.commitErr != nil {
return transaction.scenario.commitErr
}
return nil
}
func (transaction *stubSQLiteTx) Rollback() error {
if transaction.scenario.rollbackErr != nil {
return transaction.scenario.rollbackErr
}
return nil
}
func (rows *stubSQLiteRows) Columns() []string {
return rows.columns
}
func (rows *stubSQLiteRows) Close() error {
return nil
}
func (rows *stubSQLiteRows) Next(dest []driver.Value) error {
if rows.nextErr != nil && rows.index == rows.nextErrIndex {
rows.index++
return rows.nextErr
}
if rows.index >= len(rows.rows) {
return io.EOF
}
row := rows.rows[rows.index]
rows.index++
for i := range dest {
dest[i] = nil
}
copy(dest, row)
return nil
}
func (stubSQLiteResult) LastInsertId() (int64, error) {
return 0, nil
}
func (stubSQLiteResult) RowsAffected() (int64, error) {
return 0, nil
}

doc.go (new file, 134 lines)
// Package store provides SQLite-backed grouped key-value storage with TTL,
// namespace isolation, quota enforcement, reactive events, journal writes,
// workspace buffering, cold archive compaction, and orphan recovery.
//
// Prefer `store.New(...)` and `store.NewScoped(...)` for the primary API.
// Use `store.NewConfigured(store.StoreConfig{...})` and
// `store.NewScopedConfigured(configuredStore, store.ScopedStoreConfig{...})` when the
// configuration is already known:
//
// configuredStore, err := store.NewConfigured(store.StoreConfig{
// DatabasePath: ":memory:",
// Journal: store.JournalConfiguration{
// EndpointURL: "http://127.0.0.1:8086",
// Organisation: "core",
// BucketName: "events",
// },
// PurgeInterval: 20 * time.Millisecond,
// WorkspaceStateDirectory: "/tmp/core-state",
// })
//
// Workspace files live under `.core/state/` by default and can be recovered
// with `configuredStore.RecoverOrphans(".core/state/")` after a crash.
// Use `StoreConfig.Normalised()` when you want the default purge interval and
// workspace state directory filled in before passing the config onward.
//
// Usage example:
//
// func main() {
// configuredStore, err := store.NewConfigured(store.StoreConfig{
// DatabasePath: ":memory:",
// Journal: store.JournalConfiguration{
// EndpointURL: "http://127.0.0.1:8086",
// Organisation: "core",
// BucketName: "events",
// },
// PurgeInterval: 20 * time.Millisecond,
// WorkspaceStateDirectory: "/tmp/core-state",
// })
// if err != nil {
// return
// }
// defer func() { _ = configuredStore.Close() }()
//
// if err := configuredStore.Set("config", "colour", "blue"); err != nil {
// return
// }
// if err := configuredStore.SetWithTTL("session", "token", "abc123", 5*time.Minute); err != nil {
// return
// }
//
// colourValue, err := configuredStore.Get("config", "colour")
// if err != nil {
// return
// }
// fmt.Println(colourValue)
//
// for entry, err := range configuredStore.All("config") {
// if err != nil {
// return
// }
// fmt.Println(entry.Key, entry.Value)
// }
//
// events := configuredStore.Watch("config")
// defer configuredStore.Unwatch("config", events)
// go func() {
// for event := range events {
// fmt.Println(event.Type, event.Group, event.Key, event.Value)
// }
// }()
//
// unregister := configuredStore.OnChange(func(event store.Event) {
// fmt.Println("changed", event.Group, event.Key, event.Value)
// })
// defer unregister()
//
// scopedStore, err := store.NewScopedConfigured(
// configuredStore,
// store.ScopedStoreConfig{
// Namespace: "tenant-a",
// Quota: store.QuotaConfig{MaxKeys: 100, MaxGroups: 10},
// },
// )
// if err != nil {
// return
// }
// if err := scopedStore.SetIn("preferences", "locale", "en-GB"); err != nil {
// return
// }
//
// for groupName, err := range configuredStore.GroupsSeq("tenant-a:") {
// if err != nil {
// return
// }
// fmt.Println(groupName)
// }
//
// workspace, err := configuredStore.NewWorkspace("scroll-session")
// if err != nil {
// return
// }
// defer workspace.Discard()
//
// if err := workspace.Put("like", map[string]any{"user": "@alice"}); err != nil {
// return
// }
// if err := workspace.Put("profile_match", map[string]any{"user": "@charlie"}); err != nil {
// return
// }
// if result := workspace.Commit(); !result.OK {
// return
// }
//
// orphans := configuredStore.RecoverOrphans(".core/state")
// for _, orphanWorkspace := range orphans {
// fmt.Println(orphanWorkspace.Name(), orphanWorkspace.Aggregate())
// orphanWorkspace.Discard()
// }
//
// journalResult := configuredStore.QueryJournal(`from(bucket: "events") |> range(start: -24h)`)
// if !journalResult.OK {
// return
// }
//
// archiveResult := configuredStore.Compact(store.CompactOptions{
// Before: time.Now().Add(-30 * 24 * time.Hour),
// Output: "/tmp/archive",
// Format: "gzip",
// })
// if !archiveResult.OK {
// return
// }
// }
package store

(new file, 440 lines)
@ -0,0 +1,440 @@
# RFC-025: Agent Experience (AX) Design Principles
- **Status:** Draft
- **Authors:** Snider, Cladius
- **Date:** 2026-03-19
- **Applies to:** All Core ecosystem packages (CoreGO, CorePHP, CoreTS, core-agent)
## Abstract
Agent Experience (AX) is a design paradigm for software systems where the primary code consumer is an AI agent, not a human developer. AX sits alongside User Experience (UX) and Developer Experience (DX) as the third era of interface design.
This RFC establishes AX as a formal design principle for the Core ecosystem and defines the conventions that follow from it.
## Motivation
As of early 2026, AI agents write, review, and maintain the majority of code in the Core ecosystem. The original author has not manually edited code (outside of Core struct design) since October 2025. Code is processed semantically — agents reason about intent, not characters.
Design patterns inherited from the human-developer era optimise for the wrong consumer:
- **Short names** save keystrokes but increase semantic ambiguity
- **Functional option chains** are fluent for humans but opaque for agents tracing configuration
- **Error-at-every-call-site** produces 50% boilerplate that obscures intent
- **Generic type parameters** force agents to carry type context that the runtime already has
- **Panic-hiding conventions** (`Must*`) create implicit control flow that agents must special-case
AX acknowledges this shift and provides principles for designing code, APIs, file structures, and conventions that serve AI agents as first-class consumers.
## The Three Eras
| Era | Primary Consumer | Optimises For | Key Metric |
|-----|-----------------|---------------|------------|
| UX | End users | Discoverability, forgiveness, visual clarity | Task completion time |
| DX | Developers | Typing speed, IDE support, convention familiarity | Time to first commit |
| AX | AI agents | Predictability, composability, semantic navigation | Correct-on-first-pass rate |
AX does not replace UX or DX. End users still need good UX. Developers still need good DX. But when the primary code author and maintainer is an AI agent, the codebase should be designed for that consumer first.
## Principles
### 1. Predictable Names Over Short Names
Names are tokens that agents pattern-match across languages and contexts. Abbreviations introduce mapping overhead.
```
Config not Cfg
Service not Srv
Embed not Emb
Error not Err (as a subsystem name; err for local variables is fine)
Options not Opts
```
**Rule:** If a name would require a comment to explain, it is too short.
**Exception:** Industry-standard abbreviations that are universally understood (`HTTP`, `URL`, `ID`, `IPC`, `I18n`) are acceptable. The test: would an agent trained on any mainstream language recognise it without context?
### 2. Comments as Usage Examples
The function signature tells WHAT. The comment shows HOW with real values.
```go
// Detect the project type from files present
setup.Detect("/path/to/project")
// Set up a workspace with auto-detected template
setup.Run(setup.Options{Path: ".", Template: "auto"})
// Scaffold a PHP module workspace
setup.Run(setup.Options{Path: "./my-module", Template: "php"})
```
**Rule:** If a comment restates what the type signature already says, delete it. If a comment shows a concrete usage with realistic values, keep it.
**Rationale:** Agents learn from examples more effectively than from descriptions. A comment like "Run executes the setup process" adds zero information. A comment like `setup.Run(setup.Options{Path: ".", Template: "auto"})` teaches an agent exactly how to call the function.
### 3. Path Is Documentation
File and directory paths should be self-describing. An agent navigating the filesystem should understand what it is looking at without reading a README.
```
flow/deploy/to/homelab.yaml — deploy TO the homelab
flow/deploy/from/github.yaml — deploy FROM GitHub
flow/code/review.yaml — code review flow
template/file/go/struct.go.tmpl — Go struct file template
template/dir/workspace/php/ — PHP workspace scaffold
```
**Rule:** If an agent needs to read a file to understand what a directory contains, the directory naming has failed.
**Corollary:** The unified path convention (folder structure = HTTP route = CLI command = test path) is AX-native. One path, every surface.
### 4. Templates Over Freeform
When an agent generates code from a template, the output is constrained to known-good shapes. When an agent writes freeform, the output varies.
```go
// Template-driven — consistent output
lib.RenderFile("php/action", data)
lib.ExtractDir("php", targetDir, data)
// Freeform — variance in output
"write a PHP action class that..."
```
**Rule:** For any code pattern that recurs, provide a template. Templates are guardrails for agents.
**Scope:** Templates apply to file generation, workspace scaffolding, config generation, and commit messages. They do NOT apply to novel logic — agents should write business logic freeform with the domain knowledge available.
### 5. Declarative Over Imperative
Agents reason better about declarations of intent than sequences of operations.
```yaml
# Declarative — agent sees what should happen
steps:
- name: build
flow: tools/docker-build
with:
context: "{{ .app_dir }}"
image_name: "{{ .image_name }}"
- name: deploy
flow: deploy/with/docker
with:
host: "{{ .host }}"
```
```go
// Imperative — agent must trace execution
cmd := exec.Command("docker", "build", "--platform", "linux/amd64", "-t", imageName, ".")
cmd.Dir = appDir
if err := cmd.Run(); err != nil {
return fmt.Errorf("docker build: %w", err)
}
```
**Rule:** Orchestration, configuration, and pipeline logic should be declarative (YAML/JSON). Implementation logic should be imperative (Go/PHP/TS). The boundary is: if an agent needs to compose or modify the logic, make it declarative.
### 6. Universal Types (Core Primitives)
Every component in the ecosystem accepts and returns the same primitive types. An agent processing any level of the tree sees identical shapes.
```go
// Universal contract
setup.Run(core.Options{Path: ".", Template: "auto"})
brain.New(core.Options{Name: "openbrain"})
deploy.Run(core.Options{Flow: "deploy/to/homelab"})
// Fractal — Core itself is a Service
core.New(core.Options{
Services: []core.Service{
process.New(core.Options{Name: "process"}),
brain.New(core.Options{Name: "brain"}),
},
})
```
**Core primitive types:**
| Type | Purpose |
|------|---------|
| `core.Options` | Input configuration (what you want) |
| `core.Config` | Runtime settings (what is active) |
| `core.Data` | Embedded or stored content |
| `core.Service` | A managed component with lifecycle |
| `core.Result[T]` | Return value with OK/fail state |
**What this replaces:**
| Go Convention | Core AX | Why |
|--------------|---------|-----|
| `func With*(v) Option` | `core.Options{Field: v}` | Struct literal is parseable; option chain requires tracing |
| `func Must*(v) T` | `core.Result[T]` | No hidden panics; errors flow through Core |
| `func *For[T](c) T` | `c.Service("name")` | String lookup is greppable; generics require type context |
| `val, err :=` everywhere | Single return via `core.Result` | Intent not obscured by error handling |
| `_ = err` | Never needed | Core handles all errors internally |
### 7. Directory as Semantics
The directory structure tells an agent the intent before it reads a word. Top-level directories are semantic categories, not organisational bins.
```
plans/
├── code/ # Pure primitives — read for WHAT exists
├── project/ # Products — read for WHAT we're building and WHY
└── rfc/ # Contracts — read for constraints and rules
```
**Rule:** An agent should know what kind of document it's reading from the path alone. `code/core/go/io/RFC.md` = a lib primitive spec. `project/ofm/RFC.md` = a product spec that cross-references code/. `rfc/snider/borg/RFC-BORG-006-SMSG-FORMAT.md` = an immutable contract for the Borg SMSG protocol.
**Corollary:** The three-way split (code/project/rfc) extends principle 3 (Path Is Documentation) from files to entire subtrees. The path IS the metadata.
### 8. Lib Never Imports Consumer
Dependency flows one direction. Libraries define primitives. Consumers compose from them. A new feature in a consumer can never break a library.
```
code/core/go/* → lib tier (stable foundation)
code/core/agent/ → consumer tier (composes from go/*)
code/core/cli/ → consumer tier (composes from go/*)
code/core/gui/ → consumer tier (composes from go/*)
```
**Rule:** If package A is in `go/` and package B is in the consumer tier, B may import A but A must never import B. The repo naming convention enforces this: `go-{name}` = lib, bare `{name}` = consumer.
**Why this matters for agents:** When an agent is dispatched to implement a feature in `core/agent`, it can freely import from `go-io`, `go-scm`, `go-process`. But if an agent is dispatched to `go-io`, it knows its changes are foundational — every consumer depends on it, so the contract must not break.
### 9. Issues Are N+(rounds) Deep
Problems in code and specs are layered. Surface issues mask deeper issues. Fixing the surface reveals the next layer. This is not a failure mode — it is the discovery process.
```
Pass 1: Find 16 issues (surface — naming, imports, obvious errors)
Pass 2: Find 11 issues (structural — contradictions, missing types)
Pass 3: Find 5 issues (architectural — signature mismatches, registration gaps)
Pass 4: Find 4 issues (contract — cross-spec API mismatches)
Pass 5: Find 2 issues (mechanical — path format, nil safety)
Pass N: Findings are trivial → spec/code is complete
```
**Rule:** Iteration is required, not a failure. Each pass sees what the previous pass could not, because the context changed. An agent dispatched with the same task on the same repo will find different things each time — this is correct behaviour.
**Corollary:** The cheapest model should do the most passes (surface work). The frontier model should arrive last, when only deep issues remain. Tiered iteration: grunt model grinds → mid model pre-warms → frontier model polishes.
**Anti-pattern:** One-shot generation expecting valid output. No model, no human, produces correct-on-first-pass for non-trivial work. Expecting it wastes the first pass on surface issues that a cheaper pass would have caught.
### 10. CLI Tests as Artifact Validation
Unit tests verify the code. CLI tests verify the binary. The directory structure IS the command structure — path maps to command, Taskfile runs the test.
```
tests/cli/
├── core/
│ └── lint/
│ ├── Taskfile.yaml ← test `core-lint` (root)
│ ├── run/
│ │ ├── Taskfile.yaml ← test `core-lint run`
│ │ └── fixtures/
│ ├── go/
│ │ ├── Taskfile.yaml ← test `core-lint go`
│ │ └── fixtures/
│ └── security/
│ ├── Taskfile.yaml ← test `core-lint security`
│ └── fixtures/
```
**Rule:** Every CLI command has a matching `tests/cli/{path}/Taskfile.yaml`. The Taskfile runs the compiled binary against fixtures with known inputs and validates the output. If the CLI test passes, the underlying actions work — because CLI commands call actions, MCP tools call actions, API endpoints call actions. Test the CLI, trust the rest.
**Pattern:**
```yaml
# tests/cli/core/lint/go/Taskfile.yaml
version: '3'
tasks:
test:
cmds:
- core-lint go --output json fixtures/ > /tmp/result.json
- jq -e '.findings | length > 0' /tmp/result.json
- jq -e '.summary.passed == false' /tmp/result.json
```
**Why this matters for agents:** An agent can validate its own work by running `task test` in the matching `tests/cli/` directory. No test framework, no mocking, no setup — just the binary, fixtures, and `jq` assertions. The agent builds the binary, runs the test, sees the result. If it fails, the agent can read the fixture, read the output, and fix the code.
**Corollary:** Fixtures are planted bugs. Each fixture file has a known issue that the linter must find. If the linter doesn't find it, the test fails. Fixtures are the spec for what the tool must detect — they ARE the test cases, not descriptions of test cases.
## Applying AX to Existing Patterns
### File Structure
```
# AX-native: path describes content
core/agent/
├── go/ # Go source
├── php/ # PHP source
├── ui/ # Frontend source
├── claude/ # Claude Code plugin
└── codex/ # Codex plugin
# Not AX: generic names requiring README
src/
├── lib/
├── utils/
└── helpers/
```
### Error Handling
```go
// AX-native: errors are infrastructure, not application logic
svc := c.Service("brain")
cfg := c.Config().Get("database.host")
// Errors logged by Core. Code reads like a spec.
// Not AX: errors dominate the code
svc, err := c.ServiceFor[brain.Service]()
if err != nil {
return fmt.Errorf("get brain service: %w", err)
}
cfg, err := c.Config().Get("database.host")
if err != nil {
_ = err // silenced because "it'll be fine"
}
```
### API Design
```go
// AX-native: one shape, every surface
core.New(core.Options{
Name: "my-app",
Services: []core.Service{...},
Config: core.Config{...},
})
// Not AX: multiple patterns for the same thing
core.New(
core.WithName("my-app"),
core.WithService(factory1),
core.WithService(factory2),
core.WithConfig(cfg),
)
```
## The Plans Convention — AX Development Lifecycle
The `plans/` directory structure encodes a development methodology designed for how generative AI actually works: iterative refinement across structured phases, not one-shot generation.
### The Three-Way Split
```
plans/
├── project/ # 1. WHAT and WHY — start here
├── rfc/ # 2. CONSTRAINTS — immutable contracts
└── code/ # 3. HOW — implementation specs
```
Each directory is a phase. Work flows from project → rfc → code. Each transition forces a refinement pass — you cannot write a code spec without discovering gaps in the project spec, and you cannot write an RFC without discovering assumptions in both.
**Three places for data that can't be written simultaneously = three guaranteed iterations of "actually, this needs changing."** Refinement is baked into the structure, not bolted on as a review step.
### Phase 1: Project (Vision)
Start with `project/`. No code exists yet. Define:
- What the product IS and who it serves
- What existing primitives it consumes (cross-ref to `code/`)
- What constraints it operates under (cross-ref to `rfc/`)
This is where creativity lives. Map features to building blocks. Connect systems. The project spec is integrative — it references everything else.
### Phase 2: RFC (Contracts)
Extract the immutable rules into `rfc/`. These are constraints that don't change with implementation:
- Wire formats, protocols, hash algorithms
- Security properties that must hold
- Compatibility guarantees
RFCs are numbered per component (`RFC-BORG-006-SMSG-FORMAT.md`) and never modified after acceptance. If the contract changes, write a new RFC.
### Phase 3: Code (Implementation Specs)
Define the implementation in `code/`. Each component gets an RFC.md that an agent can implement from:
- Struct definitions (the DTOs — see principle 6)
- Method signatures and behaviour
- Error conditions and edge cases
- Cross-references to other code/ specs
The code spec IS the product. Write the spec → dispatch to an agent → review output → iterate.
### Pre-Launch: Alignment Protocol
Before dispatching for implementation, verify spec-model alignment:
```
1. REVIEW — The implementation model (Codex/Jules) reads the spec
and reports missing elements. This surfaces the delta between
the model's training and the spec's assumptions.
"I need X, Y, Z to implement this" is the model saying
"I hear you but I'm missing context" — without asking.
2. ADJUST — Update the spec to close the gaps. Add examples,
clarify ambiguities, provide the context the model needs.
This is shared alignment, not compromise.
3. VERIFY — A different model (or sub-agent) reviews the adjusted
spec without the planner's bias. Fresh eyes on the contract.
"Does this make sense to someone who wasn't in the room?"
4. READY — When the review findings are trivial or deployment-
related (not architectural), the spec is ready to dispatch.
```
### Implementation: Iterative Dispatch
Same prompt, multiple runs. Each pass sees deeper because the context evolved:
```
Round 1: Build features (the obvious gaps)
Round 2: Write tests (verify what was built)
Round 3: Harden security (what can go wrong?)
Round 4: Next RFC section (what's still missing?)
Round N: Findings are trivial → implementation is complete
```
Re-running is not failure. It is the process. Each pass changes the codebase, which changes what the next pass can see. The iteration IS the refinement.
### Post-Implementation: Auto-Documentation
The QA/verify chain produces artefacts that feed forward:
- Test results document the contract (what works, what doesn't)
- Coverage reports surface untested paths
- Diff summaries prep the changelog for the next release
- Doc site updates from the spec (the spec IS the documentation)
The output of one cycle is the input to the next. The plans repo stays current because the specs drive the code, not the other way round.
## Compatibility
AX conventions are valid, idiomatic Go/PHP/TS. They do not require language extensions, code generation, or non-standard tooling. An AX-designed codebase compiles, tests, and deploys with standard toolchains.
The conventions diverge from community patterns (functional options, Must/For, etc.) but do not violate language specifications. This is a style choice, not a fork.
## Adoption
AX applies to all new code in the Core ecosystem. Existing code migrates incrementally as it is touched — no big-bang rewrite.
Priority order:
1. **Public APIs** (package-level functions, struct constructors)
2. **File structure** (path naming, template locations)
3. **Internal fields** (struct field names, local variables)
## References
- dAppServer unified path convention (2024)
- CoreGO DTO pattern refactor (2026-03-18)
- Core primitives design (2026-03-19)
- Go Proverbs, Rob Pike (2015) — AX provides an updated lens
## Changelog
- 2026-03-19: Initial draft

docs/RFC-CORE-GO-REQUEST.md
# RFC Request — go-blockchain needs from Core (FINAL)
> From: Charon (go-blockchain)
> To: Cladius (core/go + go-* packages)
> Date: 2 Apr 2026 00:55
> Snider's answers inline. Updated with precise asks.
## 1. core/api — DONE, pulled (+125 commits)
Using it. No ask needed.
## 2. core.Subscribe/Publish — Raindrops forming
When ready, go-blockchain will:
- Publish: `blockchain.block.new`, `blockchain.alias.registered`, `blockchain.hardfork.activated`
- Wire format: `core.Event{Type: string, Data: any, Timestamp: int64}`
No blocking ask — will integrate when available.
## 3. core.Wallet() — I can do this today via core.Service
```go
c.RegisterService("blockchain.wallet", core.Service{
	Name:     "blockchain.wallet",
	Instance: walletService,
	OnStart:  func() core.Result { return walletService.Start() },
	OnStop:   func() core.Result { return walletService.Stop() },
})
```
Then register actions:
```go
c.Action("blockchain.wallet.create", walletService.HandleCreate)
c.Action("blockchain.wallet.transfer", walletService.HandleTransfer)
c.Action("blockchain.wallet.balance", walletService.HandleBalance)
```
**No ask. Implementing now.**
## 4. Structured Logging — PRECISE ASK
**I want package-level logging that works WITHOUT a Core instance.**
The chain sync runs in goroutines that don't hold `*core.Core`. Currently using `log.Printf`.
**Exact ask:** Confirm these work at package level:
```go
core.Print(nil, "block synced height=%d hash=%s", height, hash) // info
core.Error(nil, "sync failed: %v", err) // error
```
Or do I need `core.NewLog()` → pass the logger into the sync goroutine?
## 5. core.Escrow() — Improvement to go-blockchain, sane with Chain + Asset
Escrow is a tx type (HF4+). I build it in go-blockchain's wallet package:
```go
wallet.BuildEscrowTx(provider, customer, amount, terms)
```
Then expose via action: `c.Action("blockchain.escrow.create", ...)`
**No ask from Core. I implement this.**
## 6. core.Asset() — Same, go-blockchain implements
HF5 enables deploy/emit/burn. I add to wallet package + actions:
```go
c.Action("blockchain.asset.deploy", ...)
c.Action("blockchain.asset.emit", ...)
c.Action("blockchain.asset.burn", ...)
```
**No ask. Implementing after HF5 activates.**
## 7. core.Chain() — Same pattern
```go
c.RegisterService("blockchain.chain", chainService)
c.Action("blockchain.chain.height", ...)
c.Action("blockchain.chain.block", ...)
c.Action("blockchain.chain.sync", ...)
```
**No ask. Doing this today.**
## 8. core.DNS() — Do you want a go-dns package?
The LNS is 672 lines of Go at `~/Code/lthn/lns/`. It could become `go-dns` in the Core ecosystem.
**Ask: Should I make it `dappco.re/go/core/dns` or keep it as a standalone?**
If yes to go-dns, the actions would be:
```go
c.Action("dns.resolve", ...) // A record
c.Action("dns.resolve.txt", ...) // TXT record
c.Action("dns.reverse", ...) // PTR
c.Action("dns.register", ...) // via sidechain
```
## 9. Portable Storage Encoder — DONE
Already implemented in `p2p/encode.go` using `go-p2p/node/levin/EncodeStorage`. Committed and pushed. HandshakeResponse.Encode, ResponseChainEntry.Encode, RequestChain.Decode all working.
**go-storage/go-io improvement ask:** The chain stores blocks in go-store (SQLite). For high-throughput sync, a `go-io` backed raw block file store would be faster. Want me to spec a `BlockStore` interface that can swap between go-store and go-io backends?
## 10. CGo boilerplate — YES PLEASE
**Exact ask:** A `go-cgo` package with:
```go
// Safe C buffer allocation with automatic cleanup
buf := cgo.NewBuffer(32)
defer buf.Free()
buf.CopyFrom(goSlice)
result := buf.Bytes()
// C function call wrapper with error mapping
err := cgo.Call(C.my_function, buf.Ptr(), cgo.SizeT(len))
// Returns Go error if C returns non-zero
// C string conversion
goStr := cgo.GoString(cStr)
cStr := cgo.CString(goStr)
defer cgo.Free(cStr)
```
Every CGo package (go-blockchain/crypto, go-mlx, go-rocm) does this dance manually. A shared helper saves ~50 lines per package and prevents use-after-free bugs.
## Summary
| # | What | Who Does It | Status |
|---|------|-------------|--------|
| 1 | core/api | Cladius | DONE, pulled |
| 2 | Pub/Sub events | Cladius | Forming → core/stream (go-ws rename) |
| 3 | Wallet service | **Charon** | Implementing today |
| 4 | Package-level logging | **Answered below** | RTFM — it works |
| 5 | Escrow txs | **Charon** | In go-blockchain |
| 6 | Asset operations | **Charon** | After HF5 |
| 7 | Chain service | **Charon** | Implementing today |
| 8 | go-dns | **Cladius** | `dappco.re/go/dns` — DNS record DTOs + ClouDNS API types |
| 9 | Storage encoder | **Charon** | DONE |
| 10 | go-cgo | **Cladius** | RFC written, dispatching |
— Charon
---
## Cladius Answers — How To Do It With Core Primitives
> These examples show Charon how each ask maps to existing Core APIs.
> Most of what he asked for already exists — he just needs the patterns.
### #4 Answer: Package-Level Logging
**Yes, `core.Print(nil, ...)` works.** The first arg is `*core.Core` and `nil` is valid — it falls back to the package-level logger. Your goroutines don't need a Core instance:
```go
// In your sync goroutine — no *core.Core needed:
core.Print(nil, "block synced height=%d hash=%s", height, hash)
core.Error(nil, "sync failed: %v", err)
// If you HAVE a Core instance (e.g. in a service handler):
core.Print(c, "wallet created id=%s", id) // tagged with service context
```
Both work. `nil` = package logger, `c` = contextual logger. Same output format.
### #3 Answer: Service + Action Pattern (You Got It Right)
Your code is correct. The full pattern with Core primitives:
```go
// Register service with lifecycle
c.RegisterService("blockchain.wallet", core.Service{
	OnStart: func(ctx context.Context) core.Result {
		return walletService.Start(ctx)
	},
	OnStop: func(ctx context.Context) core.Result {
		return walletService.Stop(ctx)
	},
})

// Register actions — path IS the CLI/HTTP/MCP route
c.Action("blockchain.wallet.create", walletService.HandleCreate)
c.Action("blockchain.wallet.balance", walletService.HandleBalance)

// Call another service's action (for #8 dns.discover → blockchain.chain.aliases):
result := c.Run("blockchain.chain.aliases", core.Options{})
```
### #5/#6/#7 Answer: Same Pattern, Different Path
```go
// Escrow (HF4+)
c.Action("blockchain.escrow.create", escrowService.HandleCreate)
c.Action("blockchain.escrow.release", escrowService.HandleRelease)
// Asset (HF5+)
c.Action("blockchain.asset.deploy", assetService.HandleDeploy)
// Chain
c.Action("blockchain.chain.height", chainService.HandleHeight)
c.Action("blockchain.chain.block", chainService.HandleBlock)
// All of these automatically get:
// - CLI: core blockchain chain height
// - HTTP: GET /blockchain/chain/height
// - MCP: blockchain.chain.height tool
// - i18n: blockchain.chain.height.* keys
```
### #9 Answer: BlockStore Interface
For the go-store vs go-io backend swap:
```go
// Define as a Core Data type
type BlockStore struct {
	core.Data // inherits Store/Load/Delete
}

// The backing medium is chosen at init:
store := core.NewData("blockchain.blocks",
	core.WithMedium(gostore.SQLite("blocks.db")), // or:
	// core.WithMedium(goio.File("blocks/")),     // raw file backend
)

// Usage is identical regardless of backend:
store.Store("block:12345", blockBytes)
block := store.Load("block:12345")
```
### #10 Answer: go-cgo
RFC written at `plans/code/core/go/cgo/RFC.md`. Buffer, Scope, Call, String helpers. Dispatching to Codex when repo is created on Forge.
### #8 Answer: go-dns
`dappco.re/go/dns` — Core package. DNS record structs as DTOs mapping 1:1 to ClouDNS API. Your LNS code at `~/Code/lthn/lns/` moves in as the service layer on top. Dispatching when repo exists.

1337
docs/RFC-CORE-GO.md Normal file

File diff suppressed because it is too large Load diff

505
docs/RFC-STORE.md Normal file
View file

@ -0,0 +1,505 @@
---
module: dappco.re/go/store
repo: core/go-store
lang: go
tier: lib
depends:
- code/core/go
tags:
- storage
- sqlite
- duckdb
- database
- kv
---
# go-store RFC — SQLite Key-Value Store
> An agent should be able to use this store from this document alone.
**Module:** `dappco.re/go/store`
**Repository:** `core/go-store`
**Files:** 9
---
## 1. Overview
SQLite-backed key-value store with TTL, namespace isolation, reactive events, and quota enforcement. Pure Go (no CGO). Used by core/ide for memory caching and by agents for workspace state.
---
## 2. Architecture
| File | Purpose |
|------|---------|
| `store.go` | Core `Store`: CRUD on `(grp, key)` compound PK, TTL via `expires_at` (Unix ms), background purge (60s), `text/template` rendering, `iter.Seq2` iterators |
| `transaction.go` | `Store.Transaction`, transaction-scoped read/write helpers, staged event dispatch |
| `events.go` | `Watch`/`Unwatch` (buffered chan, cap 16, non-blocking sends) + `OnChange` callbacks (synchronous) |
| `scope.go` | `ScopedStore` wraps `*Store`, prefixes groups with `namespace:`. Quota enforcement (`MaxKeys`/`MaxGroups`) |
| `workspace.go` | `Workspace` buffer: DuckDB-backed mutable accumulation, atomic commit to journal |
| `journal.go` | InfluxDB journal: write completed units, query time-series, retention |
| `compact.go` | Cold archive: compress journal entries to JSONL.gz |
| `store_test.go` | Store unit tests |
| `workspace_test.go` | Workspace buffer tests |
---
## 3. Key Design Decisions
- **Single-connection SQLite.** `MaxOpenConns(1)` because SQLite pragmas (WAL, busy_timeout) are per-connection — a pool would hand out connections without those pragmas applied, causing `SQLITE_BUSY` errors
- **TTL is triple-layered:** lazy delete on `Get`, query-time `WHERE` filtering, background purge goroutine
- **LIKE queries use `escapeLike()`** with `^` as escape char to prevent SQL wildcard injection
---
## 4. Store Struct
```go
// Store is the SQLite key-value store with TTL expiry, namespace isolation,
// reactive events, SQLite journal writes, and orphan recovery.
type Store struct {
	db                      *sql.DB
	sqliteDatabase          *sql.DB
	databasePath            string
	workspaceStateDirectory string
	purgeContext            context.Context
	cancelPurge             context.CancelFunc
	purgeWaitGroup          sync.WaitGroup
	purgeInterval           time.Duration // interval between background purge cycles
	sqliteStoragePath       string
	sqliteStorageDirectory  string
	mediumBacked            bool
	journal                 influxdb2.Client
	bucket                  string
	org                     string
	mu                      sync.RWMutex
	journalConfiguration    JournalConfiguration
	medium                  Medium
	lifecycleLock           sync.Mutex
	isClosed                bool

	// Event dispatch state.
	watchers       map[string][]chan Event
	callbacks      []changeCallbackRegistration
	watcherLock    sync.RWMutex // protects watcher registration and dispatch
	callbackLock   sync.RWMutex // protects callback registration and dispatch
	nextCallbackID uint64       // monotonic ID for callback registrations

	orphanWorkspaceLock    sync.Mutex
	cachedOrphanWorkspaces []*Workspace
}

type EventType int

const (
	EventSet EventType = iota
	EventDelete
	EventDeleteGroup
)

// Event is emitted on Watch channels when a key changes.
type Event struct {
	Type      EventType
	Group     string
	Key       string
	Value     string
	Timestamp time.Time
}
```
```go
// New creates a store. Journal is optional — pass WithJournal() to enable.
//
// storeInstance, _ := store.New(":memory:") // SQLite only
// storeInstance, _ := store.New("/path/to/db", store.WithJournal(
// "http://localhost:8086", "core-org", "core-bucket",
// ))
func New(path string, opts ...StoreOption) (*Store, error) { }
type StoreOption func(*Store)
func WithJournal(url, org, bucket string) StoreOption { }
```
---
## 5. API
```go
storeInstance, _ := store.New(":memory:") // or store.New("/path/to/db")
defer storeInstance.Close()

storeInstance.Set("group", "key", "value")
storeInstance.SetWithTTL("group", "key", "value", 5*time.Minute)
value, _ := storeInstance.Get("group", "key") // lazy-deletes expired

// Atomic multi-key/multi-group update
storeInstance.Transaction(func(transaction *store.StoreTransaction) error {
	if err := transaction.Set("group", "first", "1"); err != nil {
		return err
	}
	return transaction.Set("group", "second", "2")
})

// Iteration
for key, value := range storeInstance.AllSeq("group") { ... }
for group := range storeInstance.GroupsSeq() { ... }

// Events
events := storeInstance.Watch("group")
storeInstance.OnChange(func(event store.Event) { ... })
```
---
## 6. ScopedStore
```go
// ScopedStore wraps a Store with a namespace prefix and optional quotas.
//
// scopedStore, _ := store.NewScopedConfigured(storeInstance, store.ScopedStoreConfig{
//     Namespace: "mynamespace",
//     Quota:     store.QuotaConfig{MaxKeys: 100, MaxGroups: 10},
// })
// scopedStore.Set("key", "value")          // stored as group "mynamespace:default", key "key"
// scopedStore.SetIn("mygroup", "key", "v") // stored as group "mynamespace:mygroup", key "key"
type ScopedStore struct {
	store     *Store
	namespace string // validated: ^[a-zA-Z0-9-]+$
	MaxKeys   int    // 0 = unlimited
	MaxGroups int    // 0 = unlimited
}

func NewScoped(storeInstance *Store, namespace string) (*ScopedStore, error) { }
func NewScopedConfigured(storeInstance *Store, scopedConfig ScopedStoreConfig) (*ScopedStore, error) { }

// Set stores a value in the default group ("namespace:default")
func (scopedStore *ScopedStore) Set(key, value string) error { }

// SetIn stores a value in an explicit group ("namespace:group")
func (scopedStore *ScopedStore) SetIn(group, key, value string) error { }

// Get retrieves a value from the default group
func (scopedStore *ScopedStore) Get(key string) (string, error) { }

// GetFrom retrieves a value from an explicit group
func (scopedStore *ScopedStore) GetFrom(group, key string) (string, error) { }
```
- Namespace regex: `^[a-zA-Z0-9-]+$`
- Default group: `Set(key, value)` uses literal `"default"` as group, prefixed: `"mynamespace:default"`
- `SetIn(group, key, value)` allows explicit group within the namespace
- Quota: `MaxKeys`, `MaxGroups` — checked before writes; overwrites of existing keys (upserts) bypass the check
---
## 7. Transaction API
`Store.Transaction(fn)` is the supported atomic API for multi-key and multi-group work. It opens one SQLite transaction, passes a `StoreTransaction` helper to the callback, then commits only if the callback returns `nil`.
```go
func (storeInstance *Store) Transaction(operation func(*StoreTransaction) error) error { }
type StoreTransaction struct { }
func (transaction *StoreTransaction) Exists(group, key string) (bool, error) { }
func (transaction *StoreTransaction) GroupExists(group string) (bool, error) { }
func (transaction *StoreTransaction) Get(group, key string) (string, error) { }
func (transaction *StoreTransaction) Set(group, key, value string) error { }
func (transaction *StoreTransaction) SetWithTTL(group, key, value string, ttl time.Duration) error { }
func (transaction *StoreTransaction) Delete(group, key string) error { }
func (transaction *StoreTransaction) DeleteGroup(group string) error { }
func (transaction *StoreTransaction) DeletePrefix(groupPrefix string) error { }
func (transaction *StoreTransaction) GetAll(group string) (map[string]string, error) { }
func (transaction *StoreTransaction) GetPage(group string, offset, limit int) ([]KeyValue, error) { }
func (transaction *StoreTransaction) All(group string) iter.Seq2[KeyValue, error] { }
func (transaction *StoreTransaction) AllSeq(group string) iter.Seq2[KeyValue, error] { }
func (transaction *StoreTransaction) Count(group string) (int, error) { }
func (transaction *StoreTransaction) CountAll(groupPrefix string) (int, error) { }
func (transaction *StoreTransaction) Groups(groupPrefix ...string) ([]string, error) { }
func (transaction *StoreTransaction) GroupsSeq(groupPrefix ...string) iter.Seq2[string, error] { }
func (transaction *StoreTransaction) Render(templateSource, group string) (string, error) { }
func (transaction *StoreTransaction) GetSplit(group, key, separator string) (iter.Seq[string], error) { }
func (transaction *StoreTransaction) GetFields(group, key string) (iter.Seq[string], error) { }
func (transaction *StoreTransaction) PurgeExpired() (int64, error) { }
```
Contract:
- `operation == nil` returns an error before opening a transaction.
- If `operation` returns an error, the transaction rolls back and `Store.Transaction` returns that error wrapped with transaction context.
- If `operation` returns `nil`, `Store.Transaction` commits. A commit failure is returned and the deferred rollback path is attempted.
- Panics are not recovered by this API; the deferred rollback path still runs while the panic unwinds.
- Reads through `StoreTransaction` see uncommitted writes made earlier in the same callback.
- Mutations stage events during the callback. Watchers and `OnChange` callbacks are notified only after a successful commit, so rolled-back work does not propagate events.
- Callers should return helper errors from the callback. Ignoring a helper error and returning `nil` can still commit any successful earlier operations.
- Callers should use the supplied transaction helper inside the callback. Calling parent `Store` methods from inside the callback is outside the contract and may block behind the single SQLite connection.
Example:
```go
err := storeInstance.Transaction(func(transaction *store.StoreTransaction) error {
	if err := transaction.Set("accounts", "alice", "10"); err != nil {
		return err
	}
	if err := transaction.Set("accounts", "bob", "12"); err != nil {
		return err
	}
	total, err := transaction.Count("accounts") // sees alice and bob
	if err != nil {
		return err
	}
	if total > 100 {
		return core.E("accounts", "too many accounts", nil) // rollback
	}
	return nil // commit
})
```
### 7.1 ScopedStoreTransaction
`ScopedStore.Transaction(fn)` delegates to `Store.Transaction` and passes a `ScopedStoreTransaction`. The scoped helper preserves the same commit, rollback, read-your-writes, and post-commit event semantics, while keeping every operation inside the scoped namespace.
```go
func (scopedStore *ScopedStore) Transaction(operation func(*ScopedStoreTransaction) error) error { }
type ScopedStoreTransaction struct { }
func (transaction *ScopedStoreTransaction) Exists(key string) (bool, error) { }
func (transaction *ScopedStoreTransaction) ExistsIn(group, key string) (bool, error) { }
func (transaction *ScopedStoreTransaction) GroupExists(group string) (bool, error) { }
func (transaction *ScopedStoreTransaction) Get(key string) (string, error) { }
func (transaction *ScopedStoreTransaction) GetFrom(group, key string) (string, error) { }
func (transaction *ScopedStoreTransaction) Set(key, value string) error { }
func (transaction *ScopedStoreTransaction) SetIn(group, key, value string) error { }
func (transaction *ScopedStoreTransaction) SetWithTTL(group, key, value string, ttl time.Duration) error { }
func (transaction *ScopedStoreTransaction) Delete(group, key string) error { }
func (transaction *ScopedStoreTransaction) DeleteGroup(group string) error { }
func (transaction *ScopedStoreTransaction) DeletePrefix(groupPrefix string) error { }
func (transaction *ScopedStoreTransaction) GetAll(group string) (map[string]string, error) { }
func (transaction *ScopedStoreTransaction) GetPage(group string, offset, limit int) ([]KeyValue, error) { }
func (transaction *ScopedStoreTransaction) All(group string) iter.Seq2[KeyValue, error] { }
func (transaction *ScopedStoreTransaction) AllSeq(group string) iter.Seq2[KeyValue, error] { }
func (transaction *ScopedStoreTransaction) Count(group string) (int, error) { }
func (transaction *ScopedStoreTransaction) CountAll(groupPrefix ...string) (int, error) { }
func (transaction *ScopedStoreTransaction) Groups(groupPrefix ...string) ([]string, error) { }
func (transaction *ScopedStoreTransaction) GroupsSeq(groupPrefix ...string) iter.Seq2[string, error] { }
func (transaction *ScopedStoreTransaction) Render(templateSource, group string) (string, error) { }
func (transaction *ScopedStoreTransaction) GetSplit(group, key, separator string) (iter.Seq[string], error) { }
func (transaction *ScopedStoreTransaction) GetFields(group, key string) (iter.Seq[string], error) { }
func (transaction *ScopedStoreTransaction) PurgeExpired() (int64, error) { }
```
Scope isolation rules:
- `Set(key, value)`, `Get(key)`, and `Exists(key)` operate in the scoped default group, stored as `"namespace:default"`.
- Methods that accept `group` prefix the group before touching storage, so `SetIn("config", "theme", "dark")` writes `"namespace:config"`.
- `Groups` and `GroupsSeq` query only groups under `"namespace:"` and return namespace-local names such as `"config"`, not `"namespace:config"`.
- `CountAll`, `DeletePrefix`, and `PurgeExpired` are namespace-local. `DeletePrefix("")` deletes only groups in the scoped namespace, not the whole store.
- Quotas are evaluated through the same SQLite transaction, so pending writes count toward `MaxKeys` and `MaxGroups`. A returned `QuotaExceededError` rolls back the transaction when the callback returns it.
- Staged events use the full prefixed group internally. Scoped watchers and scoped `OnChange` callbacks localise committed events back to namespace-local group names.
Example:
```go
scopedStore, _ := store.NewScopedConfigured(storeInstance, store.ScopedStoreConfig{
	Namespace: "tenant-a",
	Quota:     store.QuotaConfig{MaxKeys: 100, MaxGroups: 10},
})

err := scopedStore.Transaction(func(transaction *store.ScopedStoreTransaction) error {
	if err := transaction.Set("theme", "dark"); err != nil {
		return err
	}
	if err := transaction.SetIn("preferences", "locale", "en-GB"); err != nil {
		return err
	}
	groups, err := transaction.Groups()
	if err != nil {
		return err
	}
	// groups == []string{"default", "preferences"}
	return nil
})
```
---
## 8. Event System
```go
// EventType identifies the kind of change.
type EventType int

const (
	EventSet         EventType = iota // Key value was set
	EventDelete                       // Key was deleted
	EventDeleteGroup                  // Entire group deleted
)

// Event is emitted on Watch channels or via OnChange callbacks.
type Event struct {
	Type      EventType // What happened (set, delete, deletegroup)
	Group     string    // Group name
	Key       string    // Key (empty if group-level event)
	Value     string    // New value (empty if delete)
	Timestamp time.Time // When the event occurred
}
```
- `Watch(group string) <-chan Event` — returns buffered channel (cap 16), non-blocking sends drop events. Pass `"*"` to watch all groups
- `Unwatch(group string, ch <-chan Event)` — remove a watcher
- `OnChange(callback func(Event)) func()` — register a synchronous callback invoked for all events. Returns an unregister function. Callbacks run after channel dispatch, outside locks, so they can safely re-register subscriptions. **Deadlock warning:** callbacks are invoked before Watch receivers have consumed the event — avoid blocking I/O in callbacks
---
## 9. Workspace Buffer
Stateful work accumulation over time. A workspace is a named DuckDB buffer for mutable work-in-progress. When a unit of work completes, the full state commits atomically to a time-series journal (InfluxDB). A summary updates the identity store (the existing SQLite store or an external database).
### 9.1 The Problem
Writing every micro-event directly to a time-series makes deltas meaningless — 4000 writes of "+1" produces noise. A mutable buffer accumulates the work, then commits once as a complete unit. The time-series only sees finished work, so deltas between entries represent real change.
### 9.2 Three Layers
```
Store (SQLite): "this thing exists" — identity, current summary
Buffer (DuckDB): "this thing is working" — mutable temp state, atomic
Journal (InfluxDB): "this thing completed" — immutable, delta-ready
```
| Layer | Store | Mutability | Lifetime |
|-------|-------|-----------|----------|
| Identity | SQLite (go-store) | Mutable | Permanent |
| Hot | DuckDB (temp file) | Mutable | Session/cycle |
| Journal | InfluxDB | Append-only | Retention policy |
| Cold | Compressed JSONL | Immutable | Archive |
### 9.3 Workspace API
```go
// Workspace is a named DuckDB buffer for mutable work-in-progress.
// It holds a reference to the parent Store for identity updates and journal writes.
//
// workspace, _ := storeInstance.NewWorkspace("scroll-session-2026-03-30")
// workspace.Put("like", map[string]any{"user": "@handle", "post": "video_123"})
// workspace.Commit() // atomic → journal + identity summary
type Workspace struct {
	name  string
	store *Store  // parent store for identity updates + journal config
	db    *sql.DB // DuckDB via database/sql driver (temp file, deleted on commit/discard)
}

// NewWorkspace creates a workspace buffer. The DuckDB file is created at .core/state/{name}.duckdb.
//
// workspace, _ := storeInstance.NewWorkspace("scroll-session-2026-03-30")
func (s *Store) NewWorkspace(name string) (*Workspace, error) { }
```
```go
// Put accumulates an entry in the workspace buffer. Returns error on write failure.
//
// err := workspace.Put("like", map[string]any{"user": "@handle"})
func (workspace *Workspace) Put(kind string, data map[string]any) error { }
// Aggregate returns a summary of the current workspace state
//
// summary := workspace.Aggregate() // {"like": 4000, "profile_match": 12}
func (workspace *Workspace) Aggregate() map[string]any { }
// Commit writes the aggregated state to the journal and updates the identity store
//
// result := workspace.Commit()
func (workspace *Workspace) Commit() core.Result { }
// Discard drops the workspace without committing
//
// workspace.Discard()
func (workspace *Workspace) Discard() { }
// Query runs SQL against the buffer for ad-hoc analysis.
// Returns core.Result where Value is []map[string]any (rows as maps).
//
// result := workspace.Query("SELECT kind, COUNT(*) as n FROM entries GROUP BY kind")
// rows := result.Value.([]map[string]any) // [{"kind": "like", "n": 4000}]
func (workspace *Workspace) Query(sql string) core.Result { }
```
### 9.4 Journal
Commit writes a single point per completed workspace. One point = one unit of work.
```go
// CommitToJournal writes aggregated state as a single InfluxDB point.
// Called by Workspace.Commit() internally, but exported for testing.
//
// storeInstance.CommitToJournal("scroll-session", fields, tags)
func (s *Store) CommitToJournal(measurement string, fields map[string]any, tags map[string]string) core.Result { }
// QueryJournal runs a Flux query against the time-series.
// Returns core.Result where Value is []map[string]any (rows as maps).
//
// result := s.QueryJournal(`from(bucket: "core") |> range(start: -7d)`)
// rows := result.Value.([]map[string]any)
func (s *Store) QueryJournal(flux string) core.Result { }
```
Because each point is a complete unit, queries naturally produce meaningful results without complex aggregation.
### 9.5 Cold Archive
When journal entries age past retention, they compact to cold storage:
```go
// CompactOptions controls cold archive generation.
type CompactOptions struct {
	Before time.Time // archive entries before this time
	Output string    // output directory (default: .core/archive/)
	Format string    // gzip or zstd (default: gzip)
}

// Compact archives journal entries to compressed JSONL
//
// storeInstance.Compact(store.CompactOptions{Before: time.Now().Add(-90*24*time.Hour), Output: "/archive/"})
func (s *Store) Compact(opts CompactOptions) core.Result { }
```
Output: gzip JSONL files. Each line is a complete unit of work — ready for training data ingestion, CDN publishing, or long-term analytics.
### 9.6 File Lifecycle
DuckDB files are ephemeral:
```
Created: workspace opens → .core/state/{name}.duckdb
Active: Put() accumulates entries
Committed: Commit() → journal write → identity update → file deleted
Discarded: Discard() → file deleted
Crashed: Orphaned .duckdb files detected on next New() call
```
Orphan recovery on `New()`:
```go
// New() scans .core/state/ for leftover .duckdb files.
// Each orphan is opened, aggregated, and discarded (not committed).
// The caller decides whether to commit orphan data via RecoverOrphans().
//
// orphanWorkspaces := storeInstance.RecoverOrphans(".core/state/")
// for _, workspace := range orphanWorkspaces {
//     // inspect workspace.Aggregate(), decide whether to commit or discard
//     workspace.Discard()
// }
func (s *Store) RecoverOrphans(stateDir string) []*Workspace { }
```
---
## 10. Reference Material
| Resource | Location |
|----------|----------|
| Architecture docs | `docs/architecture.md` |
| Development guide | `docs/development.md` |


@@ -24,23 +24,23 @@ WAL (Write-Ahead Logging) mode allows concurrent readers to proceed without bloc
The `database/sql` package maintains a connection pool by default. SQLite pragmas are per-connection: if the pool hands out a second connection, that connection inherits none of the WAL or busy-timeout settings, causing `SQLITE_BUSY` errors under concurrent load.
go-store calls `db.SetMaxOpenConns(1)` to pin all access to a single connection. Since SQLite serialises writes at the file level regardless, this introduces no additional throughput penalty. It eliminates the BUSY errors by ensuring the pragma settings always apply.
go-store calls `database.SetMaxOpenConns(1)` to pin all access to a single connection. Since SQLite serialises writes at the file level regardless, this introduces no additional throughput penalty. It eliminates the BUSY errors by ensuring the pragma settings always apply.
### Schema
```sql
CREATE TABLE IF NOT EXISTS kv (
    grp TEXT NOT NULL,
    key TEXT NOT NULL,
    value TEXT NOT NULL,
    expires_at INTEGER,
    PRIMARY KEY (grp, key)
CREATE TABLE IF NOT EXISTS entries (
    group_name TEXT NOT NULL,
    entry_key TEXT NOT NULL,
    entry_value TEXT NOT NULL,
    expires_at INTEGER,
    PRIMARY KEY (group_name, entry_key)
)
```
The compound primary key `(grp, key)` enforces uniqueness per group-key pair and provides efficient indexed lookups. The `expires_at` column stores a Unix millisecond timestamp (nullable); a `NULL` value means the key never expires.
The compound primary key `(group_name, entry_key)` enforces uniqueness per group-key pair and provides efficient indexed lookups. The `expires_at` column stores a Unix millisecond timestamp (nullable); a `NULL` value means the key never expires.
**Schema migration.** Databases created before TTL support lacked the `expires_at` column. On `New()`, go-store runs `ALTER TABLE kv ADD COLUMN expires_at INTEGER`. If the column already exists, SQLite returns a "duplicate column" error which is silently ignored. This allows seamless upgrades of existing databases.
**Schema migration.** Databases created before the AX schema rename used a legacy key-value table. On `New()`, go-store migrates that legacy table into `entries`, preserving rows and copying the expiry data when present. Databases that already have `entries` but lack `expires_at` still receive an additive `ALTER TABLE entries ADD COLUMN expires_at INTEGER` migration; if the column already exists, SQLite returns a "duplicate column" error which is silently ignored.
## Group/Key Model
@@ -49,16 +49,16 @@ Keys are addressed by a two-level path: `(group, key)`. Groups act as logical na
This model maps naturally to domain concepts:
```
group: "user:42:config" key: "theme"
group: "user:42:config" key: "colour"
group: "user:42:config" key: "language"
group: "session:abc" key: "token"
```
All read operations (`Get`, `GetAll`, `Count`, `Render`) are scoped to a single group. `DeleteGroup` atomically removes all keys in a group. `CountAll` and `Groups` operate across groups by prefix match.
All read operations (`Get`, `GetAll`, `Count`, `Render`) are scoped to a single group. `DeleteGroup` atomically removes all keys in a group. `DeletePrefix` removes every group whose name starts with a supplied prefix. `CountAll` and `Groups` operate across groups by prefix match.
## UPSERT Semantics
All writes use `INSERT ... ON CONFLICT(grp, key) DO UPDATE`. This means:
All writes use `INSERT ... ON CONFLICT(group_name, entry_key) DO UPDATE`. This means:
- Inserting a new key creates it.
- Inserting an existing key overwrites its value and (for `Set`) clears any TTL.
@@ -75,7 +75,7 @@ Expiry is enforced in three ways:
### 1. Lazy Deletion on Get
If a key is found but its `expires_at` is in the past, it is deleted synchronously before returning `ErrNotFound`. This prevents stale values from being returned even if the background purge has not run yet.
If a key is found but its `expires_at` is in the past, it is deleted synchronously before returning `NotFoundError`. This prevents stale values from being returned even if the background purge has not run yet.
### 2. Query-Time Filtering
@@ -91,23 +91,32 @@ All bulk operations (`GetAll`, `All`, `Count`, `Render`, `CountAll`, `Groups`, `
Two convenience methods build on `Get` to return iterators over parts of a stored value:
- **`GetSplit(group, key, sep)`** splits the value by a custom separator, returning an `iter.Seq[string]` via `strings.SplitSeq`.
- **`GetFields(group, key)`** splits the value by whitespace, returning an `iter.Seq[string]` via `strings.FieldsSeq`.
- **`GetSplit(group, key, separator)`** splits the value by a custom separator, returning an `iter.Seq[string]` via `core.Split`.
- **`GetFields(group, key)`** splits the value by whitespace, returning an `iter.Seq[string]` via the package's internal field iterator.
Both return `ErrNotFound` if the key does not exist or has expired.
`core.Split` keeps the package free of direct `strings` imports while preserving the same agent-facing API shape.
Both return `NotFoundError` if the key does not exist or has expired.
## Template Rendering
`Render(tmplStr, group)` is a convenience method that fetches all non-expired key-value pairs from a group and renders a Go `text/template` against them. The template data is a `map[string]string` keyed by the field name.
`Render(templateSource, group)` is a convenience method that fetches all non-expired key-value pairs from a group and renders a Go `text/template` against them. The template data is a `map[string]string` keyed by the field name.
```go
st.Set("miner", "pool", "pool.lthn.io:3333")
st.Set("miner", "wallet", "iz...")
out, _ := st.Render(`{"pool":"{{ .pool }}","wallet":"{{ .wallet }}"}`, "miner")
// out: {"pool":"pool.lthn.io:3333","wallet":"iz..."}
if err := storeInstance.Set("miner", "pool", "pool.lthn.io:3333"); err != nil {
	return
}
if err := storeInstance.Set("miner", "wallet", "iz..."); err != nil {
	return
}
renderedTemplate, err := storeInstance.Render(`{"pool":"{{ .pool }}","wallet":"{{ .wallet }}"}`, "miner")
if err != nil {
	return
}
// renderedTemplate: {"pool":"pool.lthn.io:3333","wallet":"iz..."}
```
Template parse errors and execution errors are both returned as wrapped errors with context (e.g., `store.Render: parse: ...` and `store.Render: exec: ...`).
Template parse errors and execution errors are both returned as wrapped errors with context (e.g., `store.Render: parse template: ...` and `store.Render: execute template: ...`).
Missing template variables do not return an error by default -- Go's `text/template` renders them as `<no value>`. Applications requiring strict variable presence should validate data beforehand.
@@ -137,74 +146,81 @@ Events are emitted synchronously after each successful database write inside the
### Watch/Unwatch
`Watch(group, key)` creates a `Watcher` with a buffered channel (`Ch <-chan Event`, capacity 16).
`Watch(group)` creates a buffered event channel (`<-chan Event`, capacity 16).
| group argument | key argument | Receives |
|---|---|---|
| `"mygroup"` | `"mykey"` | Only mutations to that exact key |
| `"mygroup"` | `"*"` | All mutations within the group, including `DeleteGroup` |
| `"*"` | `"*"` | Every mutation across the entire store |
| group argument | Receives |
|---|---|
| `"mygroup"` | Mutations within that group, including `DeleteGroup` |
| `"*"` | Every mutation across the entire store |
`Unwatch(w)` removes the watcher from the registry and closes its channel. It is safe to call multiple times; subsequent calls are no-ops.
`Unwatch(group, events)` removes the watcher from the registry and closes its channel. It is safe to call multiple times; subsequent calls are no-ops.
**Backpressure.** Event dispatch to a watcher channel is non-blocking: if the channel buffer is full, the event is dropped silently. This prevents a slow consumer from blocking a writer. Applications that cannot afford dropped events should drain the channel promptly or use `OnChange` callbacks instead.
```go
w := st.Watch("config", "*")
defer st.Unwatch(w)
events := storeInstance.Watch("config")
defer storeInstance.Unwatch("config", events)
for e := range w.Ch {
	fmt.Println(e.Type, e.Group, e.Key, e.Value)
for event := range events {
	fmt.Println(event.Type, event.Group, event.Key, event.Value)
}
```
### OnChange Callbacks
`OnChange(fn func(Event))` registers a synchronous callback that fires on every mutation. The callback runs in the goroutine that performed the write. Returns an idempotent unregister function.
`OnChange(callback func(Event))` registers a synchronous callback that fires on every mutation. The callback runs in the goroutine that performed the write. Returns an idempotent unregister function.
This is the designed integration point for consumers such as go-ws:
```go
unreg := st.OnChange(func(e store.Event) {
	hub.SendToChannel("store-events", e)
unregister := storeInstance.OnChange(func(event store.Event) {
	hub.SendToChannel("store-events", event)
})
defer unreg()
defer unregister()
```
go-store does not import go-ws. The dependency flows in one direction only: go-ws (or any consumer) imports go-store.
**Important constraint.** `OnChange` callbacks execute while holding the watcher/callback read-lock (`s.mu`). Calling `Watch`, `Unwatch`, or `OnChange` from within a callback will deadlock, because those methods require a write-lock. Offload any significant work to a separate goroutine if needed.
Callbacks may safely register or unregister watchers and callbacks while handling an event. Dispatch snapshots the callback list before invoking it, so re-entrant subscription management does not deadlock. Offload any significant work to a separate goroutine if needed.
### Internal Dispatch
The `notify(e Event)` method acquires a read-lock on `s.mu`, iterates all watchers with non-blocking channel sends, then calls each registered callback. The read-lock allows multiple concurrent `notify()` calls to proceed simultaneously. `Watch`/`Unwatch`/`OnChange` take a write-lock when modifying the registry.
The `notify(event Event)` method first acquires the watcher read-lock, iterates all watchers with non-blocking channel sends, then releases the lock. It then acquires the callback read-lock, snapshots the registered callbacks, releases the lock, and invokes each callback synchronously. This keeps watcher delivery non-blocking while allowing callbacks to manage subscriptions re-entrantly.
Watcher matching is handled by the `watcherMatches` helper, which checks the group and key filters against the event. Wildcard `"*"` matches any value in its position.
Watcher delivery is grouped by the registered group name. Wildcard `"*"` matches every mutation across the entire store.
## Namespace Isolation (ScopedStore)
`ScopedStore` wraps a `*Store` and automatically prefixes all group names with `namespace + ":"`. This prevents key collisions when multiple tenants share a single underlying database.
`ScopedStore` wraps a `*Store` and automatically prefixes all group names with `namespace + ":"`. This prevents key collisions when multiple tenants share a single underlying database. When the namespace and quota are already known, prefer `NewScopedConfigured(storeInstance, store.ScopedStoreConfig{...})` so the configuration is explicit at the call site.
```go
sc, _ := store.NewScoped(st, "tenant-42")
sc.Set("config", "theme", "dark")
// Stored in underlying store as group="tenant-42:config", key="theme"
scopedStore, err := store.NewScopedConfigured(storeInstance, store.ScopedStoreConfig{
	Namespace: "tenant-42",
})
if err != nil {
	return
}
if err := scopedStore.SetIn("config", "colour", "blue"); err != nil {
	return
}
// Stored in underlying store as group="tenant-42:config", key="colour"
```
Namespace strings must match `^[a-zA-Z0-9-]+$`. Invalid namespaces are rejected at construction time.
`ScopedStore` delegates all operations to the underlying `Store` after prefixing. Events emitted by scoped operations carry the full prefixed group name in `Event.Group`, enabling watchers on the underlying store to observe scoped mutations.
`ScopedStore` exposes the same API surface as `Store` for: `Get`, `Set`, `SetWithTTL`, `Delete`, `DeleteGroup`, `GetAll`, `All`, `Count`, and `Render`. The `Namespace()` method returns the namespace string.
`ScopedStore` exposes the same read helpers as `Store` for `Get`, `Set`, `SetWithTTL`, `Delete`, `DeleteGroup`, `DeletePrefix`, `GetAll`, `All`, `Count`, `CountAll`, `Groups`, `GroupsSeq`, `GetSplit`, `GetFields`, `Render`, and `PurgeExpired`. Methods that return group names strip the namespace prefix before returning results. The `Namespace()` method returns the namespace string.
`ScopedStore.Transaction` exposes the same transaction helpers through `ScopedStoreTransaction`, so callers can work inside a namespace without manually prefixing group names during a multi-step write.
### Quota Enforcement
`NewScopedWithQuota(store, namespace, QuotaConfig)` adds per-namespace limits:
`NewScopedConfigured(storeInstance, store.ScopedStoreConfig{...})` is the preferred way to set per-namespace limits because the quota values stay visible at the call site. For example, `store.QuotaConfig{MaxKeys: 100, MaxGroups: 10}` caps a namespace at 100 keys and 10 groups:
```go
type QuotaConfig struct {
	MaxKeys int   // maximum total keys across all groups in the namespace
	MaxGroups int // maximum distinct groups in the namespace
	MaxKeys int
	MaxGroups int
}
```
@@ -214,24 +230,36 @@ Zero values mean unlimited. Before each `Set` or `SetWithTTL`, the scoped store:
2. If the key is new, queries `CountAll(namespace + ":")` and compares against `MaxKeys`.
3. If the group is new (current count for that group is zero), queries `GroupsSeq(namespace + ":")` and compares against `MaxGroups`.
Exceeding a limit returns `ErrQuotaExceeded`.
Exceeding a limit returns `QuotaExceededError`.
## Concurrency Model
All SQLite access is serialised through a single connection (`SetMaxOpenConns(1)`). The store's watcher/callback registry is protected by a separate `sync.RWMutex` (`s.mu`). These two locks do not interact:
All SQLite access is serialised through a single connection (`SetMaxOpenConns(1)`). The store's event registry uses two separate `sync.RWMutex` instances: `watchersLock` for watcher registration and dispatch, and `callbacksLock` for callback registration and dispatch. These locks do not interact:
- DB writes acquire no application-level lock.
- `notify()` acquires `s.mu` (read) after the DB write completes.
- `Watch`/`Unwatch`/`OnChange` acquire `s.mu` (write) to modify the registry.
- Database writes acquire no application-level lock.
- `notify()` acquires `watchersLock` (read) after the database write completes, then `callbacksLock` (read) to snapshot callbacks.
- `Watch`/`Unwatch` acquire `watchersLock` (write) to modify watcher registrations.
- `OnChange` acquires `callbacksLock` (write) to modify callback registrations.
All operations are safe to call from multiple goroutines concurrently. The race detector is clean under the project's standard test suite (`go test -race ./...`).
## Transaction API
`Store.Transaction(func(transaction *StoreTransaction) error)` opens a SQLite transaction and hands a `StoreTransaction` helper to the callback. The helper exposes transaction-scoped write methods such as `Set`, `SetWithTTL`, `Delete`, `DeleteGroup`, and `DeletePrefix`, plus read helpers such as `Get`, `GetAll`, `All`, `Count`, `CountAll`, `Groups`, `GroupsSeq`, `Render`, `GetSplit`, and `GetFields` so callers can inspect uncommitted writes before commit. If the callback returns an error, the transaction rolls back. If the callback succeeds, the transaction commits and the staged events are published after commit.
This API is the supported way to perform atomic multi-group operations without exposing raw `Begin`/`Commit` control to callers.
## File Layout
```
store.go Core Store type, CRUD, TTL, background purge, iterators, rendering
events.go EventType, Event, Watcher, OnChange, notify
scope.go ScopedStore, QuotaConfig, quota enforcement
doc.go Package comment with concrete usage examples
store.go Core Store type, CRUD, prefix cleanup, TTL, background purge, iterators, rendering
transaction.go Store.Transaction and transaction-scoped mutation helpers
events.go EventType, Event, Watch, Unwatch, OnChange, notify
scope.go ScopedStore, QuotaConfig, namespace-local helper delegation, quota enforcement
journal.go Journal persistence, Flux-like querying, JSON row inflation
workspace.go Workspace buffers, aggregation, query analysis, commit flow, orphan recovery
compact.go Cold archive generation to JSONL gzip or zstd
store_test.go Tests: CRUD, TTL, concurrency, edge cases, persistence
events_test.go Tests: Watch, Unwatch, OnChange, event dispatch
scope_test.go Tests: namespace isolation, quota enforcement


@@ -23,7 +23,7 @@ go test ./...
go test -race ./...
# Run a single test by name
go test -v -run TestWatch_Good_SpecificKey ./...
go test -v -run TestEvents_Watch_Good_SpecificKey ./...
# Run tests with coverage
go test -cover ./...
@@ -51,7 +51,7 @@ core go qa # fmt + vet + lint + test
## Test Patterns
Tests follow the `_Good`, `_Bad`, `_Ugly` suffix convention used across the Core Go ecosystem:
Tests follow the `Test<File>_<Function>_<Good|Bad|Ugly>` convention used across the Core Go ecosystem:
- `_Good` -- happy-path behaviour, including edge cases that should succeed
- `_Bad` -- expected error conditions (closed store, invalid input, quota exceeded)
@@ -64,15 +64,15 @@ Tests are grouped into sections by the method under test, marked with comment ba
// Watch -- specific key
// ---------------------------------------------------------------------------
func TestWatch_Good_SpecificKey(t *testing.T) { ... }
func TestWatch_Good_WildcardKey(t *testing.T) { ... }
func TestEvents_Watch_Good_SpecificKey(t *testing.T) { ... }
func TestEvents_Watch_Good_WildcardKey(t *testing.T) { ... }
```
### In-Memory vs File-Backed Stores
Use `New(":memory:")` for all tests that do not require persistence. In-memory stores are faster and leave no filesystem artefacts.
Use `filepath.Join(t.TempDir(), "name.db")` for tests that verify WAL mode, persistence across open/close cycles, or concurrent writes. `t.TempDir()` is cleaned up automatically at the end of the test.
Use `core.Path(t.TempDir(), "name.db")` for tests that verify WAL mode, persistence across open/close cycles, or concurrent writes. `t.TempDir()` is cleaned up automatically at the end of the test.
### TTL Tests
@@ -144,7 +144,7 @@ The only permitted runtime dependency is `modernc.org/sqlite`. Test-only depende
## Adding a New Method
1. Implement the method on `*Store` in `store.go` (or `scope.go` if it is namespace-scoped).
2. If it is a mutating operation, call `s.notify(Event{...})` after the successful database write.
2. If it is a mutating operation, call `storeInstance.notify(Event{...})` after the successful database write.
3. Add a corresponding delegation method to `ScopedStore` in `scope.go` that prefixes the group.
4. Write tests covering the happy path, error conditions, and closed-store behaviour.
5. Update quota checks in `checkQuota` if the operation affects key or group counts.


@@ -17,7 +17,7 @@ At extraction the package comprised a single source file and a single test file.
**Problem.** The `database/sql` connection pool hands out different physical connections for each `Exec` or `Query` call. SQLite pragmas (`PRAGMA journal_mode=WAL`, `PRAGMA busy_timeout`) are per-connection. Under concurrent write load (10 goroutines, 100 ops each), connections from the pool that had not received the WAL pragma would block and return `SQLITE_BUSY` immediately rather than waiting.
**Fix.** `db.SetMaxOpenConns(1)` serialises all database access through a single connection. Because SQLite is a single-writer database by design (it serialises writes at the file-lock level regardless of pool size), this does not reduce write throughput. It eliminates the BUSY errors by ensuring the pragma settings always apply.
**Fix.** `database.SetMaxOpenConns(1)` serialises all database access through a single connection. Because SQLite is a single-writer database by design (it serialises writes at the file-lock level regardless of pool size), this does not reduce write throughput. It eliminates the BUSY errors by ensuring the pragma settings always apply.
**Defence in depth.** `PRAGMA busy_timeout=5000` was added to make the single connection wait up to 5 seconds before reporting a timeout error, providing additional resilience.
@@ -63,14 +63,14 @@ Added optional time-to-live for keys.
### Changes
- `expires_at INTEGER` nullable column added to the `kv` schema.
- `SetWithTTL(group, key, value string, ttl time.Duration)` stores the current time plus TTL as a Unix millisecond timestamp in `expires_at`.
- `Get()` performs lazy deletion: if a key is found with an `expires_at` in the past, it is deleted and `ErrNotFound` is returned.
- `expires_at INTEGER` nullable column added to the key-value schema.
- `SetWithTTL(group, key, value string, timeToLive time.Duration)` stores the current time plus TTL as a Unix millisecond timestamp in `expires_at`.
- `Get()` performs lazy deletion: if a key is found with an `expires_at` in the past, it is deleted and `NotFoundError` is returned.
- `Count()`, `GetAll()`, and `Render()` include `(expires_at IS NULL OR expires_at > ?)` in all queries, excluding expired keys from results.
- `PurgeExpired()` public method deletes all physically stored expired rows and returns the count removed.
- Background goroutine calls `PurgeExpired()` every 60 seconds, controlled by a `context.WithCancel` that is cancelled on `Close()`.
- `Set()` clears any existing TTL when overwriting a key (sets `expires_at = NULL`).
- Schema migration: `ALTER TABLE kv ADD COLUMN expires_at INTEGER` runs on `New()`. The "duplicate column" error on already-upgraded databases is silently ignored.
- Schema migration: `ALTER TABLE entries ADD COLUMN expires_at INTEGER` runs on `New()`. The "duplicate column" error on already-upgraded databases is silently ignored.
### Tests added
@@ -95,7 +95,7 @@ Added `ScopedStore` for multi-tenant namespace isolation.
- All `Store` methods delegated with group automatically prefixed as `namespace + ":" + group`.
- `QuotaConfig{MaxKeys, MaxGroups int}` struct; zero means unlimited.
- `NewScopedWithQuota(store, namespace, quota)` constructor.
- `ErrQuotaExceeded` sentinel error.
- `QuotaExceededError` sentinel error.
- `checkQuota(group, key)` internal method: skips upserts (existing key), checks `CountAll(namespace+":")` against `MaxKeys`, checks `Groups(namespace+":")` against `MaxGroups` only when the group is new.
- `CountAll(prefix string)` added to `Store`: counts non-expired keys across all groups matching a prefix. Empty prefix counts across all groups.
- `Groups(prefix string)` added to `Store`: returns distinct non-expired group names matching a prefix. Empty prefix returns all groups.
@@ -117,14 +117,14 @@ Added a reactive notification system for store mutations.
### Changes
- `events.go` introduced with `EventType` (`EventSet`, `EventDelete`, `EventDeleteGroup`), `Event` struct, `Watcher` struct, `callbackEntry` struct.
- `watcherBufSize = 16` constant.
- `events.go` introduced with `EventType` (`EventSet`, `EventDelete`, `EventDeleteGroup`), `Event` struct, `Watcher` struct, `changeCallbackRegistration` struct.
- `watcherEventBufferCapacity = 16` constant.
- `Watch(group, key string) *Watcher`: creates a buffered channel watcher. Wildcard `"*"` supported for both group and key. Uses `atomic.AddUint64` for monotonic watcher IDs.
- `Unwatch(w *Watcher)`: removes watcher from the registry and closes its channel. Idempotent.
- `OnChange(fn func(Event)) func()`: registers a synchronous callback. Returns an idempotent unregister function using `sync.Once`.
- `notify(e Event)`: internal dispatch. Acquires read-lock on `s.mu`; non-blocking send to each matching watcher channel (drop-on-full); calls each callback synchronously. Separate `watcherMatches` helper handles wildcard logic.
- `Unwatch(watcher *Watcher)`: removes watcher from the registry and closes its channel. Idempotent.
- `OnChange(callback func(Event)) func()`: registers a synchronous callback. Returns an idempotent unregister function using `sync.Once`.
- `notify(event Event)`: internal dispatch. Acquires read-lock on `watchersLock`; non-blocking send to each matching watcher channel (drop-on-full); calls each callback synchronously. Separate `watcherMatches` helper handles wildcard logic.
- `Set()`, `SetWithTTL()`, `Delete()`, `DeleteGroup()` each call `notify()` after the successful database write.
- `Store` struct extended with `watchers []*Watcher`, `callbacks []callbackEntry`, `mu sync.RWMutex`, `nextID uint64`.
- `Store` struct extended with `watchers []*Watcher`, `callbacks []changeCallbackRegistration`, `watchersLock sync.RWMutex`, `callbacksLock sync.RWMutex`, `nextWatcherID uint64`, `nextCallbackID uint64`.
- ScopedStore mutations automatically emit events with the full prefixed group name — no extra implementation required.
### Tests added
@@ -135,15 +135,62 @@ Coverage: 94.7% to 95.5%.
---
## Phase 4 — AX API Cleanup
**Agent:** Codex
**Completed:** 2026-03-30
Aligned the public API with the AX naming rules by removing compatibility aliases that were no longer used inside the repository.
### Changes
- Removed the legacy compatibility aliases for the not-found error, quota error, key-value pair, and watcher channel.
- Kept the primary names `NotFoundError`, `QuotaExceededError`, `KeyValue`, and `Watcher.Events`.
- Updated docs and examples to describe the primary names only.
---
## Phase 5 — Re-entrant Event Dispatch
**Agent:** Codex
**Completed:** 2026-03-30
### Changes
- Split watcher and callback registry locks so callbacks can register or unregister subscriptions without deadlocking.
- Updated `notify()` to dispatch watcher events under the watcher lock, snapshot callbacks under the callback lock, and invoke callbacks after both locks are released.
### Tests added
- Re-entrant callback coverage for `Watch`, `Unwatch`, and `OnChange` from inside the same callback while a write is in flight.
---
## Phase 6 — AX Schema Naming Cleanup
**Agent:** Codex
**Completed:** 2026-03-30
Renamed the internal SQLite schema to use descriptive names that are easier for agents to read and reason about.
### Changes
- Replaced the abbreviated key-value table with the descriptive `entries` table.
- Renamed the `grp`, `key`, and `value` schema columns to `group_name`, `entry_key`, and `entry_value`.
- Added a startup migration that copies legacy key-value databases into the new schema and preserves TTL data when present.
- Kept the public Go API unchanged; the migration only affects the internal storage layout.
---
## Coverage Test Suite
`coverage_test.go` exercises defensive error paths that integration tests cannot reach through normal usage:
- Schema conflict: pre-existing SQLite index named `kv` causes `New()` to return `store.New: schema: ...`.
- `GetAll` scan error: NULL key in a row (requires manually altering the schema to remove the NOT NULL constraint).
- `GetAll` rows iteration error: physically corrupting database pages mid-file to trigger `rows.Err()` during multi-page scans.
- `Render` scan error: same NULL-key technique.
- `Render` rows iteration error: same corruption technique.
- Schema conflict: pre-existing SQLite index named `entries` causes `New()` to return `store.New: ensure schema: ...`.
- `GetAll` scan error: NULL key in a row (requires manually altering the schema to remove the NOT NULL constraint) to trigger `store.All: scan row: ...`.
- `GetAll` rows iteration error: physically corrupting database pages mid-file to trigger `store.All: rows iteration: ...`.
- `Render` scan error: same NULL-key technique, surfaced as `store.All: scan row: ...`.
- `Render` rows iteration error: same corruption technique, surfaced as `store.All: rows iteration: ...`.
These tests exercise correct defensive code. They must continue to pass but are not indicative of real failure modes in production.
@@ -155,13 +202,7 @@ These tests exercise correct defensive code. They must continue to pass but are
**File-backed write throughput.** File-backed `Set` operations (~3,800 ops/sec on Apple M-series) are dominated by fsync. Applications writing at higher rates should use in-memory stores or consider WAL checkpoint tuning.
**`GetAll` memory usage.** Fetching a group with 10,000 keys allocates approximately 2.3 MB per call. There is no pagination API. Applications with very large groups should restructure data into smaller groups or query selectively.
**No cross-group transactions.** There is no API for atomic multi-group operations. Each method is individually atomic at the SQLite level, but there is no `Begin`/`Commit` exposed to callers.
**No wildcard deletes.** There is no `DeletePrefix` or pattern-based delete. To delete all groups under a namespace, callers must retrieve the group list via `Groups()` and delete each individually.
**Callback deadlock risk.** `OnChange` callbacks run synchronously in the writer's goroutine while holding `s.mu` (read). Calling any `Store` method that calls `notify()` from within a callback will attempt to re-acquire `s.mu` (read), which is permitted with a read-lock but calling `Watch`/`Unwatch`/`OnChange` within a callback will deadlock (they require a write-lock). Document this constraint prominently in callback usage.
**`GetAll` memory usage.** Fetching a group with 10,000 keys allocates approximately 2.3 MB per call. Use `GetPage()` when you need offset/limit pagination over a large group. Applications with very large groups should still prefer smaller groups or selective queries.
**No persistence of watcher registrations.** Watchers and callbacks are in-memory only. They are not persisted across `Close`/`New` cycles.
@@ -171,8 +212,4 @@ These tests exercise correct defensive code. They must continue to pass but are
These are design notes, not committed work:
- **Pagination for `GetAll`.** A `GetPage(group string, offset, limit int)` method would support large groups without full in-memory materialisation.
- **Indexed prefix keys.** An additional index on `(grp, key)` prefix would accelerate prefix scans without a full-table scan.
- **TTL background purge interval as constructor option.** Currently only settable by mutating `s.purgeInterval` directly in tests. A `WithPurgeInterval(d time.Duration)` functional option would make this part of the public API.
- **Cross-group atomic operations.** Exposing a `Transaction(func(tx *StoreTx) error)` API would allow callers to compose atomic multi-group operations.
- **`DeletePrefix(prefix string)` method.** Would enable efficient cleanup of an entire namespace without first listing groups.
- **Indexed prefix keys.** An additional index on `(group_name, entry_key)` prefix would accelerate prefix scans without a full-table scan.


@@ -7,9 +7,11 @@ description: Group-namespaced SQLite key-value store with TTL expiry, namespace
`go-store` is a group-namespaced key-value store backed by SQLite. It provides persistent or in-memory storage with optional TTL expiry, namespace isolation for multi-tenant use, quota enforcement, and a reactive event system for observing mutations.
For declarative setup, `store.NewConfigured(store.StoreConfig{...})` takes a single config struct instead of functional options. Prefer this when the configuration is already known; use `store.New(path, ...)` when you are only varying the database path.
The package's only external runtime dependencies are the shared `dappco.re/go/core` helpers and a pure-Go SQLite driver (`modernc.org/sqlite`). No CGO is required. It compiles and runs on all platforms that Go supports.
**Module path:** `dappco.re/go/store`
**Go version:** 1.26+
**Licence:** EUPL-1.2
@ -22,71 +24,112 @@ import (
"fmt"
"time"
"dappco.re/go/store"
)
func main() {
// Open /tmp/app.db for persistence, or use ":memory:" for ephemeral data.
storeInstance, err := store.NewConfigured(store.StoreConfig{
DatabasePath: "/tmp/app.db",
PurgeInterval: 30 * time.Second,
WorkspaceStateDirectory: "/tmp/core-state",
})
if err != nil {
return
}
defer storeInstance.Close()
// Store "blue" under config/colour and read it back.
if err := storeInstance.Set("config", "colour", "blue"); err != nil {
return
}
colourValue, err := storeInstance.Get("config", "colour")
if err != nil {
return
}
fmt.Println(colourValue) // "blue"
// Store a session token that expires after 24 hours.
if err := storeInstance.SetWithTTL("session", "token", "abc123", 24*time.Hour); err != nil {
return
}
// Read config/colour back into a map.
configEntries, err := storeInstance.GetAll("config")
if err != nil {
return
}
fmt.Println(configEntries) // map[colour:blue]
// Render the mail host and port into smtp.example.com:587.
if err := storeInstance.Set("mail", "host", "smtp.example.com"); err != nil {
return
}
if err := storeInstance.Set("mail", "port", "587"); err != nil {
return
}
renderedTemplate, err := storeInstance.Render(`{{ .host }}:{{ .port }}`, "mail")
if err != nil {
return
}
fmt.Println(renderedTemplate) // "smtp.example.com:587"
// Store tenant-42 preferences under the tenant-42: namespace prefix.
scopedStore, err := store.NewScopedConfigured(storeInstance, store.ScopedStoreConfig{
Namespace: "tenant-42",
})
if err != nil {
return
}
if err := scopedStore.SetIn("preferences", "locale", "en-GB"); err != nil {
return
}
// Stored internally as group "tenant-42:preferences", key "locale"
// Cap tenant-99 at 100 keys and 5 groups.
quotaScopedStore, err := store.NewScopedConfigured(storeInstance, store.ScopedStoreConfig{
Namespace: "tenant-99",
Quota: store.QuotaConfig{MaxKeys: 100, MaxGroups: 5},
})
if err != nil {
return
}
// A write past the limit returns store.QuotaExceededError.
if err := quotaScopedStore.SetIn("g", "k", "v"); err != nil {
return
}
// Watch "config" changes and print each event as it arrives.
events := storeInstance.Watch("config")
defer storeInstance.Unwatch("config", events)
go func() {
for event := range events {
fmt.Println("event", event.Type, event.Group, event.Key, event.Value)
}
}()
// Or register a synchronous callback for the same mutations.
unregister := storeInstance.OnChange(func(event store.Event) {
fmt.Println("changed", event.Group, event.Key, event.Value)
})
defer unregister()
}
```
## Package Layout
The entire package lives in a single Go package (`package store`) with the following implementation files plus `doc.go` for the package comment:
| File | Purpose |
|------|---------|
| `doc.go` | Package comment with concrete usage examples |
| `store.go` | Core `Store` type, CRUD operations (`Get`, `Set`, `SetWithTTL`, `Delete`, `DeleteGroup`, `DeletePrefix`), bulk queries (`GetAll`, `GetPage`, `All`, `Count`, `CountAll`, `Groups`, `GroupsSeq`), string splitting helpers (`GetSplit`, `GetFields`), template rendering (`Render`), TTL expiry, background purge goroutine, transaction support |
| `transaction.go` | `Store.Transaction`, transaction-scoped write helpers, staged event dispatch |
| `events.go` | `EventType` constants, `Event` struct, `Watch`/`Unwatch` channel subscriptions, `OnChange` callback registration, internal `notify` dispatch |
| `scope.go` | `ScopedStore` wrapper for namespace isolation, `QuotaConfig` struct, `NewScoped`/`NewScopedConfigured` constructors, namespace-local helper delegation, quota enforcement logic |
| `journal.go` | Journal persistence, Flux-like querying, JSON row inflation, journal schema helpers |
| `workspace.go` | Workspace buffers, aggregation, query analysis, commit flow, and orphan recovery |
| `compact.go` | Cold archive generation to JSONL gzip or zstd |
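The transaction support listed for `store.go` and `transaction.go` might be composed along these lines; the callback shape and the `StoreTx` helper names are assumptions for illustration, not confirmed API:

```go
// Hypothetical sketch: two writes in different groups commit atomically,
// or not at all. Assumes a Transaction(func(tx *StoreTx) error) callback
// with transaction-scoped Set helpers.
err := st.Transaction(func(tx *StoreTx) error {
	if err := tx.Set("orders", "o-1", "pending"); err != nil {
		return err
	}
	return tx.Set("audit", "o-1", "order created")
})
if err != nil {
	return err // both writes rolled back
}
```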
Tests are organised in corresponding files:
@ -112,7 +155,7 @@ Tests are organised in corresponding files:
|--------|---------|
| `github.com/stretchr/testify` | Assertion helpers (`assert`, `require`) for tests. |
There are no other direct dependencies. The package uses the Go standard library plus `dappco.re/go/core` helper primitives for error wrapping, string handling, and filesystem-safe path composition.
## Key Types
@ -120,15 +163,17 @@ There are no other direct dependencies. The package uses only the Go standard li
- **`ScopedStore`** -- wraps a `*Store` with an auto-prefixed namespace. Provides the same API surface with group names transparently prefixed.
- **`QuotaConfig`** -- configures per-namespace limits on total keys and distinct groups.
- **`Event`** -- describes a single store mutation (type, group, key, value, timestamp).
- **`Watch`** -- returns a buffered channel subscription to store events. Use `Unwatch(group, events)` to stop delivery and close the channel.
- **`KeyValue`** -- a simple key-value pair struct, used by the `All` iterator.
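As a sketch of the `All` iterator, assuming it yields `KeyValue` pairs as a Go 1.23-style range-over-func sequence (the `Key` and `Value` field names are assumptions):

```go
// Hypothetical sketch: range over every entry in the store via the All
// iterator. Assumes All() returns an iter.Seq of KeyValue and that the
// struct exposes Key and Value fields.
for kv := range st.All() {
	fmt.Println(kv.Key, "=", kv.Value)
}
```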
## Sentinel Errors
- **`NotFoundError`** -- returned by `Get` when the requested key does not exist or has expired.
- **`QuotaExceededError`** -- returned by `ScopedStore.Set`/`SetWithTTL` when a namespace quota limit is reached.
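Callers would typically branch on these errors after a read; whether they are sentinel values (checked with `errors.Is`) or error types (checked with `errors.As`) is not shown here, so the sketch below assumes sentinel-style comparison:

```go
// Hypothetical sketch: fall back to a default when the key is absent,
// but surface every other failure. Assumes NotFoundError works with
// errors.Is.
colour, err := st.Get("config", "colour")
switch {
case errors.Is(err, store.NotFoundError):
	colour = "blue" // default when unset
case err != nil:
	return err
}
fmt.Println(colour)
```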
## Further Reading
- [Agent Conventions](../CODEX.md) -- Codex-facing repo rules and AX notes
- [AX RFC](RFC-CORE-008-AGENT-EXPERIENCE.md) -- naming, comment, and path conventions for agent consumers
- [Architecture](architecture.md) -- storage layer internals, TTL model, event system, concurrency design
- [Development Guide](development.md) -- building, testing, benchmarks, contribution workflow

duckdb.go (new file, 492 lines)

@ -0,0 +1,492 @@
// SPDX-License-Identifier: EUPL-1.2
package store
import (
"database/sql"
core "dappco.re/go/core"
_ "github.com/marcboeker/go-duckdb"
)
// DuckDB table names for checkpoint scoring and probe results.
//
// Usage example:
//
// _ = db.EnsureScoringTables()
// db.Exec(core.Sprintf("SELECT * FROM %s", store.TableCheckpointScores))
const (
// TableCheckpointScores is the table name for checkpoint scoring data.
//
// Usage example:
//
// store.TableCheckpointScores // "checkpoint_scores"
TableCheckpointScores = "checkpoint_scores"
// TableProbeResults is the table name for probe result data.
//
// Usage example:
//
// store.TableProbeResults // "probe_results"
TableProbeResults = "probe_results"
)
// DuckDB wraps a DuckDB connection for analytical queries against training
// data, benchmark results, and scoring tables.
//
// Usage example:
//
// db, err := store.OpenDuckDB("/Volumes/Data/lem/lem.duckdb")
// if err != nil { return }
// defer func() { _ = db.Close() }()
// rows, _ := db.QueryGoldenSet(500)
type DuckDB struct {
conn *sql.DB
path string
}
// OpenDuckDB opens a DuckDB database file in read-only mode to avoid locking
// issues with the Python pipeline.
//
// Usage example:
//
// db, err := store.OpenDuckDB("/Volumes/Data/lem/lem.duckdb")
func OpenDuckDB(path string) (*DuckDB, error) {
conn, err := sql.Open("duckdb", path+"?access_mode=READ_ONLY")
if err != nil {
return nil, core.E("store.OpenDuckDB", core.Sprintf("open duckdb %s", path), err)
}
if err := conn.Ping(); err != nil {
_ = conn.Close()
return nil, core.E("store.OpenDuckDB", core.Sprintf("ping duckdb %s", path), err)
}
return &DuckDB{conn: conn, path: path}, nil
}
// OpenDuckDBReadWrite opens a DuckDB database in read-write mode.
//
// Usage example:
//
// db, err := store.OpenDuckDBReadWrite("/Volumes/Data/lem/lem.duckdb")
func OpenDuckDBReadWrite(path string) (*DuckDB, error) {
conn, err := sql.Open("duckdb", path)
if err != nil {
return nil, core.E("store.OpenDuckDBReadWrite", core.Sprintf("open duckdb %s", path), err)
}
if err := conn.Ping(); err != nil {
_ = conn.Close()
return nil, core.E("store.OpenDuckDBReadWrite", core.Sprintf("ping duckdb %s", path), err)
}
return &DuckDB{conn: conn, path: path}, nil
}
// Close closes the database connection.
//
// Usage example:
//
// defer func() { _ = db.Close() }()
func (db *DuckDB) Close() error {
return db.conn.Close()
}
// Path returns the database file path.
//
// Usage example:
//
// p := db.Path() // "/Volumes/Data/lem/lem.duckdb"
func (db *DuckDB) Path() string {
return db.path
}
// Conn returns the underlying *sql.DB connection. Prefer the typed helpers
// (Exec, QueryRowScan, QueryRows) when possible; this accessor exists for
// callers that need streaming row iteration or transaction control.
//
// Usage example:
//
// rows, err := db.Conn().Query("SELECT id, name FROM models WHERE kind = ?", "lem")
func (db *DuckDB) Conn() *sql.DB {
return db.conn
}
// Exec executes a query without returning rows.
//
// Usage example:
//
// err := db.Exec("INSERT INTO golden_set VALUES (?, ?)", idx, prompt)
func (db *DuckDB) Exec(query string, args ...any) error {
_, err := db.conn.Exec(query, args...)
if err != nil {
return core.E("store.DuckDB.Exec", "execute query", err)
}
return nil
}
// QueryRowScan executes a query expected to return at most one row and scans
// the result into dest. It is a convenience wrapper around sql.DB.QueryRow.
//
// Usage example:
//
// var count int
// err := db.QueryRowScan("SELECT COUNT(*) FROM golden_set", &count)
func (db *DuckDB) QueryRowScan(query string, dest any, args ...any) error {
return db.conn.QueryRow(query, args...).Scan(dest)
}
// GoldenSetRow represents one row from the golden_set table.
//
// Usage example:
//
// rows, err := db.QueryGoldenSet(500)
// for _, row := range rows { core.Println(row.Prompt) }
type GoldenSetRow struct {
// Idx is the row index.
//
// Usage example:
//
// row.Idx // 42
Idx int
// SeedID is the seed identifier that produced this row.
//
// Usage example:
//
// row.SeedID // "seed-001"
SeedID string
// Domain is the content domain (e.g. "philosophy", "science").
//
// Usage example:
//
// row.Domain // "philosophy"
Domain string
// Voice is the writing voice/style used for generation.
//
// Usage example:
//
// row.Voice // "watts"
Voice string
// Prompt is the input prompt text.
//
// Usage example:
//
// row.Prompt // "What is sovereignty?"
Prompt string
// Response is the generated response text.
//
// Usage example:
//
// row.Response // "Sovereignty is..."
Response string
// GenTime is the generation time in seconds.
//
// Usage example:
//
// row.GenTime // 2.5
GenTime float64
// CharCount is the character count of the response.
//
// Usage example:
//
// row.CharCount // 1500
CharCount int
}
// ExpansionPromptRow represents one row from the expansion_prompts table.
//
// Usage example:
//
// prompts, err := db.QueryExpansionPrompts("pending", 100)
// for _, p := range prompts { core.Println(p.Prompt) }
type ExpansionPromptRow struct {
// Idx is the row index.
//
// Usage example:
//
// p.Idx // 42
Idx int64
// SeedID is the seed identifier that produced this prompt.
//
// Usage example:
//
// p.SeedID // "seed-001"
SeedID string
// Region is the geographic/cultural region for the prompt.
//
// Usage example:
//
// p.Region // "western"
Region string
// Domain is the content domain (e.g. "philosophy", "science").
//
// Usage example:
//
// p.Domain // "philosophy"
Domain string
// Language is the ISO language code for the prompt.
//
// Usage example:
//
// p.Language // "en"
Language string
// Prompt is the prompt text in the original language.
//
// Usage example:
//
// p.Prompt // "What is sovereignty?"
Prompt string
// PromptEn is the English translation of the prompt.
//
// Usage example:
//
// p.PromptEn // "What is sovereignty?"
PromptEn string
// Priority is the generation priority (lower is higher priority).
//
// Usage example:
//
// p.Priority // 1
Priority int
// Status is the processing status (e.g. "pending", "done").
//
// Usage example:
//
// p.Status // "pending"
Status string
}
// QueryGoldenSet returns all golden set rows with responses >= minChars.
//
// Usage example:
//
// rows, err := db.QueryGoldenSet(500)
func (db *DuckDB) QueryGoldenSet(minChars int) ([]GoldenSetRow, error) {
rows, err := db.conn.Query(
"SELECT idx, seed_id, domain, voice, prompt, response, gen_time, char_count "+
"FROM golden_set WHERE char_count >= ? ORDER BY idx",
minChars,
)
if err != nil {
return nil, core.E("store.DuckDB.QueryGoldenSet", "query golden_set", err)
}
defer func() {
_ = rows.Close()
}()
var result []GoldenSetRow
for rows.Next() {
var r GoldenSetRow
if err := rows.Scan(&r.Idx, &r.SeedID, &r.Domain, &r.Voice,
&r.Prompt, &r.Response, &r.GenTime, &r.CharCount); err != nil {
return nil, core.E("store.DuckDB.QueryGoldenSet", "scan golden_set row", err)
}
result = append(result, r)
}
return result, rows.Err()
}
// CountGoldenSet returns the total count of golden set rows.
//
// Usage example:
//
// count, err := db.CountGoldenSet()
func (db *DuckDB) CountGoldenSet() (int, error) {
var count int
err := db.conn.QueryRow("SELECT COUNT(*) FROM golden_set").Scan(&count)
if err != nil {
return 0, core.E("store.DuckDB.CountGoldenSet", "count golden_set", err)
}
return count, nil
}
// QueryExpansionPrompts returns expansion prompts filtered by status.
//
// Usage example:
//
// prompts, err := db.QueryExpansionPrompts("pending", 100)
func (db *DuckDB) QueryExpansionPrompts(status string, limit int) ([]ExpansionPromptRow, error) {
query := "SELECT idx, seed_id, region, domain, language, prompt, prompt_en, priority, status " +
"FROM expansion_prompts"
var args []any
if status != "" {
query += " WHERE status = ?"
args = append(args, status)
}
query += " ORDER BY priority, idx"
if limit > 0 {
query += core.Sprintf(" LIMIT %d", limit)
}
rows, err := db.conn.Query(query, args...)
if err != nil {
return nil, core.E("store.DuckDB.QueryExpansionPrompts", "query expansion_prompts", err)
}
defer func() {
_ = rows.Close()
}()
var result []ExpansionPromptRow
for rows.Next() {
var r ExpansionPromptRow
if err := rows.Scan(&r.Idx, &r.SeedID, &r.Region, &r.Domain,
&r.Language, &r.Prompt, &r.PromptEn, &r.Priority, &r.Status); err != nil {
return nil, core.E("store.DuckDB.QueryExpansionPrompts", "scan expansion_prompt row", err)
}
result = append(result, r)
}
return result, rows.Err()
}
// CountExpansionPrompts returns counts by status.
//
// Usage example:
//
// total, pending, err := db.CountExpansionPrompts()
func (db *DuckDB) CountExpansionPrompts() (total int, pending int, err error) {
err = db.conn.QueryRow("SELECT COUNT(*) FROM expansion_prompts").Scan(&total)
if err != nil {
return 0, 0, core.E("store.DuckDB.CountExpansionPrompts", "count expansion_prompts", err)
}
err = db.conn.QueryRow("SELECT COUNT(*) FROM expansion_prompts WHERE status = 'pending'").Scan(&pending)
if err != nil {
return total, 0, core.E("store.DuckDB.CountExpansionPrompts", "count pending expansion_prompts", err)
}
return total, pending, nil
}
// UpdateExpansionStatus updates the status of an expansion prompt by idx.
//
// Usage example:
//
// err := db.UpdateExpansionStatus(42, "done")
func (db *DuckDB) UpdateExpansionStatus(idx int64, status string) error {
_, err := db.conn.Exec("UPDATE expansion_prompts SET status = ? WHERE idx = ?", status, idx)
if err != nil {
return core.E("store.DuckDB.UpdateExpansionStatus", core.Sprintf("update expansion_prompt %d", idx), err)
}
return nil
}
// QueryRows executes an arbitrary SQL query and returns results as maps.
//
// Usage example:
//
// rows, err := db.QueryRows("SELECT COUNT(*) AS n FROM golden_set")
func (db *DuckDB) QueryRows(query string, args ...any) ([]map[string]any, error) {
rows, err := db.conn.Query(query, args...)
if err != nil {
return nil, core.E("store.DuckDB.QueryRows", "query", err)
}
defer func() {
_ = rows.Close()
}()
cols, err := rows.Columns()
if err != nil {
return nil, core.E("store.DuckDB.QueryRows", "columns", err)
}
var result []map[string]any
for rows.Next() {
values := make([]any, len(cols))
ptrs := make([]any, len(cols))
for i := range values {
ptrs[i] = &values[i]
}
if err := rows.Scan(ptrs...); err != nil {
return nil, core.E("store.DuckDB.QueryRows", "scan", err)
}
row := make(map[string]any, len(cols))
for i, col := range cols {
row[col] = values[i]
}
result = append(result, row)
}
return result, rows.Err()
}
// EnsureScoringTables creates the scoring tables if they do not exist.
//
// Usage example:
//
// if err := db.EnsureScoringTables(); err != nil { return }
func (db *DuckDB) EnsureScoringTables() error {
if _, err := db.conn.Exec(core.Sprintf(`CREATE TABLE IF NOT EXISTS %s (
model TEXT, run_id TEXT, label TEXT, iteration INTEGER,
correct INTEGER, total INTEGER, accuracy DOUBLE,
scored_at TIMESTAMP DEFAULT current_timestamp,
PRIMARY KEY (run_id, label)
)`, TableCheckpointScores)); err != nil {
return core.E("store.DuckDB.EnsureScoringTables", "create checkpoint_scores", err)
}
if _, err := db.conn.Exec(core.Sprintf(`CREATE TABLE IF NOT EXISTS %s (
model TEXT, run_id TEXT, label TEXT, probe_id TEXT,
passed BOOLEAN, response TEXT, iteration INTEGER,
scored_at TIMESTAMP DEFAULT current_timestamp,
PRIMARY KEY (run_id, label, probe_id)
)`, TableProbeResults)); err != nil {
return core.E("store.DuckDB.EnsureScoringTables", "create probe_results", err)
}
if _, err := db.conn.Exec(`CREATE TABLE IF NOT EXISTS scoring_results (
model TEXT, prompt_id TEXT, suite TEXT,
dimension TEXT, score DOUBLE,
scored_at TIMESTAMP DEFAULT current_timestamp
)`); err != nil {
return core.E("store.DuckDB.EnsureScoringTables", "create scoring_results", err)
}
return nil
}
// WriteScoringResult writes a single scoring dimension result to DuckDB.
//
// Usage example:
//
// err := db.WriteScoringResult("lem-8b", "p-001", "ethics", "honesty", 0.95)
func (db *DuckDB) WriteScoringResult(model, promptID, suite, dimension string, score float64) error {
_, err := db.conn.Exec(
`INSERT INTO scoring_results (model, prompt_id, suite, dimension, score) VALUES (?, ?, ?, ?, ?)`,
model, promptID, suite, dimension, score,
)
if err != nil {
return core.E("store.DuckDB.WriteScoringResult", "insert scoring result", err)
}
return nil
}
// TableCounts returns row counts for all known tables.
//
// Usage example:
//
// counts, err := db.TableCounts()
// n := counts["golden_set"]
func (db *DuckDB) TableCounts() (map[string]int, error) {
tables := []string{"golden_set", "expansion_prompts", "seeds", "prompts",
"training_examples", "gemini_responses", "benchmark_questions", "benchmark_results", "validations",
TableCheckpointScores, TableProbeResults, "scoring_results"}
counts := make(map[string]int)
for _, t := range tables {
var count int
err := db.conn.QueryRow(core.Sprintf("SELECT COUNT(*) FROM %s", t)).Scan(&count)
if err != nil {
continue
}
counts[t] = count
}
return counts, nil
}

events.go (272 lines)

@ -1,25 +1,25 @@
package store
import (
"reflect"
"sync" // Note: AX-6 — internal concurrency primitive; structural for store infrastructure (RFC §4 explicitly mandates).
"sync/atomic" // Note: AX-6 — internal concurrency primitive; structural for store infrastructure (RFC §4 explicitly mandates).
"time"
)
// EventType describes the kind of store mutation that occurred.
// Usage example: `if event.Type == store.EventSet { return }`
type EventType int
const (
// EventSet indicates a key was created or updated.
// Usage example: `if event.Type == store.EventSet { return }`
EventSet EventType = iota
// EventDelete indicates a single key was removed.
// Usage example: `if event.Type == store.EventDelete { return }`
EventDelete
// EventDeleteGroup indicates all keys in a group were removed.
// Usage example: `if event.Type == store.EventDeleteGroup { return }`
EventDeleteGroup
)
// String returns a human-readable label for the event type.
// Usage example: `label := store.EventDeleteGroup.String()`
func (t EventType) String() string {
switch t {
case EventSet:
@ -33,140 +33,200 @@ func (t EventType) String() string {
}
}
// Event describes a single store mutation. Key is empty for EventDeleteGroup.
// Value is only populated for EventSet.
// Usage example: `event := store.Event{Type: store.EventSet, Group: "config", Key: "colour", Value: "blue"}`
// Usage example: `event := store.Event{Type: store.EventDeleteGroup, Group: "config"}`
type Event struct {
// Usage example: `if event.Type == store.EventDeleteGroup { return }`
Type EventType
// Usage example: `if event.Group == "config" { return }`
Group string
// Usage example: `if event.Key == "colour" { return }`
Key string
// Usage example: `if event.Value == "blue" { return }`
Value string
// Usage example: `if event.Timestamp.IsZero() { return }`
Timestamp time.Time
}
// changeCallbackRegistration keeps the registration ID so unregister can remove
// the exact callback later.
type changeCallbackRegistration struct {
registrationID uint64
callback func(Event)
}
func closedEventChannel() chan Event {
eventChannel := make(chan Event)
close(eventChannel)
return eventChannel
}
// Watch("config") can hold 16 pending events before non-blocking sends start
// dropping new ones.
const watcherEventBufferCapacity = 16
// Usage example: `events := storeInstance.Watch("config")`
// Usage example: `events := storeInstance.Watch("*")`
func (storeInstance *Store) Watch(group string) <-chan Event {
if storeInstance == nil {
return closedEventChannel()
}
eventChannel := make(chan Event, watcherEventBufferCapacity)
storeInstance.lifecycleLock.Lock()
defer storeInstance.lifecycleLock.Unlock()
if storeInstance.isClosed || storeInstance.isClosing {
return closedEventChannel()
}
storeInstance.watcherLock.Lock()
defer storeInstance.watcherLock.Unlock()
if storeInstance.watchers == nil {
storeInstance.watchers = make(map[string][]chan Event)
}
storeInstance.watchers[group] = append(storeInstance.watchers[group], eventChannel)
return eventChannel
}
// Unwatch removes a watcher and closes its channel. Safe to call multiple
// times; subsequent calls are no-ops.
// Usage example: `storeInstance.Unwatch("config", events)`
func (storeInstance *Store) Unwatch(group string, events <-chan Event) {
if storeInstance == nil || events == nil {
return
}
storeInstance.lifecycleLock.Lock()
closed := storeInstance.isClosed || storeInstance.isClosing
storeInstance.lifecycleLock.Unlock()
if closed {
return
}
storeInstance.watcherLock.Lock()
defer storeInstance.watcherLock.Unlock()
registeredEvents := storeInstance.watchers[group]
if len(registeredEvents) == 0 {
return
}
eventsPointer := channelPointer(events)
nextRegisteredEvents := registeredEvents[:0]
removed := false
for _, registeredChannel := range registeredEvents {
if channelPointer(registeredChannel) == eventsPointer {
if !removed {
close(registeredChannel)
removed = true
}
continue
}
nextRegisteredEvents = append(nextRegisteredEvents, registeredChannel)
}
if !removed {
return
}
if len(nextRegisteredEvents) == 0 {
delete(storeInstance.watchers, group)
return
}
storeInstance.watchers[group] = nextRegisteredEvents
}
// OnChange registers a callback that fires on every store mutation. Callbacks
// are called synchronously in the goroutine that performed the write, so the
// caller controls concurrency. Returns an unregister function; calling it stops
// future invocations.
// Usage example: `unregister := storeInstance.OnChange(func(event store.Event) { fmt.Println(event.Group, event.Key, event.Value) })`
func (storeInstance *Store) OnChange(callback func(Event)) func() {
if callback == nil {
return func() {}
}
if storeInstance == nil {
return func() {}
}
storeInstance.lifecycleLock.Lock()
defer storeInstance.lifecycleLock.Unlock()
if storeInstance.isClosed || storeInstance.isClosing {
return func() {}
}
registrationID := atomic.AddUint64(&storeInstance.nextCallbackID, 1)
callbackRegistration := changeCallbackRegistration{registrationID: registrationID, callback: callback}
storeInstance.callbackLock.Lock()
defer storeInstance.callbackLock.Unlock()
storeInstance.callbacks = append(storeInstance.callbacks, callbackRegistration)
// Return an idempotent unregister function.
var once sync.Once
return func() {
once.Do(func() {
storeInstance.callbackLock.Lock()
defer storeInstance.callbackLock.Unlock()
for i := range storeInstance.callbacks {
if storeInstance.callbacks[i].registrationID == registrationID {
storeInstance.callbacks = append(storeInstance.callbacks[:i], storeInstance.callbacks[i+1:]...)
return
}
}
})
}
}
// notify(Event{Type: EventSet, Group: "config", Key: "colour", Value: "blue"})
// dispatches matching watchers and callbacks after a successful write. If a
// watcher buffer is full, the event is dropped instead of blocking the writer.
// Callbacks are copied under a separate lock and invoked after the lock is
// released, so they can register or unregister subscriptions without
// deadlocking.
func (storeInstance *Store) notify(event Event) {
if storeInstance == nil {
return
}
if event.Timestamp.IsZero() {
event.Timestamp = time.Now()
}
storeInstance.lifecycleLock.Lock()
if storeInstance.isClosed || storeInstance.isClosing {
storeInstance.lifecycleLock.Unlock()
return
}
storeInstance.watcherLock.RLock()
storeInstance.lifecycleLock.Unlock()
for _, registeredChannel := range storeInstance.watchers["*"] {
select {
case registeredChannel <- event:
default:
}
}
for _, registeredChannel := range storeInstance.watchers[event.Group] {
select {
case registeredChannel <- event:
default:
}
}
storeInstance.watcherLock.RUnlock()
storeInstance.lifecycleLock.Lock()
if storeInstance.isClosed || storeInstance.isClosing {
storeInstance.lifecycleLock.Unlock()
return
}
storeInstance.callbackLock.RLock()
storeInstance.lifecycleLock.Unlock()
callbacks := append([]changeCallbackRegistration(nil), storeInstance.callbacks...)
storeInstance.callbackLock.RUnlock()
for _, callback := range callbacks {
callback.callback(event)
}
}
func channelPointer(eventChannel <-chan Event) uintptr {
if eventChannel == nil {
return 0
}
return reflect.ValueOf(eventChannel).Pointer()
}

View file

@ -1,426 +1,348 @@
package store
import (
"sync"
"sync/atomic"
"testing"
"time"
core "dappco.re/go/core"
)
// ---------------------------------------------------------------------------
// Watch — group
// ---------------------------------------------------------------------------
func TestEvents_Watch_Good_Group(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
func TestWatch_Good_SpecificKey(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
events := storeInstance.Watch("config")
defer storeInstance.Unwatch("config", events)
w := s.Watch("config", "theme")
defer s.Unwatch(w)
assertNoError(t, storeInstance.Set("config", "theme", "dark"))
assertNoError(t, storeInstance.Set("config", "colour", "blue"))
require.NoError(t, s.Set("config", "theme", "dark"))
received := drainEvents(events, 2, time.Second)
assertLen(t, received, 2)
assertEqual(t, "theme", received[0].Key)
assertEqual(t, "colour", received[1].Key)
assertEqual(t, "config", received[0].Group)
assertEqual(t, "config", received[1].Group)
}
func TestEvents_Watch_Good_WildcardGroup(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch("*")
defer storeInstance.Unwatch("*", events)
assertNoError(t, storeInstance.Set("g1", "k1", "v1"))
assertNoError(t, storeInstance.Set("g2", "k2", "v2"))
assertNoError(t, storeInstance.Delete("g1", "k1"))
assertNoError(t, storeInstance.DeleteGroup("g2"))
received := drainEvents(events, 4, time.Second)
assertLen(t, received, 4)
assertEqual(t, EventSet, received[0].Type)
assertEqual(t, EventSet, received[1].Type)
assertEqual(t, EventDelete, received[2].Type)
assertEqual(t, EventDeleteGroup, received[3].Type)
}
func TestEvents_Unwatch_Good_StopsDelivery(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch("g")
storeInstance.Unwatch("g", events)
_, open := <-events
assertFalsef(t, open, "channel should be closed after Unwatch")
assertNoError(t, storeInstance.Set("g", "k", "v"))
}
func TestEvents_Unwatch_Good_Idempotent(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch("g")
storeInstance.Unwatch("g", events)
storeInstance.Unwatch("g", events)
}
func TestEvents_Close_Good_ClosesWatcherChannels(t *testing.T) {
storeInstance, _ := New(":memory:")
events := storeInstance.Watch("g")
assertNoError(t, storeInstance.Close())
_, open := <-events
assertFalsef(t, open, "channel should be closed after Close")
}
func TestEvents_Unwatch_Good_NilChannel(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
storeInstance.Unwatch("g", nil)
}
func TestEvents_Watch_Good_DeleteEvent(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch("g")
defer storeInstance.Unwatch("g", events)
assertNoError(t, storeInstance.Set("g", "k", "v"))
<-events
assertNoError(t, storeInstance.Delete("g", "k"))
select {
case e := <-w.Ch:
assert.Equal(t, EventSet, e.Type)
assert.Equal(t, "config", e.Group)
assert.Equal(t, "theme", e.Key)
assert.Equal(t, "dark", e.Value)
assert.False(t, e.Timestamp.IsZero())
case <-time.After(time.Second):
t.Fatal("timed out waiting for event")
}
// A Set to a different key in the same group should NOT trigger this watcher.
require.NoError(t, s.Set("config", "colour", "blue"))
select {
case e := <-w.Ch:
t.Fatalf("unexpected event for non-matching key: %+v", e)
case <-time.After(50 * time.Millisecond):
// Expected: no event.
}
}
// ---------------------------------------------------------------------------
// Watch — wildcard key "*"
// ---------------------------------------------------------------------------
func TestWatch_Good_WildcardKey(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
w := s.Watch("config", "*")
defer s.Unwatch(w)
require.NoError(t, s.Set("config", "theme", "dark"))
require.NoError(t, s.Set("config", "colour", "blue"))
received := drainEvents(w.Ch, 2, time.Second)
require.Len(t, received, 2)
assert.Equal(t, "theme", received[0].Key)
assert.Equal(t, "colour", received[1].Key)
}
// ---------------------------------------------------------------------------
// Watch — wildcard ("*", "*") matches everything
// ---------------------------------------------------------------------------
func TestWatch_Good_WildcardAll(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
w := s.Watch("*", "*")
defer s.Unwatch(w)
require.NoError(t, s.Set("g1", "k1", "v1"))
require.NoError(t, s.Set("g2", "k2", "v2"))
require.NoError(t, s.Delete("g1", "k1"))
require.NoError(t, s.DeleteGroup("g2"))
received := drainEvents(w.Ch, 4, time.Second)
require.Len(t, received, 4)
assert.Equal(t, EventSet, received[0].Type)
assert.Equal(t, EventSet, received[1].Type)
assert.Equal(t, EventDelete, received[2].Type)
assert.Equal(t, EventDeleteGroup, received[3].Type)
}
// ---------------------------------------------------------------------------
// Unwatch — stops delivery, channel closed
// ---------------------------------------------------------------------------
func TestUnwatch_Good_StopsDelivery(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
w := s.Watch("g", "k")
s.Unwatch(w)
// Channel should be closed.
_, open := <-w.Ch
assert.False(t, open, "channel should be closed after Unwatch")
// Set after Unwatch should not panic or block.
require.NoError(t, s.Set("g", "k", "v"))
}
func TestUnwatch_Good_Idempotent(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
w := s.Watch("g", "k")
// Calling Unwatch multiple times should not panic.
s.Unwatch(w)
s.Unwatch(w) // second call is a no-op
}
// ---------------------------------------------------------------------------
// Delete triggers event
// ---------------------------------------------------------------------------
func TestWatch_Good_DeleteEvent(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
w := s.Watch("g", "k")
defer s.Unwatch(w)
require.NoError(t, s.Set("g", "k", "v"))
// Drain the Set event.
<-w.Ch
require.NoError(t, s.Delete("g", "k"))
select {
case e := <-w.Ch:
assert.Equal(t, EventDelete, e.Type)
assert.Equal(t, "g", e.Group)
assert.Equal(t, "k", e.Key)
assert.Empty(t, e.Value, "Delete events should have empty Value")
case event := <-events:
assertEqual(t, EventDelete, event.Type)
assertEqual(t, "g", event.Group)
assertEqual(t, "k", event.Key)
assertEmpty(t, event.Value)
case <-time.After(time.Second):
t.Fatal("timed out waiting for delete event")
}
}
// ---------------------------------------------------------------------------
// DeleteGroup triggers event
// ---------------------------------------------------------------------------
func TestEvents_Watch_Good_DeleteGroupEvent(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
func TestWatch_Good_DeleteGroupEvent(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
events := storeInstance.Watch("g")
defer storeInstance.Unwatch("g", events)
// A wildcard-key watcher for the group should receive DeleteGroup events.
w := s.Watch("g", "*")
defer s.Unwatch(w)
assertNoError(t, storeInstance.Set("g", "a", "1"))
assertNoError(t, storeInstance.Set("g", "b", "2"))
<-events
<-events
require.NoError(t, s.Set("g", "a", "1"))
require.NoError(t, s.Set("g", "b", "2"))
// Drain Set events.
<-w.Ch
<-w.Ch
require.NoError(t, s.DeleteGroup("g"))
assertNoError(t, storeInstance.DeleteGroup("g"))
select {
case e := <-w.Ch:
assert.Equal(t, EventDeleteGroup, e.Type)
assert.Equal(t, "g", e.Group)
assert.Empty(t, e.Key, "DeleteGroup events should have empty Key")
case event := <-events:
assertEqual(t, EventDeleteGroup, event.Type)
assertEqual(t, "g", event.Group)
assertEmpty(t, event.Key)
case <-time.After(time.Second):
t.Fatal("timed out waiting for delete_group event")
}
}
// ---------------------------------------------------------------------------
// OnChange — callback fires on mutations
// ---------------------------------------------------------------------------
func TestOnChange_Good_Fires(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
func TestEvents_OnChange_Good_Fires(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
var events []Event
var mu sync.Mutex
var eventsMutex sync.Mutex
unreg := s.OnChange(func(e Event) {
mu.Lock()
events = append(events, e)
mu.Unlock()
unregister := storeInstance.OnChange(func(event Event) {
eventsMutex.Lock()
events = append(events, event)
eventsMutex.Unlock()
})
defer unreg()
defer unregister()
require.NoError(t, s.Set("g", "k", "v"))
require.NoError(t, s.Delete("g", "k"))
assertNoError(t, storeInstance.Set("g", "k", "v"))
assertNoError(t, storeInstance.Delete("g", "k"))
mu.Lock()
defer mu.Unlock()
require.Len(t, events, 2)
assert.Equal(t, EventSet, events[0].Type)
assert.Equal(t, EventDelete, events[1].Type)
eventsMutex.Lock()
defer eventsMutex.Unlock()
assertLen(t, events, 2)
assertEqual(t, EventSet, events[0].Type)
assertEqual(t, EventDelete, events[1].Type)
}
// ---------------------------------------------------------------------------
// OnChange — unregister stops callback
// ---------------------------------------------------------------------------
func TestEvents_OnChange_Good_GroupFilteredCallback(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
func TestOnChange_Good_Unregister(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
var count atomic.Int32
unreg := s.OnChange(func(e Event) {
count.Add(1)
var seen []string
unregister := storeInstance.OnChange(func(event Event) {
if event.Group != "config" {
return
}
seen = append(seen, event.Key+"="+event.Value)
})
defer unregister()
require.NoError(t, s.Set("g", "k", "v1"))
assert.Equal(t, int32(1), count.Load())
assertNoError(t, storeInstance.Set("config", "theme", "dark"))
assertNoError(t, storeInstance.Set("other", "theme", "light"))
unreg()
require.NoError(t, s.Set("g", "k", "v2"))
assert.Equal(t, int32(1), count.Load(), "callback should not fire after unregister")
// Calling unreg again should not panic.
unreg()
assertEqual(t, []string{"theme=dark"}, seen)
}
// ---------------------------------------------------------------------------
// Buffer-full doesn't block the writer
// ---------------------------------------------------------------------------
func TestEvents_OnChange_Good_ReentrantSubscriptionChanges(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
func TestWatch_Good_BufferFullDoesNotBlock(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
var (
seen []string
seenMutex sync.Mutex
nestedEvents <-chan Event
nestedActive bool
nestedStopped bool
unregisterNested = func() {}
)
w := s.Watch("g", "*")
defer s.Unwatch(w)
unregisterPrimary := storeInstance.OnChange(func(event Event) {
seenMutex.Lock()
seen = append(seen, event.Key)
seenMutex.Unlock()
// Fill the buffer (cap 16) plus extra writes. None should block.
done := make(chan struct{})
go func() {
defer close(done)
for i := range 32 {
require.NoError(t, s.Set("g", core.Sprintf("k%d", i), "v"))
if !nestedActive {
nestedEvents = storeInstance.Watch("config")
unregisterNested = storeInstance.OnChange(func(nested Event) {
seenMutex.Lock()
seen = append(seen, "nested:"+nested.Key)
seenMutex.Unlock()
})
nestedActive = true
return
}
}()
select {
case <-done:
// Success: all writes completed without blocking.
case <-time.After(5 * time.Second):
t.Fatal("writes blocked — buffer-full condition caused deadlock")
}
// Drain what we can — should get exactly watcherBufSize events.
var received int
for range watcherBufSize {
select {
case <-w.Ch:
received++
default:
if !nestedStopped {
storeInstance.Unwatch("config", nestedEvents)
unregisterNested()
nestedStopped = true
}
})
defer unregisterPrimary()
assertNoError(t, storeInstance.Set("config", "first", "dark"))
assertNoError(t, storeInstance.Set("config", "second", "light"))
assertNoError(t, storeInstance.Set("config", "third", "blue"))
seenMutex.Lock()
assertEqual(t, []string{"first", "second", "nested:second", "third"}, seen)
seenMutex.Unlock()
select {
case event, open := <-nestedEvents:
assertTrue(t, open)
assertEqual(t, "second", event.Key)
case <-time.After(time.Second):
t.Fatal("timed out waiting for nested watcher event")
}
assert.Equal(t, watcherBufSize, received, "should receive exactly buffer-size events")
_, open := <-nestedEvents
assertFalsef(t, open, "nested watcher should be closed after callback-driven unwatch")
}
// ---------------------------------------------------------------------------
// Multiple watchers on same key
// ---------------------------------------------------------------------------
func TestEvents_Notify_Good_PopulatesTimestamp(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
func TestWatch_Good_MultipleWatchersSameKey(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
events := storeInstance.Watch("config")
defer storeInstance.Unwatch("config", events)
w1 := s.Watch("g", "k")
w2 := s.Watch("g", "k")
defer s.Unwatch(w1)
defer s.Unwatch(w2)
require.NoError(t, s.Set("g", "k", "v"))
// Both watchers should receive the event independently.
select {
case e := <-w1.Ch:
assert.Equal(t, EventSet, e.Type)
case <-time.After(time.Second):
t.Fatal("w1 timed out")
}
storeInstance.notify(Event{Type: EventSet, Group: "config", Key: "theme", Value: "dark"})
select {
case e := <-w2.Ch:
assert.Equal(t, EventSet, e.Type)
case event := <-events:
assertFalse(t, event.Timestamp.IsZero())
assertEqual(t, "config", event.Group)
assertEqual(t, "theme", event.Key)
case <-time.After(time.Second):
t.Fatal("w2 timed out")
t.Fatal("timed out waiting for timestamped event")
}
}
// ---------------------------------------------------------------------------
// Concurrent Watch/Unwatch during writes (race test)
// ---------------------------------------------------------------------------
func TestEvents_Watch_Good_BufferDrops(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
func TestWatch_Good_ConcurrentWatchUnwatch(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
events := storeInstance.Watch("g")
defer storeInstance.Unwatch("g", events)
const goroutines = 10
const ops = 50
for i := 0; i < watcherEventBufferCapacity+8; i++ {
assertNoError(t, storeInstance.Set("g", core.Sprintf("k-%d", i), "v"))
}
received := drainEvents(events, watcherEventBufferCapacity, time.Second)
assertLessOrEqual(t, len(received), watcherEventBufferCapacity)
}
func TestEvents_Watch_Good_ConcurrentWatchUnwatch(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
const workers = 10
var wg sync.WaitGroup
wg.Add(workers)
// Writers — continuously mutate the store.
wg.Go(func() {
for i := range goroutines * ops {
_ = s.Set("g", core.Sprintf("k%d", i), "v")
}
})
// Watchers — add and remove watchers concurrently.
for range goroutines {
wg.Go(func() {
for range ops {
w := s.Watch("g", "*")
// Drain a few events to exercise the channel path.
for range 3 {
select {
case <-w.Ch:
case <-time.After(time.Millisecond):
}
}
s.Unwatch(w)
}
})
for worker := 0; worker < workers; worker++ {
go func(worker int) {
defer wg.Done()
group := core.Sprintf("g-%d", worker)
events := storeInstance.Watch(group)
_ = storeInstance.Set(group, "k", "v")
storeInstance.Unwatch(group, events)
}(worker)
}
wg.Wait()
// If we got here without a data race or panic, the test passes.
}
// ---------------------------------------------------------------------------
// ScopedStore events — prefixed group name
// ---------------------------------------------------------------------------
func TestEvents_Watch_Good_ScopedStoreEventGroup(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
func TestWatch_Good_ScopedStoreEvents(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNotNil(t, scopedStore)
sc, err := NewScoped(s, "tenant-a")
require.NoError(t, err)
events := storeInstance.Watch("tenant-a:config")
defer storeInstance.Unwatch("tenant-a:config", events)
// Watch on the underlying store with the full prefixed group name.
w := s.Watch("tenant-a:config", "theme")
defer s.Unwatch(w)
require.NoError(t, sc.Set("config", "theme", "dark"))
assertNoError(t, scopedStore.SetIn("config", "theme", "dark"))
select {
case e := <-w.Ch:
assert.Equal(t, EventSet, e.Type)
assert.Equal(t, "tenant-a:config", e.Group)
assert.Equal(t, "theme", e.Key)
assert.Equal(t, "dark", e.Value)
case event := <-events:
assertEqual(t, "tenant-a:config", event.Group)
assertEqual(t, "theme", event.Key)
case <-time.After(time.Second):
t.Fatal("timed out waiting for scoped store event")
t.Fatal("timed out waiting for scoped event")
}
}
// ---------------------------------------------------------------------------
// EventType.String()
// ---------------------------------------------------------------------------
func TestEvents_Watch_Good_SetWithTTL(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
func TestEventType_String(t *testing.T) {
assert.Equal(t, "set", EventSet.String())
assert.Equal(t, "delete", EventDelete.String())
assert.Equal(t, "delete_group", EventDeleteGroup.String())
assert.Equal(t, "unknown", EventType(99).String())
}
events := storeInstance.Watch("g")
defer storeInstance.Unwatch("g", events)
// ---------------------------------------------------------------------------
// SetWithTTL emits events
// ---------------------------------------------------------------------------
func TestWatch_Good_SetWithTTLEmitsEvent(t *testing.T) {
s, _ := New(":memory:")
defer s.Close()
w := s.Watch("g", "k")
defer s.Unwatch(w)
require.NoError(t, s.SetWithTTL("g", "k", "ttl-val", time.Hour))
assertNoError(t, storeInstance.SetWithTTL("g", "ephemeral", "v", time.Minute))
select {
case e := <-w.Ch:
assert.Equal(t, EventSet, e.Type)
assert.Equal(t, "g", e.Group)
assert.Equal(t, "k", e.Key)
assert.Equal(t, "ttl-val", e.Value)
case event := <-events:
assertEqual(t, EventSet, event.Type)
assertEqual(t, "ephemeral", event.Key)
case <-time.After(time.Second):
t.Fatal("timed out waiting for SetWithTTL event")
t.Fatal("timed out waiting for TTL event")
}
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
func TestEvents_EventType_Good_String(t *testing.T) {
assertEqual(t, "set", EventSet.String())
assertEqual(t, "delete", EventDelete.String())
assertEqual(t, "delete_group", EventDeleteGroup.String())
assertEqual(t, "unknown", EventType(99).String())
}
// drainEvents collects up to n events from ch within the given timeout.
func drainEvents(ch <-chan Event, n int, timeout time.Duration) []Event {
var events []Event
func drainEvents(events <-chan Event, count int, timeout time.Duration) []Event {
received := make([]Event, 0, count)
deadline := time.After(timeout)
for range n {
for len(received) < count {
select {
case e := <-ch:
events = append(events, e)
case event := <-events:
received = append(received, event)
case <-deadline:
return events
return received
}
}
return events
return received
}

go.mod

@ -1,26 +1,46 @@
module dappco.re/go/store

go 1.26.0

require (
	dappco.re/go/core v0.8.0-alpha.1
	dappco.re/go/core/io v0.4.2
	github.com/influxdata/influxdb-client-go/v2 v2.14.0 // Note: InfluxDB storage client; no core equivalent
	github.com/klauspost/compress v1.18.5 // Note: compression codecs for storage payloads; no core equivalent
	github.com/stretchr/testify v1.11.1
	modernc.org/sqlite v1.47.0 // Note: pure-Go SQLite driver; no core equivalent
)

require (
	github.com/andybalholm/brotli v1.2.0 // indirect
	github.com/apache/arrow-go/v18 v18.1.0 // indirect
	github.com/apapsch/go-jsonmerge/v2 v2.0.0 // indirect
	github.com/go-viper/mapstructure/v2 v2.5.0 // indirect
	github.com/goccy/go-json v0.10.6 // indirect
	github.com/google/flatbuffers v25.1.24+incompatible // indirect
	github.com/google/go-cmp v0.7.0 // indirect
	github.com/influxdata/line-protocol v0.0.0-20200327222509-2487e7298839 // indirect
	github.com/klauspost/cpuid/v2 v2.3.0 // indirect
	github.com/oapi-codegen/runtime v1.0.0 // indirect
	github.com/pierrec/lz4/v4 v4.1.22 // indirect
	github.com/zeebo/xxh3 v1.0.2 // indirect
	golang.org/x/exp v0.0.0-20260312153236-7ab1446f8b90 // indirect
	golang.org/x/mod v0.34.0 // indirect
	golang.org/x/net v0.53.0 // indirect
	golang.org/x/sync v0.20.0 // indirect
	golang.org/x/telemetry v0.0.0-20260311193753-579e4da9a98c // indirect
	golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da // indirect
	gonum.org/v1/gonum v0.17.0 // indirect
)

require (
	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
	github.com/dustin/go-humanize v1.0.1 // indirect
	github.com/google/uuid v1.6.0 // indirect
	github.com/kr/text v0.2.0 // indirect
	github.com/marcboeker/go-duckdb v1.8.5 // Note: DuckDB workspace buffer driver; no core equivalent
	github.com/mattn/go-isatty v0.0.20 // indirect
	github.com/ncruces/go-strftime v1.0.0 // indirect
	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
	github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
	golang.org/x/sys v0.43.0 // indirect
	golang.org/x/tools v0.43.0 // indirect
	gopkg.in/yaml.v3 v3.0.1 // indirect
	modernc.org/libc v1.70.0 // indirect
	modernc.org/mathutil v1.7.1 // indirect
	modernc.org/memory v1.11.0 // indirect

go.sum

@ -1,46 +1,96 @@
dappco.re/go/core v0.8.0-alpha.1 h1:gj7+Scv+L63Z7wMxbJYHhaRFkHJo2u4MMPuUSv/Dhtk=
dappco.re/go/core v0.8.0-alpha.1/go.mod h1:f2/tBZ3+3IqDrg2F5F598llv0nmb/4gJVCFzM5geE4A=
dappco.re/go/core/io v0.4.2 h1:SHNF/xMPyFnKWWYoFW5Y56eiuGVL/mFa1lfIw/530ls=
dappco.re/go/core/io v0.4.2/go.mod h1:w71dukyunczLb8frT9JOd5B78PjwWQD3YAXiCt3AcPA=
github.com/RaveNoX/go-jsoncommentstrip v1.0.0/go.mod h1:78ihd09MekBnJnxpICcwzCMzGrKSKYe4AqU6PDYYpjk=
github.com/andybalholm/brotli v1.2.0 h1:ukwgCxwYrmACq68yiUqwIWnGY0cTPox/M94sVwToPjQ=
github.com/andybalholm/brotli v1.2.0/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=
github.com/apache/arrow-go/v18 v18.1.0 h1:agLwJUiVuwXZdwPYVrlITfx7bndULJ/dggbnLFgDp/Y=
github.com/apache/arrow-go/v18 v18.1.0/go.mod h1:tigU/sIgKNXaesf5d7Y95jBBKS5KsxTqYBKXFsvKzo0=
github.com/apache/thrift v0.21.0 h1:tdPmh/ptjE1IJnhbhrcl2++TauVjy242rkV/UzJChnE=
github.com/apache/thrift v0.21.0/go.mod h1:W1H8aR/QRtYNvrPeFXBtobyRkd0/YVhTc6i07XIAgDw=
github.com/apapsch/go-jsonmerge/v2 v2.0.0 h1:axGnT1gRIfimI7gJifB699GoE/oq+F2MU7Dml6nw9rQ=
github.com/apapsch/go-jsonmerge/v2 v2.0.0/go.mod h1:lvDnEdqiQrp0O42VQGgmlKpxL1AP2+08jFMw88y4klk=
github.com/bmatcuk/doublestar v1.1.1/go.mod h1:UD6OnuiIn0yFxxA2le/rnRU1G4RaI4UvFv1sNto9p6w=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/go-viper/mapstructure/v2 v2.5.0 h1:vM5IJoUAy3d7zRSVtIwQgBj7BiWtMPfmPEgAXnvj1Ro=
github.com/go-viper/mapstructure/v2 v2.5.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/goccy/go-json v0.10.6 h1:p8HrPJzOakx/mn/bQtjgNjdTcN+/S6FcG2CTtQOrHVU=
github.com/goccy/go-json v0.10.6/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/flatbuffers v25.1.24+incompatible h1:4wPqL3K7GzBd1CwyhSd3usxLKOaJN/AC6puCca6Jm7o=
github.com/google/flatbuffers v25.1.24+incompatible/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/influxdata/influxdb-client-go/v2 v2.14.0 h1:AjbBfJuq+QoaXNcrova8smSjwJdUHnwvfjMF71M1iI4=
github.com/influxdata/influxdb-client-go/v2 v2.14.0/go.mod h1:Ahpm3QXKMJslpXl3IftVLVezreAUtBOTZssDrjZEFHI=
github.com/influxdata/line-protocol v0.0.0-20200327222509-2487e7298839 h1:W9WBk7wlPfJLvMCdtV4zPulc4uCPrlywQOmbFOhgQNU=
github.com/influxdata/line-protocol v0.0.0-20200327222509-2487e7298839/go.mod h1:xaLFMmpvUxqXtVkUJfg9QmT88cDaCJ3ZKgdZ78oO8Qo=
github.com/juju/gnuflag v0.0.0-20171113085948-2ce1bb71843d/go.mod h1:2PavIy+JPciBPrBUjwbNvtwB6RQlve+hkpll6QSNmOE=
github.com/klauspost/asmfmt v1.3.2 h1:4Ri7ox3EwapiOjCki+hw14RyKk201CN4rzyCJRFLpK4=
github.com/klauspost/asmfmt v1.3.2/go.mod h1:AG8TuvYojzulgDAMCnYn50l/5QV3Bs/tp6j0HLHbNSE=
github.com/klauspost/compress v1.18.5 h1:/h1gH5Ce+VWNLSWqPzOVn6XBO+vJbCNGvjoaGBFW2IE=
github.com/klauspost/compress v1.18.5/go.mod h1:cwPg85FWrGar70rWktvGQj8/hthj3wpl0PGDogxkrSQ=
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/marcboeker/go-duckdb v1.8.5 h1:tkYp+TANippy0DaIOP5OEfBEwbUINqiFqgwMQ44jME0=
github.com/marcboeker/go-duckdb v1.8.5/go.mod h1:6mK7+WQE4P4u5AFLvVBmhFxY5fvhymFptghgJX6B+/8=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/minio/asm2plan9s v0.0.0-20200509001527-cdd76441f9d8 h1:AMFGa4R4MiIpspGNG7Z948v4n35fFGB3RR3G/ry4FWs=
github.com/minio/asm2plan9s v0.0.0-20200509001527-cdd76441f9d8/go.mod h1:mC1jAcsrzbxHt8iiaC+zU4b1ylILSosueou12R++wfY=
github.com/minio/c2goasm v0.0.0-20190812172519-36a3d3bbc4f3 h1:+n/aFZefKZp7spd8DFdX7uMikMLXX4oubIzJF4kv/wI=
github.com/minio/c2goasm v0.0.0-20190812172519-36a3d3bbc4f3/go.mod h1:RagcQ7I8IeTMnF8JTXieKnO4Z6JCsikNEzj0DwauVzE=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/oapi-codegen/runtime v1.0.0 h1:P4rqFX5fMFWqRzY9M/3YF9+aPSPPB06IzP2P7oOxrWo=
github.com/oapi-codegen/runtime v1.0.0/go.mod h1:LmCUMQuPB4M/nLXilQXhHw+BLZdDb18B34OO356yJ/A=
github.com/pierrec/lz4/v4 v4.1.22 h1:cKFw6uJDK+/gfw5BcDL0JL5aBsAFdsIT18eRtLj7VIU=
github.com/pierrec/lz4/v4 v4.1.22/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/spkg/bom v0.0.0-20160624110644-59b7046e48ad/go.mod h1:qLr4V1qq6nMqFKkMo8ZTx3f+BZEkzsRUY10Xsm2mwU0=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/zeebo/assert v1.3.0 h1:g7C04CbJuIDKNPFHmsk4hwZDO5O+kntRxzaUoNXj+IQ=
github.com/zeebo/assert v1.3.0/go.mod h1:Pq9JiuJQpG8JLJdtkwrJESF0Foym2/D9XMU5ciN/wJ0=
github.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0=
github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA=
golang.org/x/exp v0.0.0-20260312153236-7ab1446f8b90 h1:jiDhWWeC7jfWqR9c/uplMOqJ0sbNlNWv0UkzE0vX1MA=
golang.org/x/exp v0.0.0-20260312153236-7ab1446f8b90/go.mod h1:xE1HEv6b+1SCZ5/uscMRjUBKtIxworgEcEi+/n9NQDQ=
golang.org/x/mod v0.34.0 h1:xIHgNUUnW6sYkcM5Jleh05DvLOtwc6RitGHbDk4akRI=
golang.org/x/mod v0.34.0/go.mod h1:ykgH52iCZe79kzLLMhyCUzhMci+nQj+0XkbXpNYtVjY=
golang.org/x/net v0.53.0 h1:d+qAbo5L0orcWAr0a9JweQpjXF19LMXJE8Ey7hwOdUA=
golang.org/x/net v0.53.0/go.mod h1:JvMuJH7rrdiCfbeHoo3fCQU24Lf5JJwT9W3sJFulfgs=
golang.org/x/sync v0.20.0 h1:e0PTpb7pjO8GAtTs2dQ6jYa5BWYlMuX047Dco/pItO4=
golang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.43.0 h1:Rlag2XtaFTxp19wS8MXlJwTvoh8ArU6ezoyFsMyCTNI=
golang.org/x/sys v0.43.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
golang.org/x/telemetry v0.0.0-20260311193753-579e4da9a98c h1:6a8FdnNk6bTXBjR4AGKFgUKuo+7GnR3FX5L7CbveeZc=
golang.org/x/telemetry v0.0.0-20260311193753-579e4da9a98c/go.mod h1:TpUTTEp9frx7rTdLpC9gFG9kdI7zVLFTFFlqaH2Cncw=
golang.org/x/tools v0.43.0 h1:12BdW9CeB3Z+J/I/wj34VMl8X+fEXBxVR90JeMX5E7s=
golang.org/x/tools v0.43.0/go.mod h1:uHkMso649BX2cZK6+RpuIPXS3ho2hZo4FVwfoy1vIk0=
golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da h1:noIWHXmPHxILtqtCOPIhSt0ABwskkZKjD3bXGnZGpNY=
golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da/go.mod h1:NDW/Ps6MPRej6fsCIbMTohpP40sJ/P/vI1MoTEGwX90=
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis=

import.go (new file)

@ -0,0 +1,668 @@
// SPDX-License-Identifier: EUPL-1.2

package store

import (
	"bufio"
	"database/sql"
	"io"
	"io/fs"

	core "dappco.re/go/core"
)
// localFs provides unrestricted filesystem access for import operations.
var localFs = (&core.Fs{}).New("/")

type duckDBImportSession interface {
	exec(query string, args ...any) error
	queryRowScan(query string, dest any, args ...any) error
}

type duckDBImportTransaction struct {
	transaction *sql.Tx
}

func (session duckDBImportTransaction) exec(query string, args ...any) error {
	_, err := session.transaction.Exec(query, args...)
	if err != nil {
		return core.E("store.duckDBImportTransaction.Exec", "execute query", err)
	}
	return nil
}

func (session duckDBImportTransaction) queryRowScan(query string, dest any, args ...any) error {
	if err := session.transaction.QueryRow(query, args...).Scan(dest); err != nil {
		return core.E("store.duckDBImportTransaction.QueryRowScan", "scan row", err)
	}
	return nil
}
// ScpFunc is a callback for executing SCP file transfers.
// The function receives remote source and local destination paths.
//
// Usage example:
//
// scp := func(remote, local string) error { return exec.Command("scp", remote, local).Run() }
type ScpFunc func(remote, local string) error
// ScpDirFunc is a callback for executing recursive SCP directory transfers.
// The function receives remote source and local destination directory paths.
//
// Usage example:
//
// scpDir := func(remote, localDir string) error { return exec.Command("scp", "-r", remote, localDir).Run() }
type ScpDirFunc func(remote, localDir string) error
// ImportConfig holds options for the import-all operation.
//
// Usage example:
//
// cfg := store.ImportConfig{DataDir: "/Volumes/Data/lem", SkipM3: true}
type ImportConfig struct {
// SkipM3 disables pulling files from the M3 host.
//
// Usage example:
//
// cfg.SkipM3 // true
SkipM3 bool
// DataDir is the local directory containing LEM data files.
//
// Usage example:
//
// cfg.DataDir // "/Volumes/Data/lem"
DataDir string
// M3Host is the SSH hostname for SCP operations. Defaults to "m3".
//
// Usage example:
//
// cfg.M3Host // "m3"
M3Host string
// Scp copies a single file from the remote host. If nil, SCP is skipped.
//
// Usage example:
//
// cfg.Scp("m3:/path/file.jsonl", "/local/file.jsonl")
Scp ScpFunc
// ScpDir copies a directory recursively from the remote host. If nil, SCP is skipped.
//
// Usage example:
//
// cfg.ScpDir("m3:/path/dir/", "/local/dir/")
ScpDir ScpDirFunc
}
// ImportAll imports all LEM data into DuckDB from M3 and local files.
//
// Usage example:
//
// err := store.ImportAll(db, store.ImportConfig{DataDir: "/Volumes/Data/lem"}, os.Stdout)
func ImportAll(db *DuckDB, cfg ImportConfig, w io.Writer) error {
if db == nil || db.Conn() == nil {
return core.E("store.ImportAll", "database is nil", nil)
}
m3Host := cfg.M3Host
if m3Host == "" {
m3Host = "m3"
}
totals := make(map[string]int)
// ── 1. Golden set ──
goldenPath := core.JoinPath(cfg.DataDir, "gold-15k.jsonl")
if !cfg.SkipM3 && cfg.Scp != nil {
core.Print(w, " Pulling golden set from M3...")
remote := core.Sprintf("%s:/Volumes/Data/lem/responses/gold-15k.jsonl", m3Host)
if err := cfg.Scp(remote, goldenPath); err != nil {
core.Print(w, " WARNING: could not pull golden set from M3: %v", err)
}
}
transaction, err := db.Conn().Begin()
if err != nil {
return core.E("store.ImportAll", "begin import transaction", err)
}
committed := false
defer func() {
if !committed {
_ = transaction.Rollback()
}
}()
importSession := duckDBImportTransaction{transaction: transaction}
if isFile(goldenPath) {
if err := importSession.exec("DROP TABLE IF EXISTS golden_set"); err != nil {
return core.E("store.ImportAll", "drop golden_set", err)
}
err := importSession.exec(core.Sprintf(`
CREATE TABLE golden_set AS
SELECT
idx::INT AS idx,
seed_id::VARCHAR AS seed_id,
domain::VARCHAR AS domain,
voice::VARCHAR AS voice,
prompt::VARCHAR AS prompt,
response::VARCHAR AS response,
gen_time::DOUBLE AS gen_time,
length(response)::INT AS char_count,
length(response) - length(replace(response, ' ', '')) + 1 AS word_count
FROM read_json_auto('%s', maximum_object_size=1048576)
`, escapeSQLPath(goldenPath)))
if err != nil {
return core.E("store.ImportAll", "import golden_set", err)
}
var n int
if err := importSession.queryRowScan("SELECT count(*) FROM golden_set", &n); err != nil {
return core.E("store.ImportAll", "count golden_set", err)
}
totals["golden_set"] = n
core.Print(w, " golden_set: %d rows", n)
}
// ── 2. Training examples ──
trainingDirs := []struct {
name string
files []string
}{
{"training", []string{"training/train.jsonl", "training/valid.jsonl", "training/test.jsonl"}},
{"training-2k", []string{"training-2k/train.jsonl", "training-2k/valid.jsonl", "training-2k/test.jsonl"}},
{"training-expanded", []string{"training-expanded/train.jsonl", "training-expanded/valid.jsonl"}},
{"training-book", []string{"training-book/train.jsonl", "training-book/valid.jsonl", "training-book/test.jsonl"}},
{"training-conv", []string{"training-conv/train.jsonl", "training-conv/valid.jsonl", "training-conv/test.jsonl"}},
{"gold-full", []string{"gold-full/train.jsonl", "gold-full/valid.jsonl"}},
{"sovereignty-gold", []string{"sovereignty-gold/train.jsonl", "sovereignty-gold/valid.jsonl"}},
{"composure-lessons", []string{"composure-lessons/train.jsonl", "composure-lessons/valid.jsonl"}},
{"watts-full", []string{"watts-full/train.jsonl", "watts-full/valid.jsonl"}},
{"watts-expanded", []string{"watts-expanded/train.jsonl", "watts-expanded/valid.jsonl"}},
{"watts-composure", []string{"watts-composure-merged/train.jsonl", "watts-composure-merged/valid.jsonl"}},
{"western-fresh", []string{"western-fresh/train.jsonl", "western-fresh/valid.jsonl"}},
{"deepseek-soak", []string{"deepseek-western-soak/train.jsonl", "deepseek-western-soak/valid.jsonl"}},
{"russian-bridge", []string{"russian-bridge/train.jsonl", "russian-bridge/valid.jsonl"}},
}
trainingRoot := cfg.DataDir
if !cfg.SkipM3 && cfg.Scp != nil {
core.Print(w, " Pulling training sets from M3...")
for _, trainingDir := range trainingDirs {
for _, relativePath := range trainingDir.files {
localPath := core.JoinPath(trainingRoot, relativePath)
if result := localFs.EnsureDir(core.PathDir(localPath)); !result.OK {
return core.E("store.ImportAll", "ensure training directory", result.Value.(error))
}
remote := core.Sprintf("%s:/Volumes/Data/lem/%s", m3Host, relativePath)
_ = cfg.Scp(remote, localPath) // ignore errors, file might not exist
}
}
}
if err := importSession.exec("DROP TABLE IF EXISTS training_examples"); err != nil {
return core.E("store.ImportAll", "drop training_examples", err)
}
if err := importSession.exec(`
CREATE TABLE training_examples (
source VARCHAR,
split VARCHAR,
prompt TEXT,
response TEXT,
num_turns INT,
full_messages TEXT,
char_count INT
)
`); err != nil {
return core.E("store.ImportAll", "create training_examples", err)
}
trainingTotal := 0
for _, trainingDir := range trainingDirs {
for _, relativePath := range trainingDir.files {
localPath := core.JoinPath(trainingRoot, relativePath)
if !isFile(localPath) {
continue
}
split := "train"
if core.Contains(relativePath, "valid") {
split = "valid"
} else if core.Contains(relativePath, "test") {
split = "test"
}
n, err := importTrainingFile(importSession, localPath, trainingDir.name, split)
if err != nil {
return core.E("store.ImportAll", core.Sprintf("import training file %s", localPath), err)
}
trainingTotal += n
}
}
totals["training_examples"] = trainingTotal
core.Print(w, " training_examples: %d rows", trainingTotal)
// ── 3. Benchmark results ──
benchLocal := core.JoinPath(cfg.DataDir, "benchmarks")
if result := localFs.EnsureDir(benchLocal); !result.OK {
return core.E("store.ImportAll", core.Sprintf("ensure benchmark directory %s", benchLocal), result.Value.(error))
}
if !cfg.SkipM3 {
core.Print(w, " Pulling benchmarks from M3...")
if cfg.Scp != nil {
for _, benchmarkName := range []string{"truthfulqa", "gsm8k", "do_not_answer", "toxigen"} {
remote := core.Sprintf("%s:/Volumes/Data/lem/benchmarks/%s.jsonl", m3Host, benchmarkName)
_ = cfg.Scp(remote, core.JoinPath(benchLocal, benchmarkName+".jsonl"))
}
}
if cfg.ScpDir != nil {
for _, benchmarkSubdirectory := range []string{"results", "scale_results", "cross_arch_results", "deepseek-r1-7b"} {
localSubdirectory := core.JoinPath(benchLocal, benchmarkSubdirectory)
if result := localFs.EnsureDir(localSubdirectory); !result.OK {
return core.E("store.ImportAll", core.Sprintf("ensure benchmark subdirectory %s", localSubdirectory), result.Value.(error))
}
remote := core.Sprintf("%s:/Volumes/Data/lem/benchmarks/%s/", m3Host, benchmarkSubdirectory)
_ = cfg.ScpDir(remote, localSubdirectory+"/")
}
}
}
if err := importSession.exec("DROP TABLE IF EXISTS benchmark_results"); err != nil {
return core.E("store.ImportAll", "drop benchmark_results", err)
}
if err := importSession.exec(`
CREATE TABLE benchmark_results (
source VARCHAR, id VARCHAR, benchmark VARCHAR, model VARCHAR,
prompt TEXT, response TEXT, elapsed_seconds DOUBLE, domain VARCHAR
)
`); err != nil {
return core.E("store.ImportAll", "create benchmark_results", err)
}
benchTotal := 0
for _, benchmarkSubdirectory := range []string{"results", "scale_results", "cross_arch_results", "deepseek-r1-7b"} {
resultDir := core.JoinPath(benchLocal, benchmarkSubdirectory)
matches := core.PathGlob(core.JoinPath(resultDir, "*.jsonl"))
for _, jsonFile := range matches {
n, err := importBenchmarkFile(importSession, jsonFile, benchmarkSubdirectory)
if err != nil {
return core.E("store.ImportAll", core.Sprintf("import benchmark file %s", jsonFile), err)
}
benchTotal += n
}
}
// Also import standalone benchmark files.
for _, benchmarkFile := range []string{"lem_bench", "lem_ethics", "lem_ethics_allen", "instruction_tuned", "abliterated", "base_pt"} {
localPath := core.JoinPath(benchLocal, benchmarkFile+".jsonl")
if !isFile(localPath) {
if !cfg.SkipM3 && cfg.Scp != nil {
remote := core.Sprintf("%s:/Volumes/Data/lem/benchmarks/%s.jsonl", m3Host, benchmarkFile)
_ = cfg.Scp(remote, localPath)
}
}
if isFile(localPath) {
n, err := importBenchmarkFile(importSession, localPath, "benchmark")
if err != nil {
return core.E("store.ImportAll", core.Sprintf("import benchmark file %s", localPath), err)
}
benchTotal += n
}
}
totals["benchmark_results"] = benchTotal
core.Print(w, " benchmark_results: %d rows", benchTotal)
// ── 4. Benchmark questions ──
if err := importSession.exec("DROP TABLE IF EXISTS benchmark_questions"); err != nil {
return core.E("store.ImportAll", "drop benchmark_questions", err)
}
if err := importSession.exec(`
CREATE TABLE benchmark_questions (
benchmark VARCHAR, id VARCHAR, question TEXT,
best_answer TEXT, correct_answers TEXT, incorrect_answers TEXT, category VARCHAR
)
`); err != nil {
return core.E("store.ImportAll", "create benchmark_questions", err)
}
benchQTotal := 0
for _, bname := range []string{"truthfulqa", "gsm8k", "do_not_answer", "toxigen"} {
local := core.JoinPath(benchLocal, bname+".jsonl")
if isFile(local) {
n, err := importBenchmarkQuestions(importSession, local, bname)
if err != nil {
return core.E("store.ImportAll", core.Sprintf("import benchmark questions %s", local), err)
}
benchQTotal += n
}
}
totals["benchmark_questions"] = benchQTotal
core.Print(w, " benchmark_questions: %d rows", benchQTotal)
// ── 5. Seeds ──
if err := importSession.exec("DROP TABLE IF EXISTS seeds"); err != nil {
return core.E("store.ImportAll", "drop seeds", err)
}
if err := importSession.exec(`
CREATE TABLE seeds (
source_file VARCHAR, region VARCHAR, seed_id VARCHAR, domain VARCHAR, prompt TEXT
)
`); err != nil {
return core.E("store.ImportAll", "create seeds", err)
}
seedTotal := 0
seedDirs := []string{core.JoinPath(cfg.DataDir, "seeds")}
for _, seedDir := range seedDirs {
if !isDir(seedDir) {
continue
}
n, err := importSeeds(importSession, seedDir)
if err != nil {
return core.E("store.ImportAll", core.Sprintf("import seeds %s", seedDir), err)
}
seedTotal += n
}
totals["seeds"] = seedTotal
core.Print(w, " seeds: %d rows", seedTotal)
if err := transaction.Commit(); err != nil {
return core.E("store.ImportAll", "commit import transaction", err)
}
committed = true
// ── Summary ──
grandTotal := 0
core.Print(w, "\n%s", repeat("=", 50))
core.Print(w, "LEM Database Import Complete")
core.Print(w, "%s", repeat("=", 50))
for table, count := range totals {
core.Print(w, " %-25s %8d", table, count)
grandTotal += count
}
core.Print(w, " %s", repeat("-", 35))
core.Print(w, " %-25s %8d", "TOTAL", grandTotal)
core.Print(w, "\nDatabase: %s", db.Path())
return nil
}
func importTrainingFile(db duckDBImportSession, path, source, split string) (int, error) {
r := localFs.Open(path)
if !r.OK {
return 0, core.E("store.importTrainingFile", core.Sprintf("open %s", path), r.Value.(error))
}
f := r.Value.(io.ReadCloser)
defer func() { _ = f.Close() }()
count := 0
scanner := bufio.NewScanner(f)
scanner.Buffer(make([]byte, 1024*1024), 1024*1024)
lineNumber := 0
for scanner.Scan() {
lineNumber++
var rec struct {
Messages []ChatMessage `json:"messages"`
}
if r := core.JSONUnmarshal(scanner.Bytes(), &rec); !r.OK {
parseErr, _ := r.Value.(error)
return count, core.E("store.importTrainingFile", core.Sprintf("parse %s line %d", path, lineNumber), parseErr)
}
prompt := ""
response := ""
assistantCount := 0
for _, m := range rec.Messages {
if m.Role == "user" && prompt == "" {
prompt = m.Content
}
if m.Role == "assistant" {
if response == "" {
response = m.Content
}
assistantCount++
}
}
msgsJSON := core.JSONMarshalString(rec.Messages)
if err := db.exec(`INSERT INTO training_examples VALUES (?, ?, ?, ?, ?, ?, ?)`,
source, split, prompt, response, assistantCount, msgsJSON, len(response)); err != nil {
return count, core.E("store.importTrainingFile", "insert training example", err)
}
count++
}
if err := scanner.Err(); err != nil {
return count, core.E("store.importTrainingFile", "scan training file", err)
}
return count, nil
}
func importBenchmarkFile(db duckDBImportSession, path, source string) (int, error) {
r := localFs.Open(path)
if !r.OK {
return 0, core.E("store.importBenchmarkFile", core.Sprintf("open %s", path), r.Value.(error))
}
f := r.Value.(io.ReadCloser)
defer func() { _ = f.Close() }()
count := 0
scanner := bufio.NewScanner(f)
scanner.Buffer(make([]byte, 1024*1024), 1024*1024)
lineNumber := 0
for scanner.Scan() {
lineNumber++
var rec map[string]any
if r := core.JSONUnmarshal(scanner.Bytes(), &rec); !r.OK {
parseErr, _ := r.Value.(error)
return count, core.E("store.importBenchmarkFile", core.Sprintf("parse %s line %d", path, lineNumber), parseErr)
}
if err := db.exec(`INSERT INTO benchmark_results VALUES (?, ?, ?, ?, ?, ?, ?, ?)`,
source,
core.Sprint(rec["id"]),
strOrEmpty(rec, "benchmark"),
strOrEmpty(rec, "model"),
strOrEmpty(rec, "prompt"),
strOrEmpty(rec, "response"),
floatOrZero(rec, "elapsed_seconds"),
strOrEmpty(rec, "domain"),
); err != nil {
return count, core.E("store.importBenchmarkFile", "insert benchmark result", err)
}
count++
}
if err := scanner.Err(); err != nil {
return count, core.E("store.importBenchmarkFile", "scan benchmark file", err)
}
return count, nil
}
func importBenchmarkQuestions(db duckDBImportSession, path, benchmark string) (int, error) {
r := localFs.Open(path)
if !r.OK {
return 0, core.E("store.importBenchmarkQuestions", core.Sprintf("open %s", path), r.Value.(error))
}
f := r.Value.(io.ReadCloser)
defer func() { _ = f.Close() }()
count := 0
scanner := bufio.NewScanner(f)
scanner.Buffer(make([]byte, 1024*1024), 1024*1024)
lineNumber := 0
for scanner.Scan() {
lineNumber++
var rec map[string]any
if r := core.JSONUnmarshal(scanner.Bytes(), &rec); !r.OK {
parseErr, _ := r.Value.(error)
return count, core.E("store.importBenchmarkQuestions", core.Sprintf("parse %s line %d", path, lineNumber), parseErr)
}
correctJSON := core.JSONMarshalString(rec["correct_answers"])
incorrectJSON := core.JSONMarshalString(rec["incorrect_answers"])
if err := db.exec(`INSERT INTO benchmark_questions VALUES (?, ?, ?, ?, ?, ?, ?)`,
benchmark,
core.Sprint(rec["id"]),
strOrEmpty(rec, "question"),
strOrEmpty(rec, "best_answer"),
correctJSON,
incorrectJSON,
strOrEmpty(rec, "category"),
); err != nil {
return count, core.E("store.importBenchmarkQuestions", "insert benchmark question", err)
}
count++
}
if err := scanner.Err(); err != nil {
return count, core.E("store.importBenchmarkQuestions", "scan benchmark questions", err)
}
return count, nil
}
func importSeeds(db duckDBImportSession, seedDir string) (int, error) {
count := 0
if err := walkDir(seedDir, func(path string) error {
if !core.HasSuffix(path, ".json") {
return nil
}
rel := core.TrimPrefix(path, seedDir+"/")
region := core.TrimSuffix(core.PathBase(path), ".json")
readResult := localFs.Read(path)
if !readResult.OK {
return core.E("store.importSeeds", core.Sprintf("read seed file %s", rel), readResult.Value.(error))
}
data := []byte(readResult.Value.(string))
// Try parsing as array or object with prompts/seeds field.
var seedsList []any
var raw any
if r := core.JSONUnmarshal(data, &raw); !r.OK {
err, _ := r.Value.(error)
return core.E("store.importSeeds", core.Sprintf("parse seed file %s", rel), err)
}
switch v := raw.(type) {
case []any:
seedsList = v
case map[string]any:
if prompts, ok := v["prompts"].([]any); ok {
seedsList = prompts
} else if seeds, ok := v["seeds"].([]any); ok {
seedsList = seeds
}
}
for _, s := range seedsList {
switch seed := s.(type) {
case map[string]any:
prompt := strOrEmpty(seed, "prompt")
if prompt == "" {
prompt = strOrEmpty(seed, "text")
}
if prompt == "" {
prompt = strOrEmpty(seed, "question")
}
if err := db.exec(`INSERT INTO seeds VALUES (?, ?, ?, ?, ?)`,
rel, region,
strOrEmpty(seed, "seed_id"),
strOrEmpty(seed, "domain"),
prompt,
); err != nil {
return core.E("store.importSeeds", "insert seed prompt", err)
}
count++
case string:
if err := db.exec(`INSERT INTO seeds VALUES (?, ?, ?, ?, ?)`,
rel, region, "", "", seed); err != nil {
return core.E("store.importSeeds", "insert seed string", err)
}
count++
}
}
return nil
}); err != nil {
return count, err
}
return count, nil
}
// walkDir recursively visits all regular files under root, calling fn for each.
func walkDir(root string, fn func(path string) error) error {
r := localFs.List(root)
if !r.OK {
return core.E("store.walkDir", core.Sprintf("list %s", root), r.Value.(error))
}
entries, ok := r.Value.([]fs.DirEntry)
if !ok {
return core.E("store.walkDir", core.Sprintf("list %s returned invalid entries", root), nil)
}
for _, entry := range entries {
full := core.JoinPath(root, entry.Name())
if entry.IsDir() {
if err := walkDir(full, fn); err != nil {
return err
}
} else {
if err := fn(full); err != nil {
return err
}
}
}
return nil
}
// strOrEmpty extracts a string value from a map, returning an empty string if
// the key is absent.
func strOrEmpty(m map[string]any, key string) string {
if v, ok := m[key]; ok {
return core.Sprint(v)
}
return ""
}
// floatOrZero extracts a float64 value from a map, returning zero if the key
// is absent or not a number.
func floatOrZero(m map[string]any, key string) float64 {
if v, ok := m[key]; ok {
if f, ok := v.(float64); ok {
return f
}
}
return 0
}
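The two lookup helpers above tolerate absent or mistyped keys because benchmark JSONL rows are heterogeneous. A minimal standalone sketch of the same behavior, substituting encoding/json and fmt for the core helpers:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// strOrEmpty returns the stringified value for key, or "" when absent.
func strOrEmpty(m map[string]any, key string) string {
	if v, ok := m[key]; ok {
		return fmt.Sprint(v)
	}
	return ""
}

// floatOrZero returns the float64 value for key, or 0 when the key is
// absent or holds a non-numeric value.
func floatOrZero(m map[string]any, key string) float64 {
	if v, ok := m[key]; ok {
		if f, ok := v.(float64); ok {
			return f
		}
	}
	return 0
}

func main() {
	var rec map[string]any
	_ = json.Unmarshal([]byte(`{"model":"lem-7b","elapsed_seconds":1.5}`), &rec)
	fmt.Println(strOrEmpty(rec, "model"), floatOrZero(rec, "elapsed_seconds"), floatOrZero(rec, "missing"))
	// lem-7b 1.5 0
}
```

Because encoding/json decodes every JSON number into float64, the single float64 case in floatOrZero covers all numeric fields in these rows.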
// repeat returns a string consisting of count copies of s. It avoids importing
// strings because repository conventions route string helpers through core.
func repeat(s string, count int) string {
if count <= 0 {
return ""
}
b := core.NewBuilder()
for range count {
b.WriteString(s)
}
return b.String()
}
// escapeSQLPath escapes single quotes in a file path for use in DuckDB SQL
// string literals.
func escapeSQLPath(p string) string {
return core.Replace(p, "'", "''")
}
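Doubling single quotes is the standard escape for SQL single-quoted string literals, which is what read_json_auto receives in ImportAll. A minimal sketch of the same rule using strings.ReplaceAll in place of core.Replace:

```go
package main

import (
	"fmt"
	"strings"
)

// escapeSQLPath doubles single quotes so a filesystem path can be
// embedded safely inside a DuckDB single-quoted string literal.
func escapeSQLPath(p string) string {
	return strings.ReplaceAll(p, "'", "''")
}

func main() {
	fmt.Println(escapeSQLPath("/data/o'brien/gold.jsonl")) // /data/o''brien/gold.jsonl
}
```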
// isFile returns true if the path exists and is a regular file.
func isFile(path string) bool {
return localFs.IsFile(path)
}
// isDir returns true if the path exists and is a directory.
func isDir(path string) bool {
return localFs.IsDir(path)
}

67
import_export_test.go Normal file

@@ -0,0 +1,67 @@
// SPDX-License-Identifier: EUPL-1.2
package store
import "testing"
func TestImportExport_Import_Good_CSVAndJSONIngestion(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("import-export-good")
assertNoError(t, err)
defer workspace.Discard()
medium := newMemoryMedium()
assertNoError(t, medium.Write("findings.csv", "tool,severity\ngosec,high\ngolint,low\n"))
assertNoError(t, medium.Write("users.json", `{"entries":[{"name":"Alice"},{"name":"Bob"}]}`))
assertNoError(t, Import(workspace, medium, "findings.csv"))
assertNoError(t, Import(workspace, medium, "users.json"))
assertEqual(t, map[string]any{"findings": 2, "users": 2}, workspace.Aggregate())
}
func TestImportExport_Import_Bad_MalformedPayload(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("import-export-bad")
assertNoError(t, err)
defer workspace.Discard()
medium := newMemoryMedium()
assertNoError(t, medium.Write("broken.json", `{"entries":[{"name":"Alice"}`))
assertError(t, Import(workspace, medium, "broken.json"))
count, err := workspace.Count()
assertNoError(t, err)
assertEqual(t, 0, count)
}
func TestImportExport_Import_Ugly_EmptyPayload(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("import-export-ugly")
assertNoError(t, err)
defer workspace.Discard()
medium := newMemoryMedium()
for _, path := range []string{"empty.csv", "empty.json", "empty.jsonl"} {
assertNoError(t, medium.Write(path, ""))
assertNoError(t, Import(workspace, medium, path))
}
assertEqual(t, map[string]any{}, workspace.Aggregate())
}

70
import_test.go Normal file

@@ -0,0 +1,70 @@
// SPDX-License-Identifier: EUPL-1.2
package store
import (
"testing"
core "dappco.re/go/core"
)
type importSessionStub struct {
inserts int
}
func (session *importSessionStub) exec(string, ...any) error {
session.inserts++
return nil
}
func (session *importSessionStub) queryRowScan(string, any, ...any) error {
return nil
}
func TestImport_ImportTrainingFile_Bad_MalformedJSONL(t *testing.T) {
path := testPath(t, "training.jsonl")
requireCoreWriteBytes(t, path, []byte("{\"messages\":[]}\n{broken\n"))
session := &importSessionStub{}
count, err := importTrainingFile(session, path, "training", "train")
assertError(t, err)
assertContainsString(t, err.Error(), "line 2")
assertEqual(t, 1, count)
assertEqual(t, 1, session.inserts)
}
func TestImport_ImportBenchmarkFile_Bad_MalformedJSONL(t *testing.T) {
path := testPath(t, "benchmark.jsonl")
requireCoreWriteBytes(t, path, []byte("{\"id\":\"row-1\"}\n{broken\n"))
session := &importSessionStub{}
count, err := importBenchmarkFile(session, path, "benchmark")
assertError(t, err)
assertContainsString(t, err.Error(), "line 2")
assertEqual(t, 1, count)
assertEqual(t, 1, session.inserts)
}
func TestImport_ImportBenchmarkQuestions_Bad_MalformedJSONL(t *testing.T) {
path := testPath(t, "questions.jsonl")
requireCoreWriteBytes(t, path, []byte("{\"id\":\"q-1\"}\n{broken\n"))
session := &importSessionStub{}
count, err := importBenchmarkQuestions(session, path, "truthfulqa")
assertError(t, err)
assertContainsString(t, err.Error(), "line 2")
assertEqual(t, 1, count)
assertEqual(t, 1, session.inserts)
}
func TestImport_ImportSeeds_Bad_WalkFailure(t *testing.T) {
session := &importSessionStub{}
count, err := importSeeds(session, core.JoinPath(t.TempDir(), "missing-seeds"))
assertError(t, err)
assertContainsString(t, err.Error(), "store.walkDir")
assertEqual(t, 0, count)
assertEqual(t, 0, session.inserts)
}

166
inventory.go Normal file

@@ -0,0 +1,166 @@
// SPDX-License-Identifier: EUPL-1.2
package store
import (
"io"
core "dappco.re/go/core"
)
// TargetTotal is the golden set target size used for progress reporting.
//
// Usage example:
//
// pct := float64(count) / float64(store.TargetTotal) * 100
const TargetTotal = 15000
// duckDBTableOrder defines the canonical display order for DuckDB inventory
// tables.
var duckDBTableOrder = []string{
"golden_set", "expansion_prompts", "seeds", "prompts",
"training_examples", "gemini_responses", "benchmark_questions",
"benchmark_results", "validations", TableCheckpointScores,
TableProbeResults, "scoring_results",
}
// duckDBTableDetail holds extra context for a single table beyond its row count.
type duckDBTableDetail struct {
notes []string
}
// PrintDuckDBInventory queries all known DuckDB tables and prints a formatted
// inventory with row counts, detail breakdowns, and a grand total.
//
// Usage example:
//
// err := store.PrintDuckDBInventory(db, os.Stdout)
func PrintDuckDBInventory(db *DuckDB, w io.Writer) error {
counts, err := db.TableCounts()
if err != nil {
return core.E("store.PrintDuckDBInventory", "table counts", err)
}
details := gatherDuckDBDetails(db, counts)
core.Print(w, "DuckDB Inventory")
core.Print(w, "%s", repeat("-", 52))
grand := 0
for _, table := range duckDBTableOrder {
count, ok := counts[table]
if !ok {
continue
}
grand += count
line := core.Sprintf(" %-24s %8d rows", table, count)
if d, has := details[table]; has && len(d.notes) > 0 {
line += core.Sprintf(" (%s)", core.Join(", ", d.notes...))
}
core.Print(w, "%s", line)
}
core.Print(w, "%s", repeat("-", 52))
core.Print(w, " %-24s %8d rows", "TOTAL", grand)
return nil
}
// gatherDuckDBDetails runs per-table detail queries and returns annotations
// keyed by table name. Errors on individual queries are silently ignored so
// the inventory always prints.
func gatherDuckDBDetails(db *DuckDB, counts map[string]int) map[string]*duckDBTableDetail {
details := make(map[string]*duckDBTableDetail)
// golden_set: progress towards target
if count, ok := counts["golden_set"]; ok {
pct := float64(count) / float64(TargetTotal) * 100
details["golden_set"] = &duckDBTableDetail{
notes: []string{core.Sprintf("%.1f%% of %d target", pct, TargetTotal)},
}
}
// training_examples: distinct sources
if _, ok := counts["training_examples"]; ok {
rows, err := db.QueryRows("SELECT COUNT(DISTINCT source) AS n FROM training_examples")
if err == nil && len(rows) > 0 {
n := duckDBToInt(rows[0]["n"])
details["training_examples"] = &duckDBTableDetail{
notes: []string{core.Sprintf("%d sources", n)},
}
}
}
// prompts: distinct domains and voices
if _, ok := counts["prompts"]; ok {
d := &duckDBTableDetail{}
rows, err := db.QueryRows("SELECT COUNT(DISTINCT domain) AS n FROM prompts")
if err == nil && len(rows) > 0 {
d.notes = append(d.notes, core.Sprintf("%d domains", duckDBToInt(rows[0]["n"])))
}
rows, err = db.QueryRows("SELECT COUNT(DISTINCT voice) AS n FROM prompts")
if err == nil && len(rows) > 0 {
d.notes = append(d.notes, core.Sprintf("%d voices", duckDBToInt(rows[0]["n"])))
}
if len(d.notes) > 0 {
details["prompts"] = d
}
}
// gemini_responses: group by source_model
if _, ok := counts["gemini_responses"]; ok {
rows, err := db.QueryRows(
"SELECT source_model, COUNT(*) AS n FROM gemini_responses GROUP BY source_model ORDER BY n DESC",
)
if err == nil && len(rows) > 0 {
var parts []string
for _, row := range rows {
model := duckDBStrVal(row, "source_model")
n := duckDBToInt(row["n"])
if model != "" {
parts = append(parts, core.Sprintf("%s:%d", model, n))
}
}
if len(parts) > 0 {
details["gemini_responses"] = &duckDBTableDetail{notes: parts}
}
}
}
// benchmark_results: distinct source categories
if _, ok := counts["benchmark_results"]; ok {
rows, err := db.QueryRows("SELECT COUNT(DISTINCT source) AS n FROM benchmark_results")
if err == nil && len(rows) > 0 {
n := duckDBToInt(rows[0]["n"])
details["benchmark_results"] = &duckDBTableDetail{
notes: []string{core.Sprintf("%d categories", n)},
}
}
}
return details
}
// duckDBToInt converts a DuckDB value to int. DuckDB returns integers as int64
// (not float64 like InfluxDB), so we handle int64, int32, and float64.
func duckDBToInt(v any) int {
switch n := v.(type) {
case int64:
return int(n)
case int32:
return int(n)
case float64:
return int(n)
default:
return 0
}
}
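The type switch above can be exercised in isolation; a standalone sketch with the same three cases and the zero fallback:

```go
package main

import "fmt"

// duckDBToInt narrows the scanned DuckDB value to int, falling back to
// zero for any unrecognized type (e.g. a NULL scanned as a string).
func duckDBToInt(v any) int {
	switch n := v.(type) {
	case int64:
		return int(n)
	case int32:
		return int(n)
	case float64:
		return int(n)
	default:
		return 0
	}
}

func main() {
	fmt.Println(duckDBToInt(int64(42)), duckDBToInt(3.0), duckDBToInt("n/a")) // 42 3 0
}
```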
// duckDBStrVal extracts a string value from a row map.
func duckDBStrVal(row map[string]any, key string) string {
if v, ok := row[key]; ok {
return core.Sprint(v)
}
return ""
}

599
journal.go Normal file

@@ -0,0 +1,599 @@
// SPDX-License-Identifier: EUPL-1.2
package store
import (
"database/sql"
"regexp"
"time"
core "dappco.re/go/core"
)
const (
journalEntriesTableName = "journal_entries"
defaultJournalBucket = "store"
)
const createJournalEntriesTableSQL = `CREATE TABLE IF NOT EXISTS journal_entries (
entry_id INTEGER PRIMARY KEY AUTOINCREMENT,
bucket_name TEXT NOT NULL,
measurement TEXT NOT NULL,
fields_json TEXT NOT NULL,
tags_json TEXT NOT NULL,
committed_at INTEGER NOT NULL,
archived_at INTEGER
)`
var (
journalBucketPattern = regexp.MustCompile(`bucket:\s*"([^"]+)"`)
journalMeasurementPatterns = []*regexp.Regexp{
regexp.MustCompile(`(?:_measurement|measurement)\s*==\s*"([^"]+)"`),
regexp.MustCompile(`\[\s*"(?:_measurement|measurement)"\s*\]\s*==\s*"([^"]+)"`),
}
journalBucketEqualityPatterns = []*regexp.Regexp{
regexp.MustCompile(`r\.(?:_bucket|bucket|bucket_name)\s*==\s*"([^"]+)"`),
regexp.MustCompile(`r\[\s*"(?:_bucket|bucket|bucket_name)"\s*\]\s*==\s*"([^"]+)"`),
}
journalStringEqualityPatterns = []*regexp.Regexp{
regexp.MustCompile(`r\.([a-zA-Z0-9_:-]+)\s*==\s*"([^"]+)"`),
regexp.MustCompile(`r\[\s*"([a-zA-Z0-9_:-]+)"\s*\]\s*==\s*"([^"]+)"`),
}
journalScalarEqualityPatterns = []*regexp.Regexp{
regexp.MustCompile(`r\.([a-zA-Z0-9_:-]+)\s*==\s*(true|false|-?[0-9]+(?:\.[0-9]+)?)`),
regexp.MustCompile(`r\[\s*"([a-zA-Z0-9_:-]+)"\s*\]\s*==\s*(true|false|-?[0-9]+(?:\.[0-9]+)?)`),
}
)
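To illustrate how these patterns lift filters out of a Flux query before it is rewritten as SQL, here is a minimal standalone sketch. The regexes are copied from journalBucketPattern and the first journalMeasurementPatterns entry above; firstSubmatch is a hypothetical helper, not part of the store API:

```go
package main

import (
	"fmt"
	"regexp"
)

// bucketPattern mirrors journalBucketPattern: it pulls the bucket name
// out of a Flux `from(bucket: "...")` clause.
var bucketPattern = regexp.MustCompile(`bucket:\s*"([^"]+)"`)

// measurementPattern mirrors the first journalMeasurementPatterns entry.
var measurementPattern = regexp.MustCompile(`(?:_measurement|measurement)\s*==\s*"([^"]+)"`)

// firstSubmatch returns the first capture group, or "" when the pattern
// does not match the input.
func firstSubmatch(pattern *regexp.Regexp, input string) string {
	if m := pattern.FindStringSubmatch(input); len(m) > 1 {
		return m[1]
	}
	return ""
}

func main() {
	flux := `from(bucket: "events") |> filter(fn: (r) => r._measurement == "scroll-session")`
	fmt.Println(firstSubmatch(bucketPattern, flux))      // events
	fmt.Println(firstSubmatch(measurementPattern, flux)) // scroll-session
}
```

Each extracted value becomes one `AND column = ?` clause with a bound argument, which is why the patterns only capture the quoted literal and never interpolate it into the SQL text.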
type journalEqualityFilter struct {
columnName string
filterValue any
stringCompare bool
}
type journalExecutor interface {
Exec(query string, args ...any) (sql.Result, error)
}
// Usage example: `result := storeInstance.CommitToJournal("scroll-session", map[string]any{"like": 4}, map[string]string{"workspace": "scroll-session"})`
// Workspace.Commit uses this same journal write path before it updates the
// summary row in `workspace:NAME`.
func (storeInstance *Store) CommitToJournal(measurement string, fields map[string]any, tags map[string]string) core.Result {
if err := storeInstance.ensureReady("store.CommitToJournal"); err != nil {
return core.Result{Value: err, OK: false}
}
if measurement == "" {
return core.Result{Value: core.E("store.CommitToJournal", "measurement is empty", nil), OK: false}
}
if fields == nil {
fields = map[string]any{}
}
if tags == nil {
tags = map[string]string{}
}
if err := ensureJournalSchema(storeInstance.sqliteDatabase); err != nil {
return core.Result{Value: core.E("store.CommitToJournal", "ensure journal schema", err), OK: false}
}
fieldsJSON, err := marshalJSONText(fields, "store.CommitToJournal", "marshal fields")
if err != nil {
return core.Result{Value: err, OK: false}
}
tagsJSON, err := marshalJSONText(tags, "store.CommitToJournal", "marshal tags")
if err != nil {
return core.Result{Value: err, OK: false}
}
committedAt := time.Now().UnixMilli()
if err := commitJournalEntry(
storeInstance.sqliteDatabase,
storeInstance.journalBucket(),
measurement,
fieldsJSON,
tagsJSON,
committedAt,
); err != nil {
return core.Result{Value: core.E("store.CommitToJournal", "insert journal entry", err), OK: false}
}
return core.Result{
Value: map[string]any{
"bucket": storeInstance.journalBucket(),
"measurement": measurement,
"fields": cloneAnyMap(fields),
"tags": cloneStringMap(tags),
"committed_at": committedAt,
},
OK: true,
}
}
// Usage example: `result := storeInstance.QueryJournal(\`from(bucket: "events") |> range(start: -24h) |> filter(fn: (r) => r.workspace == "session-a")\`)`
// Usage example: `result := storeInstance.QueryJournal("SELECT measurement, committed_at FROM journal_entries ORDER BY committed_at")`
func (storeInstance *Store) QueryJournal(flux string) core.Result {
if err := storeInstance.ensureReady("store.QueryJournal"); err != nil {
return core.Result{Value: err, OK: false}
}
if err := ensureJournalSchema(storeInstance.sqliteDatabase); err != nil {
return core.Result{Value: core.E("store.QueryJournal", "ensure journal schema", err), OK: false}
}
trimmedQuery := core.Trim(flux)
if trimmedQuery == "" {
return storeInstance.queryJournalRows(
"SELECT bucket_name, measurement, fields_json, tags_json, committed_at, archived_at FROM " + journalEntriesTableName + " WHERE archived_at IS NULL ORDER BY committed_at, entry_id",
)
}
if isRawSQLJournalQuery(trimmedQuery) {
return storeInstance.queryJournalRows(trimmedQuery)
}
selectSQL, arguments, err := storeInstance.queryJournalFromFlux(trimmedQuery)
if err != nil {
return core.Result{Value: err, OK: false}
}
return storeInstance.queryJournalRows(selectSQL, arguments...)
}
func isRawSQLJournalQuery(query string) bool {
upperQuery := core.Upper(core.Trim(query))
return core.HasPrefix(upperQuery, "SELECT") ||
core.HasPrefix(upperQuery, "WITH") ||
core.HasPrefix(upperQuery, "EXPLAIN") ||
core.HasPrefix(upperQuery, "PRAGMA")
}
func (storeInstance *Store) queryJournalRows(query string, arguments ...any) core.Result {
rows, err := storeInstance.sqliteDatabase.Query(query, arguments...)
if err != nil {
return core.Result{Value: core.E("store.QueryJournal", "query rows", err), OK: false}
}
defer func() { _ = rows.Close() }()
rowMaps, err := queryRowsAsMaps(rows)
if err != nil {
return core.Result{Value: core.E("store.QueryJournal", "scan rows", err), OK: false}
}
return core.Result{Value: inflateJournalRows(rowMaps), OK: true}
}
func (storeInstance *Store) queryJournalFromFlux(flux string) (string, []any, error) {
queryBuilder := core.NewBuilder()
queryBuilder.WriteString("SELECT bucket_name, measurement, fields_json, tags_json, committed_at, archived_at FROM ")
queryBuilder.WriteString(journalEntriesTableName)
queryBuilder.WriteString(" WHERE archived_at IS NULL")
var queryArguments []any
if bucket := quotedSubmatch(journalBucketPattern, flux); bucket != "" {
queryBuilder.WriteString(" AND bucket_name = ?")
queryArguments = append(queryArguments, bucket)
}
if measurement := firstQuotedSubmatch(journalMeasurementPatterns, flux); measurement != "" {
queryBuilder.WriteString(" AND measurement = ?")
queryArguments = append(queryArguments, measurement)
}
startRange, stopRange := journalRangeBounds(flux)
if startRange != "" {
startTime, err := parseFluxTime(core.Trim(startRange))
if err != nil {
return "", nil, core.E("store.QueryJournal", "parse range", err)
}
queryBuilder.WriteString(" AND committed_at >= ?")
queryArguments = append(queryArguments, startTime.UnixMilli())
}
if stopRange != "" {
stopTime, err := parseFluxTime(core.Trim(stopRange))
if err != nil {
return "", nil, core.E("store.QueryJournal", "parse range", err)
}
queryBuilder.WriteString(" AND committed_at < ?")
queryArguments = append(queryArguments, stopTime.UnixMilli())
}
for _, pattern := range journalBucketEqualityPatterns {
bucketMatches := pattern.FindAllStringSubmatch(flux, -1)
for _, match := range bucketMatches {
if len(match) < 2 {
continue
}
queryBuilder.WriteString(" AND bucket_name = ?")
queryArguments = append(queryArguments, match[1])
}
}
for _, filter := range journalEqualityFilters(flux) {
if filter.stringCompare {
queryBuilder.WriteString(" AND (CAST(json_extract(tags_json, '$.\"' || ? || '\"') AS TEXT) = ? OR CAST(json_extract(fields_json, '$.\"' || ? || '\"') AS TEXT) = ?)")
queryArguments = append(queryArguments, filter.columnName, filter.filterValue, filter.columnName, filter.filterValue)
continue
}
queryBuilder.WriteString(" AND json_extract(fields_json, '$.\"' || ? || '\"') = ?")
queryArguments = append(queryArguments, filter.columnName, filter.filterValue)
}
queryBuilder.WriteString(" ORDER BY committed_at, entry_id")
return queryBuilder.String(), queryArguments, nil
}
func (storeInstance *Store) journalBucket() string {
if storeInstance.journalConfiguration.BucketName == "" {
return defaultJournalBucket
}
return storeInstance.journalConfiguration.BucketName
}
func ensureJournalSchema(database schemaDatabase) error {
if _, err := database.Exec(createJournalEntriesTableSQL); err != nil {
return err
}
if _, err := database.Exec(
"CREATE INDEX IF NOT EXISTS journal_entries_bucket_committed_at_idx ON " + journalEntriesTableName + " (bucket_name, committed_at)",
); err != nil {
return err
}
return nil
}
func commitJournalEntry(
executor journalExecutor,
bucket, measurement, fieldsJSON, tagsJSON string,
committedAt int64,
) error {
_, err := executor.Exec(
"INSERT INTO "+journalEntriesTableName+" (bucket_name, measurement, fields_json, tags_json, committed_at, archived_at) VALUES (?, ?, ?, ?, ?, NULL)",
bucket,
measurement,
fieldsJSON,
tagsJSON,
committedAt,
)
return err
}
func marshalJSONText(value any, operation, message string) (string, error) {
result := core.JSONMarshal(value)
if !result.OK {
// The failure value is normally an error; assert safely so an
// unexpected value type cannot panic here.
err, _ := result.Value.(error)
return "", core.E(operation, message, err)
}
raw, ok := result.Value.([]byte)
if !ok {
return "", core.E(operation, message, nil)
}
return string(raw), nil
}
func journalRangeBounds(flux string) (string, string) {
rangeIndex := indexOfSubstring(flux, "range(")
if rangeIndex < 0 {
return "", ""
}
contentStart := rangeIndex + len("range(")
depth := 1
contentEnd := -1
scanRange:
for i := contentStart; i < len(flux); i++ {
switch flux[i] {
case '(':
depth++
case ')':
depth--
if depth == 0 {
contentEnd = i
break scanRange
}
}
}
if contentEnd < 0 || contentEnd <= contentStart {
return "", ""
}
content := flux[contentStart:contentEnd]
startPrefix := "start:"
startIndex := indexOfSubstring(content, startPrefix)
if startIndex < 0 {
return "", ""
}
startIndex += len(startPrefix)
start := core.Trim(content[startIndex:])
stop := ""
if stopIndex := indexOfSubstring(content, ", stop:"); stopIndex >= 0 {
start = core.Trim(content[startIndex:stopIndex])
stop = core.Trim(content[stopIndex+len(", stop:"):])
} else if stopIndex := indexOfSubstring(content, ",stop:"); stopIndex >= 0 {
start = core.Trim(content[startIndex:stopIndex])
stop = core.Trim(content[stopIndex+len(",stop:"):])
}
return start, stop
}
// indexOfSubstring is a local strings.Index equivalent, kept here because the
// package bans importing strings directly (see the inline note in import.go).
func indexOfSubstring(text, substring string) int {
if substring == "" {
return 0
}
if len(substring) > len(text) {
return -1
}
for i := 0; i <= len(text)-len(substring); i++ {
if text[i:i+len(substring)] == substring {
return i
}
}
return -1
}
func parseFluxTime(value string) (time.Time, error) {
value = core.Trim(value)
if value == "" {
return time.Time{}, core.E("store.parseFluxTime", "range value is empty", nil)
}
value = firstStringOrEmpty(core.Split(value, ","))
value = core.Trim(value)
if core.HasPrefix(value, "time(v:") && core.HasSuffix(value, ")") {
value = core.Trim(core.TrimSuffix(core.TrimPrefix(value, "time(v:"), ")"))
}
if core.HasPrefix(value, `"`) && core.HasSuffix(value, `"`) {
value = core.TrimSuffix(core.TrimPrefix(value, `"`), `"`)
}
if value == "now()" {
return time.Now(), nil
}
// Flux accepts day durations like "-7d" that time.ParseDuration rejects,
// so translate the day count to hours manually.
if core.HasSuffix(value, "d") {
days, err := parseJournalInt64(core.TrimSuffix(value, "d"))
if err != nil {
return time.Time{}, err
}
return time.Now().Add(time.Duration(days) * 24 * time.Hour), nil
}
lookback, err := time.ParseDuration(value)
if err == nil {
return time.Now().Add(lookback), nil
}
parsedTime, err := time.Parse(time.RFC3339Nano, value)
if err != nil {
return time.Time{}, err
}
return parsedTime, nil
}
func quotedSubmatch(pattern *regexp.Regexp, value string) string {
match := pattern.FindStringSubmatch(value)
if len(match) < 2 {
return ""
}
return match[1]
}
func firstQuotedSubmatch(patterns []*regexp.Regexp, value string) string {
for _, pattern := range patterns {
if match := quotedSubmatch(pattern, value); match != "" {
return match
}
}
return ""
}
func queryRowsAsMaps(rows *sql.Rows) ([]map[string]any, error) {
columnNames, err := rows.Columns()
if err != nil {
return nil, err
}
var result []map[string]any
for rows.Next() {
rawValues := make([]any, len(columnNames))
scanTargets := make([]any, len(columnNames))
for i := range rawValues {
scanTargets[i] = &rawValues[i]
}
if err := rows.Scan(scanTargets...); err != nil {
return nil, err
}
row := make(map[string]any, len(columnNames))
for i, columnName := range columnNames {
row[columnName] = normaliseRowValue(rawValues[i])
}
result = append(result, row)
}
if err := rows.Err(); err != nil {
return nil, err
}
return result, nil
}
func inflateJournalRows(rows []map[string]any) []map[string]any {
for _, row := range rows {
if fieldsJSON, ok := row["fields_json"].(string); ok {
fields := make(map[string]any)
result := core.JSONUnmarshalString(fieldsJSON, &fields)
if result.OK {
row["fields"] = fields
}
}
if tagsJSON, ok := row["tags_json"].(string); ok {
tags := make(map[string]string)
result := core.JSONUnmarshalString(tagsJSON, &tags)
if result.OK {
row["tags"] = tags
}
}
}
return rows
}
func normaliseRowValue(value any) any {
switch typedValue := value.(type) {
case []byte:
return string(typedValue)
default:
return typedValue
}
}
func journalEqualityFilters(flux string) []journalEqualityFilter {
var filters []journalEqualityFilter
appendFilter := func(columnName string, filterValue any, stringCompare bool) {
if columnName == "_measurement" || columnName == "measurement" || columnName == "_bucket" || columnName == "bucket" || columnName == "bucket_name" {
return
}
filters = append(filters, journalEqualityFilter{
columnName: columnName,
filterValue: filterValue,
stringCompare: stringCompare,
})
}
for _, pattern := range journalStringEqualityPatterns {
matches := pattern.FindAllStringSubmatch(flux, -1)
for _, match := range matches {
if len(match) < 3 {
continue
}
appendFilter(match[1], match[2], true)
}
}
for _, pattern := range journalScalarEqualityPatterns {
matches := pattern.FindAllStringSubmatch(flux, -1)
for _, match := range matches {
if len(match) < 3 {
continue
}
filterValue, ok := parseJournalScalarValue(match[2])
if !ok {
continue
}
appendFilter(match[1], filterValue, false)
}
}
return filters
}
func parseJournalScalarValue(value string) (any, bool) {
switch value {
case "true":
return true, true
case "false":
return false, true
}
if integerValue, err := parseJournalInt64(value); err == nil {
return integerValue, true
}
if floatValue, err := parseJournalFloat64(value); err == nil {
return floatValue, true
}
return nil, false
}
func parseJournalInt64(value string) (int64, error) {
if value == "" {
return 0, core.E("store.parseJournalInt64", "integer value is empty", nil)
}
negative := false
index := 0
if value[0] == '-' || value[0] == '+' {
negative = value[0] == '-'
index++
if index == len(value) {
return 0, core.E("store.parseJournalInt64", "integer value has no digits", nil)
}
}
limit := uint64(1<<63 - 1)
if negative {
limit = uint64(1 << 63)
}
var parsed uint64
for ; index < len(value); index++ {
character := value[index]
if character < '0' || character > '9' {
return 0, core.E("store.parseJournalInt64", "integer value contains non-digit characters", nil)
}
digit := uint64(character - '0')
if parsed > (limit-digit)/10 {
return 0, core.E("store.parseJournalInt64", "integer value is out of range", nil)
}
parsed = parsed*10 + digit
}
if negative {
if parsed == uint64(1<<63) {
return -1 << 63, nil
}
return -int64(parsed), nil
}
return int64(parsed), nil
}
func parseJournalFloat64(value string) (float64, error) {
if value == "" {
return 0, core.E("store.parseJournalFloat64", "float value is empty", nil)
}
negative := false
index := 0
if value[0] == '-' || value[0] == '+' {
negative = value[0] == '-'
index++
if index == len(value) {
return 0, core.E("store.parseJournalFloat64", "float value has no digits", nil)
}
}
var parsed float64
digits := 0
for index < len(value) && value[index] >= '0' && value[index] <= '9' {
parsed = parsed*10 + float64(value[index]-'0')
if parsed > maxJournalFloat64 {
return 0, core.E("store.parseJournalFloat64", "float value is out of range", nil)
}
digits++
index++
}
if index < len(value) && value[index] == '.' {
index++
scale := 0.1
for index < len(value) && value[index] >= '0' && value[index] <= '9' {
parsed += float64(value[index]-'0') * scale
scale /= 10
digits++
index++
}
}
if digits == 0 {
return 0, core.E("store.parseJournalFloat64", "float value has no digits", nil)
}
if index != len(value) {
return 0, core.E("store.parseJournalFloat64", "float value contains invalid characters", nil)
}
if negative {
return -parsed, nil
}
return parsed, nil
}
const maxJournalFloat64 = 1.79769313486231570814527423731704357e+308
func cloneAnyMap(input map[string]any) map[string]any {
if input == nil {
return map[string]any{}
}
cloned := make(map[string]any, len(input))
for key, value := range input {
cloned[key] = value
}
return cloned
}
func cloneStringMap(input map[string]string) map[string]string {
if input == nil {
return map[string]string{}
}
cloned := make(map[string]string, len(input))
for key, value := range input {
cloned[key] = value
}
return cloned
}
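The hand-rolled parsers above avoid strconv entirely. A minimal standalone sketch of the same overflow-guard technique (a sign-dependent limit precomputed once, then checked before each multiply-add so the accumulator never overflows) looks like this; `parseInt64` here is an illustrative reimplementation, not the package's API:

```go
package main

import "fmt"

// parseInt64 parses a decimal integer without strconv. The limit is
// chosen per sign (|min int64| is one larger than max int64), and the
// guard parsed > (limit-digit)/10 rejects overflow before it happens.
func parseInt64(value string) (int64, error) {
	if value == "" {
		return 0, fmt.Errorf("empty value")
	}
	negative := false
	index := 0
	if value[0] == '-' || value[0] == '+' {
		negative = value[0] == '-'
		index++
		if index == len(value) {
			return 0, fmt.Errorf("no digits")
		}
	}
	limit := uint64(1<<63 - 1) // max int64
	if negative {
		limit = uint64(1 << 63) // |min int64|
	}
	var parsed uint64
	for ; index < len(value); index++ {
		c := value[index]
		if c < '0' || c > '9' {
			return 0, fmt.Errorf("non-digit %q", c)
		}
		digit := uint64(c - '0')
		if parsed > (limit-digit)/10 { // next step would exceed limit
			return 0, fmt.Errorf("out of range")
		}
		parsed = parsed*10 + digit
	}
	if negative {
		if parsed == uint64(1<<63) {
			return -1 << 63, nil // min int64 has no positive counterpart
		}
		return -int64(parsed), nil
	}
	return int64(parsed), nil
}

func main() {
	v, err := parseInt64("-9223372036854775808")
	fmt.Println(v, err == nil)
	_, err = parseInt64("9223372036854775808")
	fmt.Println("overflow rejected:", err != nil)
}
```

Note how the min-int64 edge case works: the negative limit is 2^63, which cannot be negated through int64 directly, so it is returned as the constant -1 << 63.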

journal_test.go (new file, +285 lines)
package store
import (
"testing"
"time"
)
func TestJournal_CommitToJournal_Good_WithQueryJournalSQL(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
first := storeInstance.CommitToJournal("session-a", map[string]any{"like": 4}, map[string]string{"workspace": "session-a"})
second := storeInstance.CommitToJournal("session-b", map[string]any{"profile_match": 2}, map[string]string{"workspace": "session-b"})
assertTruef(t, first.OK, "first journal commit failed: %v", first.Value)
assertTruef(t, second.OK, "second journal commit failed: %v", second.Value)
rows := requireResultRows(
t,
storeInstance.QueryJournal("SELECT bucket_name, measurement, fields_json, tags_json FROM journal_entries ORDER BY entry_id"),
)
assertLen(t, rows, 2)
assertEqual(t, "events", rows[0]["bucket_name"])
assertEqual(t, "session-a", rows[0]["measurement"])
fields, ok := rows[0]["fields"].(map[string]any)
assertTruef(t, ok, "unexpected fields type: %T", rows[0]["fields"])
assertEqual(t, float64(4), fields["like"])
tags, ok := rows[1]["tags"].(map[string]string)
assertTruef(t, ok, "unexpected tags type: %T", rows[1]["tags"])
assertEqual(t, "session-b", tags["workspace"])
}
func TestJournal_CommitToJournal_Good_ResultCopiesInputMaps(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
fields := map[string]any{"like": 4}
tags := map[string]string{"workspace": "session-a"}
result := storeInstance.CommitToJournal("session-a", fields, tags)
assertTruef(t, result.OK, "journal commit failed: %v", result.Value)
fields["like"] = 99
tags["workspace"] = "session-b"
value, ok := result.Value.(map[string]any)
assertTruef(t, ok, "unexpected result type: %T", result.Value)
resultFields, ok := value["fields"].(map[string]any)
assertTruef(t, ok, "unexpected fields type: %T", value["fields"])
assertEqual(t, 4, resultFields["like"])
resultTags, ok := value["tags"].(map[string]string)
assertTruef(t, ok, "unexpected tags type: %T", value["tags"])
assertEqual(t, "session-a", resultTags["workspace"])
}
func TestJournal_QueryJournal_Good_RawSQLWithCTE(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 4}, map[string]string{"workspace": "session-a"}).OK)
rows := requireResultRows(
t,
storeInstance.QueryJournal(`
WITH journal_rows AS (
SELECT bucket_name, measurement, fields_json, tags_json, committed_at, archived_at
FROM journal_entries
)
SELECT bucket_name, measurement, fields_json, tags_json, committed_at, archived_at
FROM journal_rows
ORDER BY committed_at
`),
)
assertLen(t, rows, 1)
assertEqual(t, "session-a", rows[0]["measurement"])
}
func TestJournal_QueryJournal_Good_PragmaSQL(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
rows := requireResultRows(
t,
storeInstance.QueryJournal("PRAGMA table_info(journal_entries)"),
)
assertNotEmpty(t, rows)
var columnNames []string
for _, row := range rows {
name, ok := row["name"].(string)
assertTruef(t, ok, "unexpected column name type: %T", row["name"])
columnNames = append(columnNames, name)
}
assertContainsElement(t, columnNames, "bucket_name")
}
func TestJournal_QueryJournal_Good_FluxFilters(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 1}, map[string]string{"workspace": "session-a"}).OK)
assertTrue(t, storeInstance.CommitToJournal("session-b", map[string]any{"like": 2}, map[string]string{"workspace": "session-b"}).OK)
rows := requireResultRows(
t,
storeInstance.QueryJournal(`from(bucket: "events") |> range(start: -24h) |> filter(fn: (r) => r._measurement == "session-b")`),
)
assertLen(t, rows, 1)
assertEqual(t, "session-b", rows[0]["measurement"])
fields, ok := rows[0]["fields"].(map[string]any)
assertTruef(t, ok, "unexpected fields type: %T", rows[0]["fields"])
assertEqual(t, float64(2), fields["like"])
}
func TestJournal_QueryJournal_Good_TagFilter(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 1}, map[string]string{"workspace": "session-a"}).OK)
assertTrue(t, storeInstance.CommitToJournal("session-b", map[string]any{"like": 2}, map[string]string{"workspace": "session-b"}).OK)
rows := requireResultRows(
t,
storeInstance.QueryJournal(`from(bucket: "events") |> range(start: -24h) |> filter(fn: (r) => r.workspace == "session-b")`),
)
assertLen(t, rows, 1)
assertEqual(t, "session-b", rows[0]["measurement"])
tags, ok := rows[0]["tags"].(map[string]string)
assertTruef(t, ok, "unexpected tags type: %T", rows[0]["tags"])
assertEqual(t, "session-b", tags["workspace"])
}
func TestJournal_QueryJournal_Good_NumericFieldFilter(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 1}, map[string]string{"workspace": "session-a"}).OK)
assertTrue(t, storeInstance.CommitToJournal("session-b", map[string]any{"like": 2}, map[string]string{"workspace": "session-b"}).OK)
rows := requireResultRows(
t,
storeInstance.QueryJournal(`from(bucket: "events") |> range(start: -24h) |> filter(fn: (r) => r.like == 2)`),
)
assertLen(t, rows, 1)
assertEqual(t, "session-b", rows[0]["measurement"])
fields, ok := rows[0]["fields"].(map[string]any)
assertTruef(t, ok, "unexpected fields type: %T", rows[0]["fields"])
assertEqual(t, float64(2), fields["like"])
}
func TestJournal_QueryJournal_Good_BooleanFieldFilter(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"complete": false}, map[string]string{"workspace": "session-a"}).OK)
assertTrue(t, storeInstance.CommitToJournal("session-b", map[string]any{"complete": true}, map[string]string{"workspace": "session-b"}).OK)
rows := requireResultRows(
t,
storeInstance.QueryJournal(`from(bucket: "events") |> range(start: -24h) |> filter(fn: (r) => r["complete"] == true)`),
)
assertLen(t, rows, 1)
assertEqual(t, "session-b", rows[0]["measurement"])
fields, ok := rows[0]["fields"].(map[string]any)
assertTruef(t, ok, "unexpected fields type: %T", rows[0]["fields"])
assertEqual(t, true, fields["complete"])
}
func TestJournal_QueryJournal_Good_BucketFilter(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 1}, map[string]string{"workspace": "session-a"}).OK)
assertNoError(t, commitJournalEntry(storeInstance.sqliteDatabase, "events", "session-b", `{"like":2}`, `{"workspace":"session-b"}`, time.Now().UnixMilli()))
rows := requireResultRows(
t,
storeInstance.QueryJournal(`from(bucket: "events") |> range(start: -24h) |> filter(fn: (r) => r._bucket == "events")`),
)
assertLen(t, rows, 1)
assertEqual(t, "session-b", rows[0]["measurement"])
assertEqual(t, "events", rows[0]["bucket_name"])
}
func TestJournal_QueryJournal_Good_DeterministicOrderingForSameTimestamp(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertNoError(t, ensureJournalSchema(storeInstance.sqliteDatabase))
committedAt := time.Date(2026, 3, 30, 12, 0, 0, 0, time.UTC).UnixMilli()
assertNoError(t, commitJournalEntry(storeInstance.sqliteDatabase, "events", "session-b", `{"like":2}`, `{"workspace":"session-b"}`, committedAt))
assertNoError(t, commitJournalEntry(storeInstance.sqliteDatabase, "events", "session-a", `{"like":1}`, `{"workspace":"session-a"}`, committedAt))
rows := requireResultRows(
t,
storeInstance.QueryJournal(""),
)
assertLen(t, rows, 2)
assertEqual(t, "session-b", rows[0]["measurement"])
assertEqual(t, "session-a", rows[1]["measurement"])
}
func TestJournal_QueryJournal_Good_AbsoluteRangeWithStop(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 1}, map[string]string{"workspace": "session-a"}).OK)
assertTrue(t, storeInstance.CommitToJournal("session-b", map[string]any{"like": 2}, map[string]string{"workspace": "session-b"}).OK)
_, err = storeInstance.sqliteDatabase.Exec(
"UPDATE "+journalEntriesTableName+" SET committed_at = ? WHERE measurement = ?",
time.Date(2026, 3, 29, 12, 0, 0, 0, time.UTC).UnixMilli(),
"session-a",
)
assertNoError(t, err)
_, err = storeInstance.sqliteDatabase.Exec(
"UPDATE "+journalEntriesTableName+" SET committed_at = ? WHERE measurement = ?",
time.Date(2026, 3, 30, 12, 0, 0, 0, time.UTC).UnixMilli(),
"session-b",
)
assertNoError(t, err)
rows := requireResultRows(
t,
storeInstance.QueryJournal(`from(bucket: "events") |> range(start: "2026-03-30T00:00:00Z", stop: now())`),
)
assertLen(t, rows, 1)
assertEqual(t, "session-b", rows[0]["measurement"])
}
func TestJournal_QueryJournal_Good_AbsoluteRangeHonoursStop(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 1}, map[string]string{"workspace": "session-a"}).OK)
assertTrue(t, storeInstance.CommitToJournal("session-b", map[string]any{"like": 2}, map[string]string{"workspace": "session-b"}).OK)
_, err = storeInstance.sqliteDatabase.Exec(
"UPDATE "+journalEntriesTableName+" SET committed_at = ? WHERE measurement = ?",
time.Date(2026, 3, 29, 12, 0, 0, 0, time.UTC).UnixMilli(),
"session-a",
)
assertNoError(t, err)
_, err = storeInstance.sqliteDatabase.Exec(
"UPDATE "+journalEntriesTableName+" SET committed_at = ? WHERE measurement = ?",
time.Date(2026, 3, 30, 12, 0, 0, 0, time.UTC).UnixMilli(),
"session-b",
)
assertNoError(t, err)
rows := requireResultRows(
t,
storeInstance.QueryJournal(`from(bucket: "events") |> range(start: "2026-03-29T00:00:00Z", stop: "2026-03-30T00:00:00Z")`),
)
assertLen(t, rows, 1)
assertEqual(t, "session-a", rows[0]["measurement"])
}
func TestJournal_CommitToJournal_Bad_EmptyMeasurement(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
result := storeInstance.CommitToJournal("", map[string]any{"like": 1}, map[string]string{"workspace": "missing"})
assertFalse(t, result.OK)
assertContainsString(t, result.Value.(error).Error(), "measurement is empty")
}

json.go (new file, +171 lines)
// SPDX-License-Identifier: EUPL-1.2
// JSON helpers for storage consumers.
// Re-exports the minimum JSON surface needed by downstream users like
// go-cache and go-tenant so they don't need to import encoding/json directly.
// Internally uses core/go JSON primitives.
package store
import core "dappco.re/go/core"
// RawMessage is a raw encoded JSON value.
// Use in structs where the JSON should be stored as-is without re-encoding.
//
// Usage example:
//
// type CacheEntry struct {
// Data store.RawMessage `json:"data"`
// }
// cacheEntry := CacheEntry{Data: store.RawMessage([]byte("{\"name\":\"Alice\"}"))}
type RawMessage []byte
// MarshalJSON returns the raw bytes as-is. If empty, returns `null`.
//
// Usage example: `bytes, err := store.RawMessage([]byte("{\"name\":\"Alice\"}")).MarshalJSON()`
func (raw RawMessage) MarshalJSON() ([]byte, error) {
if len(raw) == 0 {
return []byte("null"), nil
}
return raw, nil
}
// UnmarshalJSON stores the raw JSON bytes without decoding them.
//
// Usage example: `var raw store.RawMessage; err := raw.UnmarshalJSON([]byte("{\"name\":\"Alice\"}"))`
func (raw *RawMessage) UnmarshalJSON(data []byte) error {
if raw == nil {
return core.E("store.RawMessage.UnmarshalJSON", "nil receiver", nil)
}
*raw = append((*raw)[:0], data...)
return nil
}
// MarshalIndent serialises a value to pretty-printed JSON bytes.
// Uses core.JSONMarshal internally then applies prefix/indent formatting
// so consumers get readable output without importing encoding/json.
//
// Usage example: `data, err := store.MarshalIndent(map[string]string{"name": "Alice"}, "", " ")`
func MarshalIndent(value any, prefix, indent string) ([]byte, error) {
marshalled := core.JSONMarshal(value)
if !marshalled.OK {
if err, ok := marshalled.Value.(error); ok {
return nil, core.E("store.MarshalIndent", "marshal", err)
}
return nil, core.E("store.MarshalIndent", "marshal", nil)
}
raw, ok := marshalled.Value.([]byte)
if !ok {
return nil, core.E("store.MarshalIndent", "non-bytes result", nil)
}
if prefix == "" && indent == "" {
return raw, nil
}
buf := core.NewBuilder()
if err := indentCompactJSON(buf, raw, prefix, indent); err != nil {
return nil, core.E("store.MarshalIndent", "indent", err)
}
return []byte(buf.String()), nil
}
// indentCompactJSON formats compact JSON bytes with prefix+indent.
// Mirrors json.Indent's semantics without importing encoding/json.
//
// Usage example: `builder := core.NewBuilder(); _ = indentCompactJSON(builder, []byte("{\"name\":\"Alice\"}"), "", " ")`
func indentCompactJSON(buf interface {
WriteByte(byte) error
WriteString(string) (int, error)
}, src []byte, prefix, indent string) error {
depth := 0
inString := false
escaped := false
writeNewlineIndent := func(level int) error {
if err := buf.WriteByte('\n'); err != nil {
return err
}
if _, err := buf.WriteString(prefix); err != nil {
return err
}
for i := 0; i < level; i++ {
if _, err := buf.WriteString(indent); err != nil {
return err
}
}
return nil
}
for i := 0; i < len(src); i++ {
c := src[i]
if inString {
if err := buf.WriteByte(c); err != nil {
return err
}
if escaped {
escaped = false
continue
}
if c == '\\' {
escaped = true
continue
}
if c == '"' {
inString = false
}
continue
}
switch c {
case '"':
inString = true
if err := buf.WriteByte(c); err != nil {
return err
}
case '{', '[':
if err := buf.WriteByte(c); err != nil {
return err
}
depth++
// Look ahead for empty object/array.
if i+1 < len(src) && (src[i+1] == '}' || src[i+1] == ']') {
continue
}
if err := writeNewlineIndent(depth); err != nil {
return err
}
case '}', ']':
// Only indent if previous byte wasn't the matching opener.
if i > 0 && src[i-1] != '{' && src[i-1] != '[' {
depth--
if err := writeNewlineIndent(depth); err != nil {
return err
}
} else {
depth--
}
if err := buf.WriteByte(c); err != nil {
return err
}
case ',':
if err := buf.WriteByte(c); err != nil {
return err
}
if err := writeNewlineIndent(depth); err != nil {
return err
}
case ':':
if err := buf.WriteByte(c); err != nil {
return err
}
if err := buf.WriteByte(' '); err != nil {
return err
}
case ' ', '\t', '\n', '\r':
// Drop whitespace from compact source.
default:
if err := buf.WriteByte(c); err != nil {
return err
}
}
}
return nil
}

medium.go (new file, +323 lines)
// SPDX-License-Identifier: EUPL-1.2
package store
import (
"bytes"
"encoding/csv"
core "dappco.re/go/core"
coreio "dappco.re/go/core/io"
)
// Medium is the minimal storage transport used by the go-store workspace
// import and export helpers and by Compact when writing cold archives.
//
// This is an alias of `dappco.re/go/core/io.Medium`, so callers can pass any
// upstream medium implementation directly without an adapter.
//
// Usage example: `medium, _ := local.New("/tmp/exports"); storeInstance, err := store.NewConfigured(store.StoreConfig{DatabasePath: ":memory:", Medium: medium})`
type Medium = coreio.Medium
// WithMedium installs an io.Medium-compatible transport on the Store so that
// Compact archives and Import/Export helpers route through the medium instead
// of the raw filesystem.
//
// Usage example: `medium, _ := local.New("/srv/core"); storeInstance, err := store.New("core.db", store.WithMedium(medium))`
func WithMedium(medium Medium) StoreOption {
return func(storeInstance *Store) {
if storeInstance == nil {
return
}
storeInstance.medium = medium
}
}
// Medium returns the transport configured via WithMedium, or nil when the
// Store writes to the raw filesystem.
//
// Usage example: `medium := storeInstance.Medium(); if medium != nil { _ = medium.EnsureDir("exports") }`
func (storeInstance *Store) Medium() Medium {
if storeInstance == nil {
return nil
}
return storeInstance.medium
}
// Import reads a JSON, JSONL, or CSV payload from the provided medium and
// appends each record to the workspace buffer as a `Put` entry. The format is
// chosen from the file extension: `.json` expects either a top-level array or
// an `{"entries":[...]}` shape, `.jsonl`/`.ndjson` parse line by line, and
// `.csv` uses the first row as the header.
//
// Usage example: `err := store.Import(workspace, medium, "dataset.jsonl")`
func Import(workspace *Workspace, medium Medium, path string) error {
if workspace == nil {
return core.E("store.Import", "workspace is nil", nil)
}
if medium == nil {
return core.E("store.Import", "medium is nil", nil)
}
if path == "" {
return core.E("store.Import", "path is empty", nil)
}
content, err := medium.Read(path)
if err != nil {
return core.E("store.Import", "read from medium", err)
}
kind := importEntryKind(path)
switch lowercaseText(importExtension(path)) {
case ".jsonl", ".ndjson":
return importJSONLines(workspace, kind, content)
case ".csv":
return importCSV(workspace, kind, content)
case ".json":
return importJSON(workspace, kind, content)
default:
return importJSONLines(workspace, kind, content)
}
}
// Export writes the workspace aggregate summary to the medium at the given
// path. The format is chosen from the extension: `.jsonl` writes one record
// per query row, `.csv` writes a header plus rows, and everything else writes
// the aggregate as JSON.
//
// Usage example: `err := store.Export(workspace, medium, "report.json")`
func Export(workspace *Workspace, medium Medium, path string) error {
if workspace == nil {
return core.E("store.Export", "workspace is nil", nil)
}
if medium == nil {
return core.E("store.Export", "medium is nil", nil)
}
if path == "" {
return core.E("store.Export", "path is empty", nil)
}
if err := ensureMediumDir(medium, core.PathDir(path)); err != nil {
return core.E("store.Export", "ensure directory", err)
}
switch lowercaseText(importExtension(path)) {
case ".jsonl", ".ndjson":
return exportJSONLines(workspace, medium, path)
case ".csv":
return exportCSV(workspace, medium, path)
default:
return exportJSON(workspace, medium, path)
}
}
func ensureMediumDir(medium Medium, directory string) error {
if directory == "" || directory == "." || directory == "/" {
return nil
}
if err := medium.EnsureDir(directory); err != nil {
return core.E("store.ensureMediumDir", "ensure directory", err)
}
return nil
}
func importExtension(path string) string {
base := core.PathBase(path)
for i := len(base) - 1; i >= 0; i-- {
if base[i] == '.' {
return base[i:]
}
}
return ""
}
func importEntryKind(path string) string {
base := core.PathBase(path)
for i := len(base) - 1; i >= 0; i-- {
if base[i] == '.' {
base = base[:i]
break
}
}
if base == "" {
return "entry"
}
return base
}
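The two helpers above derive both the dispatch extension and the entry kind from the final dot of the base name. A minimal standalone sketch of the same right-to-left scans, using only the standard library (`extensionOf` and `kindOf` are hypothetical names, not part of the package):

```go
package main

import "fmt"

// extensionOf mirrors importExtension: scan the base name from the right and
// return everything from the final '.' onward, or "" when there is no dot.
func extensionOf(base string) string {
	for i := len(base) - 1; i >= 0; i-- {
		if base[i] == '.' {
			return base[i:]
		}
	}
	return ""
}

// kindOf mirrors importEntryKind: the base name with its final extension
// stripped, falling back to "entry" when nothing remains (e.g. ".env").
func kindOf(base string) string {
	for i := len(base) - 1; i >= 0; i-- {
		if base[i] == '.' {
			base = base[:i]
			break
		}
	}
	if base == "" {
		return "entry"
	}
	return base
}

func main() {
	fmt.Println(extensionOf("findings.csv"), kindOf("findings.csv")) // .csv findings
	fmt.Println(extensionOf("archive.tar.gz"), kindOf("archive.tar.gz"))
	fmt.Println(extensionOf(".env"), kindOf(".env")) // .env entry
}
```

Note that only the last extension is stripped, so `archive.tar.gz` imports as kind `archive.tar`.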
func importJSONLines(workspace *Workspace, kind, content string) error {
scanner := core.Split(content, "\n")
for _, rawLine := range scanner {
line := core.Trim(rawLine)
if line == "" {
continue
}
record := map[string]any{}
if result := core.JSONUnmarshalString(line, &record); !result.OK {
err, _ := result.Value.(error)
return core.E("store.Import", "parse jsonl line", err)
}
if err := workspace.Put(kind, record); err != nil {
return core.E("store.Import", "put jsonl record", err)
}
}
return nil
}
func importJSON(workspace *Workspace, kind, content string) error {
trimmed := core.Trim(content)
if trimmed == "" {
return nil
}
var topLevel any
if result := core.JSONUnmarshalString(trimmed, &topLevel); !result.OK {
err, _ := result.Value.(error)
return core.E("store.Import", "parse json", err)
}
records, err := collectJSONRecords(topLevel)
if err != nil {
return core.E("store.Import", "normalise json records", err)
}
for _, record := range records {
if err := workspace.Put(kind, record); err != nil {
return core.E("store.Import", "put json record", err)
}
}
return nil
}
func collectJSONRecords(value any) ([]map[string]any, error) {
switch shape := value.(type) {
case []any:
records := make([]map[string]any, 0, len(shape))
for index, entry := range shape {
record, ok := entry.(map[string]any)
if !ok {
return nil, core.E("store.Import", core.Concat("json array element is not an object at index ", core.Sprint(index)), nil)
}
records = append(records, record)
}
return records, nil
case map[string]any:
if nested, ok := shape["entries"].([]any); ok {
return collectJSONRecords(nested)
}
if nested, ok := shape["records"].([]any); ok {
return collectJSONRecords(nested)
}
if nested, ok := shape["data"].([]any); ok {
return collectJSONRecords(nested)
}
return []map[string]any{shape}, nil
}
return nil, core.E("store.Import", "unsupported json shape", nil)
}
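The shapes collectJSONRecords accepts can be exercised in isolation. A sketch with the same dispatch, depending only on encoding/json (`collect` here is a hypothetical stand-in for the unexported helper):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// collect mirrors collectJSONRecords: a top-level array of objects imports
// directly; an object wrapping the array under "entries", "records", or
// "data" unwraps recursively; a bare object imports as a single record.
func collect(value any) ([]map[string]any, error) {
	switch shape := value.(type) {
	case []any:
		records := make([]map[string]any, 0, len(shape))
		for index, entry := range shape {
			record, ok := entry.(map[string]any)
			if !ok {
				return nil, fmt.Errorf("element %d is not an object", index)
			}
			records = append(records, record)
		}
		return records, nil
	case map[string]any:
		for _, key := range []string{"entries", "records", "data"} {
			if nested, ok := shape[key].([]any); ok {
				return collect(nested)
			}
		}
		return []map[string]any{shape}, nil
	}
	return nil, fmt.Errorf("unsupported json shape")
}

func main() {
	var top any
	_ = json.Unmarshal([]byte(`{"entries":[{"a":1},{"a":2}]}`), &top)
	records, err := collect(top)
	fmt.Println(len(records), err) // 2 <nil>
}
```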
func importCSV(workspace *Workspace, kind, content string) error {
reader := csv.NewReader(bytes.NewBufferString(content))
reader.FieldsPerRecord = -1
rows, err := reader.ReadAll()
if err != nil {
return core.E("store.Import", "parse csv", err)
}
if len(rows) == 0 {
return nil
}
header := rows[0]
if len(header) == 0 {
return nil
}
for _, fields := range rows[1:] {
if len(fields) == 0 {
continue
}
record := make(map[string]any, len(header))
for columnIndex, columnName := range header {
if columnIndex < len(fields) {
record[columnName] = fields[columnIndex]
} else {
record[columnName] = ""
}
}
if err := workspace.Put(kind, record); err != nil {
return core.E("store.Import", "put csv record", err)
}
}
return nil
}
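Because `FieldsPerRecord = -1` disables the column-count check, ragged rows parse cleanly and any columns missing from a short row are padded with `""`. A self-contained sketch of that behaviour (`recordsFrom` is a hypothetical helper, not package API):

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// recordsFrom mirrors importCSV's row handling: the first row is the header,
// ragged rows are accepted, and short rows are padded with empty strings.
func recordsFrom(text string) ([]map[string]string, error) {
	reader := csv.NewReader(strings.NewReader(text))
	reader.FieldsPerRecord = -1 // accept a variable number of fields per row
	rows, err := reader.ReadAll()
	if err != nil || len(rows) == 0 {
		return nil, err
	}
	header := rows[0]
	records := make([]map[string]string, 0, len(rows)-1)
	for _, fields := range rows[1:] {
		record := make(map[string]string, len(header))
		for columnIndex, columnName := range header {
			if columnIndex < len(fields) {
				record[columnName] = fields[columnIndex]
			} else {
				record[columnName] = "" // pad missing trailing columns
			}
		}
		records = append(records, record)
	}
	return records, nil
}

func main() {
	records, err := recordsFrom("tool,severity\ngosec,high\ngolint\n")
	fmt.Println(len(records), records[1]["severity"] == "", err) // 2 true <nil>
}
```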
func exportJSON(workspace *Workspace, medium Medium, path string) error {
summary, err := workspace.aggregateFields()
if err != nil {
return core.E("store.Export", "aggregate workspace", err)
}
content := core.JSONMarshalString(summary)
if err := medium.Write(path, content); err != nil {
return core.E("store.Export", "write json", err)
}
return nil
}
func exportJSONLines(workspace *Workspace, medium Medium, path string) error {
result := workspace.Query("SELECT entry_kind, entry_data, created_at FROM workspace_entries ORDER BY entry_id")
if !result.OK {
err, _ := result.Value.(error)
return core.E("store.Export", "query workspace", err)
}
rows, ok := result.Value.([]map[string]any)
if !ok {
rows = nil
}
builder := core.NewBuilder()
for _, row := range rows {
line := core.JSONMarshalString(row)
builder.WriteString(line)
builder.WriteString("\n")
}
if err := medium.Write(path, builder.String()); err != nil {
return core.E("store.Export", "write jsonl", err)
}
return nil
}
func exportCSV(workspace *Workspace, medium Medium, path string) error {
result := workspace.Query("SELECT entry_kind, entry_data, created_at FROM workspace_entries ORDER BY entry_id")
if !result.OK {
err, _ := result.Value.(error)
return core.E("store.Export", "query workspace", err)
}
rows, ok := result.Value.([]map[string]any)
if !ok {
rows = nil
}
builder := core.NewBuilder()
builder.WriteString("entry_kind,entry_data,created_at\n")
for _, row := range rows {
builder.WriteString(csvField(core.Sprint(row["entry_kind"])))
builder.WriteString(",")
builder.WriteString(csvField(core.Sprint(row["entry_data"])))
builder.WriteString(",")
builder.WriteString(csvField(core.Sprint(row["created_at"])))
builder.WriteString("\n")
}
if err := medium.Write(path, builder.String()); err != nil {
return core.E("store.Export", "write csv", err)
}
return nil
}
func csvField(value string) string {
needsQuote := false
for index := 0; index < len(value) && !needsQuote; index++ {
switch value[index] {
case ',', '"', '\n', '\r':
needsQuote = true
}
}
if !needsQuote {
return value
}
escaped := core.Replace(value, `"`, `""`)
return core.Concat(`"`, escaped, `"`)
}
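csvField's rule can be restated with the standard library: plain values pass through untouched, and a value containing a comma, quote, CR, or LF is wrapped in quotes with inner quotes doubled, RFC 4180 style. A sketch (`quoteCSV` is a hypothetical name):

```go
package main

import (
	"fmt"
	"strings"
)

// quoteCSV mirrors csvField using strings.ContainsAny for the scan: quote
// only when a delimiter-significant byte is present, doubling inner quotes.
func quoteCSV(value string) string {
	if !strings.ContainsAny(value, ",\"\n\r") {
		return value
	}
	return `"` + strings.ReplaceAll(value, `"`, `""`) + `"`
}

func main() {
	fmt.Println(quoteCSV("plain"))         // plain
	fmt.Println(quoteCSV(`say "hi", now`)) // "say ""hi"", now"
}
```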

570
medium_test.go Normal file

@ -0,0 +1,570 @@
// SPDX-License-Identifier: EUPL-1.2
package store
import (
"bytes"
goio "io"
"io/fs"
"sync"
"testing"
"time"
core "dappco.re/go/core"
)
// memoryMedium is an in-memory implementation of `store.Medium` used by the
// medium tests so assertions do not depend on the local filesystem.
type memoryMedium struct {
lock sync.Mutex
files map[string]string
}
func newMemoryMedium() *memoryMedium {
return &memoryMedium{files: make(map[string]string)}
}
func (medium *memoryMedium) Read(path string) (string, error) {
medium.lock.Lock()
defer medium.lock.Unlock()
content, ok := medium.files[path]
if !ok {
return "", core.E("memoryMedium.Read", "file not found: "+path, nil)
}
return content, nil
}
func (medium *memoryMedium) Write(path, content string) error {
medium.lock.Lock()
defer medium.lock.Unlock()
medium.files[path] = content
return nil
}
func (medium *memoryMedium) WriteMode(path, content string, _ fs.FileMode) error {
return medium.Write(path, content)
}
func (medium *memoryMedium) EnsureDir(string) error { return nil }
func (medium *memoryMedium) Create(path string) (goio.WriteCloser, error) {
return &memoryWriter{medium: medium, path: path}, nil
}
func (medium *memoryMedium) Append(path string) (goio.WriteCloser, error) {
medium.lock.Lock()
defer medium.lock.Unlock()
return &memoryWriter{medium: medium, path: path, buffer: *bytes.NewBufferString(medium.files[path])}, nil
}
func (medium *memoryMedium) ReadStream(path string) (goio.ReadCloser, error) {
medium.lock.Lock()
defer medium.lock.Unlock()
return goio.NopCloser(bytes.NewReader([]byte(medium.files[path]))), nil
}
func (medium *memoryMedium) WriteStream(path string) (goio.WriteCloser, error) {
return medium.Create(path)
}
func (medium *memoryMedium) Exists(path string) bool {
medium.lock.Lock()
defer medium.lock.Unlock()
_, ok := medium.files[path]
return ok
}
func (medium *memoryMedium) IsFile(path string) bool { return medium.Exists(path) }
func (medium *memoryMedium) Delete(path string) error {
medium.lock.Lock()
defer medium.lock.Unlock()
delete(medium.files, path)
return nil
}
func (medium *memoryMedium) DeleteAll(path string) error {
medium.lock.Lock()
defer medium.lock.Unlock()
for key := range medium.files {
if key == path || core.HasPrefix(key, path+"/") {
delete(medium.files, key)
}
}
return nil
}
func (medium *memoryMedium) Rename(oldPath, newPath string) error {
medium.lock.Lock()
defer medium.lock.Unlock()
content, ok := medium.files[oldPath]
if !ok {
return core.E("memoryMedium.Rename", "file not found: "+oldPath, nil)
}
medium.files[newPath] = content
delete(medium.files, oldPath)
return nil
}
type renameFailMedium struct {
*memoryMedium
}
func (medium *renameFailMedium) Rename(string, string) error {
return core.E("renameFailMedium.Rename", "forced rename failure", nil)
}
type writeFailOnceMedium struct {
*memoryMedium
failures int
}
func (medium *writeFailOnceMedium) Write(path, content string) error {
if medium.failures > 0 {
medium.failures--
return core.E("writeFailOnceMedium.Write", "forced write failure", nil)
}
return medium.memoryMedium.Write(path, content)
}
func (medium *memoryMedium) List(path string) ([]fs.DirEntry, error) { return nil, nil }
func (medium *memoryMedium) Stat(path string) (fs.FileInfo, error) {
if !medium.Exists(path) {
return nil, core.E("memoryMedium.Stat", "file not found: "+path, nil)
}
return fileInfoStub{name: core.PathBase(path)}, nil
}
func (medium *memoryMedium) Open(path string) (fs.File, error) {
if !medium.Exists(path) {
return nil, core.E("memoryMedium.Open", "file not found: "+path, nil)
}
return newMemoryFile(path, medium.files[path]), nil
}
func (medium *memoryMedium) IsDir(string) bool { return false }
type memoryWriter struct {
medium *memoryMedium
path string
buffer bytes.Buffer
closed bool
}
func (writer *memoryWriter) Write(data []byte) (int, error) {
return writer.buffer.Write(data)
}
func (writer *memoryWriter) Close() error {
if writer.closed {
return nil
}
writer.closed = true
return writer.medium.Write(writer.path, writer.buffer.String())
}
type fileInfoStub struct {
name string
}
func (fileInfoStub) Size() int64 { return 0 }
func (fileInfoStub) Mode() fs.FileMode { return 0 }
func (fileInfoStub) ModTime() time.Time { return time.Time{} }
func (fileInfoStub) IsDir() bool { return false }
func (fileInfoStub) Sys() any { return nil }
func (info fileInfoStub) Name() string { return info.name }
type memoryFile struct {
*bytes.Reader
name string
}
func newMemoryFile(name, content string) *memoryFile {
return &memoryFile{Reader: bytes.NewReader([]byte(content)), name: name}
}
func (file *memoryFile) Stat() (fs.FileInfo, error) {
return fileInfoStub{name: core.PathBase(file.name)}, nil
}
func (file *memoryFile) Close() error { return nil }
// Ensure memoryMedium still satisfies the internal Medium contract.
var _ Medium = (*memoryMedium)(nil)
// Compile-time check for fs.FileInfo usage in the tests.
var _ fs.FileInfo = (*FileInfoStub)(nil)
type FileInfoStub struct{}
func (FileInfoStub) Name() string { return "" }
func (FileInfoStub) Size() int64 { return 0 }
func (FileInfoStub) Mode() fs.FileMode { return 0 }
func (FileInfoStub) ModTime() time.Time { return time.Time{} }
func (FileInfoStub) IsDir() bool { return false }
func (FileInfoStub) Sys() any { return nil }
func TestMedium_WithMedium_Good(t *testing.T) {
useWorkspaceStateDirectory(t)
medium := newMemoryMedium()
storeInstance, err := New(":memory:", WithMedium(medium))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertSamef(t, medium, storeInstance.Medium(), "medium should round-trip via accessor")
assertSamef(t, medium, storeInstance.Config().Medium, "medium should appear in Config()")
}
func TestMedium_WithMedium_Bad_NilKeepsFilesystemBackend(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertNil(t, storeInstance.Medium())
}
func TestMedium_WithMedium_Good_PersistsDatabaseThroughMedium(t *testing.T) {
useWorkspaceStateDirectory(t)
medium := newMemoryMedium()
storeInstance, err := New("app.db", WithMedium(medium))
assertNoError(t, err)
assertNoError(t, storeInstance.Set("g", "k", "v"))
assertNoError(t, storeInstance.Close())
reopenedStore, err := New("app.db", WithMedium(medium))
assertNoError(t, err)
defer func() { _ = reopenedStore.Close() }()
value, err := reopenedStore.Get("g", "k")
assertNoError(t, err)
assertEqual(t, "v", value)
assertTrue(t, medium.Exists("app.db"))
}
func TestMedium_Import_Good_JSONL(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-import-jsonl")
assertNoError(t, err)
defer workspace.Discard()
medium := newMemoryMedium()
assertNoError(t, medium.Write("data.jsonl", `{"user":"@alice"}
{"user":"@bob"}
`))
assertNoError(t, Import(workspace, medium, "data.jsonl"))
rows := requireResultRows(t, workspace.Query("SELECT entry_kind, entry_data FROM workspace_entries ORDER BY entry_id"))
assertLen(t, rows, 2)
assertEqual(t, "data", rows[0]["entry_kind"])
assertContainsElement(t, rows[0]["entry_data"], "@alice")
assertContainsElement(t, rows[1]["entry_data"], "@bob")
}
func TestMedium_Import_Good_JSONArray(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-import-json-array")
assertNoError(t, err)
defer workspace.Discard()
medium := newMemoryMedium()
assertNoError(t, medium.Write("users.json", `[{"name":"Alice"},{"name":"Bob"},{"name":"Carol"}]`))
assertNoError(t, Import(workspace, medium, "users.json"))
assertEqual(t, map[string]any{"users": 3}, workspace.Aggregate())
}
func TestMedium_Import_Good_CSV(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-import-csv")
assertNoError(t, err)
defer workspace.Discard()
medium := newMemoryMedium()
assertNoError(t, medium.Write("findings.csv", "tool,severity\ngosec,high\ngolint,low\n"))
assertNoError(t, Import(workspace, medium, "findings.csv"))
assertEqual(t, map[string]any{"findings": 2}, workspace.Aggregate())
}
func TestMedium_Import_Good_CSVQuotedMultiline(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-import-csv-multiline")
assertNoError(t, err)
defer workspace.Discard()
medium := newMemoryMedium()
assertNoError(t, medium.Write("notes.csv", "name,note\nAlice,\"hello\nworld\"\n"))
assertNoError(t, Import(workspace, medium, "notes.csv"))
assertEqual(t, map[string]any{"notes": 1}, workspace.Aggregate())
}
func TestMedium_Import_Bad_JSONArrayNonObject(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-import-json-non-object")
assertNoError(t, err)
defer workspace.Discard()
medium := newMemoryMedium()
assertNoError(t, medium.Write("users.json", `[{"name":"Alice"},"Bob"]`))
assertError(t, Import(workspace, medium, "users.json"))
count, err := workspace.Count()
assertNoError(t, err)
assertEqual(t, 0, count)
}
func TestMedium_Import_Bad_MalformedCSV(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-import-csv-bad")
assertNoError(t, err)
defer workspace.Discard()
medium := newMemoryMedium()
assertNoError(t, medium.Write("findings.csv", "tool,severity\ngosec,\"high\n"))
assertError(t, Import(workspace, medium, "findings.csv"))
count, err := workspace.Count()
assertNoError(t, err)
assertEqual(t, 0, count)
}
func TestMedium_Import_Bad_NilArguments(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-import-bad")
assertNoError(t, err)
defer workspace.Discard()
medium := newMemoryMedium()
assertError(t, Import(nil, medium, "data.json"))
assertError(t, Import(workspace, nil, "data.json"))
assertError(t, Import(workspace, medium, ""))
}
func TestMedium_Import_Ugly_MissingFileReturnsError(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-import-missing")
assertNoError(t, err)
defer workspace.Discard()
medium := newMemoryMedium()
assertError(t, Import(workspace, medium, "ghost.jsonl"))
}
func TestMedium_Export_Good_JSON(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-export-json")
assertNoError(t, err)
defer workspace.Discard()
assertNoError(t, workspace.Put("like", map[string]any{"user": "@alice"}))
assertNoError(t, workspace.Put("like", map[string]any{"user": "@bob"}))
assertNoError(t, workspace.Put("profile_match", map[string]any{"user": "@carol"}))
medium := newMemoryMedium()
assertNoError(t, Export(workspace, medium, "report.json"))
assertTrue(t, medium.Exists("report.json"))
content, err := medium.Read("report.json")
assertNoError(t, err)
assertContainsString(t, content, `"like":2`)
assertContainsString(t, content, `"profile_match":1`)
}
func TestMedium_Export_Good_JSONLines(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-export-jsonl")
assertNoError(t, err)
defer workspace.Discard()
assertNoError(t, workspace.Put("like", map[string]any{"user": "@alice"}))
assertNoError(t, workspace.Put("like", map[string]any{"user": "@bob"}))
medium := newMemoryMedium()
assertNoError(t, Export(workspace, medium, "report.jsonl"))
content, err := medium.Read("report.jsonl")
assertNoError(t, err)
lines := 0
for _, line := range splitNewlines(content) {
if line != "" {
lines++
}
}
assertEqual(t, 2, lines)
}
func TestMedium_Export_Bad_NilArguments(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-export-bad")
assertNoError(t, err)
defer workspace.Discard()
medium := newMemoryMedium()
assertError(t, Export(nil, medium, "report.json"))
assertError(t, Export(workspace, nil, "report.json"))
assertError(t, Export(workspace, medium, ""))
}
func TestMedium_Export_Bad_JSONPropagatesWorkspaceFailure(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-export-json-closed")
assertNoError(t, err)
assertNoError(t, workspace.Put("like", map[string]any{"user": "@alice"}))
assertNoError(t, workspace.Close())
medium := newMemoryMedium()
assertNoError(t, medium.Write("report.json", `{"previous":true}`))
err = Export(workspace, medium, "report.json")
assertError(t, err)
assertContainsString(t, err.Error(), "aggregate workspace")
content, readErr := medium.Read("report.json")
assertNoError(t, readErr)
assertEqual(t, `{"previous":true}`, content)
}
func TestMedium_Compact_Good_MediumRoutesArchive(t *testing.T) {
useWorkspaceStateDirectory(t)
useArchiveOutputDirectory(t)
medium := newMemoryMedium()
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"), WithMedium(medium))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("jobs", map[string]any{"count": 3}, map[string]string{"workspace": "jobs-1"}).OK)
result := storeInstance.Compact(CompactOptions{
Before: time.Now().Add(time.Minute),
Output: "archive/",
Format: "gzip",
})
assertTruef(t, result.OK, "compact result: %v", result.Value)
outputPath, ok := result.Value.(string)
assertTrue(t, ok)
assertNotEmpty(t, outputPath)
assertTruef(t, medium.Exists(outputPath), "compact should write through medium at %s", outputPath)
}
func TestMedium_Compact_Bad_PreservesStagedArchiveWhenPublishFails(t *testing.T) {
useWorkspaceStateDirectory(t)
useArchiveOutputDirectory(t)
medium := &renameFailMedium{memoryMedium: newMemoryMedium()}
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"), WithMedium(medium))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("jobs", map[string]any{"count": 3}, map[string]string{"workspace": "jobs-1"}).OK)
result := storeInstance.Compact(CompactOptions{
Before: time.Now().Add(time.Minute),
Output: "archive/",
Format: "gzip",
})
assertFalse(t, result.OK)
stagedArchiveFound := false
medium.lock.Lock()
for path := range medium.files {
if core.HasSuffix(path, ".tmp") {
stagedArchiveFound = true
}
}
medium.lock.Unlock()
assertTrue(t, stagedArchiveFound)
}
func splitNewlines(content string) []string {
var result []string
current := core.NewBuilder()
for index := 0; index < len(content); index++ {
character := content[index]
if character == '\n' {
result = append(result, current.String())
current.Reset()
continue
}
current.WriteByte(character)
}
if current.Len() > 0 {
result = append(result, current.String())
}
return result
}

91
parquet.go Normal file

@ -0,0 +1,91 @@
// SPDX-License-Identifier: EUPL-1.2
package store
import core "dappco.re/go/core"
// ChatMessage represents a single message in a chat conversation, used for
// reading JSONL training data during data import.
//
// Usage example:
//
// msg := store.ChatMessage{Role: "user", Content: "What is sovereignty?"}
type ChatMessage struct {
// Role is the message author role (e.g. "user", "assistant", "system").
//
// Usage example:
//
// msg.Role // "user"
Role string `json:"role"`
// Content is the message text.
//
// Usage example:
//
// msg.Content // "What is sovereignty?"
Content string `json:"content"`
}
// ParquetRow describes the lightweight row shape used by external Parquet
// exporters.
//
// Usage example:
//
// row := store.ParquetRow{Prompt: "What is sovereignty?", Response: "Sovereignty is...", System: "You are LEM."}
type ParquetRow struct {
// Prompt is the user prompt text.
//
// Usage example:
//
// row.Prompt // "What is sovereignty?"
Prompt string `parquet:"prompt"`
// Response is the assistant response text.
//
// Usage example:
//
// row.Response // "Sovereignty is..."
Response string `parquet:"response"`
// System is the system prompt text.
//
// Usage example:
//
// row.System // "You are LEM."
System string `parquet:"system"`
// Messages is the JSON-encoded full conversation messages.
//
// Usage example:
//
// row.Messages // `[{"role":"user","content":"What is sovereignty?"}]`
Messages string `parquet:"messages"`
}
// ExportParquet reports that Parquet export is intentionally kept outside the
// core package dependency graph.
//
// Usage example:
//
// _, err := store.ExportParquet("/Volumes/Data/lem/training", "/Volumes/Data/lem/parquet")
func ExportParquet(trainingDir, outputDir string) (int, error) {
return 0, core.E(
"store.ExportParquet",
"Parquet export requires an external tool so core does not ship a runtime Parquet dependency",
nil,
)
}
// ExportSplitParquet reports that split-level Parquet export is intentionally
// kept outside the core package dependency graph.
//
// Usage example:
//
// _, err := store.ExportSplitParquet("/data/train.jsonl", "/data/parquet", "train")
func ExportSplitParquet(jsonlPath, outputDir, split string) (int, error) {
return 0, core.E(
"store.ExportSplitParquet",
"Parquet export requires an external tool so core does not ship a runtime Parquet dependency",
nil,
)
}

11
path_test.go Normal file

@ -0,0 +1,11 @@
package store
import (
"testing"
)
func TestPath_Normalise_Good_TrailingSlashes(t *testing.T) {
assertEqual(t, ".core/state/scroll-session.duckdb", workspaceFilePath(".core/state/", "scroll-session"))
assertEqual(t, ".core/archive/journal-20260404-010203.jsonl.gz", joinPath(".core/archive/", "journal-20260404-010203.jsonl.gz"))
assertEqual(t, ".core/archive", normaliseDirectoryPath(".core/archive///"))
}

312
publish.go Normal file

@ -0,0 +1,312 @@
// SPDX-License-Identifier: EUPL-1.2
package store
import (
"bytes"
"context"
"io"
"io/fs"
"net/http"
"time"
core "dappco.re/go/core"
)
// PublishConfig holds options for the publish operation.
//
// Usage example:
//
// cfg := store.PublishConfig{InputDir: "/data/parquet", Repo: "snider/lem-training", Public: true}
type PublishConfig struct {
// InputDir is the directory containing Parquet files to upload.
//
// Usage example:
//
// cfg.InputDir // "/data/parquet"
InputDir string
// Repo is the HuggingFace dataset repository (e.g. "user/dataset").
//
// Usage example:
//
// cfg.Repo // "snider/lem-training"
Repo string
// Public sets the dataset visibility to public when true.
//
// Usage example:
//
// cfg.Public // true
Public bool
// Token is the HuggingFace API token. Falls back to HF_TOKEN env or ~/.huggingface/token.
//
// Usage example:
//
// cfg.Token // "hf_..."
Token string
// Context controls cancellation for HuggingFace API requests. When nil,
// Publish uses context.Background().
//
// Usage example:
//
// cfg.Context = context.Background()
Context context.Context
// DryRun lists files that would be uploaded without actually uploading.
//
// Usage example:
//
// cfg.DryRun // true
DryRun bool
}
// uploadEntry pairs a local file path with its remote destination.
type uploadEntry struct {
local string
remote string
}
// Publish uploads Parquet files to HuggingFace Hub.
//
// It looks for train.parquet, valid.parquet, and test.parquet in InputDir,
// plus an optional dataset_card.md in the parent directory (uploaded as README.md).
// The token is resolved from PublishConfig.Token, the HF_TOKEN environment variable,
// or ~/.huggingface/token, in that order.
//
// Usage example:
//
// err := store.Publish(store.PublishConfig{InputDir: "/data/parquet", Repo: "snider/lem-training"}, os.Stdout)
func Publish(cfg PublishConfig, w io.Writer) error {
if cfg.InputDir == "" {
return core.E("store.Publish", "input directory is required", nil)
}
if cfg.Repo == "" {
return core.E("store.Publish", "repository is required", nil)
}
publishContext := cfg.Context
if publishContext == nil {
publishContext = context.Background()
}
token := resolveHFToken(cfg.Token)
if token == "" && !cfg.DryRun {
return core.E("store.Publish", "HuggingFace token required (--token, HF_TOKEN env, or ~/.huggingface/token)", nil)
}
files, hasSplit, err := collectUploadFiles(cfg.InputDir)
if err != nil {
return err
}
if !hasSplit {
return core.E("store.Publish", core.Sprintf("no Parquet files found in %s", cfg.InputDir), nil)
}
if cfg.DryRun {
core.Print(w, "Dry run: would publish to %s", cfg.Repo)
if cfg.Public {
core.Print(w, " Visibility: public")
} else {
core.Print(w, " Visibility: private")
}
for _, entry := range files {
statResult := localFs.Stat(entry.local)
if !statResult.OK {
return core.E("store.Publish", core.Sprintf("stat %s", entry.local), statResult.Value.(error))
}
info := statResult.Value.(fs.FileInfo)
sizeMB := float64(info.Size()) / 1024 / 1024
core.Print(w, " %s -> %s (%.1f MB)", core.PathBase(entry.local), entry.remote, sizeMB)
}
return nil
}
core.Print(w, "Publishing to https://huggingface.co/datasets/%s", cfg.Repo)
if err := ensureHFDatasetRepo(publishContext, token, cfg.Repo, cfg.Public); err != nil {
return core.E("store.Publish", "ensure HuggingFace dataset", err)
}
for _, entry := range files {
if err := uploadFileToHF(publishContext, token, cfg.Repo, entry.local, entry.remote); err != nil {
return core.E("store.Publish", core.Sprintf("upload %s", core.PathBase(entry.local)), err)
}
core.Print(w, " Uploaded %s -> %s", core.PathBase(entry.local), entry.remote)
}
core.Print(w, "\nPublished to https://huggingface.co/datasets/%s", cfg.Repo)
return nil
}
// resolveHFToken returns a HuggingFace API token from the given value,
// HF_TOKEN env var, or ~/.huggingface/token file.
func resolveHFToken(explicit string) string {
if explicit != "" {
return explicit
}
if env := core.Env("HF_TOKEN"); env != "" {
return env
}
// Core populates DIR_HOME via os.UserHomeDir; this package honours the
// repository-wide ban on direct os imports, so it reads the env var instead.
homes := []string{core.Env("DIR_HOME")}
if homeEnv := core.Env("HOME"); homeEnv != "" && homeEnv != homes[0] {
homes = append(homes, homeEnv)
}
for _, home := range homes {
if home == "" {
continue
}
r := localFs.Read(core.JoinPath(home, ".huggingface", "token"))
if !r.OK {
continue
}
token := core.Trim(r.Value.(string))
if token != "" {
return token
}
}
return ""
}
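The token resolution above is a strict precedence chain: explicit value, then the HF_TOKEN environment variable, then the first non-empty `~/.huggingface/token` found under the candidate home directories. A sketch of just the precedence logic, with the file lookups stubbed out as a slice (`firstToken` and its parameters are hypothetical, for illustration only):

```go
package main

import "fmt"

// firstToken mirrors resolveHFToken's ordering. fileTokens stands in for the
// per-home ~/.huggingface/token reads (already trimmed).
func firstToken(explicit, env string, fileTokens []string) string {
	if explicit != "" {
		return explicit // 1. explicit flag/config value wins
	}
	if env != "" {
		return env // 2. HF_TOKEN environment variable
	}
	for _, token := range fileTokens {
		if token != "" {
			return token // 3. first non-empty token file
		}
	}
	return "" // nothing found; Publish errors unless DryRun
}

func main() {
	fmt.Println(firstToken("", "hf_env", []string{"hf_file"})) // hf_env
}
```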
// collectUploadFiles finds Parquet split files and an optional dataset card.
func collectUploadFiles(inputDir string) ([]uploadEntry, bool, error) {
splits := []string{"train", "valid", "test"}
var files []uploadEntry
hasSplit := false
for _, split := range splits {
path := core.JoinPath(inputDir, split+".parquet")
if !isFile(path) {
continue
}
files = append(files, uploadEntry{path, core.Sprintf("data/%s.parquet", split)})
hasSplit = true
}
// Check for dataset card in parent directory.
cardPath := core.JoinPath(inputDir, "..", "dataset_card.md")
if isFile(cardPath) {
files = append(files, uploadEntry{cardPath, "README.md"})
}
return files, hasSplit, nil
}
func ensureHFDatasetRepo(ctx context.Context, token, repoID string, public bool) error {
if repoID == "" {
return core.E("store.ensureHFDatasetRepo", "repository is required", nil)
}
organisation, name := splitHFRepoID(repoID)
if name == "" {
return core.E("store.ensureHFDatasetRepo", "repository name is required", nil)
}
createPayload := map[string]any{
"name": name,
"type": "dataset",
"private": !public,
}
if organisation != "" {
createPayload["organization"] = organisation
}
createStatus, createBody, err := hfJSONRequest(ctx, token, http.MethodPost, "https://huggingface.co/api/repos/create", createPayload)
if err != nil {
return core.E("store.ensureHFDatasetRepo", "create dataset repository", err)
}
if createStatus >= 300 && createStatus != http.StatusConflict {
return core.E("store.ensureHFDatasetRepo", core.Sprintf("create dataset failed: HTTP %d: %s", createStatus, createBody), nil)
}
settingsURL := core.Sprintf("https://huggingface.co/api/repos/dataset/%s/settings", repoID)
settingsStatus, settingsBody, err := hfJSONRequest(ctx, token, http.MethodPut, settingsURL, map[string]any{
"private": !public,
})
if err != nil {
return core.E("store.ensureHFDatasetRepo", "update dataset visibility", err)
}
if settingsStatus >= 300 {
return core.E("store.ensureHFDatasetRepo", core.Sprintf("update dataset visibility failed: HTTP %d: %s", settingsStatus, settingsBody), nil)
}
return nil
}
func splitHFRepoID(repoID string) (organisation string, name string) {
parts := core.Split(repoID, "/")
if len(parts) == 1 {
return "", repoID
}
return parts[0], parts[1]
}
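splitHFRepoID treats a bare name as a personal repo with no organisation. The same split with the standard library instead of the core helpers (a sketch; `splitRepoID` is a hypothetical name):

```go
package main

import (
	"fmt"
	"strings"
)

// splitRepoID mirrors splitHFRepoID: "org/name" yields its two halves, and a
// bare name yields an empty organisation.
func splitRepoID(repoID string) (organisation, name string) {
	parts := strings.Split(repoID, "/")
	if len(parts) == 1 {
		return "", repoID
	}
	return parts[0], parts[1]
}

func main() {
	org, name := splitRepoID("snider/lem-training")
	fmt.Println(org, name) // snider lem-training
}
```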
func hfJSONRequest(ctx context.Context, token, method, url string, payload map[string]any) (int, string, error) {
payloadJSON := core.JSONMarshalString(payload)
req, err := http.NewRequestWithContext(ctx, method, url, bytes.NewBufferString(payloadJSON))
if err != nil {
return 0, "", core.E("store.hfJSONRequest", "create request", err)
}
req.Header.Set("Authorization", "Bearer "+token)
req.Header.Set("Content-Type", "application/json")
client := &http.Client{Timeout: 120 * time.Second}
resp, err := client.Do(req)
if err != nil {
return 0, "", core.E("store.hfJSONRequest", "send request", err)
}
defer func() {
_ = resp.Body.Close()
}()
body, err := io.ReadAll(resp.Body)
if err != nil {
return resp.StatusCode, "", core.E("store.hfJSONRequest", "read response body", err)
}
return resp.StatusCode, string(body), nil
}
// uploadFileToHF uploads a single file to a HuggingFace dataset repo via the
// Hub API.
func uploadFileToHF(ctx context.Context, token, repoID, localPath, remotePath string) error {
openResult := localFs.Open(localPath)
if !openResult.OK {
return core.E("store.uploadFileToHF", core.Sprintf("open %s", localPath), openResult.Value.(error))
}
file := openResult.Value.(fs.File)
defer func() { _ = file.Close() }()
url := core.Sprintf("https://huggingface.co/api/datasets/%s/upload/main/%s", repoID, remotePath)
req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, file)
if err != nil {
return core.E("store.uploadFileToHF", "create request", err)
}
req.Header.Set("Authorization", "Bearer "+token)
req.Header.Set("Content-Type", "application/octet-stream")
if stat, err := file.Stat(); err == nil {
req.ContentLength = stat.Size()
}
// No client timeout here: uploads may be arbitrarily large; cancellation comes from ctx.
client := &http.Client{}
resp, err := client.Do(req)
if err != nil {
return core.E("store.uploadFileToHF", "upload request", err)
}
defer func() {
_ = resp.Body.Close()
}()
if resp.StatusCode >= 300 {
body, readErr := io.ReadAll(resp.Body)
if readErr != nil {
return core.E("store.uploadFileToHF", "read error response body", readErr)
}
return core.E("store.uploadFileToHF", core.Sprintf("upload failed: HTTP %d: %s", resp.StatusCode, string(body)), nil)
}
return nil
}

publish_test.go (new file, 42 lines)
@@ -0,0 +1,42 @@
package store
import (
"bytes"
"testing"
core "dappco.re/go/core"
)
func TestPublish_Publish_Bad_EmptyRepository(t *testing.T) {
var output bytes.Buffer
err := Publish(PublishConfig{InputDir: t.TempDir(), DryRun: true}, &output)
assertError(t, err)
assertContainsString(t, err.Error(), "repository is required")
}
func TestPublish_Publish_Bad_DatasetCardWithoutParquetSplit(t *testing.T) {
inputDir := core.JoinPath(t.TempDir(), "data")
requireCoreOK(t, testFilesystem().EnsureDir(inputDir))
requireCoreWriteBytes(t, core.JoinPath(inputDir, "..", "dataset_card.md"), []byte("# Dataset\n"))
var output bytes.Buffer
err := Publish(PublishConfig{InputDir: inputDir, Repo: "snider/lem-training", DryRun: true}, &output)
assertError(t, err)
assertContainsString(t, err.Error(), "no Parquet files found")
}
func TestPublish_ResolveHFToken_Good_UserHomeFallback(t *testing.T) {
homeDirectory := t.TempDir()
t.Setenv("HF_TOKEN", "")
t.Setenv("DIR_HOME", "")
t.Setenv("HOME", homeDirectory)
tokenDirectory := core.JoinPath(homeDirectory, ".huggingface")
requireCoreOK(t, testFilesystem().EnsureDir(tokenDirectory))
requireCoreWriteBytes(t, core.JoinPath(tokenDirectory, "token"), []byte(" hf_file_token \n"))
assertEqual(t, "hf_file_token", resolveHFToken(""))
}

recover_test.go (new file, 58 lines)
@@ -0,0 +1,58 @@
package store
import "testing"
func TestRecover_Orphans_Good_RecoversOrphan(t *testing.T) {
stateDirectory := useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("recover-good")
assertNoError(t, err)
assertNoError(t, workspace.Put("like", map[string]any{"user": "@alice"}))
assertNoError(t, workspace.Close())
orphans := storeInstance.RecoverOrphans(stateDirectory)
assertLen(t, orphans, 1)
assertEqual(t, "recover-good", orphans[0].Name())
assertEqual(t, map[string]any{"like": 1}, orphans[0].Aggregate())
orphans[0].Discard()
assertFalse(t, testFilesystem().Exists(workspaceFilePath(stateDirectory, "recover-good")))
}
func TestRecover_Orphans_Bad_CorruptMetadataQuarantined(t *testing.T) {
stateDirectory := useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
corruptDatabasePath := workspaceFilePath(stateDirectory, "recover-bad")
requireCoreWriteBytes(t, corruptDatabasePath, []byte("not a duckdb database"))
requireCoreWriteBytes(t, corruptDatabasePath+".wal", []byte("wal"))
orphans := storeInstance.RecoverOrphans(stateDirectory)
assertLen(t, orphans, 0)
assertFalse(t, testFilesystem().Exists(corruptDatabasePath))
assertFalse(t, testFilesystem().Exists(corruptDatabasePath+".wal"))
quarantinePath := workspaceQuarantineFilePath(stateDirectory, corruptDatabasePath)
assertTrue(t, testFilesystem().Exists(quarantinePath))
assertTrue(t, testFilesystem().Exists(quarantinePath+".wal"))
assertEqual(t, "not a duckdb database", string(requireCoreReadBytes(t, quarantinePath)))
}
func TestRecover_Orphans_Ugly_NoOrphansNoop(t *testing.T) {
stateDirectory := useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
orphans := storeInstance.RecoverOrphans(stateDirectory)
assertLen(t, orphans, 0)
assertFalse(t, testFilesystem().Exists(joinPath(stateDirectory, workspaceQuarantineDirName)))
}

scope.go (1037 lines changed)
File diff suppressed because it is too large.

store.go (1324 lines changed)
File diff suppressed because it is too large.

test_asserts_test.go (new file, 342 lines)
@@ -0,0 +1,342 @@
package store
import (
"reflect"
"sort"
"testing"
core "dappco.re/go/core"
)
func assertNoError(t testing.TB, err error) {
t.Helper()
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
}
func assertNoErrorf(t testing.TB, err error, format string, args ...any) {
t.Helper()
if err != nil {
t.Fatalf("unexpected error: %v — "+format, append([]any{err}, args...)...)
}
}
func assertError(t testing.TB, err error) {
t.Helper()
if err == nil {
t.Fatal("expected error, got nil")
}
}
func assertErrorIs(t testing.TB, err, target error) {
t.Helper()
if !errIs(err, target) {
t.Fatalf("expected error matching %v, got %v", target, err)
}
}
func assertEqual(t testing.TB, want, got any) {
t.Helper()
if !reflect.DeepEqual(want, got) {
t.Fatalf("want %v, got %v", want, got)
}
}
func assertEqualf(t testing.TB, want, got any, format string, args ...any) {
t.Helper()
if !reflect.DeepEqual(want, got) {
t.Fatalf("want %v, got %v — "+format, append([]any{want, got}, args...)...)
}
}
func assertTrue(t testing.TB, cond bool) {
t.Helper()
if !cond {
t.Fatal("expected true")
}
}
func assertTruef(t testing.TB, cond bool, format string, args ...any) {
t.Helper()
if !cond {
t.Fatalf("expected true — "+format, args...)
}
}
func assertFalse(t testing.TB, cond bool) {
t.Helper()
if cond {
t.Fatal("expected false")
}
}
func assertFalsef(t testing.TB, cond bool, format string, args ...any) {
t.Helper()
if cond {
t.Fatalf("expected false — "+format, args...)
}
}
func assertNil(t testing.TB, value any) {
t.Helper()
if !isNil(value) {
t.Fatalf("expected nil, got %v", value)
}
}
func assertNilf(t testing.TB, value any, format string, args ...any) {
t.Helper()
if !isNil(value) {
t.Fatalf("expected nil, got %v — "+format, append([]any{value}, args...)...)
}
}
func assertNotNil(t testing.TB, value any) {
t.Helper()
if isNil(value) {
t.Fatal("expected non-nil")
}
}
func assertEmpty(t testing.TB, value any) {
t.Helper()
if !isEmpty(value) {
t.Fatalf("expected empty, got %v", value)
}
}
func assertEmptyf(t testing.TB, value any, format string, args ...any) {
t.Helper()
if !isEmpty(value) {
t.Fatalf("expected empty, got %v — "+format, append([]any{value}, args...)...)
}
}
func assertNotEmpty(t testing.TB, value any) {
t.Helper()
if isEmpty(value) {
t.Fatal("expected non-empty")
}
}
func assertLen(t testing.TB, value any, want int) {
t.Helper()
got := lenOf(value)
if got != want {
t.Fatalf("expected len %d, got %d", want, got)
}
}
func assertLenf(t testing.TB, value any, want int, format string, args ...any) {
t.Helper()
got := lenOf(value)
if got != want {
t.Fatalf("expected len %d, got %d — "+format, append([]any{want, got}, args...)...)
}
}
func assertContainsString(t testing.TB, haystack, needle string) {
t.Helper()
if !stringContains(haystack, needle) {
t.Fatalf("expected %q to contain %q", haystack, needle)
}
}
func assertContainsElement(t testing.TB, collection, element any) {
t.Helper()
if !containsElement(collection, element) {
t.Fatalf("expected collection to contain %v", element)
}
}
func assertElementsMatch(t testing.TB, want, got any) {
t.Helper()
if !elementsMatch(want, got) {
t.Fatalf("expected same elements: want %v, got %v", want, got)
}
}
func assertLessOrEqual(t testing.TB, got, want int) {
t.Helper()
if got > want {
t.Fatalf("expected %d <= %d", got, want)
}
}
func assertSamef(t testing.TB, want, got any, format string, args ...any) {
t.Helper()
if !samePointer(want, got) {
t.Fatalf("expected same pointer, got %v vs %v — "+format, append([]any{want, got}, args...)...)
}
}
func assertGreaterf(t testing.TB, got, want int, format string, args ...any) {
t.Helper()
if got <= want {
t.Fatalf("expected %d > %d — "+format, append([]any{got, want}, args...)...)
}
}
func assertNotPanics(t testing.TB, fn func()) {
t.Helper()
defer func() {
if r := recover(); r != nil {
t.Fatalf("unexpected panic: %v", r)
}
}()
fn()
}
func errIs(err, target error) bool {
return core.Is(err, target)
}
func isNil(value any) bool {
if value == nil {
return true
}
rv := reflect.ValueOf(value)
switch rv.Kind() {
case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return rv.IsNil()
}
return false
}
func isEmpty(value any) bool {
if value == nil {
return true
}
rv := reflect.ValueOf(value)
switch rv.Kind() {
case reflect.Array, reflect.Chan, reflect.Map, reflect.Slice, reflect.String:
return rv.Len() == 0
case reflect.Ptr, reflect.Interface:
if rv.IsNil() {
return true
}
return isEmpty(rv.Elem().Interface())
}
return reflect.DeepEqual(value, reflect.Zero(rv.Type()).Interface())
}
func lenOf(value any) int {
rv := reflect.ValueOf(value)
switch rv.Kind() {
case reflect.Array, reflect.Chan, reflect.Map, reflect.Slice, reflect.String:
return rv.Len()
}
return -1
}
func stringContains(haystack, needle string) bool {
if len(needle) == 0 {
return true
}
if len(needle) > len(haystack) {
return false
}
for i := 0; i+len(needle) <= len(haystack); i++ {
if haystack[i:i+len(needle)] == needle {
return true
}
}
return false
}
func containsElement(collection, element any) bool {
rv := reflect.ValueOf(collection)
switch rv.Kind() {
case reflect.String:
needle, ok := element.(string)
if !ok {
return false
}
return stringContains(rv.String(), needle)
case reflect.Array, reflect.Slice:
for i := 0; i < rv.Len(); i++ {
if reflect.DeepEqual(rv.Index(i).Interface(), element) {
return true
}
}
return false
case reflect.Map:
for _, key := range rv.MapKeys() {
if reflect.DeepEqual(key.Interface(), element) {
return true
}
}
return false
}
return false
}
func elementsMatch(want, got any) bool {
wantSlice := toAnySlice(want)
gotSlice := toAnySlice(got)
if wantSlice == nil || gotSlice == nil {
return false
}
if len(wantSlice) != len(gotSlice) {
return false
}
sortAny(wantSlice)
sortAny(gotSlice)
for i := range wantSlice {
if !reflect.DeepEqual(wantSlice[i], gotSlice[i]) {
return false
}
}
return true
}
func toAnySlice(value any) []any {
rv := reflect.ValueOf(value)
switch rv.Kind() {
case reflect.Array, reflect.Slice:
result := make([]any, rv.Len())
for i := 0; i < rv.Len(); i++ {
result[i] = rv.Index(i).Interface()
}
return result
}
return nil
}
func sortAny(values []any) {
sort.Slice(values, func(i, j int) bool {
return less(values[i], values[j])
})
}
func less(a, b any) bool {
aValue := reflect.ValueOf(a)
bValue := reflect.ValueOf(b)
if aValue.Kind() != bValue.Kind() {
return aValue.Kind() < bValue.Kind()
}
switch aValue.Kind() {
case reflect.String:
return aValue.String() < bValue.String()
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return aValue.Int() < bValue.Int()
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
return aValue.Uint() < bValue.Uint()
case reflect.Float32, reflect.Float64:
return aValue.Float() < bValue.Float()
}
return false
}
func samePointer(want, got any) bool {
wantValue := reflect.ValueOf(want)
gotValue := reflect.ValueOf(got)
if !wantValue.IsValid() || !gotValue.IsValid() {
return false
}
if wantValue.Kind() != reflect.Ptr || gotValue.Kind() != reflect.Ptr {
return false
}
return wantValue.Pointer() == gotValue.Pointer()
}

test_helpers_test.go (new file, 79 lines)
@@ -0,0 +1,79 @@
package store
import (
"testing"
core "dappco.re/go/core"
)
func testFilesystem() *core.Fs {
return (&core.Fs{}).NewUnrestricted()
}
func testPath(tb testing.TB, name string) string {
tb.Helper()
return core.Path(tb.TempDir(), name)
}
func requireCoreOK(tb testing.TB, result core.Result) {
tb.Helper()
assertTruef(tb, result.OK, "core result failed: %v", result.Value)
}
func requireCoreReadBytes(tb testing.TB, path string) []byte {
tb.Helper()
result := testFilesystem().Read(path)
requireCoreOK(tb, result)
return []byte(result.Value.(string))
}
func requireCoreWriteBytes(tb testing.TB, path string, data []byte) {
tb.Helper()
requireCoreOK(tb, testFilesystem().Write(path, string(data)))
}
func repeatString(value string, count int) string {
if count <= 0 {
return ""
}
builder := core.NewBuilder()
for range count {
builder.WriteString(value)
}
return builder.String()
}
func useWorkspaceStateDirectory(tb testing.TB) string {
tb.Helper()
previous := defaultWorkspaceStateDirectory
stateDirectory := testPath(tb, "state")
defaultWorkspaceStateDirectory = stateDirectory
tb.Cleanup(func() {
defaultWorkspaceStateDirectory = previous
_ = testFilesystem().DeleteAll(stateDirectory)
})
return stateDirectory
}
func useArchiveOutputDirectory(tb testing.TB) string {
tb.Helper()
previous := defaultArchiveOutputDirectory
outputDirectory := testPath(tb, "archive")
defaultArchiveOutputDirectory = outputDirectory
tb.Cleanup(func() {
defaultArchiveOutputDirectory = previous
_ = testFilesystem().DeleteAll(outputDirectory)
})
return outputDirectory
}
func requireResultRows(tb testing.TB, result core.Result) []map[string]any {
tb.Helper()
assertTruef(tb, result.OK, "core result failed: %v", result.Value)
rows, ok := result.Value.([]map[string]any)
assertTruef(tb, ok, "unexpected row type: %T", result.Value)
return rows
}

@@ -0,0 +1,30 @@
version: "3"
tasks:
default:
deps: [build, vet, test]
build:
dir: ../../..
cmds:
- go build ./...
vet:
dir: ../../..
cmds:
- go vet ./...
test:
dir: ../../..
cmds:
- go test -count=1 -race ./...
test-memory:
dir: ../../..
cmds:
- go test -count=1 -race -run "^TestStore_.*Memory" ./...
test-workspace:
dir: ../../..
cmds:
- go test -count=1 -race -run "^TestWorkspace_" ./...

transaction.go (new file, 531 lines)
@@ -0,0 +1,531 @@
@ -0,0 +1,531 @@
package store
import (
"database/sql"
"iter"
"text/template"
"time"
core "dappco.re/go/core"
)
// Usage example: `err := storeInstance.Transaction(func(transaction *store.StoreTransaction) error { return transaction.Set("config", "colour", "blue") })`
// Usage example: `if err := transaction.Delete("config", "colour"); err != nil { return err }`
type StoreTransaction struct {
storeInstance *Store
sqliteTransaction *sql.Tx
pendingEvents []Event
}
// Usage example: `err := storeInstance.Transaction(func(transaction *store.StoreTransaction) error { if err := transaction.Set("tenant-a:config", "colour", "blue"); err != nil { return err }; return transaction.Set("tenant-b:config", "language", "en-GB") })`
func (storeInstance *Store) Transaction(operation func(*StoreTransaction) error) error {
if err := storeInstance.ensureReady("store.Transaction"); err != nil {
return err
}
if operation == nil {
return core.E("store.Transaction", "operation is nil", nil)
}
transaction, err := storeInstance.sqliteDatabase.Begin()
if err != nil {
return core.E("store.Transaction", "begin transaction", err)
}
storeTransaction := &StoreTransaction{
storeInstance: storeInstance,
sqliteTransaction: transaction,
}
committed := false
defer func() {
if !committed {
_ = transaction.Rollback()
}
}()
if err := operation(storeTransaction); err != nil {
return core.E("store.Transaction", "execute transaction", err)
}
if err := transaction.Commit(); err != nil {
return core.E("store.Transaction", "commit transaction", err)
}
committed = true
for _, event := range storeTransaction.pendingEvents {
storeInstance.notify(event)
}
return nil
}
func (storeTransaction *StoreTransaction) ensureReady(operation string) error {
if storeTransaction == nil {
return core.E(operation, "transaction is nil", nil)
}
if storeTransaction.storeInstance == nil {
return core.E(operation, "transaction store is nil", nil)
}
if storeTransaction.sqliteTransaction == nil {
return core.E(operation, "transaction database is nil", nil)
}
if err := storeTransaction.storeInstance.ensureReady(operation); err != nil {
return err
}
return nil
}
func (storeTransaction *StoreTransaction) recordEvent(event Event) {
if storeTransaction == nil {
return
}
storeTransaction.pendingEvents = append(storeTransaction.pendingEvents, event)
}
// Usage example: `exists, err := transaction.Exists("config", "colour")`
// Usage example: `if exists, _ := transaction.Exists("session", "token"); !exists { return core.E("auth", "session expired", nil) }`
func (storeTransaction *StoreTransaction) Exists(group, key string) (bool, error) {
if err := storeTransaction.ensureReady("store.Transaction.Exists"); err != nil {
return false, err
}
return liveEntryExists(storeTransaction.sqliteTransaction, group, key)
}
// Usage example: `exists, err := transaction.GroupExists("config")`
func (storeTransaction *StoreTransaction) GroupExists(group string) (bool, error) {
if err := storeTransaction.ensureReady("store.Transaction.GroupExists"); err != nil {
return false, err
}
count, err := storeTransaction.Count(group)
if err != nil {
return false, err
}
return count > 0, nil
}
// Usage example: `value, err := transaction.Get("config", "colour")`
func (storeTransaction *StoreTransaction) Get(group, key string) (string, error) {
if err := storeTransaction.ensureReady("store.Transaction.Get"); err != nil {
return "", err
}
var value string
var expiresAt sql.NullInt64
err := storeTransaction.sqliteTransaction.QueryRow(
"SELECT "+entryValueColumn+", expires_at FROM "+entriesTableName+" WHERE "+entryGroupColumn+" = ? AND "+entryKeyColumn+" = ?",
group, key,
).Scan(&value, &expiresAt)
if err == sql.ErrNoRows {
return "", core.E("store.Transaction.Get", core.Concat(group, "/", key), NotFoundError)
}
if err != nil {
return "", core.E("store.Transaction.Get", "query row", err)
}
if expiresAt.Valid && expiresAt.Int64 <= time.Now().UnixMilli() {
if err := storeTransaction.Delete(group, key); err != nil {
return "", core.E("store.Transaction.Get", "delete expired row", err)
}
return "", core.E("store.Transaction.Get", core.Concat(group, "/", key), NotFoundError)
}
return value, nil
}
// Usage example: `if err := transaction.Set("config", "colour", "blue"); err != nil { return err }`
func (storeTransaction *StoreTransaction) Set(group, key, value string) error {
if err := storeTransaction.ensureReady("store.Transaction.Set"); err != nil {
return err
}
_, err := storeTransaction.sqliteTransaction.Exec(
"INSERT INTO "+entriesTableName+" ("+entryGroupColumn+", "+entryKeyColumn+", "+entryValueColumn+", expires_at) VALUES (?, ?, ?, NULL) "+
"ON CONFLICT("+entryGroupColumn+", "+entryKeyColumn+") DO UPDATE SET "+entryValueColumn+" = excluded."+entryValueColumn+", expires_at = NULL",
group, key, value,
)
if err != nil {
return core.E("store.Transaction.Set", "execute upsert", err)
}
storeTransaction.recordEvent(Event{Type: EventSet, Group: group, Key: key, Value: value, Timestamp: time.Now()})
return nil
}
// Usage example: `if err := transaction.SetWithTTL("session", "token", "abc123", time.Minute); err != nil { return err }`
func (storeTransaction *StoreTransaction) SetWithTTL(group, key, value string, timeToLive time.Duration) error {
if err := storeTransaction.ensureReady("store.Transaction.SetWithTTL"); err != nil {
return err
}
expiresAt := time.Now().Add(timeToLive).UnixMilli()
_, err := storeTransaction.sqliteTransaction.Exec(
"INSERT INTO "+entriesTableName+" ("+entryGroupColumn+", "+entryKeyColumn+", "+entryValueColumn+", expires_at) VALUES (?, ?, ?, ?) "+
"ON CONFLICT("+entryGroupColumn+", "+entryKeyColumn+") DO UPDATE SET "+entryValueColumn+" = excluded."+entryValueColumn+", expires_at = excluded.expires_at",
group, key, value, expiresAt,
)
if err != nil {
return core.E("store.Transaction.SetWithTTL", "execute upsert with expiry", err)
}
storeTransaction.recordEvent(Event{Type: EventSet, Group: group, Key: key, Value: value, Timestamp: time.Now()})
return nil
}
// Usage example: `if err := transaction.Delete("config", "colour"); err != nil { return err }`
func (storeTransaction *StoreTransaction) Delete(group, key string) error {
if err := storeTransaction.ensureReady("store.Transaction.Delete"); err != nil {
return err
}
deleteResult, err := storeTransaction.sqliteTransaction.Exec(
"DELETE FROM "+entriesTableName+" WHERE "+entryGroupColumn+" = ? AND "+entryKeyColumn+" = ?",
group, key,
)
if err != nil {
return core.E("store.Transaction.Delete", "delete row", err)
}
deletedRows, rowsAffectedError := deleteResult.RowsAffected()
if rowsAffectedError != nil {
return core.E("store.Transaction.Delete", "count deleted rows", rowsAffectedError)
}
if deletedRows > 0 {
storeTransaction.recordEvent(Event{Type: EventDelete, Group: group, Key: key, Timestamp: time.Now()})
}
return nil
}
// Usage example: `if err := transaction.DeleteGroup("cache"); err != nil { return err }`
func (storeTransaction *StoreTransaction) DeleteGroup(group string) error {
if err := storeTransaction.ensureReady("store.Transaction.DeleteGroup"); err != nil {
return err
}
deleteResult, err := storeTransaction.sqliteTransaction.Exec(
"DELETE FROM "+entriesTableName+" WHERE "+entryGroupColumn+" = ?",
group,
)
if err != nil {
return core.E("store.Transaction.DeleteGroup", "delete group", err)
}
deletedRows, rowsAffectedError := deleteResult.RowsAffected()
if rowsAffectedError != nil {
return core.E("store.Transaction.DeleteGroup", "count deleted rows", rowsAffectedError)
}
if deletedRows > 0 {
storeTransaction.recordEvent(Event{Type: EventDeleteGroup, Group: group, Timestamp: time.Now()})
}
return nil
}
// Usage example: `if err := transaction.DeletePrefix("tenant-a:"); err != nil { return err }`
func (storeTransaction *StoreTransaction) DeletePrefix(groupPrefix string) error {
if err := storeTransaction.ensureReady("store.Transaction.DeletePrefix"); err != nil {
return err
}
var rows *sql.Rows
var err error
if groupPrefix == "" {
rows, err = storeTransaction.sqliteTransaction.Query(
"SELECT DISTINCT " + entryGroupColumn + " FROM " + entriesTableName + " ORDER BY " + entryGroupColumn,
)
} else {
rows, err = storeTransaction.sqliteTransaction.Query(
"SELECT DISTINCT "+entryGroupColumn+" FROM "+entriesTableName+" WHERE "+entryGroupColumn+" LIKE ? ESCAPE '^' ORDER BY "+entryGroupColumn,
escapeLike(groupPrefix)+"%",
)
}
if err != nil {
return core.E("store.Transaction.DeletePrefix", "list groups", err)
}
defer func() { _ = rows.Close() }()
var groupNames []string
for rows.Next() {
var groupName string
if err := rows.Scan(&groupName); err != nil {
return core.E("store.Transaction.DeletePrefix", "scan group name", err)
}
groupNames = append(groupNames, groupName)
}
if err := rows.Err(); err != nil {
return core.E("store.Transaction.DeletePrefix", "iterate groups", err)
}
for _, groupName := range groupNames {
if err := storeTransaction.DeleteGroup(groupName); err != nil {
return core.E("store.Transaction.DeletePrefix", "delete group", err)
}
}
return nil
}
// Usage example: `keyCount, err := transaction.Count("config")`
func (storeTransaction *StoreTransaction) Count(group string) (int, error) {
if err := storeTransaction.ensureReady("store.Transaction.Count"); err != nil {
return 0, err
}
var count int
err := storeTransaction.sqliteTransaction.QueryRow(
"SELECT COUNT(*) FROM "+entriesTableName+" WHERE "+entryGroupColumn+" = ? AND (expires_at IS NULL OR expires_at > ?)",
group, time.Now().UnixMilli(),
).Scan(&count)
if err != nil {
return 0, core.E("store.Transaction.Count", "count rows", err)
}
return count, nil
}
// Usage example: `colourEntries, err := transaction.GetAll("config")`
func (storeTransaction *StoreTransaction) GetAll(group string) (map[string]string, error) {
if err := storeTransaction.ensureReady("store.Transaction.GetAll"); err != nil {
return nil, err
}
entriesByKey := make(map[string]string)
for entry, err := range storeTransaction.All(group) {
if err != nil {
return nil, core.E("store.Transaction.GetAll", "iterate rows", err)
}
entriesByKey[entry.Key] = entry.Value
}
return entriesByKey, nil
}
// Usage example: `page, err := transaction.GetPage("config", 0, 25); if err != nil { return }; for _, entry := range page { fmt.Println(entry.Key, entry.Value) }`
func (storeTransaction *StoreTransaction) GetPage(group string, offset, limit int) ([]KeyValue, error) {
if err := storeTransaction.ensureReady("store.Transaction.GetPage"); err != nil {
return nil, err
}
if offset < 0 {
return nil, core.E("store.Transaction.GetPage", "offset must be zero or positive", nil)
}
if limit < 0 {
return nil, core.E("store.Transaction.GetPage", "limit must be zero or positive", nil)
}
rows, err := storeTransaction.sqliteTransaction.Query(
"SELECT "+entryKeyColumn+", "+entryValueColumn+" FROM "+entriesTableName+" WHERE "+entryGroupColumn+" = ? AND (expires_at IS NULL OR expires_at > ?) ORDER BY "+entryKeyColumn+" LIMIT ? OFFSET ?",
group, time.Now().UnixMilli(), limit, offset,
)
if err != nil {
return nil, core.E("store.Transaction.GetPage", "query rows", err)
}
defer func() { _ = rows.Close() }()
page := make([]KeyValue, 0, limit)
for rows.Next() {
var entry KeyValue
if err := rows.Scan(&entry.Key, &entry.Value); err != nil {
return nil, core.E("store.Transaction.GetPage", "scan row", err)
}
page = append(page, entry)
}
if err := rows.Err(); err != nil {
return nil, core.E("store.Transaction.GetPage", "rows iteration", err)
}
return page, nil
}
// Usage example: `for entry, err := range transaction.All("config") { if err != nil { break }; fmt.Println(entry.Key, entry.Value) }`
func (storeTransaction *StoreTransaction) All(group string) iter.Seq2[KeyValue, error] {
return storeTransaction.AllSeq(group)
}
// Usage example: `for entry, err := range transaction.AllSeq("config") { if err != nil { break }; fmt.Println(entry.Key, entry.Value) }`
func (storeTransaction *StoreTransaction) AllSeq(group string) iter.Seq2[KeyValue, error] {
return func(yield func(KeyValue, error) bool) {
if err := storeTransaction.ensureReady("store.Transaction.All"); err != nil {
yield(KeyValue{}, err)
return
}
rows, err := storeTransaction.sqliteTransaction.Query(
"SELECT "+entryKeyColumn+", "+entryValueColumn+" FROM "+entriesTableName+" WHERE "+entryGroupColumn+" = ? AND (expires_at IS NULL OR expires_at > ?) ORDER BY "+entryKeyColumn,
group, time.Now().UnixMilli(),
)
if err != nil {
yield(KeyValue{}, core.E("store.Transaction.All", "query rows", err))
return
}
defer func() { _ = rows.Close() }()
for rows.Next() {
var entry KeyValue
if err := rows.Scan(&entry.Key, &entry.Value); err != nil {
if !yield(KeyValue{}, core.E("store.Transaction.All", "scan row", err)) {
return
}
continue
}
if !yield(entry, nil) {
return
}
}
if err := rows.Err(); err != nil {
yield(KeyValue{}, core.E("store.Transaction.All", "rows iteration", err))
}
}
}
// Usage example: `entryCount, err := transaction.CountAll("tenant-a:")`
func (storeTransaction *StoreTransaction) CountAll(groupPrefix string) (int, error) {
if err := storeTransaction.ensureReady("store.Transaction.CountAll"); err != nil {
return 0, err
}
var count int
var err error
if groupPrefix == "" {
err = storeTransaction.sqliteTransaction.QueryRow(
"SELECT COUNT(*) FROM "+entriesTableName+" WHERE (expires_at IS NULL OR expires_at > ?)",
time.Now().UnixMilli(),
).Scan(&count)
} else {
err = storeTransaction.sqliteTransaction.QueryRow(
"SELECT COUNT(*) FROM "+entriesTableName+" WHERE "+entryGroupColumn+" LIKE ? ESCAPE '^' AND (expires_at IS NULL OR expires_at > ?)",
escapeLike(groupPrefix)+"%", time.Now().UnixMilli(),
).Scan(&count)
}
if err != nil {
return 0, core.E("store.Transaction.CountAll", "count rows", err)
}
return count, nil
}
// Usage example: `groupNames, err := transaction.Groups("tenant-a:")`
// Usage example: `groupNames, err := transaction.Groups()`
func (storeTransaction *StoreTransaction) Groups(groupPrefix ...string) ([]string, error) {
if err := storeTransaction.ensureReady("store.Transaction.Groups"); err != nil {
return nil, err
}
var groupNames []string
for groupName, err := range storeTransaction.GroupsSeq(groupPrefix...) {
if err != nil {
return nil, err
}
groupNames = append(groupNames, groupName)
}
return groupNames, nil
}
// Usage example: `for groupName, err := range transaction.GroupsSeq("tenant-a:") { if err != nil { break }; fmt.Println(groupName) }`
// Usage example: `for groupName, err := range transaction.GroupsSeq() { if err != nil { break }; fmt.Println(groupName) }`
func (storeTransaction *StoreTransaction) GroupsSeq(groupPrefix ...string) iter.Seq2[string, error] {
actualGroupPrefix := firstStringOrEmpty(groupPrefix)
return func(yield func(string, error) bool) {
if err := storeTransaction.ensureReady("store.Transaction.GroupsSeq"); err != nil {
yield("", err)
return
}
var rows *sql.Rows
var err error
now := time.Now().UnixMilli()
if actualGroupPrefix == "" {
rows, err = storeTransaction.sqliteTransaction.Query(
"SELECT DISTINCT "+entryGroupColumn+" FROM "+entriesTableName+" WHERE (expires_at IS NULL OR expires_at > ?) ORDER BY "+entryGroupColumn,
now,
)
} else {
rows, err = storeTransaction.sqliteTransaction.Query(
"SELECT DISTINCT "+entryGroupColumn+" FROM "+entriesTableName+" WHERE "+entryGroupColumn+" LIKE ? ESCAPE '^' AND (expires_at IS NULL OR expires_at > ?) ORDER BY "+entryGroupColumn,
escapeLike(actualGroupPrefix)+"%", now,
)
}
if err != nil {
yield("", core.E("store.Transaction.GroupsSeq", "query group names", err))
return
}
defer func() { _ = rows.Close() }()
for rows.Next() {
var groupName string
if err := rows.Scan(&groupName); err != nil {
if !yield("", core.E("store.Transaction.GroupsSeq", "scan group name", err)) {
return
}
continue
}
if !yield(groupName, nil) {
return
}
}
if err := rows.Err(); err != nil {
yield("", core.E("store.Transaction.GroupsSeq", "rows iteration", err))
}
}
}
// Usage example: `renderedTemplate, err := transaction.Render("Hello {{ .name }}", "user")`
func (storeTransaction *StoreTransaction) Render(templateSource, group string) (string, error) {
if err := storeTransaction.ensureReady("store.Transaction.Render"); err != nil {
return "", err
}
templateData := make(map[string]string)
for entry, err := range storeTransaction.All(group) {
if err != nil {
return "", core.E("store.Transaction.Render", "iterate rows", err)
}
templateData[entry.Key] = entry.Value
}
renderTemplate, err := template.New("render").Parse(templateSource)
if err != nil {
return "", core.E("store.Transaction.Render", "parse template", err)
}
builder := core.NewBuilder()
if err := renderTemplate.Execute(builder, templateData); err != nil {
return "", core.E("store.Transaction.Render", "execute template", err)
}
return builder.String(), nil
}
// Usage example: `parts, err := transaction.GetSplit("config", "hosts", ","); if err != nil { return }; for part := range parts { fmt.Println(part) }`
func (storeTransaction *StoreTransaction) GetSplit(group, key, separator string) (iter.Seq[string], error) {
if err := storeTransaction.ensureReady("store.Transaction.GetSplit"); err != nil {
return nil, err
}
value, err := storeTransaction.Get(group, key)
if err != nil {
return nil, err
}
return splitValueSeq(value, separator), nil
}
// Usage example: `fields, err := transaction.GetFields("config", "flags"); if err != nil { return }; for field := range fields { fmt.Println(field) }`
func (storeTransaction *StoreTransaction) GetFields(group, key string) (iter.Seq[string], error) {
if err := storeTransaction.ensureReady("store.Transaction.GetFields"); err != nil {
return nil, err
}
value, err := storeTransaction.Get(group, key)
if err != nil {
return nil, err
}
return fieldsValueSeq(value), nil
}
// Usage example: `removedRows, err := transaction.PurgeExpired(); if err != nil { return err }; fmt.Println(removedRows)`
func (storeTransaction *StoreTransaction) PurgeExpired() (int64, error) {
if err := storeTransaction.ensureReady("store.Transaction.PurgeExpired"); err != nil {
return 0, err
}
cutoffUnixMilli := time.Now().UnixMilli()
expiredEntries, err := deleteExpiredEntriesMatchingGroupPrefix(storeTransaction.sqliteTransaction, "", cutoffUnixMilli)
if err != nil {
return 0, core.E("store.Transaction.PurgeExpired", "delete expired rows", err)
}
removedRows := int64(len(expiredEntries))
if removedRows > 0 {
for _, expiredEntry := range expiredEntries {
storeTransaction.recordEvent(Event{
Type: EventDelete,
Group: expiredEntry.group,
Key: expiredEntry.key,
Timestamp: time.Now(),
})
}
}
return removedRows, nil
}

406
transaction_test.go Normal file

@@ -0,0 +1,406 @@
package store
import (
"iter"
"testing"
"time"
core "dappco.re/go/core"
)
func TestTransaction_Transaction_Good_CommitsMultipleWrites(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch("*")
defer storeInstance.Unwatch("*", events)
err := storeInstance.Transaction(func(transaction *StoreTransaction) error {
if err := transaction.Set("alpha", "first", "1"); err != nil {
return err
}
if err := transaction.Set("beta", "second", "2"); err != nil {
return err
}
return nil
})
assertNoError(t, err)
firstValue, err := storeInstance.Get("alpha", "first")
assertNoError(t, err)
assertEqual(t, "1", firstValue)
secondValue, err := storeInstance.Get("beta", "second")
assertNoError(t, err)
assertEqual(t, "2", secondValue)
received := drainEvents(events, 2, time.Second)
assertLen(t, received, 2)
assertEqual(t, EventSet, received[0].Type)
assertEqual(t, "alpha", received[0].Group)
assertEqual(t, "first", received[0].Key)
assertEqual(t, EventSet, received[1].Type)
assertEqual(t, "beta", received[1].Group)
assertEqual(t, "second", received[1].Key)
}
func TestTransaction_Transaction_Good_RollbackOnError(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
err := storeInstance.Transaction(func(transaction *StoreTransaction) error {
if err := transaction.Set("alpha", "first", "1"); err != nil {
return err
}
return core.E("test", "force rollback", nil)
})
assertError(t, err)
_, err = storeInstance.Get("alpha", "first")
assertErrorIs(t, err, NotFoundError)
}
func TestTransaction_Transaction_Good_DeletesAtomically(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("alpha", "first", "1"))
assertNoError(t, storeInstance.Set("beta", "second", "2"))
err := storeInstance.Transaction(func(transaction *StoreTransaction) error {
if err := transaction.DeletePrefix(""); err != nil {
return err
}
return nil
})
assertNoError(t, err)
_, err = storeInstance.Get("alpha", "first")
assertErrorIs(t, err, NotFoundError)
_, err = storeInstance.Get("beta", "second")
assertErrorIs(t, err, NotFoundError)
}
func TestTransaction_Transaction_Good_ReadHelpersSeePendingWrites(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
err := storeInstance.Transaction(func(transaction *StoreTransaction) error {
if err := transaction.Set("config", "colour", "blue"); err != nil {
return err
}
if err := transaction.Set("config", "hosts", "alpha beta"); err != nil {
return err
}
if err := transaction.Set("audit", "enabled", "true"); err != nil {
return err
}
entriesByKey, err := transaction.GetAll("config")
assertNoError(t, err)
assertEqual(t, map[string]string{"colour": "blue", "hosts": "alpha beta"}, entriesByKey)
count, err := transaction.CountAll("")
assertNoError(t, err)
assertEqual(t, 3, count)
groupNames, err := transaction.Groups()
assertNoError(t, err)
assertEqual(t, []string{"audit", "config"}, groupNames)
renderedTemplate, err := transaction.Render("{{ .colour }} / {{ .hosts }}", "config")
assertNoError(t, err)
assertEqual(t, "blue / alpha beta", renderedTemplate)
splitParts, err := transaction.GetSplit("config", "hosts", " ")
assertNoError(t, err)
assertEqual(t, []string{"alpha", "beta"}, collectSeq(t, splitParts))
fieldParts, err := transaction.GetFields("config", "hosts")
assertNoError(t, err)
assertEqual(t, []string{"alpha", "beta"}, collectSeq(t, fieldParts))
return nil
})
assertNoError(t, err)
}
func TestTransaction_Transaction_Good_PurgeExpired(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.SetWithTTL("alpha", "ephemeral", "gone", 1*time.Millisecond))
time.Sleep(5 * time.Millisecond)
err := storeInstance.Transaction(func(transaction *StoreTransaction) error {
removedRows, err := transaction.PurgeExpired()
assertNoError(t, err)
assertEqual(t, int64(1), removedRows)
return nil
})
assertNoError(t, err)
_, err = storeInstance.Get("alpha", "ephemeral")
assertErrorIs(t, err, NotFoundError)
}
func TestTransaction_Transaction_Good_Exists(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("config", "colour", "blue"))
err := storeInstance.Transaction(func(transaction *StoreTransaction) error {
exists, err := transaction.Exists("config", "colour")
assertNoError(t, err)
assertTrue(t, exists)
exists, err = transaction.Exists("config", "missing")
assertNoError(t, err)
assertFalse(t, exists)
return nil
})
assertNoError(t, err)
}
func TestTransaction_Transaction_Good_ExistsSeesPendingWrites(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
err := storeInstance.Transaction(func(transaction *StoreTransaction) error {
exists, err := transaction.Exists("config", "colour")
assertNoError(t, err)
assertFalse(t, exists)
if err := transaction.Set("config", "colour", "blue"); err != nil {
return err
}
exists, err = transaction.Exists("config", "colour")
assertNoError(t, err)
assertTrue(t, exists)
return nil
})
assertNoError(t, err)
}
func TestTransaction_Transaction_Good_GroupExists(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
err := storeInstance.Transaction(func(transaction *StoreTransaction) error {
exists, err := transaction.GroupExists("config")
assertNoError(t, err)
assertFalse(t, exists)
if err := transaction.Set("config", "colour", "blue"); err != nil {
return err
}
exists, err = transaction.GroupExists("config")
assertNoError(t, err)
assertTrue(t, exists)
return nil
})
assertNoError(t, err)
}
func TestTransaction_ScopedStoreTransaction_Good_ExistsAndGroupExists(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
err := scopedStore.Transaction(func(transaction *ScopedStoreTransaction) error {
exists, err := transaction.Exists("colour")
assertNoError(t, err)
assertFalse(t, exists)
if err := transaction.Set("colour", "blue"); err != nil {
return err
}
exists, err = transaction.Exists("colour")
assertNoError(t, err)
assertTrue(t, exists)
exists, err = transaction.ExistsIn("other", "colour")
assertNoError(t, err)
assertFalse(t, exists)
if err := transaction.SetIn("config", "theme", "dark"); err != nil {
return err
}
groupExists, err := transaction.GroupExists("config")
assertNoError(t, err)
assertTrue(t, groupExists)
groupExists, err = transaction.GroupExists("missing-group")
assertNoError(t, err)
assertFalse(t, groupExists)
return nil
})
assertNoError(t, err)
}
func TestTransaction_ScopedStoreTransaction_Good_GetPage(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
err := scopedStore.Transaction(func(transaction *ScopedStoreTransaction) error {
if err := transaction.SetIn("items", "charlie", "3"); err != nil {
return err
}
if err := transaction.SetIn("items", "alpha", "1"); err != nil {
return err
}
if err := transaction.SetIn("items", "bravo", "2"); err != nil {
return err
}
page, err := transaction.GetPage("items", 1, 1)
assertNoError(t, err)
assertLen(t, page, 1)
assertEqual(t, KeyValue{Key: "bravo", Value: "2"}, page[0])
return nil
})
assertNoError(t, err)
}
func TestTransaction_ScopedStoreTransaction_Good_CommitsNamespacedWrites(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
scopedStore, err := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
Quota: QuotaConfig{MaxKeys: 4, MaxGroups: 2},
})
assertNoError(t, err)
err = scopedStore.Transaction(func(transaction *ScopedStoreTransaction) error {
if err := transaction.Set("theme", "dark"); err != nil {
return err
}
if err := transaction.SetIn("preferences", "locale", "en-GB"); err != nil {
return err
}
themeValue, err := transaction.Get("theme")
assertNoError(t, err)
assertEqual(t, "dark", themeValue)
localeValue, err := transaction.GetFrom("preferences", "locale")
assertNoError(t, err)
assertEqual(t, "en-GB", localeValue)
groupNames, err := transaction.Groups()
assertNoError(t, err)
assertEqual(t, []string{"default", "preferences"}, groupNames)
return nil
})
assertNoError(t, err)
themeValue, err := storeInstance.Get("tenant-a:default", "theme")
assertNoError(t, err)
assertEqual(t, "dark", themeValue)
localeValue, err := storeInstance.Get("tenant-a:preferences", "locale")
assertNoError(t, err)
assertEqual(t, "en-GB", localeValue)
}
func TestTransaction_ScopedStoreTransaction_Good_PurgeExpired(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetWithTTL("session", "token", "abc123", 1*time.Millisecond))
time.Sleep(5 * time.Millisecond)
err := scopedStore.Transaction(func(transaction *ScopedStoreTransaction) error {
removedRows, err := transaction.PurgeExpired()
assertNoError(t, err)
assertEqual(t, int64(1), removedRows)
return nil
})
assertNoError(t, err)
_, err = scopedStore.GetFrom("session", "token")
assertErrorIs(t, err, NotFoundError)
}
func TestTransaction_ScopedStoreTransaction_Good_QuotaUsesPendingWrites(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
scopedStore, err := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
Quota: QuotaConfig{MaxKeys: 2, MaxGroups: 2},
})
assertNoError(t, err)
err = scopedStore.Transaction(func(transaction *ScopedStoreTransaction) error {
assertNoError(t, transaction.SetIn("group-1", "first", "1"))
assertNoError(t, transaction.SetIn("group-2", "second", "2"))
err := transaction.SetIn("group-2", "third", "3")
assertError(t, err)
assertTrue(t, core.Is(err, QuotaExceededError))
return err
})
assertError(t, err)
assertTrue(t, core.Is(err, QuotaExceededError))
_, getErr := storeInstance.Get("tenant-a:group-1", "first")
assertTrue(t, core.Is(getErr, NotFoundError))
}
func TestTransaction_ScopedStoreTransaction_Good_DeletePrefix(t *testing.T) {
storeInstance, _ := New(":memory:")
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
otherScopedStore := NewScoped(storeInstance, "tenant-b")
assertNoError(t, scopedStore.SetIn("cache", "theme", "dark"))
assertNoError(t, scopedStore.SetIn("cache-warm", "status", "ready"))
assertNoError(t, scopedStore.SetIn("config", "colour", "blue"))
assertNoError(t, otherScopedStore.SetIn("cache", "theme", "keep"))
err := scopedStore.Transaction(func(transaction *ScopedStoreTransaction) error {
return transaction.DeletePrefix("cache")
})
assertNoError(t, err)
_, getErr := scopedStore.GetFrom("cache", "theme")
assertTrue(t, core.Is(getErr, NotFoundError))
_, getErr = scopedStore.GetFrom("cache-warm", "status")
assertTrue(t, core.Is(getErr, NotFoundError))
colourValue, getErr := scopedStore.GetFrom("config", "colour")
assertNoError(t, getErr)
assertEqual(t, "blue", colourValue)
otherValue, getErr := otherScopedStore.GetFrom("cache", "theme")
assertNoError(t, getErr)
assertEqual(t, "keep", otherValue)
}
func collectSeq[T any](t *testing.T, sequence iter.Seq[T]) []T {
t.Helper()
values := make([]T, 0)
for value := range sequence {
values = append(values, value)
}
return values
}

647
workspace.go Normal file

@@ -0,0 +1,647 @@
package store
import (
"database/sql"
"io/fs"
"maps"
"slices"
"sync" // Note: AX-6 — internal concurrency primitive; structural for store infrastructure (RFC §4 explicitly mandates).
"time"
core "dappco.re/go/core"
)
const (
workspaceEntriesTableName = "workspace_entries"
workspaceSummaryGroupPrefix = "workspace"
workspaceQuarantineDirName = "quarantine"
)
const createWorkspaceEntriesTableSQL = `CREATE TABLE IF NOT EXISTS workspace_entries (
entry_id BIGINT PRIMARY KEY DEFAULT nextval('workspace_entries_entry_id_seq'),
entry_kind TEXT NOT NULL,
entry_data TEXT NOT NULL,
created_at BIGINT NOT NULL
)`
const createWorkspaceEntriesViewSQL = `CREATE VIEW IF NOT EXISTS entries AS
SELECT
entry_id AS id,
entry_kind AS kind,
entry_data AS data,
created_at
FROM workspace_entries`
var defaultWorkspaceStateDirectory = ".core/state/"
// Usage example: `workspace, err := storeInstance.NewWorkspace("scroll-session"); if err != nil { return }; defer workspace.Discard()`
// Usage example: `workspace, err := storeInstance.NewWorkspace("scroll-session-2026-03-30"); if err != nil { return }; defer workspace.Discard(); _ = workspace.Put("like", map[string]any{"user": "@alice"})`
// Each workspace keeps mutable work-in-progress in a DuckDB file such as
// `.core/state/scroll-session.duckdb` until `Commit()` or `Discard()` removes
// it.
type Workspace struct {
name string
store *Store
db *sql.DB
databasePath string
filesystem *core.Fs
cachedOrphanAggregate map[string]any
lifecycleLock sync.Mutex
isClosed bool
}
// Usage example: `workspaceName := workspace.Name(); fmt.Println(workspaceName)`
func (workspace *Workspace) Name() string {
if workspace == nil {
return ""
}
return workspace.name
}
// Usage example: `workspacePath := workspace.DatabasePath(); fmt.Println(workspacePath)`
func (workspace *Workspace) DatabasePath() string {
if workspace == nil {
return ""
}
return workspace.databasePath
}
// Usage example: `if err := workspace.Close(); err != nil { return }`
// Usage example: `if err := workspace.Close(); err != nil { return }; orphans := storeInstance.RecoverOrphans(".core/state"); _ = orphans`
// `Close()` keeps the `.duckdb` file on disk so `RecoverOrphans(".core/state")`
// can reopen it after a crash or interrupted agent run.
func (workspace *Workspace) Close() error {
return workspace.closeWithoutRemovingFiles()
}
func (workspace *Workspace) ensureReady(operation string) error {
if workspace == nil {
return core.E(operation, "workspace is nil", nil)
}
if workspace.store == nil {
return core.E(operation, "workspace store is nil", nil)
}
if workspace.db == nil {
return core.E(operation, "workspace database is nil", nil)
}
if workspace.filesystem == nil {
return core.E(operation, "workspace filesystem is nil", nil)
}
if err := workspace.store.ensureReady(operation); err != nil {
return err
}
workspace.lifecycleLock.Lock()
closed := workspace.isClosed
workspace.lifecycleLock.Unlock()
if closed {
return core.E(operation, "workspace is closed", nil)
}
return nil
}
// Usage example: `workspace, err := storeInstance.NewWorkspace("scroll-session-2026-03-30"); if err != nil { return }; defer workspace.Discard()`
// This creates `.core/state/scroll-session-2026-03-30.duckdb` by default and
// removes it when the workspace is committed or discarded.
func (storeInstance *Store) NewWorkspace(name string) (*Workspace, error) {
if err := storeInstance.ensureReady("store.NewWorkspace"); err != nil {
return nil, err
}
workspaceNameValidation := core.ValidateName(name)
if !workspaceNameValidation.OK {
return nil, core.E("store.NewWorkspace", "validate workspace name", workspaceNameValidation.Value.(error))
}
filesystem := (&core.Fs{}).NewUnrestricted()
stateDirectory := storeInstance.workspaceStateDirectoryPath()
databasePath := workspaceFilePath(stateDirectory, name)
if filesystem.Exists(databasePath) {
return nil, core.E("store.NewWorkspace", core.Concat("workspace already exists: ", name), nil)
}
if result := filesystem.EnsureDir(stateDirectory); !result.OK {
return nil, core.E("store.NewWorkspace", "ensure state directory", result.Value.(error))
}
database, err := openWorkspaceDatabase(databasePath)
if err != nil {
return nil, core.E("store.NewWorkspace", "open workspace database", err)
}
return &Workspace{
name: name,
store: storeInstance,
db: database,
databasePath: databasePath,
filesystem: filesystem,
}, nil
}
// discoverOrphanWorkspacePaths(".core/state") returns leftover DuckDB workspace
// files such as `scroll-session.duckdb` without opening them.
func discoverOrphanWorkspacePaths(stateDirectory string) []string {
filesystem := (&core.Fs{}).NewUnrestricted()
if stateDirectory == "" {
stateDirectory = defaultWorkspaceStateDirectory
}
if !filesystem.Exists(stateDirectory) {
return nil
}
listResult := filesystem.List(stateDirectory)
if !listResult.OK {
return nil
}
directoryEntries, ok := listResult.Value.([]fs.DirEntry)
if !ok {
return nil
}
slices.SortFunc(directoryEntries, func(left, right fs.DirEntry) int {
switch {
case left.Name() < right.Name():
return -1
case left.Name() > right.Name():
return 1
default:
return 0
}
})
orphanPaths := make([]string, 0, len(directoryEntries))
for _, dirEntry := range directoryEntries {
if dirEntry.IsDir() || !core.HasSuffix(dirEntry.Name(), ".duckdb") {
continue
}
orphanPaths = append(orphanPaths, workspaceFilePath(stateDirectory, core.TrimSuffix(dirEntry.Name(), ".duckdb")))
}
return orphanPaths
}
func discoverOrphanWorkspaces(stateDirectory string, store *Store) []*Workspace {
return loadRecoveredWorkspaces(stateDirectory, store)
}
func loadRecoveredWorkspaces(stateDirectory string, store *Store) []*Workspace {
filesystem := (&core.Fs{}).NewUnrestricted()
orphanWorkspaces := make([]*Workspace, 0)
for _, databasePath := range discoverOrphanWorkspacePaths(stateDirectory) {
workspaceName := workspaceNameFromPath(stateDirectory, databasePath)
if workspaceCommitMarkerExists(store, workspaceName) {
removeWorkspaceDatabaseFiles(filesystem, databasePath)
continue
}
database, err := openWorkspaceDatabase(databasePath)
if err != nil {
quarantineOrphanWorkspaceFiles(filesystem, stateDirectory, databasePath)
continue
}
orphanWorkspace := &Workspace{
name: workspaceName,
store: store,
db: database,
databasePath: databasePath,
filesystem: filesystem,
}
aggregate, err := orphanWorkspace.aggregateFieldsWithoutReadiness()
if err != nil {
_ = orphanWorkspace.closeWithoutRemovingFiles()
quarantineOrphanWorkspaceFiles(filesystem, stateDirectory, databasePath)
continue
}
orphanWorkspace.cachedOrphanAggregate = aggregate
orphanWorkspaces = append(orphanWorkspaces, orphanWorkspace)
}
return orphanWorkspaces
}
func normaliseWorkspaceStateDirectory(stateDirectory string) string {
return normaliseDirectoryPath(stateDirectory)
}
func workspaceNameFromPath(stateDirectory, databasePath string) string {
relativePath := core.TrimPrefix(databasePath, joinPath(stateDirectory, ""))
return core.TrimSuffix(relativePath, ".duckdb")
}
// Usage example: `orphans := storeInstance.RecoverOrphans(".core/state"); for _, orphanWorkspace := range orphans { fmt.Println(orphanWorkspace.Name(), orphanWorkspace.Aggregate()) }`
// This reopens leftover `.duckdb` files such as `scroll-session-2026-03-30`
// so callers can inspect `Aggregate()` and choose `Commit()` or `Discard()`.
// Unreadable orphan files are moved under `.core/state/quarantine/`.
func (storeInstance *Store) RecoverOrphans(stateDirectory string) []*Workspace {
if storeInstance == nil {
return nil
}
if stateDirectory == "" {
stateDirectory = storeInstance.workspaceStateDirectoryPath()
}
stateDirectory = normaliseWorkspaceStateDirectory(stateDirectory)
if stateDirectory == storeInstance.workspaceStateDirectoryPath() {
storeInstance.orphanWorkspaceLock.Lock()
cachedWorkspaces := slices.Clone(storeInstance.cachedOrphanWorkspaces)
storeInstance.cachedOrphanWorkspaces = nil
storeInstance.orphanWorkspaceLock.Unlock()
if len(cachedWorkspaces) > 0 {
return cachedWorkspaces
}
}
return loadRecoveredWorkspaces(stateDirectory, storeInstance)
}
// Usage example: `err := workspace.Put("like", map[string]any{"user": "@alice", "post": "video_123"})`
func (workspace *Workspace) Put(kind string, data map[string]any) error {
if err := workspace.ensureReady("store.Workspace.Put"); err != nil {
return err
}
if kind == "" {
return core.E("store.Workspace.Put", "kind is empty", nil)
}
if data == nil {
data = map[string]any{}
}
dataJSON, err := marshalJSONText(data, "store.Workspace.Put", "marshal entry data")
if err != nil {
return err
}
_, err = workspace.db.Exec(
"INSERT INTO "+workspaceEntriesTableName+" (entry_kind, entry_data, created_at) VALUES (?, ?, ?)",
kind,
dataJSON,
time.Now().UnixMilli(),
)
if err != nil {
return core.E("store.Workspace.Put", "insert entry", err)
}
return nil
}
// Usage example: `entryCount, err := workspace.Count(); if err != nil { return }; fmt.Println(entryCount)`
func (workspace *Workspace) Count() (int, error) {
if err := workspace.ensureReady("store.Workspace.Count"); err != nil {
return 0, err
}
var count int
err := workspace.db.QueryRow(
"SELECT COUNT(*) FROM " + workspaceEntriesTableName,
).Scan(&count)
if err != nil {
return 0, core.E("store.Workspace.Count", "count entries", err)
}
return count, nil
}
// Usage example: `summary := workspace.Aggregate(); fmt.Println(summary["like"])`
func (workspace *Workspace) Aggregate() map[string]any {
if workspace.shouldUseOrphanAggregate() {
return workspace.aggregateFallback()
}
if err := workspace.ensureReady("store.Workspace.Aggregate"); err != nil {
return workspace.aggregateFallback()
}
fields, err := workspace.aggregateFields()
if err != nil {
return workspace.aggregateFallback()
}
return fields
}
// Usage example: `result := workspace.Commit(); if !result.OK { return }; fmt.Println(result.Value)`
// `Commit()` writes one completed workspace row to the journal, upserts the
// `summary` key in the `workspace:NAME` group, and removes the workspace file.
func (workspace *Workspace) Commit() core.Result {
if err := workspace.ensureReady("store.Workspace.Commit"); err != nil {
return core.Result{Value: err, OK: false}
}
fields, err := workspace.aggregateFields()
if err != nil {
return core.Result{Value: core.E("store.Workspace.Commit", "aggregate workspace", err), OK: false}
}
if err := workspace.store.commitWorkspaceAggregate(workspace.name, fields); err != nil {
return core.Result{Value: err, OK: false}
}
// Cleanup failure is non-fatal here: the aggregate is already committed.
_ = workspace.closeAndRemoveFiles()
return core.Result{Value: cloneAnyMap(fields), OK: true}
}
// Usage example: `workspace.Discard()`
func (workspace *Workspace) Discard() {
if workspace == nil {
return
}
_ = workspace.closeAndRemoveFiles()
}
// Usage example: `result := workspace.Query("SELECT entry_kind, COUNT(*) AS count FROM workspace_entries GROUP BY entry_kind")`
// `result.Value` contains `[]map[string]any`, which lets an agent inspect the
// current buffer state without defining extra result types.
func (workspace *Workspace) Query(query string) core.Result {
if err := workspace.ensureReady("store.Workspace.Query"); err != nil {
return core.Result{Value: err, OK: false}
}
rows, err := workspace.db.Query(query)
if err != nil {
return core.Result{Value: core.E("store.Workspace.Query", "query workspace", err), OK: false}
}
defer func() { _ = rows.Close() }()
rowMaps, err := queryRowsAsMaps(rows)
if err != nil {
return core.Result{Value: core.E("store.Workspace.Query", "scan rows", err), OK: false}
}
return core.Result{Value: rowMaps, OK: true}
}
func (workspace *Workspace) aggregateFields() (map[string]any, error) {
if err := workspace.ensureReady("store.Workspace.aggregateFields"); err != nil {
return nil, err
}
return workspace.aggregateFieldsWithoutReadiness()
}
func (workspace *Workspace) aggregateFallback() map[string]any {
if workspace == nil || workspace.cachedOrphanAggregate == nil {
return map[string]any{}
}
return maps.Clone(workspace.cachedOrphanAggregate)
}
func (workspace *Workspace) shouldUseOrphanAggregate() bool {
if workspace == nil || workspace.cachedOrphanAggregate == nil {
return false
}
if workspace.filesystem == nil || workspace.databasePath == "" {
return false
}
return !workspace.filesystem.Exists(workspace.databasePath)
}
func (workspace *Workspace) aggregateFieldsWithoutReadiness() (map[string]any, error) {
rows, err := workspace.db.Query(
"SELECT entry_kind, COUNT(*) FROM " + workspaceEntriesTableName + " GROUP BY entry_kind ORDER BY entry_kind",
)
if err != nil {
return nil, err
}
defer func() { _ = rows.Close() }()
fields := make(map[string]any)
for rows.Next() {
var (
kind string
count int
)
if err := rows.Scan(&kind, &count); err != nil {
return nil, err
}
fields[kind] = count
}
if err := rows.Err(); err != nil {
return nil, err
}
return fields, nil
}
func (workspace *Workspace) closeAndRemoveFiles() error {
return workspace.closeAndCleanup(true)
}
// closeWithoutRemovingFiles closes the database handle but leaves the orphan
// file on disk so a later store instance can recover it.
func (workspace *Workspace) closeWithoutRemovingFiles() error {
return workspace.closeAndCleanup(false)
}
func (workspace *Workspace) closeAndCleanup(removeFiles bool) error {
if workspace == nil {
return nil
}
if workspace.db == nil {
return nil
}
workspace.lifecycleLock.Lock()
alreadyClosed := workspace.isClosed
if !alreadyClosed {
workspace.isClosed = true
}
workspace.lifecycleLock.Unlock()
if !alreadyClosed {
if err := workspace.db.Close(); err != nil {
return core.E("store.Workspace.closeAndCleanup", "close workspace database", err)
}
}
if !removeFiles || workspace.filesystem == nil {
return nil
}
for _, path := range workspaceDatabaseFilePaths(workspace.databasePath) {
if result := workspace.filesystem.Delete(path); !result.OK && workspace.filesystem.Exists(path) {
return core.E("store.Workspace.closeAndCleanup", "delete workspace file", result.Value.(error))
}
}
return nil
}
func (storeInstance *Store) commitWorkspaceAggregate(workspaceName string, fields map[string]any) error {
if err := storeInstance.ensureReady("store.Workspace.Commit"); err != nil {
return err
}
if err := ensureJournalSchema(storeInstance.sqliteDatabase); err != nil {
return core.E("store.Workspace.Commit", "ensure journal schema", err)
}
transaction, err := storeInstance.sqliteDatabase.Begin()
if err != nil {
return core.E("store.Workspace.Commit", "begin transaction", err)
}
committed := false
defer func() {
if !committed {
_ = transaction.Rollback()
}
}()
fieldsJSON, err := marshalJSONText(fields, "store.Workspace.Commit", "marshal summary")
if err != nil {
return err
}
tagsJSON, err := marshalJSONText(map[string]string{"workspace": workspaceName}, "store.Workspace.Commit", "marshal tags")
if err != nil {
return err
}
if err := commitJournalEntry(
transaction,
storeInstance.journalBucket(),
workspaceName,
fieldsJSON,
tagsJSON,
time.Now().UnixMilli(),
); err != nil {
return core.E("store.Workspace.Commit", "insert journal entry", err)
}
if _, err := transaction.Exec(
"INSERT INTO "+entriesTableName+" ("+entryGroupColumn+", "+entryKeyColumn+", "+entryValueColumn+", expires_at) VALUES (?, ?, ?, NULL) "+
"ON CONFLICT("+entryGroupColumn+", "+entryKeyColumn+") DO UPDATE SET "+entryValueColumn+" = excluded."+entryValueColumn+", expires_at = NULL",
workspaceSummaryGroup(workspaceName),
"summary",
fieldsJSON,
); err != nil {
return core.E("store.Workspace.Commit", "upsert workspace summary", err)
}
if err := transaction.Commit(); err != nil {
return core.E("store.Workspace.Commit", "commit transaction", err)
}
committed = true
storeInstance.notify(Event{
Type: EventSet,
Group: workspaceSummaryGroup(workspaceName),
Key: "summary",
Value: fieldsJSON,
Timestamp: time.Now(),
})
return nil
}
func openWorkspaceDatabase(databasePath string) (*sql.DB, error) {
database, err := sql.Open("duckdb", databasePath)
if err != nil {
return nil, core.E("store.openWorkspaceDatabase", "open workspace database", err)
}
database.SetMaxOpenConns(1)
if err := database.Ping(); err != nil {
_ = database.Close()
return nil, core.E("store.openWorkspaceDatabase", "ping workspace database", err)
}
if _, err := database.Exec("CREATE SEQUENCE IF NOT EXISTS workspace_entries_entry_id_seq START 1"); err != nil {
_ = database.Close()
return nil, core.E("store.openWorkspaceDatabase", "create workspace entry sequence", err)
}
if _, err := database.Exec(createWorkspaceEntriesTableSQL); err != nil {
_ = database.Close()
return nil, core.E("store.openWorkspaceDatabase", "create workspace entries table", err)
}
if _, err := database.Exec(createWorkspaceEntriesViewSQL); err != nil {
_ = database.Close()
return nil, core.E("store.openWorkspaceDatabase", "create workspace entries view", err)
}
return database, nil
}
func workspaceSummaryGroup(workspaceName string) string {
return core.Concat(workspaceSummaryGroupPrefix, ":", workspaceName)
}
func workspaceFilePath(stateDirectory, name string) string {
return joinPath(stateDirectory, core.Concat(name, ".duckdb"))
}
func workspaceQuarantineFilePath(stateDirectory, databasePath string) string {
return joinPath(
joinPath(stateDirectory, workspaceQuarantineDirName),
core.Concat(workspaceNameFromPath(stateDirectory, databasePath), ".duckdb"),
)
}
func quarantineOrphanWorkspaceFiles(filesystem *core.Fs, stateDirectory, databasePath string) {
if filesystem == nil || databasePath == "" {
return
}
quarantineDirectory := joinPath(stateDirectory, workspaceQuarantineDirName)
if result := filesystem.EnsureDir(quarantineDirectory); !result.OK {
return
}
quarantinePath := availableQuarantineWorkspacePath(
filesystem,
workspaceQuarantineFilePath(stateDirectory, databasePath),
)
sourcePaths := workspaceDatabaseFilePaths(databasePath)
quarantinePaths := workspaceDatabaseFilePaths(quarantinePath)
for index, sourcePath := range sourcePaths {
quarantineWorkspaceFile(filesystem, sourcePath, quarantinePaths[index])
}
}
func availableQuarantineWorkspacePath(filesystem *core.Fs, preferredPath string) string {
if !workspaceQuarantinePathExists(filesystem, preferredPath) {
return preferredPath
}
stem := core.TrimSuffix(preferredPath, ".duckdb")
for index := 1; ; index++ {
candidatePath := core.Concat(stem, ".", core.Sprint(index), ".duckdb")
if !workspaceQuarantinePathExists(filesystem, candidatePath) {
return candidatePath
}
}
}
func workspaceQuarantinePathExists(filesystem *core.Fs, databasePath string) bool {
for _, path := range workspaceDatabaseFilePaths(databasePath) {
if filesystem.Exists(path) {
return true
}
}
return false
}
func workspaceCommitMarkerExists(storeInstance *Store, workspaceName string) bool {
if storeInstance == nil || workspaceName == "" {
return false
}
exists, err := storeInstance.Exists(workspaceSummaryGroup(workspaceName), "summary")
return err == nil && exists
}
func removeWorkspaceDatabaseFiles(filesystem *core.Fs, databasePath string) {
if filesystem == nil || databasePath == "" {
return
}
for _, path := range workspaceDatabaseFilePaths(databasePath) {
_ = filesystem.Delete(path)
}
}
func workspaceDatabaseFilePaths(databasePath string) []string {
if core.HasSuffix(databasePath, ".duckdb") {
return []string{databasePath, databasePath + ".wal"}
}
return []string{databasePath, databasePath + "-wal", databasePath + "-shm"}
}
func quarantineWorkspaceFile(filesystem *core.Fs, sourcePath, quarantinePath string) {
if filesystem == nil || !filesystem.Exists(sourcePath) {
return
}
_ = filesystem.Rename(sourcePath, quarantinePath)
}
func joinPath(base, name string) string {
if base == "" {
return name
}
return core.Concat(normaliseDirectoryPath(base), "/", name)
}
func normaliseDirectoryPath(directory string) string {
for directory != "" && core.HasSuffix(directory, "/") {
directory = core.TrimSuffix(directory, "/")
}
return directory
}

490
workspace_test.go Normal file

@@ -0,0 +1,490 @@
package store
import (
"testing"
"time"
core "dappco.re/go/core"
)
func TestWorkspace_NewWorkspace_Good_CreatePutAggregateQuery(t *testing.T) {
stateDirectory := useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("scroll-session")
assertNoError(t, err)
defer workspace.Discard()
assertEqual(t, workspaceFilePath(stateDirectory, "scroll-session"), workspace.databasePath)
assertTrue(t, testFilesystem().Exists(workspace.databasePath))
assertNoError(t, workspace.Put("like", map[string]any{"user": "@alice"}))
assertNoError(t, workspace.Put("like", map[string]any{"user": "@bob"}))
assertNoError(t, workspace.Put("profile_match", map[string]any{"user": "@charlie"}))
assertEqual(t, map[string]any{"like": 2, "profile_match": 1}, workspace.Aggregate())
rows := requireResultRows(
t,
workspace.Query("SELECT entry_kind, COUNT(*) AS entry_count FROM workspace_entries GROUP BY entry_kind ORDER BY entry_kind"),
)
assertLen(t, rows, 2)
assertEqual(t, "like", rows[0]["entry_kind"])
assertEqual(t, int64(2), rows[0]["entry_count"])
assertEqual(t, "profile_match", rows[1]["entry_kind"])
assertEqual(t, int64(1), rows[1]["entry_count"])
}
func TestWorkspace_DatabasePath_Good(t *testing.T) {
stateDirectory := useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("scroll-session")
assertNoError(t, err)
defer workspace.Discard()
assertEqual(t, workspaceFilePath(stateDirectory, "scroll-session"), workspace.DatabasePath())
}
func TestWorkspace_Count_Good_Empty(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("count-empty")
assertNoError(t, err)
defer workspace.Discard()
count, err := workspace.Count()
assertNoError(t, err)
assertEqual(t, 0, count)
}
func TestWorkspace_Count_Good_AfterPuts(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("count-puts")
assertNoError(t, err)
defer workspace.Discard()
assertNoError(t, workspace.Put("like", map[string]any{"user": "@alice"}))
assertNoError(t, workspace.Put("like", map[string]any{"user": "@bob"}))
assertNoError(t, workspace.Put("profile_match", map[string]any{"user": "@charlie"}))
count, err := workspace.Count()
assertNoError(t, err)
assertEqual(t, 3, count)
}
func TestWorkspace_Count_Bad_ClosedWorkspace(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("count-closed")
assertNoError(t, err)
workspace.Discard()
_, err = workspace.Count()
assertError(t, err)
}
func TestWorkspace_Query_Good_RFCEntriesView(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("scroll-session")
assertNoError(t, err)
defer workspace.Discard()
assertNoError(t, workspace.Put("like", map[string]any{"user": "@alice"}))
assertNoError(t, workspace.Put("like", map[string]any{"user": "@bob"}))
assertNoError(t, workspace.Put("profile_match", map[string]any{"user": "@charlie"}))
rows := requireResultRows(
t,
workspace.Query("SELECT kind, COUNT(*) AS entry_count FROM entries GROUP BY kind ORDER BY kind"),
)
assertLen(t, rows, 2)
assertEqual(t, "like", rows[0]["kind"])
assertEqual(t, int64(2), rows[0]["entry_count"])
assertEqual(t, "profile_match", rows[1]["kind"])
assertEqual(t, int64(1), rows[1]["entry_count"])
}
func TestWorkspace_Commit_Good_JournalAndSummary(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("scroll-session")
assertNoError(t, err)
assertNoError(t, workspace.Put("like", map[string]any{"user": "@alice"}))
assertNoError(t, workspace.Put("like", map[string]any{"user": "@bob"}))
assertNoError(t, workspace.Put("profile_match", map[string]any{"user": "@charlie"}))
result := workspace.Commit()
assertTruef(t, result.OK, "workspace commit failed: %v", result.Value)
assertEqual(t, map[string]any{"like": 2, "profile_match": 1}, result.Value)
assertFalse(t, testFilesystem().Exists(workspace.databasePath))
summaryJSON, err := storeInstance.Get(workspaceSummaryGroup("scroll-session"), "summary")
assertNoError(t, err)
summary := make(map[string]any)
summaryResult := core.JSONUnmarshalString(summaryJSON, &summary)
assertTruef(t, summaryResult.OK, "summary unmarshal failed: %v", summaryResult.Value)
assertEqual(t, float64(2), summary["like"])
assertEqual(t, float64(1), summary["profile_match"])
rows := requireResultRows(
t,
storeInstance.QueryJournal(`from(bucket: "events") |> range(start: -24h) |> filter(fn: (r) => r._measurement == "scroll-session")`),
)
assertLen(t, rows, 1)
assertEqual(t, "scroll-session", rows[0]["measurement"])
fields, ok := rows[0]["fields"].(map[string]any)
assertTruef(t, ok, "unexpected fields type: %T", rows[0]["fields"])
assertEqual(t, float64(2), fields["like"])
assertEqual(t, float64(1), fields["profile_match"])
tags, ok := rows[0]["tags"].(map[string]string)
assertTruef(t, ok, "unexpected tags type: %T", rows[0]["tags"])
assertEqual(t, "scroll-session", tags["workspace"])
}
func TestWorkspace_Commit_Good_ResultCopiesAggregatedMap(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("scroll-session")
assertNoError(t, err)
aggregateSource := map[string]any{"like": 1}
assertNoError(t, workspace.Put("like", aggregateSource))
result := workspace.Commit()
assertTruef(t, result.OK, "workspace commit failed: %v", result.Value)
aggregateSource["like"] = 99
value, ok := result.Value.(map[string]any)
assertTruef(t, ok, "unexpected result type: %T", result.Value)
assertEqual(t, 1, value["like"])
}
func TestWorkspace_Commit_Good_EmitsSummaryEvent(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch(workspaceSummaryGroup("scroll-session"))
defer storeInstance.Unwatch(workspaceSummaryGroup("scroll-session"), events)
workspace, err := storeInstance.NewWorkspace("scroll-session")
assertNoError(t, err)
assertNoError(t, workspace.Put("like", map[string]any{"user": "@alice"}))
assertNoError(t, workspace.Put("profile_match", map[string]any{"user": "@charlie"}))
result := workspace.Commit()
assertTruef(t, result.OK, "workspace commit failed: %v", result.Value)
select {
case event := <-events:
assertEqual(t, EventSet, event.Type)
assertEqual(t, workspaceSummaryGroup("scroll-session"), event.Group)
assertEqual(t, "summary", event.Key)
assertFalse(t, event.Timestamp.IsZero())
summary := make(map[string]any)
summaryResult := core.JSONUnmarshalString(event.Value, &summary)
assertTruef(t, summaryResult.OK, "summary event unmarshal failed: %v", summaryResult.Value)
assertEqual(t, float64(1), summary["like"])
assertEqual(t, float64(1), summary["profile_match"])
case <-time.After(time.Second):
t.Fatal("timed out waiting for workspace summary event")
}
}
func TestWorkspace_RecoverOrphans_Good_SkipsAlreadyCommittedWorkspaceFile(t *testing.T) {
stateDirectory := useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("committed-leftover")
assertNoError(t, err)
assertNoError(t, workspace.Put("like", map[string]any{"user": "@alice"}))
fields, err := workspace.aggregateFields()
assertNoError(t, err)
assertNoError(t, storeInstance.commitWorkspaceAggregate(workspace.Name(), fields))
assertNoError(t, workspace.closeWithoutRemovingFiles())
assertTrue(t, testFilesystem().Exists(workspace.databasePath))
orphans := storeInstance.RecoverOrphans(stateDirectory)
assertLen(t, orphans, 0)
assertFalse(t, testFilesystem().Exists(workspace.databasePath))
}
func TestWorkspace_Discard_Good_Idempotent(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("discard-session")
assertNoError(t, err)
workspace.Discard()
workspace.Discard()
assertFalse(t, testFilesystem().Exists(workspace.databasePath))
}
func TestWorkspace_Close_Good_PreservesFileForRecovery(t *testing.T) {
stateDirectory := useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("close-session")
assertNoError(t, err)
assertNoError(t, workspace.Put("like", map[string]any{"user": "@alice"}))
assertNoError(t, workspace.Close())
assertTrue(t, testFilesystem().Exists(workspace.databasePath))
err = workspace.Put("like", map[string]any{"user": "@bob"})
assertError(t, err)
orphans := storeInstance.RecoverOrphans(stateDirectory)
assertLen(t, orphans, 1)
assertEqual(t, "close-session", orphans[0].Name())
assertEqual(t, map[string]any{"like": 1}, orphans[0].Aggregate())
orphans[0].Discard()
assertFalse(t, testFilesystem().Exists(workspace.databasePath))
}
func TestWorkspace_Close_Good_ClosesDatabaseWithoutFilesystem(t *testing.T) {
databasePath := testPath(t, "workspace-no-filesystem.duckdb")
database, err := openWorkspaceDatabase(databasePath)
assertNoError(t, err)
workspace := &Workspace{
name: "partial-workspace",
db: database,
databasePath: databasePath,
}
assertNoError(t, workspace.Close())
_, execErr := database.Exec("SELECT 1")
assertError(t, execErr)
assertContainsString(t, execErr.Error(), "closed")
assertTrue(t, testFilesystem().Exists(databasePath))
for _, path := range workspaceDatabaseFilePaths(databasePath) {
_ = testFilesystem().Delete(path)
}
}
func TestWorkspace_RecoverOrphans_Good(t *testing.T) {
stateDirectory := useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("orphan-session")
assertNoError(t, err)
assertNoError(t, workspace.Put("like", map[string]any{"user": "@alice"}))
assertNoError(t, workspace.db.Close())
orphans := storeInstance.RecoverOrphans(stateDirectory)
assertLen(t, orphans, 1)
assertEqual(t, "orphan-session", orphans[0].Name())
assertEqual(t, map[string]any{"like": 1}, orphans[0].Aggregate())
orphans[0].Discard()
assertFalse(t, testFilesystem().Exists(workspaceFilePath(stateDirectory, "orphan-session")))
}
func TestWorkspace_New_Good_LeavesOrphanedWorkspacesForRecovery(t *testing.T) {
stateDirectory := useWorkspaceStateDirectory(t)
requireCoreOK(t, testFilesystem().EnsureDir(stateDirectory))
orphanDatabasePath := workspaceFilePath(stateDirectory, "orphan-session")
orphanDatabase, err := openWorkspaceDatabase(orphanDatabasePath)
assertNoError(t, err)
_, err = orphanDatabase.Exec(
"INSERT INTO "+workspaceEntriesTableName+" (entry_kind, entry_data, created_at) VALUES (?, ?, ?)",
"like",
`{"user":"@alice"}`,
time.Now().UnixMilli(),
)
assertNoError(t, err)
assertNoError(t, orphanDatabase.Close())
assertTrue(t, testFilesystem().Exists(orphanDatabasePath))
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
assertTrue(t, testFilesystem().Exists(orphanDatabasePath))
orphans := storeInstance.RecoverOrphans(stateDirectory)
assertLen(t, orphans, 1)
assertEqual(t, "orphan-session", orphans[0].Name())
orphans[0].Discard()
assertFalse(t, testFilesystem().Exists(orphanDatabasePath))
assertFalse(t, testFilesystem().Exists(orphanDatabasePath+"-wal"))
assertFalse(t, testFilesystem().Exists(orphanDatabasePath+"-shm"))
}
func TestWorkspace_New_Good_CachesOrphansDuringConstruction(t *testing.T) {
stateDirectory := useWorkspaceStateDirectory(t)
requireCoreOK(t, testFilesystem().EnsureDir(stateDirectory))
orphanDatabasePath := workspaceFilePath(stateDirectory, "orphan-session")
orphanDatabase, err := openWorkspaceDatabase(orphanDatabasePath)
assertNoError(t, err)
_, err = orphanDatabase.Exec(
"INSERT INTO "+workspaceEntriesTableName+" (entry_kind, entry_data, created_at) VALUES (?, ?, ?)",
"like",
`{"user":"@alice"}`,
time.Now().UnixMilli(),
)
assertNoError(t, err)
assertNoError(t, orphanDatabase.Close())
assertTrue(t, testFilesystem().Exists(orphanDatabasePath))
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
requireCoreOK(t, testFilesystem().DeleteAll(stateDirectory))
assertFalse(t, testFilesystem().Exists(orphanDatabasePath))
orphans := storeInstance.RecoverOrphans(stateDirectory)
assertLen(t, orphans, 1)
assertEqual(t, "orphan-session", orphans[0].Name())
assertEqual(t, map[string]any{"like": 1}, orphans[0].Aggregate())
orphans[0].Discard()
}
func TestWorkspace_NewConfigured_Good_CachesOrphansFromConfiguredStateDirectory(t *testing.T) {
stateDirectory := testPath(t, "configured-state")
requireCoreOK(t, testFilesystem().EnsureDir(stateDirectory))
orphanDatabasePath := workspaceFilePath(stateDirectory, "orphan-session")
orphanDatabase, err := openWorkspaceDatabase(orphanDatabasePath)
assertNoError(t, err)
_, err = orphanDatabase.Exec(
"INSERT INTO "+workspaceEntriesTableName+" (entry_kind, entry_data, created_at) VALUES (?, ?, ?)",
"like",
`{"user":"@alice"}`,
time.Now().UnixMilli(),
)
assertNoError(t, err)
assertNoError(t, orphanDatabase.Close())
storeInstance, err := NewConfigured(StoreConfig{
DatabasePath: ":memory:",
WorkspaceStateDirectory: stateDirectory,
})
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
requireCoreOK(t, testFilesystem().DeleteAll(stateDirectory))
assertFalse(t, testFilesystem().Exists(orphanDatabasePath))
orphans := storeInstance.RecoverOrphans("")
assertLen(t, orphans, 1)
assertEqual(t, "orphan-session", orphans[0].Name())
assertEqual(t, map[string]any{"like": 1}, orphans[0].Aggregate())
orphans[0].Discard()
}
func TestWorkspace_RecoverOrphans_Good_TrailingSlashUsesCache(t *testing.T) {
stateDirectory := useWorkspaceStateDirectory(t)
requireCoreOK(t, testFilesystem().EnsureDir(stateDirectory))
orphanDatabasePath := workspaceFilePath(stateDirectory, "orphan-session")
orphanDatabase, err := openWorkspaceDatabase(orphanDatabasePath)
assertNoError(t, err)
assertNoError(t, orphanDatabase.Close())
assertTrue(t, testFilesystem().Exists(orphanDatabasePath))
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
requireCoreOK(t, testFilesystem().DeleteAll(stateDirectory))
assertFalse(t, testFilesystem().Exists(orphanDatabasePath))
orphans := storeInstance.RecoverOrphans(stateDirectory + "/")
assertLen(t, orphans, 1)
assertEqual(t, "orphan-session", orphans[0].Name())
orphans[0].Discard()
}
func TestWorkspace_Close_Good_PreservesOrphansForRecovery(t *testing.T) {
stateDirectory := useWorkspaceStateDirectory(t)
requireCoreOK(t, testFilesystem().EnsureDir(stateDirectory))
orphanDatabasePath := workspaceFilePath(stateDirectory, "orphan-session")
orphanDatabase, err := openWorkspaceDatabase(orphanDatabasePath)
assertNoError(t, err)
assertNoError(t, orphanDatabase.Close())
assertTrue(t, testFilesystem().Exists(orphanDatabasePath))
storeInstance, err := New(":memory:")
assertNoError(t, err)
assertNoError(t, storeInstance.Close())
assertTrue(t, testFilesystem().Exists(orphanDatabasePath))
recoveryStore, err := New(":memory:")
assertNoError(t, err)
defer func() { _ = recoveryStore.Close() }()
orphans := recoveryStore.RecoverOrphans(stateDirectory)
assertLen(t, orphans, 1)
assertEqual(t, "orphan-session", orphans[0].Name())
orphans[0].Discard()
assertFalse(t, testFilesystem().Exists(orphanDatabasePath))
}