fix(store): address all CodeRabbit findings on PR #4

30+ findings dispositioned across compact / publish / events / import
/ workspace / config / docs / test surfaces.

Major code fixes:
- compact.go: stage archive, DB commit first, publish after commit
  (durability ordering bug)
- compact_test.go: replaced O(n²) medium reread with bytes.Buffer
- coverage_test.go: EnsureScoringTables now wraps + returns DDL errors
- events.go: lifecycle lock ordering fixed (deadlock risk)
- import.go: paths join against cfg.DataDir; SQL write errors propagate;
  counts increment only on success
- bench_test.go: ScpDir + /benchmarks/ paths corrected
- json.go: formatter write errors propagate (no silent drops)
- parquet.go: removed runtime parquet-go dep and in-core writer (was
  buffering whole rows in memory, an OOM risk on large datasets)
- publish.go: PublishConfig.Public uses private: !public
- HuggingFace upload streams file with content length (was buffering)
- store.go: ScopedStore.Transaction nil-store panic guard
- recover_test.go: handle .duckdb.wal sidecars; skip SQLite -shm for DuckDB
- coverage_test.go: WriteScoringResult wraps insert failures
- events_test.go: restored EventDeleteGroup wire value 'delete_group'
- transaction.go: NewScopedConfigured docs include parent store arg
- store_test.go: scope_test keyName uses non-wrapping integer names
- import_export_test.go: testify → stdlib helpers (AX-6 conformance)
- store.go: collapsed duplicate workspace DB fields to canonical 'db'

Doc / config:
- README.md: 'Licence' UK English on badge
- docs/architecture.md: clearer event ordering + lifecycle docs
- .golangci.yml: migrated to golangci-lint v2 schema
- Taskfile: default includes vet task
- JSON: terse 'value' param + concrete examples

Disposition replies (RESOLVED-COMMENT, no code change):
- conventions_test.go testify suggestion: AX-6 bans testify; stdlib helpers are the convention
- DuckDB CGO/MIT critical: retained as documented exception in
  DEPENDENCIES.md (load-bearing existing dependency for the workspace
  store; replacement is its own engineering ticket)

Verification: GOWORK=off go vet + go test -count=1 ./... pass.
golangci-lint run ./... reports 0 issues. gofmt -l clean. git diff
--check clean.

Closes findings on https://github.com/dAppCore/go-store/pull/4

Co-authored-by: Codex <noreply@openai.com>
Snider 2026-04-27 14:51:06 +01:00
parent 85ab185b90
commit 6c90af807d
36 changed files with 862 additions and 838 deletions


@ -1,34 +1,37 @@
version: "2"
run:
timeout: 5m
go: "1.26"
linters:
enable:
- depguard
- govet
- errcheck
- staticcheck
- unused
- gosimple
- ineffassign
- typecheck
- gocritic
- gofmt
disable:
- exhaustive
- wrapcheck
linters-settings:
depguard:
rules:
legacy-module-paths:
list-mode: lax
files:
- $all
deny:
- pkg: forge.lthn.ai/
desc: use dappco.re/ module paths instead
settings:
depguard:
rules:
legacy-module-paths:
list-mode: lax
files:
- $all
deny:
- pkg: forge.lthn.ai/
desc: use dappco.re/ module paths instead
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$
issues:
exclude-use-default: false
max-same-issues: 0
formatters:
enable:
- gofmt
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$

DEPENDENCIES.md Normal file

@ -0,0 +1,19 @@
# Dependency Exceptions
This repository is pure Go by default and permits `modernc.org/sqlite` as the
normal runtime database dependency. The following exception is documented
because the current PR contains load-bearing analytical workspace code that
cannot be replaced by a pure-Go DuckDB-compatible driver.
## `github.com/marcboeker/go-duckdb`
`github.com/marcboeker/go-duckdb` is retained only for DuckDB-backed workspace
buffers and LEM analytical import helpers. DuckDB files are produced and
consumed by existing data pipelines, and no pure-Go DuckDB implementation with
compatible SQL semantics is currently available. Replacing it with
`modernc.org/sqlite` would remove DuckDB JSON import, analytical table, and
workspace recovery behaviour rather than preserving the feature.
This is a CGO and MIT-licensed dependency exception. It must not be used for the
primary SQLite store path, and new runtime storage features should continue to
use pure-Go dependencies compatible with EUPL-1.2.


@ -1,5 +1,5 @@
[![Go Reference](https://pkg.go.dev/badge/dappco.re/go/store.svg)](https://pkg.go.dev/dappco.re/go/store)
[![License: EUPL-1.2](https://img.shields.io/badge/License-EUPL--1.2-blue.svg)](LICENSE.md)
[![Licence: EUPL-1.2](https://img.shields.io/badge/License-EUPL--1.2-blue.svg)](LICENSE.md)
[![Go Version](https://img.shields.io/badge/Go-1.26-00ADD8?style=flat&logo=go)](go.mod)
# go-store
@ -81,6 +81,7 @@ func main() {
- [Architecture](docs/architecture.md) — storage layer, group/key model, TTL expiry, event system, namespace isolation
- [Development Guide](docs/development.md) — prerequisites, test patterns, benchmarks, adding methods
- [Project History](docs/history.md) — completed phases, known limitations, future considerations
- [Dependency Exceptions](DEPENDENCIES.md) — documented runtime dependency exceptions
## Build & Test


@ -20,7 +20,7 @@ func BenchmarkGetAll_VaryingSize(b *testing.B) {
if err != nil {
b.Fatal(err)
}
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
for i := range size {
_ = storeInstance.Set("bench", core.Sprintf("key-%d", i), "value")
@ -41,7 +41,7 @@ func BenchmarkSetGet_Parallel(b *testing.B) {
if err != nil {
b.Fatal(err)
}
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
b.ReportAllocs()
b.ResetTimer()
@ -62,7 +62,7 @@ func BenchmarkCount_10K(b *testing.B) {
if err != nil {
b.Fatal(err)
}
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
for i := range 10_000 {
_ = storeInstance.Set("bench", core.Sprintf("key-%d", i), "value")
@ -81,7 +81,7 @@ func BenchmarkDelete(b *testing.B) {
if err != nil {
b.Fatal(err)
}
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
// Pre-populate keys that will be deleted.
for i := range b.N {
@ -101,7 +101,7 @@ func BenchmarkSetWithTTL(b *testing.B) {
if err != nil {
b.Fatal(err)
}
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
b.ReportAllocs()
b.ResetTimer()
@ -116,7 +116,7 @@ func BenchmarkRender(b *testing.B) {
if err != nil {
b.Fatal(err)
}
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
for i := range 50 {
_ = storeInstance.Set("bench", core.Sprintf("key%d", i), core.Sprintf("val%d", i))


@ -1,6 +1,7 @@
package store
import (
"bytes"
"compress/gzip"
"time"
"unicode"
@ -116,7 +117,9 @@ func (storeInstance *Store) Compact(options CompactOptions) core.Result {
if queryErr != nil {
return core.Result{Value: core.E("store.Compact", "query journal rows", queryErr), OK: false}
}
defer rows.Close()
defer func() {
_ = rows.Close()
}()
var archiveEntries []compactArchiveEntry
for rows.Next() {
@ -177,9 +180,16 @@ func (storeInstance *Store) Compact(options CompactOptions) core.Result {
if err != nil {
return core.Result{Value: core.E("store.Compact", "read archive buffer", err), OK: false}
}
if err := medium.Write(outputPath, compressedArchive); err != nil {
return core.Result{Value: core.E("store.Compact", "write archive via medium", err), OK: false}
stagedOutputPath := core.Concat(outputPath, ".tmp")
stagedOutputPublished := false
if err := medium.Write(stagedOutputPath, compressedArchive); err != nil {
return core.Result{Value: core.E("store.Compact", "write staged archive via medium", err), OK: false}
}
defer func() {
if !stagedOutputPublished && medium.Exists(stagedOutputPath) {
_ = medium.Delete(stagedOutputPath)
}
}()
transaction, err := storeInstance.sqliteDatabase.Begin()
if err != nil {
@ -208,6 +218,11 @@ func (storeInstance *Store) Compact(options CompactOptions) core.Result {
}
committed = true
if err := medium.Rename(stagedOutputPath, outputPath); err != nil {
return core.Result{Value: core.E("store.Compact", "publish staged archive", err), OK: false}
}
stagedOutputPublished = true
return core.Result{Value: outputPath, OK: true}
}
@ -243,35 +258,20 @@ type compactArchiveWriteTarget interface {
}
type compactArchiveBuffer struct {
medium coreio.Medium
path string
buffer bytes.Buffer
}
func newCompactArchiveBuffer() (*compactArchiveBuffer, error) {
buffer := &compactArchiveBuffer{
medium: coreio.NewMemoryMedium(),
path: "archive-buffer",
}
if err := buffer.medium.Write(buffer.path, ""); err != nil {
return nil, err
}
return buffer, nil
return &compactArchiveBuffer{}, nil
}
// Usage example: `buffer, _ := newCompactArchiveBuffer(); _, _ = buffer.Write([]byte("archive"))`
func (buffer *compactArchiveBuffer) Write(data []byte) (int, error) {
content, err := buffer.medium.Read(buffer.path)
if err != nil {
return 0, core.E("store.compactArchiveBuffer.Write", "read buffer", err)
}
if err := buffer.medium.Write(buffer.path, content+string(data)); err != nil {
return 0, core.E("store.compactArchiveBuffer.Write", "write buffer", err)
}
return len(data), nil
return buffer.buffer.Write(data)
}
func (buffer *compactArchiveBuffer) content() (string, error) {
return buffer.medium.Read(buffer.path)
return buffer.buffer.String(), nil
}
func archiveWriter(writer compactArchiveWriteTarget, format string) (compactArchiveWriter, error) {


@ -16,7 +16,7 @@ func TestCompact_Compact_Good_GzipArchive(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 1}, map[string]string{"workspace": "session-a"}).OK)
assertTrue(t, storeInstance.CommitToJournal("session-b", map[string]any{"like": 2}, map[string]string{"workspace": "session-b"}).OK)
@ -42,7 +42,9 @@ func TestCompact_Compact_Good_GzipArchive(t *testing.T) {
archiveData := requireCoreReadBytes(t, archivePath)
reader, err := gzip.NewReader(bytes.NewReader(archiveData))
assertNoError(t, err)
defer reader.Close()
defer func() {
_ = reader.Close()
}()
decompressedData, err := io.ReadAll(reader)
assertNoError(t, err)
@ -64,7 +66,7 @@ func TestCompact_Compact_Good_ZstdArchive(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 1}, map[string]string{"workspace": "session-a"}).OK)
@ -108,7 +110,7 @@ func TestCompact_Compact_Good_NoRows(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
result := storeInstance.Compact(CompactOptions{
Before: time.Now(),
@ -124,12 +126,12 @@ func TestCompact_Compact_Good_DeterministicOrderingForSameTimestamp(t *testing.T
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, ensureJournalSchema(storeInstance.sqliteDatabase))
committedAt := time.Now().Add(-48 * time.Hour).UnixMilli()
assertNoError(t, commitJournalEntry( storeInstance.sqliteDatabase, "events", "session-b", `{"like":2}`, `{"workspace":"session-b"}`, committedAt, ))
assertNoError(t, commitJournalEntry( storeInstance.sqliteDatabase, "events", "session-a", `{"like":1}`, `{"workspace":"session-a"}`, committedAt, ))
assertNoError(t, commitJournalEntry(storeInstance.sqliteDatabase, "events", "session-b", `{"like":2}`, `{"workspace":"session-b"}`, committedAt))
assertNoError(t, commitJournalEntry(storeInstance.sqliteDatabase, "events", "session-a", `{"like":1}`, `{"workspace":"session-a"}`, committedAt))
result := storeInstance.Compact(CompactOptions{
Before: time.Now().Add(-24 * time.Hour),
@ -144,7 +146,9 @@ func TestCompact_Compact_Good_DeterministicOrderingForSameTimestamp(t *testing.T
archiveData := requireCoreReadBytes(t, archivePath)
reader, err := gzip.NewReader(bytes.NewReader(archiveData))
assertNoError(t, err)
defer reader.Close()
defer func() {
_ = reader.Close()
}()
decompressedData, err := io.ReadAll(reader)
assertNoError(t, err)


@ -171,33 +171,34 @@ func TestConventions_Exports_Good_NoCompatibilityAliases(t *testing.T) {
for _, path := range files {
file := parseGoFile(t, path)
for _, decl := range file.Decls {
switch node := decl.(type) {
case *ast.GenDecl:
for _, spec := range node.Specs {
switch item := spec.(type) {
case *ast.TypeSpec:
if item.Name.Name == "KV" {
invalid = append(invalid, core.Concat(path, ": ", item.Name.Name))
}
if item.Name.Name != "Watcher" {
continue
}
structType, ok := item.Type.(*ast.StructType)
if !ok {
continue
}
for _, field := range structType.Fields.List {
for _, name := range field.Names {
if name.Name == "Ch" {
invalid = append(invalid, core.Concat(path, ": Watcher.Ch"))
}
node, ok := decl.(*ast.GenDecl)
if !ok {
continue
}
for _, spec := range node.Specs {
switch item := spec.(type) {
case *ast.TypeSpec:
if item.Name.Name == "KV" {
invalid = append(invalid, core.Concat(path, ": ", item.Name.Name))
}
if item.Name.Name != "Watcher" {
continue
}
structType, ok := item.Type.(*ast.StructType)
if !ok {
continue
}
for _, field := range structType.Fields.List {
for _, name := range field.Names {
if name.Name == "Ch" {
invalid = append(invalid, core.Concat(path, ": Watcher.Ch"))
}
}
case *ast.ValueSpec:
for _, name := range item.Names {
if name.Name == "ErrNotFound" || name.Name == "ErrQuotaExceeded" {
invalid = append(invalid, core.Concat(path, ": ", name.Name))
}
}
case *ast.ValueSpec:
for _, name := range item.Names {
if name.Name == "ErrNotFound" || name.Name == "ErrQuotaExceeded" {
invalid = append(invalid, core.Concat(path, ": ", name.Name))
}
}
}


@ -46,7 +46,7 @@ func TestCoverage_GetAll_Bad_ScanError(t *testing.T) {
// code scans into plain strings, which cannot represent NULL.
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
// Insert a normal row first so the query returns results.
assertNoError(t, storeInstance.Set("g", "good", "value"))
@ -90,8 +90,7 @@ func TestCoverage_GetAll_Bad_RowsError(t *testing.T) {
for i := range rows {
assertNoError(t, storeInstance.Set("g", core.Sprintf("key-%06d", i), core.Sprintf("value-with-padding-%06d-xxxxxxxxxxxxxxxxxxxxxxxx", i)))
}
storeInstance.Close()
_ = storeInstance.Close()
// Force a WAL checkpoint so all data is in the main database file.
rawDatabase, err := sql.Open("sqlite", databasePath)
assertNoError(t, err)
@ -123,7 +122,7 @@ func TestCoverage_GetAll_Bad_RowsError(t *testing.T) {
reopenedStore, err := New(databasePath)
assertNoError(t, err)
defer reopenedStore.Close()
defer func() { _ = reopenedStore.Close() }()
_, err = reopenedStore.GetAll("g")
assertError(t, err)
@ -138,7 +137,7 @@ func TestCoverage_Render_Bad_ScanError(t *testing.T) {
// Same NULL-key technique as TestCoverage_GetAll_Bad_ScanError.
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("g", "good", "value"))
@ -178,8 +177,7 @@ func TestCoverage_Render_Bad_RowsError(t *testing.T) {
for i := range rows {
assertNoError(t, storeInstance.Set("g", core.Sprintf("key-%06d", i), core.Sprintf("value-with-padding-%06d-xxxxxxxxxxxxxxxxxxxxxxxx", i)))
}
storeInstance.Close()
_ = storeInstance.Close()
rawDatabase, err := sql.Open("sqlite", databasePath)
assertNoError(t, err)
rawDatabase.SetMaxOpenConns(1)
@ -207,7 +205,7 @@ func TestCoverage_Render_Bad_RowsError(t *testing.T) {
reopenedStore, err := New(databasePath)
assertNoError(t, err)
defer reopenedStore.Close()
defer func() { _ = reopenedStore.Close() }()
_, err = reopenedStore.Render("{{ . }}", "g")
assertError(t, err)
@ -223,7 +221,7 @@ func TestCoverage_GroupsSeq_Bad_ScanError(t *testing.T) {
// production code scans into a plain string, which cannot represent NULL.
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_, err = storeInstance.sqliteDatabase.Exec("ALTER TABLE entries RENAME TO entries_backup")
assertNoError(t, err)
@ -256,7 +254,7 @@ func TestCoverage_GroupsSeq_Bad_RowsError(t *testing.T) {
groupRowsErr: core.E("stubSQLiteScenario", "rows iteration failed", nil),
groupRowsErrIndex: 0,
})
defer database.Close()
defer func() { _ = database.Close() }()
storeInstance := &Store{
sqliteDatabase: database,
@ -294,7 +292,7 @@ func TestCoverage_ScopedStore_Bad_GroupsSeqRowsError(t *testing.T) {
groupRowsErr: core.E("stubSQLiteScenario", "rows iteration failed", nil),
groupRowsErrIndex: 1,
})
defer database.Close()
defer func() { _ = database.Close() }()
scopedStore := &ScopedStore{
store: &Store{
@ -324,7 +322,7 @@ func TestCoverage_EnsureSchema_Bad_TableExistsQueryError(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
tableExistsErr: core.E("stubSQLiteScenario", "sqlite master query failed", nil),
})
defer database.Close()
defer func() { _ = database.Close() }()
err := ensureSchema(database)
assertError(t, err)
@ -338,7 +336,7 @@ func TestCoverage_EnsureSchema_Good_ExistingEntriesAndLegacyMigration(t *testing
{0, "expires_at", "INTEGER", 0, nil, 0},
},
})
defer database.Close()
defer func() { _ = database.Close() }()
assertNoError(t, ensureSchema(database))
}
@ -348,7 +346,7 @@ func TestCoverage_EnsureSchema_Bad_ExpiryColumnQueryError(t *testing.T) {
tableExistsFound: true,
tableInfoErr: core.E("stubSQLiteScenario", "table_info query failed", nil),
})
defer database.Close()
defer func() { _ = database.Close() }()
err := ensureSchema(database)
assertError(t, err)
@ -363,7 +361,7 @@ func TestCoverage_EnsureSchema_Bad_MigrationError(t *testing.T) {
},
insertErr: core.E("stubSQLiteScenario", "insert failed", nil),
})
defer database.Close()
defer func() { _ = database.Close() }()
err := ensureSchema(database)
assertError(t, err)
@ -378,7 +376,7 @@ func TestCoverage_EnsureSchema_Bad_MigrationCommitError(t *testing.T) {
},
commitErr: core.E("stubSQLiteScenario", "commit failed", nil),
})
defer database.Close()
defer func() { _ = database.Close() }()
err := ensureSchema(database)
assertError(t, err)
@ -389,7 +387,7 @@ func TestCoverage_TableHasColumn_Bad_QueryError(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
tableInfoErr: core.E("stubSQLiteScenario", "table_info query failed", nil),
})
defer database.Close()
defer func() { _ = database.Close() }()
_, err := tableHasColumn(database, "entries", "expires_at")
assertError(t, err)
@ -403,7 +401,7 @@ func TestCoverage_EnsureExpiryColumn_Good_DuplicateColumn(t *testing.T) {
},
alterTableErr: core.E("stubSQLiteScenario", "duplicate column name: expires_at", nil),
})
defer database.Close()
defer func() { _ = database.Close() }()
assertNoError(t, ensureExpiryColumn(database))
}
@ -415,7 +413,7 @@ func TestCoverage_EnsureExpiryColumn_Bad_AlterTableError(t *testing.T) {
},
alterTableErr: core.E("stubSQLiteScenario", "permission denied", nil),
})
defer database.Close()
defer func() { _ = database.Close() }()
err := ensureExpiryColumn(database)
assertError(t, err)
@ -429,7 +427,7 @@ func TestCoverage_MigrateLegacyEntriesTable_Bad_InsertError(t *testing.T) {
},
insertErr: core.E("stubSQLiteScenario", "insert failed", nil),
})
defer database.Close()
defer func() { _ = database.Close() }()
err := migrateLegacyEntriesTable(database)
assertError(t, err)
@ -440,7 +438,7 @@ func TestCoverage_MigrateLegacyEntriesTable_Bad_BeginError(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
beginErr: core.E("stubSQLiteScenario", "begin failed", nil),
})
defer database.Close()
defer func() { _ = database.Close() }()
err := migrateLegacyEntriesTable(database)
assertError(t, err)
@ -453,7 +451,7 @@ func TestCoverage_MigrateLegacyEntriesTable_Good_CreatesAndMigratesLegacyRows(t
{0, "grp", "TEXT", 1, nil, 0},
},
})
defer database.Close()
defer func() { _ = database.Close() }()
assertNoError(t, migrateLegacyEntriesTable(database))
}
@ -462,7 +460,7 @@ func TestCoverage_MigrateLegacyEntriesTable_Bad_TableInfoError(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{
tableInfoErr: core.E("stubSQLiteScenario", "table_info query failed", nil),
})
defer database.Close()
defer func() { _ = database.Close() }()
err := migrateLegacyEntriesTable(database)
assertError(t, err)

doc.go

@ -4,7 +4,7 @@
//
// Prefer `store.New(...)` and `store.NewScoped(...)` for the primary API.
// Use `store.NewConfigured(store.StoreConfig{...})` and
// `store.NewScopedConfigured(store.ScopedStoreConfig{...})` when the
// `store.NewScopedConfigured(configuredStore, store.ScopedStoreConfig{...})` when the
// configuration is already known:
//
// configuredStore, err := store.NewConfigured(store.StoreConfig{
@ -39,7 +39,7 @@
// if err != nil {
// return
// }
// defer configuredStore.Close()
// defer func() { _ = configuredStore.Close() }()
//
// if err := configuredStore.Set("config", "colour", "blue"); err != nil {
// return


@ -190,7 +190,7 @@ Watcher delivery is grouped by the registered group name. Wildcard `"*"` matches
## Namespace Isolation (ScopedStore)
`ScopedStore` wraps a `*Store` and automatically prefixes all group names with `namespace + ":"`. This prevents key collisions when multiple tenants share a single underlying database. When the namespace and quota are already known, prefer `NewScopedConfigured(store.ScopedStoreConfig{...})` so the configuration is explicit at the call site.
`ScopedStore` wraps a `*Store` and automatically prefixes all group names with `namespace + ":"`. This prevents key collisions when multiple tenants share a single underlying database. When the namespace and quota are already known, prefer `NewScopedConfigured(storeInstance, store.ScopedStoreConfig{...})` so the configuration is explicit at the call site.
```go
scopedStore, err := store.NewScopedConfigured(storeInstance, store.ScopedStoreConfig{
@ -215,7 +215,7 @@ Namespace strings must match `^[a-zA-Z0-9-]+$`. Invalid namespaces are rejected
### Quota Enforcement
`NewScopedConfigured(store.ScopedStoreConfig{...})` is the preferred way to set per-namespace limits because the quota values stay visible at the call site. For example, `store.QuotaConfig{MaxKeys: 100, MaxGroups: 10}` caps a namespace at 100 keys and 10 groups:
`NewScopedConfigured(storeInstance, store.ScopedStoreConfig{...})` is the preferred way to set per-namespace limits because the quota values stay visible at the call site. For example, `store.QuotaConfig{MaxKeys: 100, MaxGroups: 10}` caps a namespace at 100 keys and 10 groups:
```go
type QuotaConfig struct {


@ -13,7 +13,7 @@ import (
//
// Usage example:
//
// db.EnsureScoringTables()
// _ = db.EnsureScoringTables()
// db.Exec(core.Sprintf("SELECT * FROM %s", store.TableCheckpointScores))
const (
// TableCheckpointScores is the table name for checkpoint scoring data.
@ -38,7 +38,7 @@ const (
//
// db, err := store.OpenDuckDB("/Volumes/Data/lem/lem.duckdb")
// if err != nil { return }
// defer db.Close()
// defer func() { _ = db.Close() }()
// rows, _ := db.QueryGoldenSet(500)
type DuckDB struct {
conn *sql.DB
@ -57,7 +57,7 @@ func OpenDuckDB(path string) (*DuckDB, error) {
return nil, core.E("store.OpenDuckDB", core.Sprintf("open duckdb %s", path), err)
}
if err := conn.Ping(); err != nil {
conn.Close()
_ = conn.Close()
return nil, core.E("store.OpenDuckDB", core.Sprintf("ping duckdb %s", path), err)
}
return &DuckDB{conn: conn, path: path}, nil
@ -74,7 +74,7 @@ func OpenDuckDBReadWrite(path string) (*DuckDB, error) {
return nil, core.E("store.OpenDuckDBReadWrite", core.Sprintf("open duckdb %s", path), err)
}
if err := conn.Ping(); err != nil {
conn.Close()
_ = conn.Close()
return nil, core.E("store.OpenDuckDBReadWrite", core.Sprintf("ping duckdb %s", path), err)
}
return &DuckDB{conn: conn, path: path}, nil
@ -84,7 +84,7 @@ func OpenDuckDBReadWrite(path string) (*DuckDB, error) {
//
// Usage example:
//
// defer db.Close()
// defer func() { _ = db.Close() }()
func (db *DuckDB) Close() error {
return db.conn.Close()
}
@ -116,7 +116,10 @@ func (db *DuckDB) Conn() *sql.DB {
// err := db.Exec("INSERT INTO golden_set VALUES (?, ?)", idx, prompt)
func (db *DuckDB) Exec(query string, args ...any) error {
_, err := db.conn.Exec(query, args...)
return err
if err != nil {
return core.E("store.DuckDB.Exec", "execute query", err)
}
return nil
}
// QueryRowScan executes a query expected to return at most one row and scans
@ -279,7 +282,9 @@ func (db *DuckDB) QueryGoldenSet(minChars int) ([]GoldenSetRow, error) {
if err != nil {
return nil, core.E("store.DuckDB.QueryGoldenSet", "query golden_set", err)
}
defer rows.Close()
defer func() {
_ = rows.Close()
}()
var result []GoldenSetRow
for rows.Next() {
@ -331,7 +336,9 @@ func (db *DuckDB) QueryExpansionPrompts(status string, limit int) ([]ExpansionPr
if err != nil {
return nil, core.E("store.DuckDB.QueryExpansionPrompts", "query expansion_prompts", err)
}
defer rows.Close()
defer func() {
_ = rows.Close()
}()
var result []ExpansionPromptRow
for rows.Next() {
@ -385,7 +392,9 @@ func (db *DuckDB) QueryRows(query string, args ...any) ([]map[string]any, error)
if err != nil {
return nil, core.E("store.DuckDB.QueryRows", "query", err)
}
defer rows.Close()
defer func() {
_ = rows.Close()
}()
cols, err := rows.Columns()
if err != nil {
@ -415,25 +424,32 @@ func (db *DuckDB) QueryRows(query string, args ...any) ([]map[string]any, error)
//
// Usage example:
//
// db.EnsureScoringTables()
func (db *DuckDB) EnsureScoringTables() {
db.conn.Exec(core.Sprintf(`CREATE TABLE IF NOT EXISTS %s (
// if err := db.EnsureScoringTables(); err != nil { return }
func (db *DuckDB) EnsureScoringTables() error {
if _, err := db.conn.Exec(core.Sprintf(`CREATE TABLE IF NOT EXISTS %s (
model TEXT, run_id TEXT, label TEXT, iteration INTEGER,
correct INTEGER, total INTEGER, accuracy DOUBLE,
scored_at TIMESTAMP DEFAULT current_timestamp,
PRIMARY KEY (run_id, label)
)`, TableCheckpointScores))
db.conn.Exec(core.Sprintf(`CREATE TABLE IF NOT EXISTS %s (
)`, TableCheckpointScores)); err != nil {
return core.E("store.DuckDB.EnsureScoringTables", "create checkpoint_scores", err)
}
if _, err := db.conn.Exec(core.Sprintf(`CREATE TABLE IF NOT EXISTS %s (
model TEXT, run_id TEXT, label TEXT, probe_id TEXT,
passed BOOLEAN, response TEXT, iteration INTEGER,
scored_at TIMESTAMP DEFAULT current_timestamp,
PRIMARY KEY (run_id, label, probe_id)
)`, TableProbeResults))
db.conn.Exec(`CREATE TABLE IF NOT EXISTS scoring_results (
)`, TableProbeResults)); err != nil {
return core.E("store.DuckDB.EnsureScoringTables", "create probe_results", err)
}
if _, err := db.conn.Exec(`CREATE TABLE IF NOT EXISTS scoring_results (
model TEXT, prompt_id TEXT, suite TEXT,
dimension TEXT, score DOUBLE,
scored_at TIMESTAMP DEFAULT current_timestamp
)`)
)`); err != nil {
return core.E("store.DuckDB.EnsureScoringTables", "create scoring_results", err)
}
return nil
}
// WriteScoringResult writes a single scoring dimension result to DuckDB.
@ -446,7 +462,10 @@ func (db *DuckDB) WriteScoringResult(model, promptID, suite, dimension string, s
`INSERT INTO scoring_results (model, prompt_id, suite, dimension, score) VALUES (?, ?, ?, ?, ?)`,
model, promptID, suite, dimension, score,
)
return err
if err != nil {
return core.E("store.DuckDB.WriteScoringResult", "insert scoring result", err)
}
return nil
}
// TableCounts returns row counts for all known tables.


@ -27,7 +27,7 @@ func (t EventType) String() string {
case EventDelete:
return "delete"
case EventDeleteGroup:
return "deletegroup"
return "delete_group"
default:
return "unknown"
}
@ -72,23 +72,16 @@ func (storeInstance *Store) Watch(group string) <-chan Event {
return closedEventChannel()
}
eventChannel := make(chan Event, watcherEventBufferCapacity)
storeInstance.lifecycleLock.Lock()
closed := storeInstance.isClosed
storeInstance.lifecycleLock.Unlock()
if closed {
defer storeInstance.lifecycleLock.Unlock()
if storeInstance.isClosed {
return closedEventChannel()
}
eventChannel := make(chan Event, watcherEventBufferCapacity)
storeInstance.watcherLock.Lock()
defer storeInstance.watcherLock.Unlock()
storeInstance.lifecycleLock.Lock()
closed = storeInstance.isClosed
storeInstance.lifecycleLock.Unlock()
if closed {
return closedEventChannel()
}
if storeInstance.watchers == nil {
storeInstance.watchers = make(map[string][]chan Event)
}
@ -152,9 +145,8 @@ func (storeInstance *Store) OnChange(callback func(Event)) func() {
}
storeInstance.lifecycleLock.Lock()
closed := storeInstance.isClosed
storeInstance.lifecycleLock.Unlock()
if closed {
defer storeInstance.lifecycleLock.Unlock()
if storeInstance.isClosed {
return func() {}
}
@ -163,12 +155,6 @@ func (storeInstance *Store) OnChange(callback func(Event)) func() {
storeInstance.callbackLock.Lock()
defer storeInstance.callbackLock.Unlock()
storeInstance.lifecycleLock.Lock()
closed = storeInstance.isClosed
storeInstance.lifecycleLock.Unlock()
if closed {
return func() {}
}
storeInstance.callbacks = append(storeInstance.callbacks, callbackRegistration)
// Return an idempotent unregister function.
@ -202,20 +188,13 @@ func (storeInstance *Store) notify(event Event) {
}
storeInstance.lifecycleLock.Lock()
closed := storeInstance.isClosed
storeInstance.lifecycleLock.Unlock()
if closed {
if storeInstance.isClosed {
storeInstance.lifecycleLock.Unlock()
return
}
storeInstance.watcherLock.RLock()
storeInstance.lifecycleLock.Lock()
closed = storeInstance.isClosed
storeInstance.lifecycleLock.Unlock()
if closed {
storeInstance.watcherLock.RUnlock()
return
}
for _, registeredChannel := range storeInstance.watchers["*"] {
select {
case registeredChannel <- event:
@ -230,7 +209,13 @@ func (storeInstance *Store) notify(event Event) {
}
storeInstance.watcherLock.RUnlock()
storeInstance.lifecycleLock.Lock()
if storeInstance.isClosed {
storeInstance.lifecycleLock.Unlock()
return
}
storeInstance.callbackLock.RLock()
storeInstance.lifecycleLock.Unlock()
callbacks := append([]changeCallbackRegistration(nil), storeInstance.callbacks...)
storeInstance.callbackLock.RUnlock()


@ -10,7 +10,7 @@ import (
func TestEvents_Watch_Good_Group(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch("config")
defer storeInstance.Unwatch("config", events)
@ -28,7 +28,7 @@ func TestEvents_Watch_Good_Group(t *testing.T) {
func TestEvents_Watch_Good_WildcardGroup(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch("*")
defer storeInstance.Unwatch("*", events)
@ -48,7 +48,7 @@ func TestEvents_Watch_Good_WildcardGroup(t *testing.T) {
func TestEvents_Unwatch_Good_StopsDelivery(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch("g")
storeInstance.Unwatch("g", events)
@ -61,7 +61,7 @@ func TestEvents_Unwatch_Good_StopsDelivery(t *testing.T) {
func TestEvents_Unwatch_Good_Idempotent(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch("g")
storeInstance.Unwatch("g", events)
@ -80,14 +80,14 @@ func TestEvents_Close_Good_ClosesWatcherChannels(t *testing.T) {
func TestEvents_Unwatch_Good_NilChannel(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
storeInstance.Unwatch("g", nil)
}
func TestEvents_Watch_Good_DeleteEvent(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch("g")
defer storeInstance.Unwatch("g", events)
@ -110,7 +110,7 @@ func TestEvents_Watch_Good_DeleteEvent(t *testing.T) {
func TestEvents_Watch_Good_DeleteGroupEvent(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch("g")
defer storeInstance.Unwatch("g", events)
@ -134,7 +134,7 @@ func TestEvents_Watch_Good_DeleteGroupEvent(t *testing.T) {
func TestEvents_OnChange_Good_Fires(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
var events []Event
var eventsMutex sync.Mutex
@ -158,7 +158,7 @@ func TestEvents_OnChange_Good_Fires(t *testing.T) {
func TestEvents_OnChange_Good_GroupFilteredCallback(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
var seen []string
unregister := storeInstance.OnChange(func(event Event) {
@ -177,7 +177,7 @@ func TestEvents_OnChange_Good_GroupFilteredCallback(t *testing.T) {
func TestEvents_OnChange_Good_ReentrantSubscriptionChanges(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
var (
seen []string
@ -234,7 +234,7 @@ func TestEvents_OnChange_Good_ReentrantSubscriptionChanges(t *testing.T) {
func TestEvents_Notify_Good_PopulatesTimestamp(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch("config")
defer storeInstance.Unwatch("config", events)
@ -253,7 +253,7 @@ func TestEvents_Notify_Good_PopulatesTimestamp(t *testing.T) {
func TestEvents_Watch_Good_BufferDrops(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch("g")
defer storeInstance.Unwatch("g", events)
@ -268,7 +268,7 @@ func TestEvents_Watch_Good_BufferDrops(t *testing.T) {
func TestEvents_Watch_Good_ConcurrentWatchUnwatch(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
const workers = 10
var wg sync.WaitGroup
@ -289,7 +289,7 @@ func TestEvents_Watch_Good_ConcurrentWatchUnwatch(t *testing.T) {
func TestEvents_Watch_Good_ScopedStoreEventGroup(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNotNil(t, scopedStore)
@ -310,7 +310,7 @@ func TestEvents_Watch_Good_ScopedStoreEventGroup(t *testing.T) {
func TestEvents_Watch_Good_SetWithTTL(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch("g")
defer storeInstance.Unwatch("g", events)
@ -329,7 +329,7 @@ func TestEvents_Watch_Good_SetWithTTL(t *testing.T) {
func TestEvents_EventType_Good_String(t *testing.T) {
assertEqual(t, "set", EventSet.String())
assertEqual(t, "delete", EventDelete.String())
assertEqual(t, "deletegroup", EventDeleteGroup.String())
assertEqual(t, "delete_group", EventDeleteGroup.String())
assertEqual(t, "unknown", EventType(99).String())
}
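The repeated swap from `defer storeInstance.Close()` to `defer func() { _ = storeInstance.Close() }()` across these tests exists to make the discarded Close error explicit, which satisfies errcheck-style linters. A minimal sketch with a hypothetical closer type:

```go
package main

import "fmt"

// closer mimics a resource whose Close returns an error, as *Store does.
type closer struct{ closed bool }

func (c *closer) Close() error {
	c.closed = true
	return nil
}

func use() *closer {
	c := &closer{}
	// A bare `defer c.Close()` silently discards the error and trips
	// errcheck; the closure form makes the discard deliberate and visible.
	defer func() { _ = c.Close() }()
	return c
}

func main() {
	fmt.Println(use().closed) // prints true: the deferred Close ran
}
```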

go.mod

@ -4,7 +4,7 @@ go 1.26.0
require (
dappco.re/go/core v0.8.0-alpha.1
dappco.re/go/io v0.8.0-alpha.1
dappco.re/go/core/io v0.4.2
github.com/influxdata/influxdb-client-go/v2 v2.14.0 // Note: InfluxDB storage client; no core equivalent
github.com/klauspost/compress v1.18.5 // Note: compression codecs for storage payloads; no core equivalent
modernc.org/sqlite v1.47.0 // Note: pure-Go SQLite driver; no core equivalent
@ -16,16 +16,13 @@ require (
github.com/apapsch/go-jsonmerge/v2 v2.0.0 // indirect
github.com/go-viper/mapstructure/v2 v2.5.0 // indirect
github.com/goccy/go-json v0.10.6 // indirect
github.com/golang/snappy v1.0.0 // indirect
github.com/google/flatbuffers v25.1.24+incompatible // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/influxdata/line-protocol v0.0.0-20200327222509-2487e7298839 // indirect
github.com/klauspost/cpuid/v2 v2.3.0 // indirect
github.com/oapi-codegen/runtime v1.0.0 // indirect
github.com/parquet-go/bitpack v1.0.0 // indirect
github.com/parquet-go/jsonlite v1.0.0 // indirect
github.com/pierrec/lz4/v4 v4.1.22 // indirect
github.com/twpayne/go-geom v1.6.1 // indirect
github.com/zeebo/xxh3 v1.1.0 // indirect
github.com/zeebo/xxh3 v1.0.2 // indirect
golang.org/x/exp v0.0.0-20260312153236-7ab1446f8b90 // indirect
golang.org/x/mod v0.34.0 // indirect
golang.org/x/net v0.53.0 // indirect
@ -33,7 +30,6 @@ require (
golang.org/x/telemetry v0.0.0-20260311193753-579e4da9a98c // indirect
golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da // indirect
gonum.org/v1/gonum v0.17.0 // indirect
google.golang.org/protobuf v1.36.11 // indirect
)
require (
@ -42,7 +38,6 @@ require (
github.com/marcboeker/go-duckdb v1.8.5 // Note: DuckDB workspace buffer driver; no core equivalent
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/ncruces/go-strftime v1.0.0 // indirect
github.com/parquet-go/parquet-go v0.29.0 // Note: Parquet file storage support; no core equivalent
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
golang.org/x/sys v0.43.0 // indirect
golang.org/x/tools v0.43.0 // indirect

go.sum

@ -2,13 +2,7 @@ dappco.re/go/core v0.8.0-alpha.1 h1:gj7+Scv+L63Z7wMxbJYHhaRFkHJo2u4MMPuUSv/Dhtk=
dappco.re/go/core v0.8.0-alpha.1/go.mod h1:f2/tBZ3+3IqDrg2F5F598llv0nmb/4gJVCFzM5geE4A=
dappco.re/go/core/io v0.4.2 h1:SHNF/xMPyFnKWWYoFW5Y56eiuGVL/mFa1lfIw/530ls=
dappco.re/go/core/io v0.4.2/go.mod h1:w71dukyunczLb8frT9JOd5B78PjwWQD3YAXiCt3AcPA=
github.com/DATA-DOG/go-sqlmock v1.5.2 h1:OcvFkGmslmlZibjAjaHm3L//6LiuBgolP7OputlJIzU=
github.com/DATA-DOG/go-sqlmock v1.5.2/go.mod h1:88MAG/4G7SMwSE3CeA0ZKzrT5CiOU3OJ+JlNzwDqpNU=
github.com/RaveNoX/go-jsoncommentstrip v1.0.0/go.mod h1:78ihd09MekBnJnxpICcwzCMzGrKSKYe4AqU6PDYYpjk=
github.com/alecthomas/assert/v2 v2.10.0 h1:jjRCHsj6hBJhkmhznrCzoNpbA3zqy0fYiUcYZP/GkPY=
github.com/alecthomas/assert/v2 v2.10.0/go.mod h1:Bze95FyfUr7x34QZrjL+XP+0qgp/zg8yS+TtBj1WA3k=
github.com/alecthomas/repr v0.4.0 h1:GhI2A8MACjfegCPVq9f1FLvIBS+DrQ2KQBFZP1iFzXc=
github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4=
github.com/andybalholm/brotli v1.2.0 h1:ukwgCxwYrmACq68yiUqwIWnGY0cTPox/M94sVwToPjQ=
github.com/andybalholm/brotli v1.2.0/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=
github.com/apache/arrow-go/v18 v18.1.0 h1:agLwJUiVuwXZdwPYVrlITfx7bndULJ/dggbnLFgDp/Y=
@ -27,8 +21,8 @@ github.com/go-viper/mapstructure/v2 v2.5.0 h1:vM5IJoUAy3d7zRSVtIwQgBj7BiWtMPfmPE
github.com/go-viper/mapstructure/v2 v2.5.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/goccy/go-json v0.10.6 h1:p8HrPJzOakx/mn/bQtjgNjdTcN+/S6FcG2CTtQOrHVU=
github.com/goccy/go-json v0.10.6/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/golang/snappy v1.0.0 h1:Oy607GVXHs7RtbggtPBnr2RmDArIsAefDwvrdWvRhGs=
github.com/golang/snappy v1.0.0/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/flatbuffers v25.1.24+incompatible h1:4wPqL3K7GzBd1CwyhSd3usxLKOaJN/AC6puCca6Jm7o=
github.com/google/flatbuffers v25.1.24+incompatible/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
@ -39,8 +33,6 @@ github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM=
github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg=
github.com/influxdata/influxdb-client-go/v2 v2.14.0 h1:AjbBfJuq+QoaXNcrova8smSjwJdUHnwvfjMF71M1iI4=
github.com/influxdata/influxdb-client-go/v2 v2.14.0/go.mod h1:Ahpm3QXKMJslpXl3IftVLVezreAUtBOTZssDrjZEFHI=
github.com/influxdata/line-protocol v0.0.0-20200327222509-2487e7298839 h1:W9WBk7wlPfJLvMCdtV4zPulc4uCPrlywQOmbFOhgQNU=
@ -64,12 +56,6 @@ github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOF
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/oapi-codegen/runtime v1.0.0 h1:P4rqFX5fMFWqRzY9M/3YF9+aPSPPB06IzP2P7oOxrWo=
github.com/oapi-codegen/runtime v1.0.0/go.mod h1:LmCUMQuPB4M/nLXilQXhHw+BLZdDb18B34OO356yJ/A=
github.com/parquet-go/bitpack v1.0.0 h1:AUqzlKzPPXf2bCdjfj4sTeacrUwsT7NlcYDMUQxPcQA=
github.com/parquet-go/bitpack v1.0.0/go.mod h1:XnVk9TH+O40eOOmvpAVZ7K2ocQFrQwysLMnc6M/8lgs=
github.com/parquet-go/jsonlite v1.0.0 h1:87QNdi56wOfsE5bdgas0vRzHPxfJgzrXGml1zZdd7VU=
github.com/parquet-go/jsonlite v1.0.0/go.mod h1:nDjpkpL4EOtqs6NQugUsi0Rleq9sW/OtC1NnZEnxzF0=
github.com/parquet-go/parquet-go v0.29.0 h1:xXlPtFVR51jpSVzf+cgHnNIcb7Xet+iuvkbe0HIm90Y=
github.com/parquet-go/parquet-go v0.29.0/go.mod h1:navtkAYr2LGoJVp141oXPlO/sxLvaOe3la2JEoD8+rg=
github.com/pierrec/lz4/v4 v4.1.22 h1:cKFw6uJDK+/gfw5BcDL0JL5aBsAFdsIT18eRtLj7VIU=
github.com/pierrec/lz4/v4 v4.1.22/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
@ -82,14 +68,10 @@ github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/twpayne/go-geom v1.6.1 h1:iLE+Opv0Ihm/ABIcvQFGIiFBXd76oBIar9drAwHFhR4=
github.com/twpayne/go-geom v1.6.1/go.mod h1:Kr+Nly6BswFsKM5sd31YaoWS5PeDDH2NftJTK7Gd028=
github.com/xyproto/randomstring v1.0.5 h1:YtlWPoRdgMu3NZtP45drfy1GKoojuR7hmRcnhZqKjWU=
github.com/xyproto/randomstring v1.0.5/go.mod h1:rgmS5DeNXLivK7YprL0pY+lTuhNQW3iGxZ18UQApw/E=
github.com/zeebo/assert v1.3.0 h1:g7C04CbJuIDKNPFHmsk4hwZDO5O+kntRxzaUoNXj+IQ=
github.com/zeebo/assert v1.3.0/go.mod h1:Pq9JiuJQpG8JLJdtkwrJESF0Foym2/D9XMU5ciN/wJ0=
github.com/zeebo/xxh3 v1.1.0 h1:s7DLGDK45Dyfg7++yxI0khrfwq9661w9EN78eP/UZVs=
github.com/zeebo/xxh3 v1.1.0/go.mod h1:IisAie1LELR4xhVinxWS5+zf1lA4p0MW4T+w+W07F5s=
github.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0=
github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA=
golang.org/x/exp v0.0.0-20260312153236-7ab1446f8b90 h1:jiDhWWeC7jfWqR9c/uplMOqJ0sbNlNWv0UkzE0vX1MA=
golang.org/x/exp v0.0.0-20260312153236-7ab1446f8b90/go.mod h1:xE1HEv6b+1SCZ5/uscMRjUBKtIxworgEcEi+/n9NQDQ=
golang.org/x/mod v0.34.0 h1:xIHgNUUnW6sYkcM5Jleh05DvLOtwc6RitGHbDk4akRI=
@ -109,8 +91,6 @@ golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da h1:noIWHXmPHxILtqtCOPIhS
golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da/go.mod h1:NDW/Ps6MPRej6fsCIbMTohpP40sJ/P/vI1MoTEGwX90=
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis=

import.go

@ -94,7 +94,9 @@ func ImportAll(db *DuckDB, cfg ImportConfig, w io.Writer) error {
}
}
if isFile(goldenPath) {
db.Exec("DROP TABLE IF EXISTS golden_set")
if err := db.Exec("DROP TABLE IF EXISTS golden_set"); err != nil {
return core.E("store.ImportAll", "drop golden_set", err)
}
err := db.Exec(core.Sprintf(`
CREATE TABLE golden_set AS
SELECT
@ -110,10 +112,12 @@ func ImportAll(db *DuckDB, cfg ImportConfig, w io.Writer) error {
FROM read_json_auto('%s', maximum_object_size=1048576)
`, escapeSQLPath(goldenPath)))
if err != nil {
core.Print(w, " WARNING: golden set import failed: %v", err)
return core.E("store.ImportAll", "import golden_set", err)
} else {
var n int
db.QueryRowScan("SELECT count(*) FROM golden_set", &n)
if err := db.QueryRowScan("SELECT count(*) FROM golden_set", &n); err != nil {
return core.E("store.ImportAll", "count golden_set", err)
}
totals["golden_set"] = n
core.Print(w, " golden_set: %d rows", n)
}
@ -140,23 +144,26 @@ func ImportAll(db *DuckDB, cfg ImportConfig, w io.Writer) error {
{"russian-bridge", []string{"russian-bridge/train.jsonl", "russian-bridge/valid.jsonl"}},
}
trainingLocal := core.JoinPath(cfg.DataDir, "training")
localFs.EnsureDir(trainingLocal)
trainingRoot := cfg.DataDir
if !cfg.SkipM3 && cfg.Scp != nil {
core.Print(w, " Pulling training sets from M3...")
for _, td := range trainingDirs {
for _, rel := range td.files {
local := core.JoinPath(trainingLocal, rel)
localFs.EnsureDir(core.PathDir(local))
local := core.JoinPath(trainingRoot, rel)
if result := localFs.EnsureDir(core.PathDir(local)); !result.OK {
return core.E("store.ImportAll", "ensure training directory", result.Value.(error))
}
remote := core.Sprintf("%s:/Volumes/Data/lem/%s", m3Host, rel)
cfg.Scp(remote, local) // ignore errors, file might not exist
_ = cfg.Scp(remote, local) // ignore errors, file might not exist
}
}
}
db.Exec("DROP TABLE IF EXISTS training_examples")
db.Exec(`
if err := db.Exec("DROP TABLE IF EXISTS training_examples"); err != nil {
return core.E("store.ImportAll", "drop training_examples", err)
}
if err := db.Exec(`
CREATE TABLE training_examples (
source VARCHAR,
split VARCHAR,
@ -166,12 +173,14 @@ func ImportAll(db *DuckDB, cfg ImportConfig, w io.Writer) error {
full_messages TEXT,
char_count INT
)
`)
`); err != nil {
return core.E("store.ImportAll", "create training_examples", err)
}
trainingTotal := 0
for _, td := range trainingDirs {
for _, rel := range td.files {
local := core.JoinPath(trainingLocal, rel)
local := core.JoinPath(trainingRoot, rel)
if !isFile(local) {
continue
}
@ -183,7 +192,10 @@ func ImportAll(db *DuckDB, cfg ImportConfig, w io.Writer) error {
split = "test"
}
n := importTrainingFile(db, local, td.name, split)
n, err := importTrainingFile(db, local, td.name, split)
if err != nil {
return core.E("store.ImportAll", core.Sprintf("import training file %s", local), err)
}
trainingTotal += n
}
}
@ -199,7 +211,7 @@ func ImportAll(db *DuckDB, cfg ImportConfig, w io.Writer) error {
if cfg.Scp != nil {
for _, bname := range []string{"truthfulqa", "gsm8k", "do_not_answer", "toxigen"} {
remote := core.Sprintf("%s:/Volumes/Data/lem/benchmarks/%s.jsonl", m3Host, bname)
cfg.Scp(remote, core.JoinPath(benchLocal, bname+".jsonl"))
_ = cfg.Scp(remote, core.JoinPath(benchLocal, bname+".jsonl"))
}
}
if cfg.ScpDir != nil {
@ -207,25 +219,32 @@ func ImportAll(db *DuckDB, cfg ImportConfig, w io.Writer) error {
localSub := core.JoinPath(benchLocal, subdir)
localFs.EnsureDir(localSub)
remote := core.Sprintf("%s:/Volumes/Data/lem/benchmarks/%s/", m3Host, subdir)
cfg.ScpDir(remote, core.JoinPath(benchLocal)+"/")
_ = cfg.ScpDir(remote, localSub+"/")
}
}
}
db.Exec("DROP TABLE IF EXISTS benchmark_results")
db.Exec(`
if err := db.Exec("DROP TABLE IF EXISTS benchmark_results"); err != nil {
return core.E("store.ImportAll", "drop benchmark_results", err)
}
if err := db.Exec(`
CREATE TABLE benchmark_results (
source VARCHAR, id VARCHAR, benchmark VARCHAR, model VARCHAR,
prompt TEXT, response TEXT, elapsed_seconds DOUBLE, domain VARCHAR
)
`)
`); err != nil {
return core.E("store.ImportAll", "create benchmark_results", err)
}
benchTotal := 0
for _, subdir := range []string{"results", "scale_results", "cross_arch_results", "deepseek-r1-7b"} {
resultDir := core.JoinPath(benchLocal, subdir)
matches := core.PathGlob(core.JoinPath(resultDir, "*.jsonl"))
for _, jf := range matches {
n := importBenchmarkFile(db, jf, subdir)
n, err := importBenchmarkFile(db, jf, subdir)
if err != nil {
return core.E("store.ImportAll", core.Sprintf("import benchmark file %s", jf), err)
}
benchTotal += n
}
}
@ -235,12 +254,15 @@ func ImportAll(db *DuckDB, cfg ImportConfig, w io.Writer) error {
local := core.JoinPath(benchLocal, bfile+".jsonl")
if !isFile(local) {
if !cfg.SkipM3 && cfg.Scp != nil {
remote := core.Sprintf("%s:/Volumes/Data/lem/benchmark/%s.jsonl", m3Host, bfile)
cfg.Scp(remote, local)
remote := core.Sprintf("%s:/Volumes/Data/lem/benchmarks/%s.jsonl", m3Host, bfile)
_ = cfg.Scp(remote, local)
}
}
if isFile(local) {
n := importBenchmarkFile(db, local, "benchmark")
n, err := importBenchmarkFile(db, local, "benchmark")
if err != nil {
return core.E("store.ImportAll", core.Sprintf("import benchmark file %s", local), err)
}
benchTotal += n
}
}
@ -248,19 +270,26 @@ func ImportAll(db *DuckDB, cfg ImportConfig, w io.Writer) error {
core.Print(w, " benchmark_results: %d rows", benchTotal)
// ── 4. Benchmark questions ──
db.Exec("DROP TABLE IF EXISTS benchmark_questions")
db.Exec(`
if err := db.Exec("DROP TABLE IF EXISTS benchmark_questions"); err != nil {
return core.E("store.ImportAll", "drop benchmark_questions", err)
}
if err := db.Exec(`
CREATE TABLE benchmark_questions (
benchmark VARCHAR, id VARCHAR, question TEXT,
best_answer TEXT, correct_answers TEXT, incorrect_answers TEXT, category VARCHAR
)
`)
`); err != nil {
return core.E("store.ImportAll", "create benchmark_questions", err)
}
benchQTotal := 0
for _, bname := range []string{"truthfulqa", "gsm8k", "do_not_answer", "toxigen"} {
local := core.JoinPath(benchLocal, bname+".jsonl")
if isFile(local) {
n := importBenchmarkQuestions(db, local, bname)
n, err := importBenchmarkQuestions(db, local, bname)
if err != nil {
return core.E("store.ImportAll", core.Sprintf("import benchmark questions %s", local), err)
}
benchQTotal += n
}
}
@ -268,12 +297,16 @@ func ImportAll(db *DuckDB, cfg ImportConfig, w io.Writer) error {
core.Print(w, " benchmark_questions: %d rows", benchQTotal)
// ── 5. Seeds ──
db.Exec("DROP TABLE IF EXISTS seeds")
db.Exec(`
if err := db.Exec("DROP TABLE IF EXISTS seeds"); err != nil {
return core.E("store.ImportAll", "drop seeds", err)
}
if err := db.Exec(`
CREATE TABLE seeds (
source_file VARCHAR, region VARCHAR, seed_id VARCHAR, domain VARCHAR, prompt TEXT
)
`)
`); err != nil {
return core.E("store.ImportAll", "create seeds", err)
}
seedTotal := 0
seedDirs := []string{core.JoinPath(cfg.DataDir, "seeds"), "/tmp/lem-data/seeds", "/tmp/lem-repo/seeds"}
@ -281,7 +314,10 @@ func ImportAll(db *DuckDB, cfg ImportConfig, w io.Writer) error {
if !isDir(seedDir) {
continue
}
n := importSeeds(db, seedDir)
n, err := importSeeds(db, seedDir)
if err != nil {
return core.E("store.ImportAll", core.Sprintf("import seeds %s", seedDir), err)
}
seedTotal += n
}
totals["seeds"] = seedTotal
@ -303,13 +339,13 @@ func ImportAll(db *DuckDB, cfg ImportConfig, w io.Writer) error {
return nil
}
func importTrainingFile(db *DuckDB, path, source, split string) int {
func importTrainingFile(db *DuckDB, path, source, split string) (int, error) {
r := localFs.Open(path)
if !r.OK {
return 0
return 0, core.E("store.importTrainingFile", core.Sprintf("open %s", path), r.Value.(error))
}
f := r.Value.(io.ReadCloser)
defer f.Close()
defer func() { _ = f.Close() }()
count := 0
scanner := bufio.NewScanner(f)
@ -339,20 +375,25 @@ func importTrainingFile(db *DuckDB, path, source, split string) int {
}
msgsJSON := core.JSONMarshalString(rec.Messages)
db.Exec(`INSERT INTO training_examples VALUES (?, ?, ?, ?, ?, ?, ?)`,
source, split, prompt, response, assistantCount, msgsJSON, len(response))
if err := db.Exec(`INSERT INTO training_examples VALUES (?, ?, ?, ?, ?, ?, ?)`,
source, split, prompt, response, assistantCount, msgsJSON, len(response)); err != nil {
return count, core.E("store.importTrainingFile", "insert training example", err)
}
count++
}
return count
if err := scanner.Err(); err != nil {
return count, core.E("store.importTrainingFile", "scan training file", err)
}
return count, nil
}
func importBenchmarkFile(db *DuckDB, path, source string) int {
func importBenchmarkFile(db *DuckDB, path, source string) (int, error) {
r := localFs.Open(path)
if !r.OK {
return 0
return 0, core.E("store.importBenchmarkFile", core.Sprintf("open %s", path), r.Value.(error))
}
f := r.Value.(io.ReadCloser)
defer f.Close()
defer func() { _ = f.Close() }()
count := 0
scanner := bufio.NewScanner(f)
@ -364,7 +405,7 @@ func importBenchmarkFile(db *DuckDB, path, source string) int {
continue
}
db.Exec(`INSERT INTO benchmark_results VALUES (?, ?, ?, ?, ?, ?, ?, ?)`,
if err := db.Exec(`INSERT INTO benchmark_results VALUES (?, ?, ?, ?, ?, ?, ?, ?)`,
source,
core.Sprint(rec["id"]),
strOrEmpty(rec, "benchmark"),
@ -373,19 +414,24 @@ func importBenchmarkFile(db *DuckDB, path, source string) int {
strOrEmpty(rec, "response"),
floatOrZero(rec, "elapsed_seconds"),
strOrEmpty(rec, "domain"),
)
); err != nil {
return count, core.E("store.importBenchmarkFile", "insert benchmark result", err)
}
count++
}
return count
if err := scanner.Err(); err != nil {
return count, core.E("store.importBenchmarkFile", "scan benchmark file", err)
}
return count, nil
}
func importBenchmarkQuestions(db *DuckDB, path, benchmark string) int {
func importBenchmarkQuestions(db *DuckDB, path, benchmark string) (int, error) {
r := localFs.Open(path)
if !r.OK {
return 0
return 0, core.E("store.importBenchmarkQuestions", core.Sprintf("open %s", path), r.Value.(error))
}
f := r.Value.(io.ReadCloser)
defer f.Close()
defer func() { _ = f.Close() }()
count := 0
scanner := bufio.NewScanner(f)
@ -400,7 +446,7 @@ func importBenchmarkQuestions(db *DuckDB, path, benchmark string) int {
correctJSON := core.JSONMarshalString(rec["correct_answers"])
incorrectJSON := core.JSONMarshalString(rec["incorrect_answers"])
db.Exec(`INSERT INTO benchmark_questions VALUES (?, ?, ?, ?, ?, ?, ?)`,
if err := db.Exec(`INSERT INTO benchmark_questions VALUES (?, ?, ?, ?, ?, ?, ?)`,
benchmark,
core.Sprint(rec["id"]),
strOrEmpty(rec, "question"),
@ -408,15 +454,24 @@ func importBenchmarkQuestions(db *DuckDB, path, benchmark string) int {
correctJSON,
incorrectJSON,
strOrEmpty(rec, "category"),
)
); err != nil {
return count, core.E("store.importBenchmarkQuestions", "insert benchmark question", err)
}
count++
}
return count
if err := scanner.Err(); err != nil {
return count, core.E("store.importBenchmarkQuestions", "scan benchmark questions", err)
}
return count, nil
}
func importSeeds(db *DuckDB, seedDir string) int {
func importSeeds(db *DuckDB, seedDir string) (int, error) {
count := 0
var firstErr error
walkDir(seedDir, func(path string) {
if firstErr != nil {
return
}
if !core.HasSuffix(path, ".json") {
return
}
@ -458,21 +513,30 @@ func importSeeds(db *DuckDB, seedDir string) int {
if prompt == "" {
prompt = strOrEmpty(seed, "question")
}
db.Exec(`INSERT INTO seeds VALUES (?, ?, ?, ?, ?)`,
if err := db.Exec(`INSERT INTO seeds VALUES (?, ?, ?, ?, ?)`,
rel, region,
strOrEmpty(seed, "seed_id"),
strOrEmpty(seed, "domain"),
prompt,
)
); err != nil {
firstErr = core.E("store.importSeeds", "insert seed prompt", err)
return
}
count++
case string:
db.Exec(`INSERT INTO seeds VALUES (?, ?, ?, ?, ?)`,
rel, region, "", "", seed)
if err := db.Exec(`INSERT INTO seeds VALUES (?, ?, ?, ?, ?)`,
rel, region, "", "", seed); err != nil {
firstErr = core.E("store.importSeeds", "insert seed string", err)
return
}
count++
}
}
})
return count
if firstErr != nil {
return count, firstErr
}
return count, nil
}
// walkDir recursively visits all regular files under root, calling fn for each.
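importSeeds above has to surface insert failures from inside a walk callback whose signature returns nothing, so it captures the first error in a closure and short-circuits further work. The same pattern in isolation (walk and importAll here are illustrative stand-ins, not the repo's functions):

```go
package main

import "fmt"

// walk calls fn for each item; like walkDir, its callback has no error
// return, so the caller must smuggle failures out via a closure.
func walk(items []string, fn func(string)) {
	for _, it := range items {
		fn(it)
	}
}

func importAll(items []string) (int, error) {
	count := 0
	var firstErr error
	walk(items, func(item string) {
		if firstErr != nil {
			return // stop doing work once the first failure is recorded
		}
		if item == "bad" {
			firstErr = fmt.Errorf("insert %s failed", item)
			return
		}
		count++ // increment only on success
	})
	return count, firstErr
}

func main() {
	n, err := importAll([]string{"a", "bad", "b"})
	fmt.Println(n, err) // prints: 1 insert bad failed
}
```

This keeps the count honest (only successful inserts are tallied) and preserves the first error rather than the last, which is usually the root cause.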

import_export_test.go

@ -2,71 +2,66 @@
package store
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
import "testing"
func TestImportExport_Import_Good_CSVAndJSONIngestion(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
require.NoError(t, err)
defer storeInstance.Close()
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("import-export-good")
require.NoError(t, err)
assertNoError(t, err)
defer workspace.Discard()
medium := newMemoryMedium()
require.NoError(t, medium.Write("findings.csv", "tool,severity\ngosec,high\ngolint,low\n"))
require.NoError(t, medium.Write("users.json", `{"entries":[{"name":"Alice"},{"name":"Bob"}]}`))
assertNoError(t, medium.Write("findings.csv", "tool,severity\ngosec,high\ngolint,low\n"))
assertNoError(t, medium.Write("users.json", `{"entries":[{"name":"Alice"},{"name":"Bob"}]}`))
require.NoError(t, Import(workspace, medium, "findings.csv"))
require.NoError(t, Import(workspace, medium, "users.json"))
assertNoError(t, Import(workspace, medium, "findings.csv"))
assertNoError(t, Import(workspace, medium, "users.json"))
assert.Equal(t, map[string]any{"findings": 2, "users": 2}, workspace.Aggregate())
assertEqual(t, map[string]any{"findings": 2, "users": 2}, workspace.Aggregate())
}
func TestImportExport_Import_Bad_MalformedPayload(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
require.NoError(t, err)
defer storeInstance.Close()
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("import-export-bad")
require.NoError(t, err)
assertNoError(t, err)
defer workspace.Discard()
medium := newMemoryMedium()
require.NoError(t, medium.Write("broken.json", `{"entries":[{"name":"Alice"}`))
assertNoError(t, medium.Write("broken.json", `{"entries":[{"name":"Alice"}`))
require.Error(t, Import(workspace, medium, "broken.json"))
assertError(t, Import(workspace, medium, "broken.json"))
count, err := workspace.Count()
require.NoError(t, err)
assert.Equal(t, 0, count)
assertNoError(t, err)
assertEqual(t, 0, count)
}
func TestImportExport_Import_Ugly_EmptyPayload(t *testing.T) {
useWorkspaceStateDirectory(t)
storeInstance, err := New(":memory:")
require.NoError(t, err)
defer storeInstance.Close()
assertNoError(t, err)
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("import-export-ugly")
require.NoError(t, err)
assertNoError(t, err)
defer workspace.Discard()
medium := newMemoryMedium()
for _, path := range []string{"empty.csv", "empty.json", "empty.jsonl"} {
require.NoError(t, medium.Write(path, ""))
require.NoError(t, Import(workspace, medium, path))
assertNoError(t, medium.Write(path, ""))
assertNoError(t, Import(workspace, medium, path))
}
assert.Equal(t, map[string]any{}, workspace.Aggregate())
assertEqual(t, map[string]any{}, workspace.Aggregate())
}
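The assertNoError/assertEqual calls above replace require.NoError and assert.Equal with stdlib-only helpers. A sketch of the shape such helpers take (the repo's actual signatures may differ, and a real *testing.T.Fatalf also stops the test, which the recorder below does not):

```go
package main

import (
	"fmt"
	"reflect"
)

// fatalfer is the slice of *testing.T the helpers need, so the same code
// compiles both in a test file and in this standalone sketch.
type fatalfer interface{ Fatalf(format string, args ...any) }

func assertNoError(t fatalfer, err error) {
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
}

func assertEqual(t fatalfer, want, got any) {
	// reflect.DeepEqual handles maps and slices, which testify's
	// assert.Equal also compares by deep equality.
	if !reflect.DeepEqual(want, got) {
		t.Fatalf("want %#v, got %#v", want, got)
	}
}

// recorder stands in for *testing.T in this sketch.
type recorder struct{ failed string }

func (r *recorder) Fatalf(format string, args ...any) {
	r.failed = fmt.Sprintf(format, args...)
}

func main() {
	r := &recorder{}
	assertNoError(r, nil)
	assertEqual(r, map[string]any{"findings": 2}, map[string]any{"findings": 2})
	fmt.Println(r.failed == "") // prints true: both assertions passed
}
```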

journal.go

@ -146,7 +146,7 @@ func (storeInstance *Store) queryJournalRows(query string, arguments ...any) cor
if err != nil {
return core.Result{Value: core.E("store.QueryJournal", "query rows", err), OK: false}
}
defer rows.Close()
defer func() { _ = rows.Close() }()
rowMaps, err := queryRowsAsMaps(rows)
if err != nil {
@ -368,14 +368,6 @@ func firstQuotedSubmatch(patterns []*regexp.Regexp, value string) string {
return ""
}
func regexpSubmatch(pattern *regexp.Regexp, value string, index int) string {
match := pattern.FindStringSubmatch(value)
if len(match) <= index {
return ""
}
return match[index]
}
func queryRowsAsMaps(rows *sql.Rows) ([]map[string]any, error) {
columnNames, err := rows.Columns()
if err != nil {

journal_test.go

@ -3,13 +3,12 @@ package store
import (
"testing"
"time"
)
func TestJournal_CommitToJournal_Good_WithQueryJournalSQL(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
first := storeInstance.CommitToJournal("session-a", map[string]any{"like": 4}, map[string]string{"workspace": "session-a"})
second := storeInstance.CommitToJournal("session-b", map[string]any{"profile_match": 2}, map[string]string{"workspace": "session-b"})
@ -36,7 +35,7 @@ func TestJournal_CommitToJournal_Good_WithQueryJournalSQL(t *testing.T) {
func TestJournal_CommitToJournal_Good_ResultCopiesInputMaps(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
fields := map[string]any{"like": 4}
tags := map[string]string{"workspace": "session-a"}
@ -62,7 +61,7 @@ func TestJournal_CommitToJournal_Good_ResultCopiesInputMaps(t *testing.T) {
func TestJournal_QueryJournal_Good_RawSQLWithCTE(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 4}, map[string]string{"workspace": "session-a"}).OK)
@ -85,7 +84,7 @@ func TestJournal_QueryJournal_Good_RawSQLWithCTE(t *testing.T) {
func TestJournal_QueryJournal_Good_PragmaSQL(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
rows := requireResultRows(
t,
@ -104,7 +103,7 @@ func TestJournal_QueryJournal_Good_PragmaSQL(t *testing.T) {
func TestJournal_QueryJournal_Good_FluxFilters(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 1}, map[string]string{"workspace": "session-a"}).OK)
assertTrue(t, storeInstance.CommitToJournal("session-b", map[string]any{"like": 2}, map[string]string{"workspace": "session-b"}).OK)
@ -124,7 +123,7 @@ func TestJournal_QueryJournal_Good_FluxFilters(t *testing.T) {
func TestJournal_QueryJournal_Good_TagFilter(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 1}, map[string]string{"workspace": "session-a"}).OK)
assertTrue(t, storeInstance.CommitToJournal("session-b", map[string]any{"like": 2}, map[string]string{"workspace": "session-b"}).OK)
@ -144,7 +143,7 @@ func TestJournal_QueryJournal_Good_TagFilter(t *testing.T) {
func TestJournal_QueryJournal_Good_NumericFieldFilter(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 1}, map[string]string{"workspace": "session-a"}).OK)
assertTrue(t, storeInstance.CommitToJournal("session-b", map[string]any{"like": 2}, map[string]string{"workspace": "session-b"}).OK)
@ -164,7 +163,7 @@ func TestJournal_QueryJournal_Good_NumericFieldFilter(t *testing.T) {
func TestJournal_QueryJournal_Good_BooleanFieldFilter(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"complete": false}, map[string]string{"workspace": "session-a"}).OK)
assertTrue(t, storeInstance.CommitToJournal("session-b", map[string]any{"complete": true}, map[string]string{"workspace": "session-b"}).OK)
@ -184,10 +183,10 @@ func TestJournal_QueryJournal_Good_BooleanFieldFilter(t *testing.T) {
func TestJournal_QueryJournal_Good_BucketFilter(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 1}, map[string]string{"workspace": "session-a"}).OK)
assertNoError(t, commitJournalEntry( storeInstance.sqliteDatabase, "events", "session-b", `{"like":2}`, `{"workspace":"session-b"}`, time.Now().UnixMilli(), ))
assertNoError(t, commitJournalEntry(storeInstance.sqliteDatabase, "events", "session-b", `{"like":2}`, `{"workspace":"session-b"}`, time.Now().UnixMilli()))
rows := requireResultRows(
t,
@ -201,12 +200,12 @@ func TestJournal_QueryJournal_Good_BucketFilter(t *testing.T) {
func TestJournal_QueryJournal_Good_DeterministicOrderingForSameTimestamp(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, ensureJournalSchema(storeInstance.sqliteDatabase))
committedAt := time.Date(2026, 3, 30, 12, 0, 0, 0, time.UTC).UnixMilli()
assertNoError(t, commitJournalEntry( storeInstance.sqliteDatabase, "events", "session-b", `{"like":2}`, `{"workspace":"session-b"}`, committedAt, ))
assertNoError(t, commitJournalEntry( storeInstance.sqliteDatabase, "events", "session-a", `{"like":1}`, `{"workspace":"session-a"}`, committedAt, ))
assertNoError(t, commitJournalEntry(storeInstance.sqliteDatabase, "events", "session-b", `{"like":2}`, `{"workspace":"session-b"}`, committedAt))
assertNoError(t, commitJournalEntry(storeInstance.sqliteDatabase, "events", "session-a", `{"like":1}`, `{"workspace":"session-a"}`, committedAt))
rows := requireResultRows(
t,
@ -220,7 +219,7 @@ func TestJournal_QueryJournal_Good_DeterministicOrderingForSameTimestamp(t *test
func TestJournal_QueryJournal_Good_AbsoluteRangeWithStop(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 1}, map[string]string{"workspace": "session-a"}).OK)
assertTrue(t, storeInstance.CommitToJournal("session-b", map[string]any{"like": 2}, map[string]string{"workspace": "session-b"}).OK)
@ -249,7 +248,7 @@ func TestJournal_QueryJournal_Good_AbsoluteRangeWithStop(t *testing.T) {
func TestJournal_QueryJournal_Good_AbsoluteRangeHonoursStop(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("session-a", map[string]any{"like": 1}, map[string]string{"workspace": "session-a"}).OK)
assertTrue(t, storeInstance.CommitToJournal("session-b", map[string]any{"like": 2}, map[string]string{"workspace": "session-b"}).OK)
@ -278,7 +277,7 @@ func TestJournal_QueryJournal_Good_AbsoluteRangeHonoursStop(t *testing.T) {
func TestJournal_CommitToJournal_Bad_EmptyMeasurement(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
result := storeInstance.CommitToJournal("", map[string]any{"like": 1}, map[string]string{"workspace": "missing"})
assertFalse(t, result.OK)

json.go

@ -16,11 +16,12 @@ import core "dappco.re/go/core"
// type CacheEntry struct {
// Data store.RawMessage `json:"data"`
// }
// cacheEntry := CacheEntry{Data: store.RawMessage([]byte("{\"name\":\"Alice\"}"))}
type RawMessage []byte
// MarshalJSON returns the raw bytes as-is. If empty, returns `null`.
//
// Usage example: `bytes, err := raw.MarshalJSON()`
// Usage example: `bytes, err := store.RawMessage([]byte("{\"name\":\"Alice\"}")).MarshalJSON()`
func (raw RawMessage) MarshalJSON() ([]byte, error) {
if len(raw) == 0 {
return []byte("null"), nil
@ -30,7 +31,7 @@ func (raw RawMessage) MarshalJSON() ([]byte, error) {
// UnmarshalJSON stores the raw JSON bytes without decoding them.
//
// Usage example: `var raw store.RawMessage; err := raw.UnmarshalJSON(data)`
// Usage example: `var raw store.RawMessage; err := raw.UnmarshalJSON([]byte("{\"name\":\"Alice\"}"))`
func (raw *RawMessage) UnmarshalJSON(data []byte) error {
if raw == nil {
return core.E("store.RawMessage.UnmarshalJSON", "nil receiver", nil)
@ -43,9 +44,9 @@ func (raw *RawMessage) UnmarshalJSON(data []byte) error {
// Uses core.JSONMarshal internally then applies prefix/indent formatting
// so consumers get readable output without importing encoding/json.
//
// Usage example: `data, err := store.MarshalIndent(entry, "", " ")`
func MarshalIndent(v any, prefix, indent string) ([]byte, error) {
marshalled := core.JSONMarshal(v)
// Usage example: `data, err := store.MarshalIndent(map[string]string{"name": "Alice"}, "", " ")`
func MarshalIndent(value any, prefix, indent string) ([]byte, error) {
marshalled := core.JSONMarshal(value)
if !marshalled.OK {
if err, ok := marshalled.Value.(error); ok {
return nil, core.E("store.MarshalIndent", "marshal", err)
@ -70,7 +71,7 @@ func MarshalIndent(v any, prefix, indent string) ([]byte, error) {
// indentCompactJSON formats compact JSON bytes with prefix+indent.
// Mirrors json.Indent's semantics without importing encoding/json.
//
// Usage example: `builder := core.NewBuilder(); _ = indentCompactJSON(builder, compact, "", " ")`
// Usage example: `builder := core.NewBuilder(); _ = indentCompactJSON(builder, []byte("{\"name\":\"Alice\"}"), "", " ")`
func indentCompactJSON(buf interface {
WriteByte(byte) error
WriteString(string) (int, error)
@ -79,18 +80,27 @@ func indentCompactJSON(buf interface {
inString := false
escaped := false
writeNewlineIndent := func(level int) {
buf.WriteByte('\n')
buf.WriteString(prefix)
for i := 0; i < level; i++ {
buf.WriteString(indent)
writeNewlineIndent := func(level int) error {
if err := buf.WriteByte('\n'); err != nil {
return err
}
if _, err := buf.WriteString(prefix); err != nil {
return err
}
for i := 0; i < level; i++ {
if _, err := buf.WriteString(indent); err != nil {
return err
}
}
return nil
}
for i := 0; i < len(src); i++ {
c := src[i]
if inString {
buf.WriteByte(c)
if err := buf.WriteByte(c); err != nil {
return err
}
if escaped {
escaped = false
continue
@ -107,34 +117,54 @@ func indentCompactJSON(buf interface {
switch c {
case '"':
inString = true
buf.WriteByte(c)
if err := buf.WriteByte(c); err != nil {
return err
}
case '{', '[':
buf.WriteByte(c)
if err := buf.WriteByte(c); err != nil {
return err
}
depth++
// Look ahead for empty object/array.
if i+1 < len(src) && (src[i+1] == '}' || src[i+1] == ']') {
continue
}
writeNewlineIndent(depth)
if err := writeNewlineIndent(depth); err != nil {
return err
}
case '}', ']':
// Only indent if previous byte wasn't the matching opener.
if i > 0 && src[i-1] != '{' && src[i-1] != '[' {
depth--
writeNewlineIndent(depth)
if err := writeNewlineIndent(depth); err != nil {
return err
}
} else {
depth--
}
buf.WriteByte(c)
if err := buf.WriteByte(c); err != nil {
return err
}
case ',':
buf.WriteByte(c)
writeNewlineIndent(depth)
if err := buf.WriteByte(c); err != nil {
return err
}
if err := writeNewlineIndent(depth); err != nil {
return err
}
case ':':
buf.WriteByte(c)
buf.WriteByte(' ')
if err := buf.WriteByte(c); err != nil {
return err
}
if err := buf.WriteByte(' '); err != nil {
return err
}
case ' ', '\t', '\n', '\r':
// Drop whitespace from compact source.
default:
buf.WriteByte(c)
if err := buf.WriteByte(c); err != nil {
return err
}
}
}
return nil


@ -3,18 +3,20 @@
package store
import (
"bytes"
core "dappco.re/go/core"
"dappco.re/go/io"
coreio "dappco.re/go/core/io"
)
// Medium is the minimal storage transport used by the go-store workspace
// import and export helpers and by Compact when writing cold archives.
//
// This is an alias of `dappco.re/go/io.Medium`, so callers can pass any
// This is an alias of `dappco.re/go/core/io.Medium`, so callers can pass any
// upstream medium implementation directly without an adapter.
//
// Usage example: `medium, _ := local.New("/tmp/exports"); storeInstance, err := store.New(":memory:", store.WithMedium(medium))`
type Medium = io.Medium
type Medium = coreio.Medium
// Usage example: `medium, _ := local.New("/srv/core"); storeInstance, err := store.NewConfigured(store.StoreConfig{DatabasePath: ":memory:", Medium: medium})`
// WithMedium installs an io.Medium-compatible transport on the Store so that
@ -233,11 +235,10 @@ func importCSV(workspace *Workspace, kind, content string) error {
func splitCSVLine(line string) []string {
line = trimTrailingCarriageReturn(line)
buffer := core.NewBuffer()
buffer := &bytes.Buffer{}
var (
fields []string
inQuotes bool
wasEscaped bool
fields []string
inQuotes bool
)
for index := 0; index < len(line); index++ {
character := line[index]
@ -245,19 +246,16 @@ func splitCSVLine(line string) []string {
case character == '"' && inQuotes && index+1 < len(line) && line[index+1] == '"':
buffer.WriteByte('"')
index++
wasEscaped = true
case character == '"':
inQuotes = !inQuotes
case character == ',' && !inQuotes:
fields = append(fields, buffer.String())
buffer.Reset()
wasEscaped = false
default:
buffer.WriteByte(character)
}
}
fields = append(fields, buffer.String())
_ = wasEscaped
return fields
}


@ -189,7 +189,7 @@ func TestMedium_WithMedium_Good(t *testing.T) {
medium := newMemoryMedium()
storeInstance, err := New(":memory:", WithMedium(medium))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertSamef(t, medium, storeInstance.Medium(), "medium should round-trip via accessor")
assertSamef(t, medium, storeInstance.Config().Medium, "medium should appear in Config()")
@ -200,7 +200,7 @@ func TestMedium_WithMedium_Bad_NilKeepsFilesystemBackend(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNil(t, storeInstance.Medium())
}
@ -218,7 +218,7 @@ func TestMedium_WithMedium_Good_PersistsDatabaseThroughMedium(t *testing.T) {
reopenedStore, err := New("app.db", WithMedium(medium))
assertNoError(t, err)
defer reopenedStore.Close()
defer func() { _ = reopenedStore.Close() }()
value, err := reopenedStore.Get("g", "k")
assertNoError(t, err)
@ -231,7 +231,7 @@ func TestMedium_Import_Good_JSONL(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-import-jsonl")
assertNoError(t, err)
@ -256,7 +256,7 @@ func TestMedium_Import_Good_JSONArray(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-import-json-array")
assertNoError(t, err)
@ -275,7 +275,7 @@ func TestMedium_Import_Good_CSV(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-import-csv")
assertNoError(t, err)
@ -294,7 +294,7 @@ func TestMedium_Import_Bad_NilArguments(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-import-bad")
assertNoError(t, err)
@ -312,7 +312,7 @@ func TestMedium_Import_Ugly_MissingFileReturnsError(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-import-missing")
assertNoError(t, err)
@ -327,7 +327,7 @@ func TestMedium_Export_Good_JSON(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-export-json")
assertNoError(t, err)
@ -352,7 +352,7 @@ func TestMedium_Export_Good_JSONLines(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-export-jsonl")
assertNoError(t, err)
@ -380,7 +380,7 @@ func TestMedium_Export_Bad_NilArguments(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("medium-export-bad")
assertNoError(t, err)
@ -400,7 +400,7 @@ func TestMedium_Compact_Good_MediumRoutesArchive(t *testing.T) {
medium := newMemoryMedium()
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"), WithMedium(medium))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.CommitToJournal("jobs", map[string]any{"count": 3}, map[string]string{"workspace": "jobs-1"}).OK)


@ -2,25 +2,10 @@
package store
import (
"bufio"
core "dappco.re/go/core"
"github.com/parquet-go/parquet-go"
)
type readCloser interface {
Read([]byte) (int, error)
Close() error
}
type writeCloser interface {
Write([]byte) (int, error)
Close() error
}
import core "dappco.re/go/core"
// ChatMessage represents a single message in a chat conversation, used for
// reading JSONL training data during Parquet export and data import.
// reading JSONL training data during data import.
//
// Usage example:
//
@ -41,11 +26,12 @@ type ChatMessage struct {
Content string `json:"content"`
}
// ParquetRow is the schema for exported Parquet files.
// ParquetRow describes the lightweight row shape used by external Parquet
// exporters.
//
// Usage example:
//
// row := store.ParquetRow{Prompt: "What is sovereignty?", Response: "...", System: "You are LEM."}
// row := store.ParquetRow{Prompt: "What is sovereignty?", Response: "Sovereignty is...", System: "You are LEM."}
type ParquetRow struct {
// Prompt is the user prompt text.
//
@ -72,133 +58,34 @@ type ParquetRow struct {
//
// Usage example:
//
// row.Messages // `[{"role":"user","content":"..."}]`
// row.Messages // `[{"role":"user","content":"What is sovereignty?"}]`
Messages string `parquet:"messages"`
}
// ExportParquet reads JSONL training splits (train.jsonl, valid.jsonl, test.jsonl)
// from trainingDir and writes Parquet files with snappy compression to outputDir.
// Returns total rows exported.
// ExportParquet reports that Parquet export is intentionally kept outside the
// core package dependency graph.
//
// Usage example:
//
// total, err := store.ExportParquet("/Volumes/Data/lem/training", "/Volumes/Data/lem/parquet")
// _, err := store.ExportParquet("/Volumes/Data/lem/training", "/Volumes/Data/lem/parquet")
func ExportParquet(trainingDir, outputDir string) (int, error) {
if outputDir == "" {
outputDir = core.JoinPath(trainingDir, "parquet")
}
if r := localFs.EnsureDir(outputDir); !r.OK {
return 0, core.E("store.ExportParquet", "create output directory", r.Value.(error))
}
total := 0
for _, split := range []string{"train", "valid", "test"} {
jsonlPath := core.JoinPath(trainingDir, split+".jsonl")
if !localFs.IsFile(jsonlPath) {
continue
}
n, err := ExportSplitParquet(jsonlPath, outputDir, split)
if err != nil {
return total, core.E("store.ExportParquet", core.Sprintf("export %s", split), err)
}
total += n
}
return total, nil
return 0, core.E(
"store.ExportParquet",
"Parquet export requires an external tool so core does not ship a runtime Parquet dependency",
nil,
)
}
// ExportSplitParquet reads a chat JSONL file and writes a Parquet file for the
// given split name. Returns the number of rows written.
// ExportSplitParquet reports that split-level Parquet export is intentionally
// kept outside the core package dependency graph.
//
// Usage example:
//
// n, err := store.ExportSplitParquet("/data/train.jsonl", "/data/parquet", "train")
// _, err := store.ExportSplitParquet("/data/train.jsonl", "/data/parquet", "train")
func ExportSplitParquet(jsonlPath, outputDir, split string) (int, error) {
openResult := localFs.Open(jsonlPath)
if !openResult.OK {
return 0, core.E("store.ExportSplitParquet", core.Sprintf("open %s", jsonlPath), openResult.Value.(error))
}
f := openResult.Value.(readCloser)
defer f.Close()
var rows []ParquetRow
scanner := bufio.NewScanner(f)
scanner.Buffer(make([]byte, 1024*1024), 1024*1024)
for scanner.Scan() {
text := core.Trim(scanner.Text())
if text == "" {
continue
}
var data struct {
Messages []ChatMessage `json:"messages"`
}
if r := core.JSONUnmarshal([]byte(text), &data); !r.OK {
continue
}
var prompt, response, system string
for _, m := range data.Messages {
switch m.Role {
case "user":
if prompt == "" {
prompt = m.Content
}
case "assistant":
if response == "" {
response = m.Content
}
case "system":
if system == "" {
system = m.Content
}
}
}
msgsJSON := core.JSONMarshalString(data.Messages)
rows = append(rows, ParquetRow{
Prompt: prompt,
Response: response,
System: system,
Messages: msgsJSON,
})
}
if err := scanner.Err(); err != nil {
return 0, core.E("store.ExportSplitParquet", core.Sprintf("scan %s", jsonlPath), err)
}
if len(rows) == 0 {
return 0, nil
}
outPath := core.JoinPath(outputDir, split+".parquet")
createResult := localFs.Create(outPath)
if !createResult.OK {
return 0, core.E("store.ExportSplitParquet", core.Sprintf("create %s", outPath), createResult.Value.(error))
}
out := createResult.Value.(writeCloser)
writer := parquet.NewGenericWriter[ParquetRow](out,
parquet.Compression(&parquet.Snappy),
return 0, core.E(
"store.ExportSplitParquet",
"Parquet export requires an external tool so core does not ship a runtime Parquet dependency",
nil,
)
if _, err := writer.Write(rows); err != nil {
out.Close()
return 0, core.E("store.ExportSplitParquet", "write parquet rows", err)
}
if err := writer.Close(); err != nil {
out.Close()
return 0, core.E("store.ExportSplitParquet", "close parquet writer", err)
}
if err := out.Close(); err != nil {
return 0, core.E("store.ExportSplitParquet", "close file", err)
}
return len(rows), nil
}


@ -2,7 +2,6 @@ package store
import (
"testing"
)
func TestPath_Normalise_Good_TrailingSlashes(t *testing.T) {


@ -3,6 +3,7 @@
package store
import (
"bytes"
"io"
"io/fs"
"net/http"
@ -108,6 +109,10 @@ func Publish(cfg PublishConfig, w io.Writer) error {
core.Print(w, "Publishing to https://huggingface.co/datasets/%s", cfg.Repo)
if err := ensureHFDatasetRepo(token, cfg.Repo, cfg.Public); err != nil {
return core.E("store.Publish", "ensure HuggingFace dataset", err)
}
for _, f := range files {
if err := uploadFileToHF(token, cfg.Repo, f.local, f.remote); err != nil {
return core.E("store.Publish", core.Sprintf("upload %s", core.PathBase(f.local)), err)
@ -161,33 +166,115 @@ func collectUploadFiles(inputDir string) ([]uploadEntry, error) {
return files, nil
}
func ensureHFDatasetRepo(token, repoID string, public bool) error {
if repoID == "" {
return core.E("store.ensureHFDatasetRepo", "repository is required", nil)
}
organisation, name := splitHFRepoID(repoID)
if name == "" {
return core.E("store.ensureHFDatasetRepo", "repository name is required", nil)
}
createPayload := map[string]any{
"name": name,
"type": "dataset",
"private": !public,
}
if organisation != "" {
createPayload["organization"] = organisation
}
createStatus, createBody, err := hfJSONRequest(token, http.MethodPost, "https://huggingface.co/api/repos/create", createPayload)
if err != nil {
return core.E("store.ensureHFDatasetRepo", "create dataset repository", err)
}
if createStatus >= 300 && createStatus != http.StatusConflict {
return core.E("store.ensureHFDatasetRepo", core.Sprintf("create dataset failed: HTTP %d: %s", createStatus, createBody), nil)
}
settingsURL := core.Sprintf("https://huggingface.co/api/repos/dataset/%s/settings", repoID)
settingsStatus, settingsBody, err := hfJSONRequest(token, http.MethodPut, settingsURL, map[string]any{
"private": !public,
})
if err != nil {
return core.E("store.ensureHFDatasetRepo", "update dataset visibility", err)
}
if settingsStatus >= 300 {
return core.E("store.ensureHFDatasetRepo", core.Sprintf("update dataset visibility failed: HTTP %d: %s", settingsStatus, settingsBody), nil)
}
return nil
}
func splitHFRepoID(repoID string) (organisation string, name string) {
parts := core.Split(repoID, "/")
if len(parts) == 1 {
return "", repoID
}
return parts[0], parts[1]
}
func hfJSONRequest(token, method, url string, payload map[string]any) (int, string, error) {
payloadJSON := core.JSONMarshalString(payload)
req, err := http.NewRequest(method, url, bytes.NewBufferString(payloadJSON))
if err != nil {
return 0, "", core.E("store.hfJSONRequest", "create request", err)
}
req.Header.Set("Authorization", "Bearer "+token)
req.Header.Set("Content-Type", "application/json")
client := &http.Client{Timeout: 120 * time.Second}
resp, err := client.Do(req)
if err != nil {
return 0, "", core.E("store.hfJSONRequest", "send request", err)
}
defer func() {
_ = resp.Body.Close()
}()
body, err := io.ReadAll(resp.Body)
if err != nil {
return resp.StatusCode, "", core.E("store.hfJSONRequest", "read response body", err)
}
return resp.StatusCode, string(body), nil
}
// uploadFileToHF uploads a single file to a HuggingFace dataset repo via the
// Hub API.
func uploadFileToHF(token, repoID, localPath, remotePath string) error {
readResult := localFs.Read(localPath)
if !readResult.OK {
return core.E("store.uploadFileToHF", core.Sprintf("read %s", localPath), readResult.Value.(error))
openResult := localFs.Open(localPath)
if !openResult.OK {
return core.E("store.uploadFileToHF", core.Sprintf("open %s", localPath), openResult.Value.(error))
}
raw := []byte(readResult.Value.(string))
file := openResult.Value.(fs.File)
defer func() { _ = file.Close() }()
url := core.Sprintf("https://huggingface.co/api/datasets/%s/upload/main/%s", repoID, remotePath)
req, err := http.NewRequest(http.MethodPut, url, core.NewBuffer(raw))
req, err := http.NewRequest(http.MethodPut, url, file)
if err != nil {
return core.E("store.uploadFileToHF", "create request", err)
}
req.Header.Set("Authorization", "Bearer "+token)
req.Header.Set("Content-Type", "application/octet-stream")
if stat, err := file.Stat(); err == nil {
req.ContentLength = stat.Size()
}
client := &http.Client{Timeout: 120 * time.Second}
resp, err := client.Do(req)
if err != nil {
return core.E("store.uploadFileToHF", "upload request", err)
}
defer resp.Body.Close()
defer func() {
_ = resp.Body.Close()
}()
if resp.StatusCode >= 300 {
body, _ := io.ReadAll(resp.Body)
body, readErr := io.ReadAll(resp.Body)
if readErr != nil {
return core.E("store.uploadFileToHF", "read error response body", readErr)
}
return core.E("store.uploadFileToHF", core.Sprintf("upload failed: HTTP %d: %s", resp.StatusCode, string(body)), nil)
}


@ -7,7 +7,7 @@ func TestRecover_Orphans_Good_RecoversOrphan(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("recover-good")
assertNoError(t, err)
@ -28,23 +28,20 @@ func TestRecover_Orphans_Bad_CorruptMetadataQuarantined(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
corruptDatabasePath := workspaceFilePath(stateDirectory, "recover-bad")
requireCoreWriteBytes(t, corruptDatabasePath, []byte("not a duckdb database"))
requireCoreWriteBytes(t, corruptDatabasePath+"-wal", []byte("wal"))
requireCoreWriteBytes(t, corruptDatabasePath+"-shm", []byte("shm"))
requireCoreWriteBytes(t, corruptDatabasePath+".wal", []byte("wal"))
orphans := storeInstance.RecoverOrphans(stateDirectory)
assertLen(t, orphans, 0)
assertFalse(t, testFilesystem().Exists(corruptDatabasePath))
assertFalse(t, testFilesystem().Exists(corruptDatabasePath+"-wal"))
assertFalse(t, testFilesystem().Exists(corruptDatabasePath+"-shm"))
assertFalse(t, testFilesystem().Exists(corruptDatabasePath+".wal"))
quarantinePath := workspaceQuarantineFilePath(stateDirectory, corruptDatabasePath)
assertTrue(t, testFilesystem().Exists(quarantinePath))
assertTrue(t, testFilesystem().Exists(quarantinePath+"-wal"))
assertTrue(t, testFilesystem().Exists(quarantinePath+"-shm"))
assertTrue(t, testFilesystem().Exists(quarantinePath+".wal"))
assertEqual(t, "not a duckdb database", string(requireCoreReadBytes(t, quarantinePath)))
}
@ -53,7 +50,7 @@ func TestRecover_Orphans_Ugly_NoOrphansNoop(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
orphans := storeInstance.RecoverOrphans(stateDirectory)
assertLen(t, orphans, 0)


@ -565,6 +565,9 @@ func (scopedStore *ScopedStore) Transaction(operation func(*ScopedStoreTransacti
if operation == nil {
return core.E("store.ScopedStore.Transaction", "operation is nil", nil)
}
if scopedStore.store == nil {
return core.E("store.ScopedStore.Transaction", "scoped store store is nil", nil)
}
return scopedStore.store.Transaction(func(storeTransaction *StoreTransaction) error {
return operation(&ScopedStoreTransaction{


@ -13,7 +13,7 @@ import (
func TestScope_NewScoped_Good(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-1")
assertNotNil(t, scopedStore)
@ -22,7 +22,7 @@ func TestScope_NewScoped_Good(t *testing.T) {
func TestScope_ScopedStore_Good_Config(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, err := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -30,7 +30,7 @@ func TestScope_ScopedStore_Good_Config(t *testing.T) {
})
assertNoError(t, err)
assertEqual(t, ScopedStoreConfig{ Namespace: "tenant-a", Quota: QuotaConfig{MaxKeys: 4, MaxGroups: 2}, }, scopedStore.Config())
assertEqual(t, ScopedStoreConfig{Namespace: "tenant-a", Quota: QuotaConfig{MaxKeys: 4, MaxGroups: 2}}, scopedStore.Config())
}
func TestScope_ScopedStore_Good_ConfigZeroValueFromNil(t *testing.T) {
@ -41,7 +41,7 @@ func TestScope_ScopedStore_Good_ConfigZeroValueFromNil(t *testing.T) {
func TestScope_NewScoped_Good_AlphanumericHyphens(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
valid := []string{"abc", "ABC", "123", "a-b-c", "tenant-42", "A1-B2"}
for _, namespace := range valid {
@ -52,7 +52,7 @@ func TestScope_NewScoped_Good_AlphanumericHyphens(t *testing.T) {
func TestScope_NewScoped_Bad_Empty(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNil(t, NewScoped(storeInstance, ""))
}
@ -63,7 +63,7 @@ func TestScope_NewScoped_Bad_NilStore(t *testing.T) {
func TestScope_NewScoped_Bad_InvalidChars(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
invalid := []string{"foo.bar", "foo:bar", "foo bar", "foo/bar", "foo_bar", "tenant!", "@ns"}
for _, namespace := range invalid {
@ -73,7 +73,7 @@ func TestScope_NewScoped_Bad_InvalidChars(t *testing.T) {
func TestScope_NewScopedConfigured_Bad_InvalidNamespaceFromQuotaConfig(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_, err := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant_a",
@ -94,7 +94,7 @@ func TestScope_NewScopedConfigured_Bad_NilStoreFromQuotaConfig(t *testing.T) {
func TestScope_NewScopedConfigured_Bad_NegativeMaxKeys(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_, err := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -106,7 +106,7 @@ func TestScope_NewScopedConfigured_Bad_NegativeMaxKeys(t *testing.T) {
func TestScope_NewScopedConfigured_Bad_NegativeMaxGroups(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_, err := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -118,7 +118,7 @@ func TestScope_NewScopedConfigured_Bad_NegativeMaxGroups(t *testing.T) {
func TestScope_NewScopedConfigured_Good_InlineQuotaFields(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, err := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -140,7 +140,7 @@ func TestScope_ScopedStoreConfig_Good_Validate(t *testing.T) {
func TestScope_NewScopedConfigured_Good(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, err := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -154,7 +154,7 @@ func TestScope_NewScopedConfigured_Good(t *testing.T) {
func TestScope_NewScopedWithQuota_Good(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, err := NewScopedWithQuota(storeInstance, "tenant-a", QuotaConfig{MaxKeys: 4, MaxGroups: 2})
assertNoError(t, err)
@ -167,7 +167,7 @@ func TestScope_NewScopedWithQuota_Good(t *testing.T) {
func TestScope_NewScopedConfigured_Bad_InvalidNamespace(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_, err := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant_a",
@ -217,7 +217,7 @@ func TestScope_ScopedStore_Good_NilReceiverReturnsErrors(t *testing.T) {
func TestScope_ScopedStore_Good_SetGet(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetIn("config", "theme", "dark"))
@ -229,7 +229,7 @@ func TestScope_ScopedStore_Good_SetGet(t *testing.T) {
func TestScope_ScopedStore_Good_DefaultGroupHelpers(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.Set("theme", "dark"))
@ -245,7 +245,7 @@ func TestScope_ScopedStore_Good_DefaultGroupHelpers(t *testing.T) {
func TestScope_ScopedStore_Good_SetInGetFrom(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetIn("config", "theme", "dark"))
@ -257,7 +257,7 @@ func TestScope_ScopedStore_Good_SetInGetFrom(t *testing.T) {
func TestScope_ScopedStore_Good_PrefixedInUnderlyingStore(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetIn("config", "key", "val"))
@ -274,7 +274,7 @@ func TestScope_ScopedStore_Good_PrefixedInUnderlyingStore(t *testing.T) {
func TestScope_ScopedStore_Good_NamespaceIsolation(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
alphaStore := NewScoped(storeInstance, "tenant-a")
betaStore := NewScoped(storeInstance, "tenant-b")
@ -297,7 +297,7 @@ func TestScope_ScopedStore_Good_NamespaceIsolation(t *testing.T) {
func TestScope_ScopedStore_Good_ExistsInDefaultGroup(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.Set("colour", "blue"))
@ -313,7 +313,7 @@ func TestScope_ScopedStore_Good_ExistsInDefaultGroup(t *testing.T) {
func TestScope_ScopedStore_Good_ExistsInExplicitGroup(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetIn("config", "colour", "blue"))
@ -333,7 +333,7 @@ func TestScope_ScopedStore_Good_ExistsInExplicitGroup(t *testing.T) {
func TestScope_ScopedStore_Good_ExistsExpiredKeyReturnsFalse(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetWithTTL("session", "token", "abc123", 1*time.Millisecond))
@ -346,7 +346,7 @@ func TestScope_ScopedStore_Good_ExistsExpiredKeyReturnsFalse(t *testing.T) {
func TestScope_ScopedStore_Good_GroupExists(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetIn("config", "colour", "blue"))
@ -362,7 +362,7 @@ func TestScope_ScopedStore_Good_GroupExists(t *testing.T) {
func TestScope_ScopedStore_Good_GroupExistsAfterDelete(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetIn("config", "colour", "blue"))
@ -375,8 +375,7 @@ func TestScope_ScopedStore_Good_GroupExistsAfterDelete(t *testing.T) {
func TestScope_ScopedStore_Bad_ExistsClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
storeInstance.Close()
_ = storeInstance.Close()
scopedStore := NewScoped(storeInstance, "tenant-a")
_, err := scopedStore.Exists("colour")
@ -391,7 +390,7 @@ func TestScope_ScopedStore_Bad_ExistsClosedStore(t *testing.T) {
func TestScope_ScopedStore_Good_Delete(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetIn("g", "k", "v"))
@ -403,7 +402,7 @@ func TestScope_ScopedStore_Good_Delete(t *testing.T) {
func TestScope_ScopedStore_Good_DeleteGroup(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetIn("g", "a", "1"))
@ -417,7 +416,7 @@ func TestScope_ScopedStore_Good_DeleteGroup(t *testing.T) {
func TestScope_ScopedStore_Good_DeletePrefix(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
otherScopedStore := NewScoped(storeInstance, "tenant-b")
@ -445,7 +444,7 @@ func TestScope_ScopedStore_Good_DeletePrefix(t *testing.T) {
func TestScope_ScopedStore_Good_OnChange_NamespaceLocal(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
otherScopedStore := NewScoped(storeInstance, "tenant-b")
@ -471,7 +470,7 @@ func TestScope_ScopedStore_Good_OnChange_NamespaceLocal(t *testing.T) {
func TestScope_ScopedStore_Good_Watch_NamespaceLocal(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
otherScopedStore := NewScoped(storeInstance, "tenant-b")
@ -502,7 +501,7 @@ func TestScope_ScopedStore_Good_Watch_NamespaceLocal(t *testing.T) {
func TestScope_ScopedStore_Good_Watch_All_NamespaceLocal(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
otherScopedStore := NewScoped(storeInstance, "tenant-b")
@ -541,7 +540,7 @@ func TestScope_ScopedStore_Good_Watch_All_NamespaceLocal(t *testing.T) {
func TestScope_ScopedStore_Good_Unwatch_ClosesLocalChannel(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
@ -558,7 +557,7 @@ func TestScope_ScopedStore_Good_Unwatch_ClosesLocalChannel(t *testing.T) {
func TestScope_ScopedStore_Good_GetAll(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
alphaStore := NewScoped(storeInstance, "tenant-a")
betaStore := NewScoped(storeInstance, "tenant-b")
@ -578,7 +577,7 @@ func TestScope_ScopedStore_Good_GetAll(t *testing.T) {
func TestScope_ScopedStore_Good_GetPage(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetIn("items", "charlie", "3"))
@ -593,7 +592,7 @@ func TestScope_ScopedStore_Good_GetPage(t *testing.T) {
func TestScope_ScopedStore_Good_All(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetIn("items", "first", "1"))
@ -610,7 +609,7 @@ func TestScope_ScopedStore_Good_All(t *testing.T) {
func TestScope_ScopedStore_Good_All_SortedByKey(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetIn("items", "charlie", "3"))
@ -628,7 +627,7 @@ func TestScope_ScopedStore_Good_All_SortedByKey(t *testing.T) {
func TestScope_ScopedStore_Good_Count(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetIn("g", "a", "1"))
@ -641,7 +640,7 @@ func TestScope_ScopedStore_Good_Count(t *testing.T) {
func TestScope_ScopedStore_Good_SetWithTTL(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetWithTTL("g", "k", "v", time.Hour))
@ -653,7 +652,7 @@ func TestScope_ScopedStore_Good_SetWithTTL(t *testing.T) {
func TestScope_ScopedStore_Good_SetWithTTL_Expires(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetWithTTL("g", "k", "v", 1*time.Millisecond))
@ -665,7 +664,7 @@ func TestScope_ScopedStore_Good_SetWithTTL_Expires(t *testing.T) {
func TestScope_ScopedStore_Good_Render(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetIn("user", "name", "Alice"))
@ -677,7 +676,7 @@ func TestScope_ScopedStore_Good_Render(t *testing.T) {
func TestScope_ScopedStore_Good_BulkHelpers(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
alphaStore := NewScoped(storeInstance, "tenant-a")
betaStore := NewScoped(storeInstance, "tenant-b")
@ -719,7 +718,7 @@ func TestScope_ScopedStore_Good_BulkHelpers(t *testing.T) {
func TestScope_ScopedStore_Good_GroupsSeqStopsEarly(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetIn("alpha", "a", "1"))
@ -738,7 +737,7 @@ func TestScope_ScopedStore_Good_GroupsSeqStopsEarly(t *testing.T) {
func TestScope_ScopedStore_Good_GroupsSeqSorted(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetIn("charlie", "c", "3"))
@ -756,7 +755,7 @@ func TestScope_ScopedStore_Good_GroupsSeqSorted(t *testing.T) {
func TestScope_ScopedStore_Good_GetSplitAndGetFields(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetIn("config", "hosts", "alpha,beta,gamma"))
@ -783,7 +782,7 @@ func TestScope_ScopedStore_Good_GetSplitAndGetFields(t *testing.T) {
func TestScope_ScopedStore_Good_PurgeExpired(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
assertNoError(t, scopedStore.SetWithTTL("session", "token", "abc123", 1*time.Millisecond))
@ -799,7 +798,7 @@ func TestScope_ScopedStore_Good_PurgeExpired(t *testing.T) {
func TestScope_ScopedStore_Good_PurgeExpired_NamespaceLocal(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
alphaStore := NewScoped(storeInstance, "tenant-a")
betaStore := NewScoped(storeInstance, "tenant-b")
@ -825,7 +824,7 @@ func TestScope_ScopedStore_Good_PurgeExpired_NamespaceLocal(t *testing.T) {
func TestScope_Quota_Good_MaxKeys(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, err := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -846,7 +845,7 @@ func TestScope_Quota_Good_MaxKeys(t *testing.T) {
func TestScope_Quota_Bad_QuotaCheckQueryError(t *testing.T) {
database, _ := openStubSQLiteDatabase(t, stubSQLiteScenario{})
defer database.Close()
defer func() { _ = database.Close() }()
storeInstance := &Store{
sqliteDatabase: database,
@ -866,7 +865,7 @@ func TestScope_Quota_Bad_QuotaCheckQueryError(t *testing.T) {
func TestScope_Quota_Good_MaxKeys_AcrossGroups(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, _ := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -884,7 +883,7 @@ func TestScope_Quota_Good_MaxKeys_AcrossGroups(t *testing.T) {
func TestScope_Quota_Good_UpsertDoesNotCount(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, _ := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -905,7 +904,7 @@ func TestScope_Quota_Good_UpsertDoesNotCount(t *testing.T) {
func TestScope_Quota_Good_ExpiredUpsertDoesNotEmitDeleteEvent(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, _ := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -944,7 +943,7 @@ func TestScope_Quota_Good_ExpiredUpsertDoesNotEmitDeleteEvent(t *testing.T) {
func TestScope_Quota_Good_DeleteAndReInsert(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, _ := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -962,7 +961,7 @@ func TestScope_Quota_Good_DeleteAndReInsert(t *testing.T) {
func TestScope_Quota_Good_ZeroMeansUnlimited(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, _ := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -977,7 +976,7 @@ func TestScope_Quota_Good_ZeroMeansUnlimited(t *testing.T) {
func TestScope_Quota_Good_ExpiredKeysExcluded(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, _ := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -1002,7 +1001,7 @@ func TestScope_Quota_Good_ExpiredKeysExcluded(t *testing.T) {
func TestScope_Quota_Good_SetWithTTL_Enforced(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, _ := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -1022,7 +1021,7 @@ func TestScope_Quota_Good_SetWithTTL_Enforced(t *testing.T) {
func TestScope_Quota_Good_MaxGroups(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, _ := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -1041,7 +1040,7 @@ func TestScope_Quota_Good_MaxGroups(t *testing.T) {
func TestScope_Quota_Good_MaxGroups_ExistingGroupOK(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, _ := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -1058,7 +1057,7 @@ func TestScope_Quota_Good_MaxGroups_ExistingGroupOK(t *testing.T) {
func TestScope_Quota_Good_MaxGroups_DeleteAndRecreate(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, _ := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -1075,7 +1074,7 @@ func TestScope_Quota_Good_MaxGroups_DeleteAndRecreate(t *testing.T) {
func TestScope_Quota_Good_MaxGroups_ZeroUnlimited(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, _ := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -1089,7 +1088,7 @@ func TestScope_Quota_Good_MaxGroups_ZeroUnlimited(t *testing.T) {
func TestScope_Quota_Good_MaxGroups_ExpiredGroupExcluded(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, _ := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -1108,7 +1107,7 @@ func TestScope_Quota_Good_MaxGroups_ExpiredGroupExcluded(t *testing.T) {
func TestScope_Quota_Good_BothLimits(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, _ := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -1128,7 +1127,7 @@ func TestScope_Quota_Good_BothLimits(t *testing.T) {
func TestScope_Quota_Good_DoesNotAffectOtherNamespaces(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
alphaStore, _ := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -1159,7 +1158,7 @@ func TestScope_Quota_Good_DoesNotAffectOtherNamespaces(t *testing.T) {
func TestScope_CountAll_Good_WithPrefix(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("ns-a:g1", "k1", "v"))
assertNoError(t, storeInstance.Set("ns-a:g1", "k2", "v"))
@ -1177,7 +1176,7 @@ func TestScope_CountAll_Good_WithPrefix(t *testing.T) {
func TestScope_CountAll_Good_WithPrefix_Wildcards(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
// Add keys in groups that look like wildcards.
assertNoError(t, storeInstance.Set("user_1", "k", "v"))
@ -1201,7 +1200,7 @@ func TestScope_CountAll_Good_WithPrefix_Wildcards(t *testing.T) {
func TestScope_CountAll_Good_EmptyPrefix(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("g1", "k1", "v"))
assertNoError(t, storeInstance.Set("g2", "k2", "v"))
@ -1213,7 +1212,7 @@ func TestScope_CountAll_Good_EmptyPrefix(t *testing.T) {
func TestScope_CountAll_Good_ExcludesExpired(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("ns:g", "permanent", "v"))
assertNoError(t, storeInstance.SetWithTTL("ns:g", "temp", "v", 1*time.Millisecond))
@ -1226,7 +1225,7 @@ func TestScope_CountAll_Good_ExcludesExpired(t *testing.T) {
func TestScope_CountAll_Good_Empty(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
count, err := storeInstance.CountAll("nonexistent:")
assertNoError(t, err)
@ -1235,8 +1234,7 @@ func TestScope_CountAll_Good_Empty(t *testing.T) {
func TestScope_CountAll_Bad_ClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
storeInstance.Close()
_ = storeInstance.Close()
_, err := storeInstance.CountAll("")
assertError(t, err)
}
@ -1247,7 +1245,7 @@ func TestScope_CountAll_Bad_ClosedStore(t *testing.T) {
func TestScope_Groups_Good_WithPrefix(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("ns-a:g1", "k", "v"))
assertNoError(t, storeInstance.Set("ns-a:g2", "k", "v"))
@ -1263,7 +1261,7 @@ func TestScope_Groups_Good_WithPrefix(t *testing.T) {
func TestScope_Groups_Good_EmptyPrefix(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("g1", "k", "v"))
assertNoError(t, storeInstance.Set("g2", "k", "v"))
@ -1276,7 +1274,7 @@ func TestScope_Groups_Good_EmptyPrefix(t *testing.T) {
func TestScope_Groups_Good_Distinct(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
// Multiple keys in the same group should produce one entry.
assertNoError(t, storeInstance.Set("g1", "a", "v"))
@ -1291,7 +1289,7 @@ func TestScope_Groups_Good_Distinct(t *testing.T) {
func TestScope_Groups_Good_ExcludesExpired(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("ns:g1", "permanent", "v"))
assertNoError(t, storeInstance.SetWithTTL("ns:g2", "temp", "v", 1*time.Millisecond))
@ -1305,7 +1303,7 @@ func TestScope_Groups_Good_ExcludesExpired(t *testing.T) {
func TestScope_Groups_Good_SortedByGroupName(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("charlie", "c", "3"))
assertNoError(t, storeInstance.Set("alpha", "a", "1"))
@ -1318,7 +1316,7 @@ func TestScope_Groups_Good_SortedByGroupName(t *testing.T) {
func TestScope_Groups_Good_Empty(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
groups, err := storeInstance.Groups("nonexistent:")
assertNoError(t, err)
@ -1327,8 +1325,7 @@ func TestScope_Groups_Good_Empty(t *testing.T) {
func TestScope_Groups_Bad_ClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
storeInstance.Close()
_ = storeInstance.Close()
_, err := storeInstance.Groups("")
assertError(t, err)
}
@ -1338,7 +1335,7 @@ func TestScope_Groups_Bad_ClosedStore(t *testing.T) {
// ---------------------------------------------------------------------------
func keyName(i int) string {
return "key-" + string(rune('a'+i%26))
return core.Concat("key-", core.Sprint(i))
}
func rawEntryCount(t *testing.T, storeInstance *Store, group string) int {


@ -151,7 +151,6 @@ type Store struct {
journal influxdb2.Client
bucket string
org string
mu sync.RWMutex
journalConfiguration JournalConfiguration
medium Medium
lifecycleLock sync.Mutex
@ -382,15 +381,15 @@ func openSQLiteStore(operation, databasePath string, medium Medium) (*Store, err
// pool hands out different connections for each call.
sqliteDatabase.SetMaxOpenConns(1)
if _, err := sqliteDatabase.Exec("PRAGMA journal_mode=WAL"); err != nil {
sqliteDatabase.Close()
_ = sqliteDatabase.Close()
return nil, core.E(operation, "set WAL journal mode", err)
}
if _, err := sqliteDatabase.Exec("PRAGMA busy_timeout=5000"); err != nil {
sqliteDatabase.Close()
_ = sqliteDatabase.Close()
return nil, core.E(operation, "set busy timeout", err)
}
if err := ensureSchema(sqliteDatabase); err != nil {
sqliteDatabase.Close()
_ = sqliteDatabase.Close()
return nil, core.E(operation, "ensure schema", err)
}
@ -418,7 +417,7 @@ func (storeInstance *Store) workspaceStateDirectoryPath() string {
return normaliseWorkspaceStateDirectory(storeInstance.workspaceStateDirectory)
}
// Usage example: `storeInstance, err := store.New(":memory:"); if err != nil { return }; defer storeInstance.Close()`
// Usage example: `storeInstance, err := store.New(":memory:"); if err != nil { return }; defer func() { _ = storeInstance.Close() }()`
func (storeInstance *Store) Close() error {
if storeInstance == nil {
return nil
@ -675,7 +674,7 @@ func (storeInstance *Store) DeletePrefix(groupPrefix string) error {
if err != nil {
return core.E("store.DeletePrefix", "list groups", err)
}
defer rows.Close()
defer func() { _ = rows.Close() }()
var groupNames []string
for rows.Next() {
@ -739,7 +738,7 @@ func (storeInstance *Store) GetPage(group string, offset, limit int) ([]KeyValue
if err != nil {
return nil, core.E("store.GetPage", "query rows", err)
}
defer rows.Close()
defer func() { _ = rows.Close() }()
page := make([]KeyValue, 0, limit)
for rows.Next() {
@ -771,7 +770,7 @@ func (storeInstance *Store) AllSeq(group string) iter.Seq2[KeyValue, error] {
yield(KeyValue{}, core.E("store.All", "query rows", err))
return
}
defer rows.Close()
defer func() { _ = rows.Close() }()
for rows.Next() {
var entry KeyValue
@ -917,7 +916,7 @@ func (storeInstance *Store) GroupsSeq(groupPrefix ...string) iter.Seq2[string, e
yield("", core.E("store.GroupsSeq", "query group names", err))
return
}
defer rows.Close()
defer func() { _ = rows.Close() }()
for rows.Next() {
var groupName string
@ -1008,6 +1007,7 @@ func (storeInstance *Store) startBackgroundPurge() {
if _, err := storeInstance.PurgeExpired(); err != nil {
// For example, a logger could record the failure here. The loop
// keeps running so the next tick can retry.
_ = err
}
}
}
@ -1075,7 +1075,7 @@ func listExpiredEntriesMatchingGroupPrefix(database schemaDatabase, groupPrefix
if err != nil {
return nil, err
}
defer rows.Close()
defer func() { _ = rows.Close() }()
expiredEntries := make([]expiredEntryRef, 0)
for rows.Next() {
@ -1203,6 +1203,7 @@ func migrateLegacyEntriesTable(database *sql.DB) error {
if !committed {
if rollbackErr := transaction.Rollback(); rollbackErr != nil {
// Ignore rollback failures; the original error is already being returned.
_ = rollbackErr
}
}
}()
@ -1259,7 +1260,7 @@ func tableHasColumn(database schemaDatabase, tableName, columnName string) (bool
if err != nil {
return false, err
}
defer rows.Close()
defer func() { _ = rows.Close() }()
for rows.Next() {
var (


@ -20,7 +20,7 @@ func TestStore_New_Good_Memory(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
assertNotNil(t, storeInstance)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
}
func TestStore_New_Good_FileBacked(t *testing.T) {
@ -28,7 +28,7 @@ func TestStore_New_Good_FileBacked(t *testing.T) {
storeInstance, err := New(databasePath)
assertNoError(t, err)
assertNotNil(t, storeInstance)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
// Verify data persists: write, close, reopen.
assertNoError(t, storeInstance.Set("g", "k", "v"))
@ -36,7 +36,7 @@ func TestStore_New_Good_FileBacked(t *testing.T) {
reopenedStore, err := New(databasePath)
assertNoError(t, err)
defer reopenedStore.Close()
defer func() { _ = reopenedStore.Close() }()
value, err := reopenedStore.Get("g", "k")
assertNoError(t, err)
@ -88,7 +88,7 @@ func TestStore_New_Good_WALMode(t *testing.T) {
databasePath := testPath(t, "wal.db")
storeInstance, err := New(databasePath)
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
var mode string
err = storeInstance.sqliteDatabase.QueryRow("PRAGMA journal_mode").Scan(&mode)
@ -99,7 +99,7 @@ func TestStore_New_Good_WALMode(t *testing.T) {
func TestStore_New_Good_WithJournalOption(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertEqual(t, "events", storeInstance.journalConfiguration.BucketName)
assertEqual(t, "core", storeInstance.journalConfiguration.Organisation)
@ -111,7 +111,7 @@ func TestStore_New_Good_WithWorkspaceStateDirectoryOption(t *testing.T) {
storeInstance, err := New(":memory:", WithWorkspaceStateDirectory(workspaceStateDirectory))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertEqual(t, workspaceStateDirectory, storeInstance.WorkspaceStateDirectory())
@ -131,7 +131,7 @@ func TestStore_NewConfigured_Good_WorkspaceStateDirectory(t *testing.T) {
WorkspaceStateDirectory: workspaceStateDirectory,
})
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertEqual(t, workspaceStateDirectory, storeInstance.Config().WorkspaceStateDirectory)
@ -146,7 +146,7 @@ func TestStore_NewConfigured_Good_WorkspaceStateDirectory(t *testing.T) {
func TestStore_WorkspaceStateDirectory_Good_Default(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertEqual(t, normaliseWorkspaceStateDirectory(defaultWorkspaceStateDirectory), storeInstance.WorkspaceStateDirectory())
assertEqual(t, storeInstance.WorkspaceStateDirectory(), storeInstance.Config().WorkspaceStateDirectory)
@ -156,10 +156,10 @@ func TestStore_WorkspaceStateDirectory_Good_Default(t *testing.T) {
func TestStore_JournalConfiguration_Good(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
config := storeInstance.JournalConfiguration()
assertEqual(t, JournalConfiguration{ EndpointURL: "http://127.0.0.1:8086", Organisation: "core", BucketName: "events", }, config)
assertEqual(t, JournalConfiguration{EndpointURL: "http://127.0.0.1:8086", Organisation: "core", BucketName: "events"}, config)
}
func TestStore_JournalConfiguration_Good_Validate(t *testing.T) {
@ -201,14 +201,14 @@ func TestStore_JournalConfiguration_Bad_ValidateMissingBucketName(t *testing.T)
func TestStore_JournalConfigured_Good(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertTrue(t, storeInstance.JournalConfigured())
assertFalse(t, (*Store)(nil).JournalConfigured())
unconfiguredStore, err := New(":memory:")
assertNoError(t, err)
defer unconfiguredStore.Close()
defer func() { _ = unconfiguredStore.Close() }()
assertFalse(t, unconfiguredStore.JournalConfigured())
}
@ -298,9 +298,9 @@ func TestStore_Config_Good(t *testing.T) {
PurgeInterval: 20 * time.Millisecond,
})
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertEqual(t, StoreConfig{ DatabasePath: ":memory:", Journal: JournalConfiguration{ EndpointURL: "http://127.0.0.1:8086", Organisation: "core", BucketName: "events", }, PurgeInterval: 20 * time.Millisecond, WorkspaceStateDirectory: normaliseWorkspaceStateDirectory(defaultWorkspaceStateDirectory), }, storeInstance.Config())
assertEqual(t, StoreConfig{DatabasePath: ":memory:", Journal: JournalConfiguration{EndpointURL: "http://127.0.0.1:8086", Organisation: "core", BucketName: "events"}, PurgeInterval: 20 * time.Millisecond, WorkspaceStateDirectory: normaliseWorkspaceStateDirectory(defaultWorkspaceStateDirectory)}, storeInstance.Config())
}
func TestStore_DatabasePath_Good(t *testing.T) {
@ -308,7 +308,7 @@ func TestStore_DatabasePath_Good(t *testing.T) {
storeInstance, err := New(databasePath)
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertEqual(t, databasePath, storeInstance.DatabasePath())
}
@ -334,9 +334,9 @@ func TestStore_NewConfigured_Good(t *testing.T) {
PurgeInterval: 20 * time.Millisecond,
})
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertEqual(t, JournalConfiguration{ EndpointURL: "http://127.0.0.1:8086", Organisation: "core", BucketName: "events", }, storeInstance.JournalConfiguration())
assertEqual(t, JournalConfiguration{EndpointURL: "http://127.0.0.1:8086", Organisation: "core", BucketName: "events"}, storeInstance.JournalConfiguration())
assertEqual(t, 20*time.Millisecond, storeInstance.purgeInterval)
assertNoError(t, storeInstance.Set("g", "k", "v"))
@ -352,7 +352,7 @@ func TestStore_NewConfigured_Good(t *testing.T) {
func TestStore_SetGet_Good(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
err = storeInstance.Set("config", "theme", "dark")
assertNoError(t, err)
@ -364,7 +364,7 @@ func TestStore_SetGet_Good(t *testing.T) {
func TestStore_Set_Good_Upsert(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("g", "k", "v1"))
assertNoError(t, storeInstance.Set("g", "k", "v2"))
@ -380,7 +380,7 @@ func TestStore_Set_Good_Upsert(t *testing.T) {
func TestStore_Get_Bad_NotFound(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_, err := storeInstance.Get("config", "missing")
assertError(t, err)
@ -389,7 +389,7 @@ func TestStore_Get_Bad_NotFound(t *testing.T) {
func TestStore_Get_Bad_NonExistentGroup(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_, err := storeInstance.Get("no-such-group", "key")
assertError(t, err)
@ -398,16 +398,14 @@ func TestStore_Get_Bad_NonExistentGroup(t *testing.T) {
func TestStore_Get_Bad_ClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
storeInstance.Close()
_ = storeInstance.Close()
_, err := storeInstance.Get("g", "k")
assertError(t, err)
}
func TestStore_Set_Bad_ClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
storeInstance.Close()
_ = storeInstance.Close()
err := storeInstance.Set("g", "k", "v")
assertError(t, err)
}
@ -418,7 +416,7 @@ func TestStore_Set_Bad_ClosedStore(t *testing.T) {
func TestStore_Exists_Good_Present(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_ = storeInstance.Set("config", "colour", "blue")
@ -429,7 +427,7 @@ func TestStore_Exists_Good_Present(t *testing.T) {
func TestStore_Exists_Good_Absent(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
exists, err := storeInstance.Exists("config", "colour")
assertNoError(t, err)
@ -438,7 +436,7 @@ func TestStore_Exists_Good_Absent(t *testing.T) {
func TestStore_Exists_Good_ExpiredKeyReturnsFalse(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_ = storeInstance.SetWithTTL("session", "token", "abc123", 1*time.Millisecond)
time.Sleep(5 * time.Millisecond)
@ -450,8 +448,7 @@ func TestStore_Exists_Good_ExpiredKeyReturnsFalse(t *testing.T) {
func TestStore_Exists_Bad_ClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
storeInstance.Close()
_ = storeInstance.Close()
_, err := storeInstance.Exists("g", "k")
assertError(t, err)
}
@ -462,7 +459,7 @@ func TestStore_Exists_Bad_ClosedStore(t *testing.T) {
func TestStore_GroupExists_Good_Present(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_ = storeInstance.Set("config", "colour", "blue")
@ -473,7 +470,7 @@ func TestStore_GroupExists_Good_Present(t *testing.T) {
func TestStore_GroupExists_Good_Absent(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
exists, err := storeInstance.GroupExists("config")
assertNoError(t, err)
@ -482,7 +479,7 @@ func TestStore_GroupExists_Good_Absent(t *testing.T) {
func TestStore_GroupExists_Good_EmptyAfterDelete(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_ = storeInstance.Set("config", "colour", "blue")
_ = storeInstance.DeleteGroup("config")
@ -494,8 +491,7 @@ func TestStore_GroupExists_Good_EmptyAfterDelete(t *testing.T) {
func TestStore_GroupExists_Bad_ClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
storeInstance.Close()
_ = storeInstance.Close()
_, err := storeInstance.GroupExists("config")
assertError(t, err)
}
@ -506,7 +502,7 @@ func TestStore_GroupExists_Bad_ClosedStore(t *testing.T) {
func TestStore_Delete_Good(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_ = storeInstance.Set("config", "key", "val")
err := storeInstance.Delete("config", "key")
@ -519,7 +515,7 @@ func TestStore_Delete_Good(t *testing.T) {
func TestStore_Delete_Good_NonExistent(t *testing.T) {
// Deleting a key that does not exist should not error.
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
err := storeInstance.Delete("g", "nope")
assertNoError(t, err)
@ -527,8 +523,7 @@ func TestStore_Delete_Good_NonExistent(t *testing.T) {
func TestStore_Delete_Bad_ClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
storeInstance.Close()
_ = storeInstance.Close()
err := storeInstance.Delete("g", "k")
assertError(t, err)
}
@ -539,7 +534,7 @@ func TestStore_Delete_Bad_ClosedStore(t *testing.T) {
func TestStore_Count_Good(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_ = storeInstance.Set("grp", "a", "1")
_ = storeInstance.Set("grp", "b", "2")
@ -552,7 +547,7 @@ func TestStore_Count_Good(t *testing.T) {
func TestStore_Count_Good_Empty(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
count, err := storeInstance.Count("empty")
assertNoError(t, err)
@ -561,7 +556,7 @@ func TestStore_Count_Good_Empty(t *testing.T) {
func TestStore_Count_Good_BulkInsert(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
const total = 500
for i := range total {
@ -574,8 +569,7 @@ func TestStore_Count_Good_BulkInsert(t *testing.T) {
func TestStore_Count_Bad_ClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
storeInstance.Close()
_ = storeInstance.Close()
_, err := storeInstance.Count("g")
assertError(t, err)
}
@ -586,7 +580,7 @@ func TestStore_Count_Bad_ClosedStore(t *testing.T) {
func TestStore_DeleteGroup_Good(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_ = storeInstance.Set("grp", "a", "1")
_ = storeInstance.Set("grp", "b", "2")
@ -599,7 +593,7 @@ func TestStore_DeleteGroup_Good(t *testing.T) {
func TestStore_DeleteGroup_Good_ThenGetAllEmpty(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_ = storeInstance.Set("grp", "a", "1")
_ = storeInstance.Set("grp", "b", "2")
@ -612,7 +606,7 @@ func TestStore_DeleteGroup_Good_ThenGetAllEmpty(t *testing.T) {
func TestStore_DeleteGroup_Good_IsolatesOtherGroups(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_ = storeInstance.Set("a", "k", "1")
_ = storeInstance.Set("b", "k", "2")
@ -628,7 +622,7 @@ func TestStore_DeleteGroup_Good_IsolatesOtherGroups(t *testing.T) {
func TestStore_DeletePrefix_Good(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_ = storeInstance.Set("tenant-a:config", "colour", "blue")
_ = storeInstance.Set("tenant-a:sessions", "token", "abc123")
@ -648,8 +642,7 @@ func TestStore_DeletePrefix_Good(t *testing.T) {
func TestStore_DeleteGroup_Bad_ClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
storeInstance.Close()
_ = storeInstance.Close()
err := storeInstance.DeleteGroup("g")
assertError(t, err)
}
@ -660,7 +653,7 @@ func TestStore_DeleteGroup_Bad_ClosedStore(t *testing.T) {
func TestStore_GetAll_Good(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_ = storeInstance.Set("grp", "a", "1")
_ = storeInstance.Set("grp", "b", "2")
@ -673,7 +666,7 @@ func TestStore_GetAll_Good(t *testing.T) {
func TestStore_GetAll_Good_Empty(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
all, err := storeInstance.GetAll("empty")
assertNoError(t, err)
@ -682,7 +675,7 @@ func TestStore_GetAll_Good_Empty(t *testing.T) {
func TestStore_GetPage_Good(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("grp", "charlie", "3"))
assertNoError(t, storeInstance.Set("grp", "alpha", "1"))
@ -696,7 +689,7 @@ func TestStore_GetPage_Good(t *testing.T) {
func TestStore_GetPage_Good_EmptyAndBounds(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
page, err := storeInstance.GetPage("grp", 0, 0)
assertNoError(t, err)
@ -711,8 +704,7 @@ func TestStore_GetPage_Good_EmptyAndBounds(t *testing.T) {
func TestStore_GetAll_Bad_ClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
storeInstance.Close()
_ = storeInstance.Close()
_, err := storeInstance.GetAll("g")
assertError(t, err)
}
@ -723,7 +715,7 @@ func TestStore_GetAll_Bad_ClosedStore(t *testing.T) {
func TestStore_All_Good_StopsEarly(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("g", "a", "1"))
assertNoError(t, storeInstance.Set("g", "b", "2"))
@ -741,7 +733,7 @@ func TestStore_All_Good_StopsEarly(t *testing.T) {
func TestStore_All_Good_SortedByKey(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("g", "charlie", "3"))
assertNoError(t, storeInstance.Set("g", "alpha", "1"))
@ -758,7 +750,7 @@ func TestStore_All_Good_SortedByKey(t *testing.T) {
func TestStore_AllSeq_Good_SortedByKey(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("g", "charlie", "3"))
assertNoError(t, storeInstance.Set("g", "alpha", "1"))
@ -775,8 +767,7 @@ func TestStore_AllSeq_Good_SortedByKey(t *testing.T) {
func TestStore_All_Bad_ClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
storeInstance.Close()
_ = storeInstance.Close()
for _, err := range storeInstance.All("g") {
assertError(t, err)
}
@ -784,7 +775,7 @@ func TestStore_All_Bad_ClosedStore(t *testing.T) {
func TestStore_GroupsSeq_Good_StopsEarly(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("alpha", "a", "1"))
assertNoError(t, storeInstance.Set("beta", "b", "2"))
@ -802,7 +793,7 @@ func TestStore_GroupsSeq_Good_StopsEarly(t *testing.T) {
func TestStore_GroupsSeq_Good_PrefixStopsEarly(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("alpha", "a", "1"))
assertNoError(t, storeInstance.Set("beta", "b", "2"))
@ -820,7 +811,7 @@ func TestStore_GroupsSeq_Good_PrefixStopsEarly(t *testing.T) {
func TestStore_GroupsSeq_Good_SortedByGroupName(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("charlie", "c", "3"))
assertNoError(t, storeInstance.Set("alpha", "a", "1"))
@ -837,7 +828,7 @@ func TestStore_GroupsSeq_Good_SortedByGroupName(t *testing.T) {
func TestStore_GroupsSeq_Good_DefaultArgument(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("alpha", "a", "1"))
assertNoError(t, storeInstance.Set("beta", "b", "2"))
@ -853,8 +844,7 @@ func TestStore_GroupsSeq_Good_DefaultArgument(t *testing.T) {
func TestStore_GroupsSeq_Bad_ClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
storeInstance.Close()
_ = storeInstance.Close()
for _, err := range storeInstance.GroupsSeq("") {
assertError(t, err)
}
@ -866,7 +856,7 @@ func TestStore_GroupsSeq_Bad_ClosedStore(t *testing.T) {
func TestStore_GetSplit_Good_SplitsValue(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("g", "comma", "alpha,beta,gamma"))
@ -883,7 +873,7 @@ func TestStore_GetSplit_Good_SplitsValue(t *testing.T) {
func TestStore_GetSplit_Good_StopsEarly(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("g", "comma", "alpha,beta,gamma"))
@ -901,7 +891,7 @@ func TestStore_GetSplit_Good_StopsEarly(t *testing.T) {
func TestStore_GetSplit_Bad_MissingKey(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_, err := storeInstance.GetSplit("g", "missing", ",")
assertError(t, err)
@ -910,7 +900,7 @@ func TestStore_GetSplit_Bad_MissingKey(t *testing.T) {
func TestStore_GetFields_Good_SplitsWhitespace(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("g", "fields", "alpha beta\tgamma\n"))
@ -927,7 +917,7 @@ func TestStore_GetFields_Good_SplitsWhitespace(t *testing.T) {
func TestStore_GetFields_Good_StopsEarly(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("g", "fields", "alpha beta\tgamma\n"))
@ -945,7 +935,7 @@ func TestStore_GetFields_Good_StopsEarly(t *testing.T) {
func TestStore_GetFields_Bad_MissingKey(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_, err := storeInstance.GetFields("g", "missing")
assertError(t, err)
@ -958,7 +948,7 @@ func TestStore_GetFields_Bad_MissingKey(t *testing.T) {
func TestStore_Render_Good(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_ = storeInstance.Set("user", "pool", "pool.lthn.io:3333")
_ = storeInstance.Set("user", "wallet", "iz...")
@ -972,7 +962,7 @@ func TestStore_Render_Good(t *testing.T) {
func TestStore_Render_Good_EmptyGroup(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
// Template that does not reference any variables.
renderedTemplate, err := storeInstance.Render("static content", "empty")
@ -982,7 +972,7 @@ func TestStore_Render_Good_EmptyGroup(t *testing.T) {
func TestStore_Render_Bad_InvalidTemplateSyntax(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_, err := storeInstance.Render("{{ .unclosed", "g")
assertError(t, err)
@ -991,7 +981,7 @@ func TestStore_Render_Bad_InvalidTemplateSyntax(t *testing.T) {
func TestStore_Render_Bad_MissingTemplateVar(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
// text/template with a missing key on a map returns <no value>, not an error,
// unless Option("missingkey=error") is set. The default behaviour is no error.
@ -1002,7 +992,7 @@ func TestStore_Render_Bad_MissingTemplateVar(t *testing.T) {
func TestStore_Render_Bad_ExecError(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_ = storeInstance.Set("g", "name", "hello")
@ -1014,8 +1004,7 @@ func TestStore_Render_Bad_ExecError(t *testing.T) {
func TestStore_Render_Bad_ClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
storeInstance.Close()
_ = storeInstance.Close()
_, err := storeInstance.Render("{{ .x }}", "g")
assertError(t, err)
}
@ -1173,7 +1162,7 @@ func (testRowsAffectedErrorResult) RowsAffected() (int64, error) {
func TestStore_SetGet_Good_EdgeCases(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
tests := []struct {
name string
@ -1219,7 +1208,7 @@ func TestStore_SetGet_Good_EdgeCases(t *testing.T) {
func TestStore_GroupIsolation_Good(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("alpha", "k", "a-val"))
assertNoError(t, storeInstance.Set("beta", "k", "b-val"))
@ -1250,7 +1239,7 @@ func TestStore_Concurrent_Good_ReadWrite(t *testing.T) {
databasePath := testPath(t, "concurrent.db")
storeInstance, err := New(databasePath)
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
const goroutines = 10
const opsPerGoroutine = 100
@ -1310,7 +1299,7 @@ func TestStore_Concurrent_Good_ReadWrite(t *testing.T) {
func TestStore_Concurrent_Good_GetAll(t *testing.T) {
storeInstance, err := New(testPath(t, "getall.db"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
// Seed data.
for i := range 50 {
@ -1336,7 +1325,7 @@ func TestStore_Concurrent_Good_GetAll(t *testing.T) {
func TestStore_Concurrent_Good_DeleteGroup(t *testing.T) {
storeInstance, err := New(testPath(t, "delgrp.db"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
var waitGroup sync.WaitGroup
for g := range 10 {
@ -1359,7 +1348,7 @@ func TestStore_Concurrent_Good_DeleteGroup(t *testing.T) {
func TestStore_NotFoundError_Good_Is(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
_, err := storeInstance.Get("g", "k")
assertError(t, err)
@ -1373,7 +1362,7 @@ func TestStore_NotFoundError_Good_Is(t *testing.T) {
func BenchmarkSet(benchmark *testing.B) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
benchmark.ResetTimer()
for i := range benchmark.N {
@ -1383,7 +1372,7 @@ func BenchmarkSet(benchmark *testing.B) {
func BenchmarkGet(benchmark *testing.B) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
// Pre-populate.
const keys = 10000
@ -1399,7 +1388,7 @@ func BenchmarkGet(benchmark *testing.B) {
func BenchmarkGetAll(benchmark *testing.B) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
const keys = 10000
for i := range keys {
@ -1415,7 +1404,7 @@ func BenchmarkGetAll(benchmark *testing.B) {
func BenchmarkSet_FileBacked(benchmark *testing.B) {
databasePath := testPath(benchmark, "bench.db")
storeInstance, _ := New(databasePath)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
benchmark.ResetTimer()
for i := range benchmark.N {
@ -1429,7 +1418,7 @@ func BenchmarkSet_FileBacked(benchmark *testing.B) {
func TestStore_SetWithTTL_Good(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
err := storeInstance.SetWithTTL("g", "k", "v", 5*time.Second)
assertNoError(t, err)
@ -1441,7 +1430,7 @@ func TestStore_SetWithTTL_Good(t *testing.T) {
func TestStore_SetWithTTL_Good_Upsert(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.SetWithTTL("g", "k", "v1", time.Hour))
assertNoError(t, storeInstance.SetWithTTL("g", "k", "v2", time.Hour))
@ -1457,7 +1446,7 @@ func TestStore_SetWithTTL_Good_Upsert(t *testing.T) {
func TestStore_SetWithTTL_Good_ExpiresOnGet(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
// Set a key with a very short TTL.
assertNoError(t, storeInstance.SetWithTTL("g", "ephemeral", "gone-soon", 1*time.Millisecond))
@ -1472,7 +1461,7 @@ func TestStore_SetWithTTL_Good_ExpiresOnGet(t *testing.T) {
func TestStore_SetWithTTL_Good_ExpiresOnGetEmitsDeleteEvent(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch("g")
defer storeInstance.Unwatch("g", events)
@ -1499,7 +1488,7 @@ func TestStore_SetWithTTL_Good_ExpiresOnGetEmitsDeleteEvent(t *testing.T) {
func TestStore_SetWithTTL_Good_ExcludedFromCount(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("g", "permanent", "stays"))
assertNoError(t, storeInstance.SetWithTTL("g", "temp", "goes", 1*time.Millisecond))
@ -1512,7 +1501,7 @@ func TestStore_SetWithTTL_Good_ExcludedFromCount(t *testing.T) {
func TestStore_SetWithTTL_Good_ExcludedFromGetAll(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("g", "a", "1"))
assertNoError(t, storeInstance.SetWithTTL("g", "b", "2", 1*time.Millisecond))
@ -1525,7 +1514,7 @@ func TestStore_SetWithTTL_Good_ExcludedFromGetAll(t *testing.T) {
func TestStore_SetWithTTL_Good_ExcludedFromRender(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("g", "name", "Alice"))
assertNoError(t, storeInstance.SetWithTTL("g", "temp", "gone", 1*time.Millisecond))
@ -1538,7 +1527,7 @@ func TestStore_SetWithTTL_Good_ExcludedFromRender(t *testing.T) {
func TestStore_SetWithTTL_Good_SetClearsTTL(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
// Set with TTL, then overwrite with plain Set — TTL should be cleared.
assertNoError(t, storeInstance.SetWithTTL("g", "k", "temp", 1*time.Millisecond))
@ -1552,7 +1541,7 @@ func TestStore_SetWithTTL_Good_SetClearsTTL(t *testing.T) {
func TestStore_SetWithTTL_Good_FutureTTLAccessible(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.SetWithTTL("g", "k", "v", 1*time.Hour))
@ -1567,8 +1556,7 @@ func TestStore_SetWithTTL_Good_FutureTTLAccessible(t *testing.T) {
func TestStore_SetWithTTL_Bad_ClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
storeInstance.Close()
_ = storeInstance.Close()
err := storeInstance.SetWithTTL("g", "k", "v", time.Hour)
assertError(t, err)
}
@ -1579,7 +1567,7 @@ func TestStore_SetWithTTL_Bad_ClosedStore(t *testing.T) {
func TestStore_PurgeExpired_Good(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.SetWithTTL("g", "a", "1", 1*time.Millisecond))
assertNoError(t, storeInstance.SetWithTTL("g", "b", "2", 1*time.Millisecond))
@ -1597,7 +1585,7 @@ func TestStore_PurgeExpired_Good(t *testing.T) {
func TestStore_PurgeExpired_Good_NoneExpired(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("g", "a", "1"))
assertNoError(t, storeInstance.SetWithTTL("g", "b", "2", time.Hour))
@ -1609,7 +1597,7 @@ func TestStore_PurgeExpired_Good_NoneExpired(t *testing.T) {
func TestStore_PurgeExpired_Good_Empty(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
removed, err := storeInstance.PurgeExpired()
assertNoError(t, err)
@ -1618,8 +1606,7 @@ func TestStore_PurgeExpired_Good_Empty(t *testing.T) {
func TestStore_PurgeExpired_Bad_ClosedStore(t *testing.T) {
storeInstance, _ := New(":memory:")
storeInstance.Close()
_ = storeInstance.Close()
_, err := storeInstance.PurgeExpired()
assertError(t, err)
}
@ -1639,7 +1626,7 @@ func TestStore_PurgeExpired_Bad_RowsAffectedError(t *testing.T) {
func TestStore_PurgeExpired_Good_BackgroundPurge(t *testing.T) {
storeInstance, err := New(":memory:", WithPurgeInterval(20*time.Millisecond))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.SetWithTTL("g", "ephemeral", "v", 1*time.Millisecond))
assertNoError(t, storeInstance.Set("g", "permanent", "stays"))
@ -1682,7 +1669,7 @@ func TestStore_SchemaUpgrade_Good_ExistingDB(t *testing.T) {
// Reopen — the ALTER TABLE ADD COLUMN should be a no-op.
reopenedStore, err := New(databasePath)
assertNoError(t, err)
defer reopenedStore.Close()
defer func() { _ = reopenedStore.Close() }()
value, err := reopenedStore.Get("g", "k")
assertNoError(t, err)
@ -1715,7 +1702,7 @@ func TestStore_SchemaUpgrade_Good_EntriesWithoutExpiryColumn(t *testing.T) {
storeInstance, err := New(databasePath)
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
value, err := storeInstance.Get("g", "k")
assertNoError(t, err)
@ -1757,7 +1744,7 @@ func TestStore_SchemaUpgrade_Good_LegacyAndCurrentTables(t *testing.T) {
storeInstance, err := New(databasePath)
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
value, err := storeInstance.Get("existing", "k")
assertNoError(t, err)
@ -1791,7 +1778,7 @@ func TestStore_SchemaUpgrade_Good_PreTTLDatabase(t *testing.T) {
// Open with New — should migrate the legacy table into the descriptive schema.
storeInstance, err := New(databasePath)
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
// Existing data should be readable.
value, err := storeInstance.Get("g", "k")
@ -1812,7 +1799,7 @@ func TestStore_SchemaUpgrade_Good_PreTTLDatabase(t *testing.T) {
func TestStore_Concurrent_Good_TTL(t *testing.T) {
storeInstance, err := New(testPath(t, "concurrent-ttl.db"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
const goroutines = 10
const ops = 50


@ -27,13 +27,6 @@ func assertError(t testing.TB, err error) {
}
}
func assertErrorf(t testing.TB, err error, format string, args ...any) {
t.Helper()
if err == nil {
t.Fatalf("expected error, got nil — "+format, args...)
}
}
func assertErrorIs(t testing.TB, err, target error) {
t.Helper()
if !errIs(err, target) {
@ -169,13 +162,6 @@ func assertLessOrEqual(t testing.TB, got, want int) {
}
}
func assertSame(t testing.TB, want, got any) {
t.Helper()
if !samePointer(want, got) {
t.Fatalf("expected same pointer, got %v vs %v", want, got)
}
}
func assertSamef(t testing.TB, want, got any, format string, args ...any) {
t.Helper()
if !samePointer(want, got) {
@ -183,13 +169,6 @@ func assertSamef(t testing.TB, want, got any, format string, args ...any) {
}
}
func assertGreater(t testing.TB, got, want int) {
t.Helper()
if got <= want {
t.Fatalf("expected %d > %d", got, want)
}
}
func assertGreaterf(t testing.TB, got, want int, format string, args ...any) {
t.Helper()
if got <= want {
@ -212,6 +191,15 @@ func errIs(err, target error) bool {
if err == target {
return true
}
multiUnwrapper, ok := err.(interface{ Unwrap() []error })
if ok {
for _, childErr := range multiUnwrapper.Unwrap() {
if errIs(childErr, target) {
return true
}
}
return false
}
unwrapper, ok := err.(interface{ Unwrap() error })
if !ok {
return false


@ -2,13 +2,18 @@ version: "3"
tasks:
default:
deps: [build, test]
deps: [build, vet, test]
build:
dir: ../../..
cmds:
- go build ./...
vet:
dir: ../../..
cmds:
- go vet ./...
test:
dir: ../../..
cmds:


@ -234,7 +234,7 @@ func (storeTransaction *StoreTransaction) DeletePrefix(groupPrefix string) error
if err != nil {
return core.E("store.Transaction.DeletePrefix", "list groups", err)
}
defer rows.Close()
defer func() { _ = rows.Close() }()
var groupNames []string
for rows.Next() {
@ -307,7 +307,7 @@ func (storeTransaction *StoreTransaction) GetPage(group string, offset, limit in
if err != nil {
return nil, core.E("store.Transaction.GetPage", "query rows", err)
}
defer rows.Close()
defer func() { _ = rows.Close() }()
page := make([]KeyValue, 0, limit)
for rows.Next() {
@ -344,7 +344,7 @@ func (storeTransaction *StoreTransaction) AllSeq(group string) iter.Seq2[KeyValu
yield(KeyValue{}, core.E("store.Transaction.All", "query rows", err))
return
}
defer rows.Close()
defer func() { _ = rows.Close() }()
for rows.Next() {
var entry KeyValue
@ -434,7 +434,7 @@ func (storeTransaction *StoreTransaction) GroupsSeq(groupPrefix ...string) iter.
yield("", core.E("store.Transaction.GroupsSeq", "query group names", err))
return
}
defer rows.Close()
defer func() { _ = rows.Close() }()
for rows.Next() {
var groupName string


@ -10,7 +10,7 @@ import (
func TestTransaction_Transaction_Good_CommitsMultipleWrites(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch("*")
defer storeInstance.Unwatch("*", events)
@ -46,7 +46,7 @@ func TestTransaction_Transaction_Good_CommitsMultipleWrites(t *testing.T) {
func TestTransaction_Transaction_Good_RollbackOnError(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
err := storeInstance.Transaction(func(transaction *StoreTransaction) error {
if err := transaction.Set("alpha", "first", "1"); err != nil {
@ -62,7 +62,7 @@ func TestTransaction_Transaction_Good_RollbackOnError(t *testing.T) {
func TestTransaction_Transaction_Good_DeletesAtomically(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("alpha", "first", "1"))
assertNoError(t, storeInstance.Set("beta", "second", "2"))
@ -83,7 +83,7 @@ func TestTransaction_Transaction_Good_DeletesAtomically(t *testing.T) {
func TestTransaction_Transaction_Good_ReadHelpersSeePendingWrites(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
err := storeInstance.Transaction(func(transaction *StoreTransaction) error {
if err := transaction.Set("config", "colour", "blue"); err != nil {
@ -127,7 +127,7 @@ func TestTransaction_Transaction_Good_ReadHelpersSeePendingWrites(t *testing.T)
func TestTransaction_Transaction_Good_PurgeExpired(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.SetWithTTL("alpha", "ephemeral", "gone", 1*time.Millisecond))
time.Sleep(5 * time.Millisecond)
@ -146,7 +146,7 @@ func TestTransaction_Transaction_Good_PurgeExpired(t *testing.T) {
func TestTransaction_Transaction_Good_Exists(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertNoError(t, storeInstance.Set("config", "colour", "blue"))
@ -166,7 +166,7 @@ func TestTransaction_Transaction_Good_Exists(t *testing.T) {
func TestTransaction_Transaction_Good_ExistsSeesPendingWrites(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
err := storeInstance.Transaction(func(transaction *StoreTransaction) error {
exists, err := transaction.Exists("config", "colour")
@ -188,7 +188,7 @@ func TestTransaction_Transaction_Good_ExistsSeesPendingWrites(t *testing.T) {
func TestTransaction_Transaction_Good_GroupExists(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
err := storeInstance.Transaction(func(transaction *StoreTransaction) error {
exists, err := transaction.GroupExists("config")
@ -210,7 +210,7 @@ func TestTransaction_Transaction_Good_GroupExists(t *testing.T) {
func TestTransaction_ScopedStoreTransaction_Good_ExistsAndGroupExists(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
@ -250,7 +250,7 @@ func TestTransaction_ScopedStoreTransaction_Good_ExistsAndGroupExists(t *testing
func TestTransaction_ScopedStoreTransaction_Good_GetPage(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
@ -276,7 +276,7 @@ func TestTransaction_ScopedStoreTransaction_Good_GetPage(t *testing.T) {
func TestTransaction_ScopedStoreTransaction_Good_CommitsNamespacedWrites(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, err := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -319,7 +319,7 @@ func TestTransaction_ScopedStoreTransaction_Good_CommitsNamespacedWrites(t *test
func TestTransaction_ScopedStoreTransaction_Good_PurgeExpired(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
@ -340,7 +340,7 @@ func TestTransaction_ScopedStoreTransaction_Good_PurgeExpired(t *testing.T) {
func TestTransaction_ScopedStoreTransaction_Good_QuotaUsesPendingWrites(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore, err := NewScopedConfigured(storeInstance, ScopedStoreConfig{
Namespace: "tenant-a",
@ -366,7 +366,7 @@ func TestTransaction_ScopedStoreTransaction_Good_QuotaUsesPendingWrites(t *testi
func TestTransaction_ScopedStoreTransaction_Good_DeletePrefix(t *testing.T) {
storeInstance, _ := New(":memory:")
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
scopedStore := NewScoped(storeInstance, "tenant-a")
otherScopedStore := NewScoped(storeInstance, "tenant-b")


@@ -43,7 +43,6 @@ type Workspace struct {
name string
store *Store
db *sql.DB
sqliteDatabase *sql.DB
databasePath string
filesystem *core.Fs
cachedOrphanAggregate map[string]any
@@ -84,15 +83,6 @@ func (workspace *Workspace) ensureReady(operation string) error {
return core.E(operation, "workspace store is nil", nil)
}
if workspace.db == nil {
workspace.db = workspace.sqliteDatabase
}
if workspace.sqliteDatabase == nil {
workspace.sqliteDatabase = workspace.db
}
if workspace.db == nil {
return core.E(operation, "workspace database is nil", nil)
}
if workspace.sqliteDatabase == nil {
return core.E(operation, "workspace database is nil", nil)
}
if workspace.filesystem == nil {
@@ -135,18 +125,17 @@ func (storeInstance *Store) NewWorkspace(name string) (*Workspace, error) {
return nil, core.E("store.NewWorkspace", "ensure state directory", result.Value.(error))
}
sqliteDatabase, err := openWorkspaceDatabase(databasePath)
database, err := openWorkspaceDatabase(databasePath)
if err != nil {
return nil, core.E("store.NewWorkspace", "open workspace database", err)
}
return &Workspace{
name: name,
store: storeInstance,
db: sqliteDatabase,
sqliteDatabase: sqliteDatabase,
databasePath: databasePath,
filesystem: filesystem,
name: name,
store: storeInstance,
db: database,
databasePath: databasePath,
filesystem: filesystem,
}, nil
}
@@ -200,18 +189,17 @@ func loadRecoveredWorkspaces(stateDirectory string, store *Store) []*Workspace {
filesystem := (&core.Fs{}).NewUnrestricted()
orphanWorkspaces := make([]*Workspace, 0)
for _, databasePath := range discoverOrphanWorkspacePaths(stateDirectory) {
sqliteDatabase, err := openWorkspaceDatabase(databasePath)
database, err := openWorkspaceDatabase(databasePath)
if err != nil {
quarantineOrphanWorkspaceFiles(filesystem, stateDirectory, databasePath)
continue
}
orphanWorkspace := &Workspace{
name: workspaceNameFromPath(stateDirectory, databasePath),
store: store,
db: sqliteDatabase,
sqliteDatabase: sqliteDatabase,
databasePath: databasePath,
filesystem: filesystem,
name: workspaceNameFromPath(stateDirectory, databasePath),
store: store,
db: database,
databasePath: databasePath,
filesystem: filesystem,
}
aggregate, err := orphanWorkspace.aggregateFieldsWithoutReadiness()
if err != nil {
@@ -278,7 +266,7 @@ func (workspace *Workspace) Put(kind string, data map[string]any) error {
return err
}
_, err = workspace.sqliteDatabase.Exec(
_, err = workspace.db.Exec(
"INSERT INTO "+workspaceEntriesTableName+" (entry_kind, entry_data, created_at) VALUES (?, ?, ?)",
kind,
dataJSON,
@@ -297,7 +285,7 @@ func (workspace *Workspace) Count() (int, error) {
}
var count int
err := workspace.sqliteDatabase.QueryRow(
err := workspace.db.QueryRow(
"SELECT COUNT(*) FROM " + workspaceEntriesTableName,
).Scan(&count)
if err != nil {
@@ -359,11 +347,11 @@ func (workspace *Workspace) Query(query string) core.Result {
return core.Result{Value: err, OK: false}
}
rows, err := workspace.sqliteDatabase.Query(query)
rows, err := workspace.db.Query(query)
if err != nil {
return core.Result{Value: core.E("store.Workspace.Query", "query workspace", err), OK: false}
}
defer rows.Close()
defer func() { _ = rows.Close() }()
rowMaps, err := queryRowsAsMaps(rows)
if err != nil {
@@ -379,18 +367,6 @@ func (workspace *Workspace) aggregateFields() (map[string]any, error) {
return workspace.aggregateFieldsWithoutReadiness()
}
func (workspace *Workspace) captureAggregateSnapshot() map[string]any {
if workspace == nil || workspace.sqliteDatabase == nil {
return nil
}
fields, err := workspace.aggregateFieldsWithoutReadiness()
if err != nil {
return nil
}
return fields
}
func (workspace *Workspace) aggregateFallback() map[string]any {
if workspace == nil || workspace.cachedOrphanAggregate == nil {
return map[string]any{}
@@ -409,13 +385,13 @@ func (workspace *Workspace) shouldUseOrphanAggregate() bool {
}
func (workspace *Workspace) aggregateFieldsWithoutReadiness() (map[string]any, error) {
rows, err := workspace.sqliteDatabase.Query(
rows, err := workspace.db.Query(
"SELECT entry_kind, COUNT(*) FROM " + workspaceEntriesTableName + " GROUP BY entry_kind ORDER BY entry_kind",
)
if err != nil {
return nil, err
}
defer rows.Close()
defer func() { _ = rows.Close() }()
fields := make(map[string]any)
for rows.Next() {
@@ -448,7 +424,7 @@ func (workspace *Workspace) closeAndCleanup(removeFiles bool) error {
if workspace == nil {
return nil
}
if workspace.sqliteDatabase == nil {
if workspace.db == nil {
return nil
}
@@ -460,14 +436,14 @@ func (workspace *Workspace) closeAndCleanup(removeFiles bool) error {
workspace.lifecycleLock.Unlock()
if !alreadyClosed {
if err := workspace.sqliteDatabase.Close(); err != nil {
if err := workspace.db.Close(); err != nil {
return core.E("store.Workspace.closeAndCleanup", "close workspace database", err)
}
}
if !removeFiles || workspace.filesystem == nil {
return nil
}
for _, path := range []string{workspace.databasePath, workspace.databasePath + "-wal", workspace.databasePath + "-shm"} {
for _, path := range workspaceDatabaseFilePaths(workspace.databasePath) {
if result := workspace.filesystem.Delete(path); !result.OK && workspace.filesystem.Exists(path) {
return core.E("store.Workspace.closeAndCleanup", "delete workspace file", result.Value.(error))
}
@@ -540,28 +516,28 @@ func (storeInstance *Store) commitWorkspaceAggregate(workspaceName string, field
}
func openWorkspaceDatabase(databasePath string) (*sql.DB, error) {
sqliteDatabase, err := sql.Open("duckdb", databasePath)
database, err := sql.Open("duckdb", databasePath)
if err != nil {
return nil, core.E("store.openWorkspaceDatabase", "open workspace database", err)
}
sqliteDatabase.SetMaxOpenConns(1)
if err := sqliteDatabase.Ping(); err != nil {
sqliteDatabase.Close()
database.SetMaxOpenConns(1)
if err := database.Ping(); err != nil {
_ = database.Close()
return nil, core.E("store.openWorkspaceDatabase", "ping workspace database", err)
}
if _, err := sqliteDatabase.Exec("CREATE SEQUENCE IF NOT EXISTS workspace_entries_entry_id_seq START 1"); err != nil {
sqliteDatabase.Close()
if _, err := database.Exec("CREATE SEQUENCE IF NOT EXISTS workspace_entries_entry_id_seq START 1"); err != nil {
_ = database.Close()
return nil, core.E("store.openWorkspaceDatabase", "create workspace entry sequence", err)
}
if _, err := sqliteDatabase.Exec(createWorkspaceEntriesTableSQL); err != nil {
sqliteDatabase.Close()
if _, err := database.Exec(createWorkspaceEntriesTableSQL); err != nil {
_ = database.Close()
return nil, core.E("store.openWorkspaceDatabase", "create workspace entries table", err)
}
if _, err := sqliteDatabase.Exec(createWorkspaceEntriesViewSQL); err != nil {
sqliteDatabase.Close()
if _, err := database.Exec(createWorkspaceEntriesViewSQL); err != nil {
_ = database.Close()
return nil, core.E("store.openWorkspaceDatabase", "create workspace entries view", err)
}
return sqliteDatabase, nil
return database, nil
}
func workspaceSummaryGroup(workspaceName string) string {
@@ -591,9 +567,11 @@ func quarantineOrphanWorkspaceFiles(filesystem *core.Fs, stateDirectory, databas
filesystem,
workspaceQuarantineFilePath(stateDirectory, databasePath),
)
quarantineWorkspaceFile(filesystem, databasePath, quarantinePath)
quarantineWorkspaceFile(filesystem, databasePath+"-wal", quarantinePath+"-wal")
quarantineWorkspaceFile(filesystem, databasePath+"-shm", quarantinePath+"-shm")
sourcePaths := workspaceDatabaseFilePaths(databasePath)
quarantinePaths := workspaceDatabaseFilePaths(quarantinePath)
for index, sourcePath := range sourcePaths {
quarantineWorkspaceFile(filesystem, sourcePath, quarantinePaths[index])
}
}
func availableQuarantineWorkspacePath(filesystem *core.Fs, preferredPath string) string {
@@ -602,7 +580,7 @@ func availableQuarantineWorkspacePath(filesystem *core.Fs, preferredPath string) string {
}
stem := core.TrimSuffix(preferredPath, ".duckdb")
for index := 1; ; index++ {
candidatePath := core.Concat(stem, ".", core.Itoa(index), ".duckdb")
candidatePath := core.Concat(stem, ".", core.Sprint(index), ".duckdb")
if !workspaceQuarantinePathExists(filesystem, candidatePath) {
return candidatePath
}
@@ -610,7 +588,19 @@ func availableQuarantineWorkspacePath(filesystem *core.Fs, preferredPath string) string {
}
func workspaceQuarantinePathExists(filesystem *core.Fs, databasePath string) bool {
return filesystem.Exists(databasePath) || filesystem.Exists(databasePath+"-wal") || filesystem.Exists(databasePath+"-shm")
for _, path := range workspaceDatabaseFilePaths(databasePath) {
if filesystem.Exists(path) {
return true
}
}
return false
}
func workspaceDatabaseFilePaths(databasePath string) []string {
if core.HasSuffix(databasePath, ".duckdb") {
return []string{databasePath, databasePath + ".wal"}
}
return []string{databasePath, databasePath + "-wal", databasePath + "-shm"}
}
func quarantineWorkspaceFile(filesystem *core.Fs, sourcePath, quarantinePath string) {


@@ -12,7 +12,7 @@ func TestWorkspace_NewWorkspace_Good_CreatePutAggregateQuery(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("scroll-session")
assertNoError(t, err)
@@ -43,7 +43,7 @@ func TestWorkspace_DatabasePath_Good(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("scroll-session")
assertNoError(t, err)
@@ -57,7 +57,7 @@ func TestWorkspace_Count_Good_Empty(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("count-empty")
assertNoError(t, err)
@@ -73,7 +73,7 @@ func TestWorkspace_Count_Good_AfterPuts(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("count-puts")
assertNoError(t, err)
@@ -93,7 +93,7 @@ func TestWorkspace_Count_Bad_ClosedWorkspace(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("count-closed")
assertNoError(t, err)
@@ -108,7 +108,7 @@ func TestWorkspace_Query_Good_RFCEntriesView(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("scroll-session")
assertNoError(t, err)
@@ -134,7 +134,7 @@ func TestWorkspace_Commit_Good_JournalAndSummary(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("scroll-session")
assertNoError(t, err)
@@ -179,7 +179,7 @@ func TestWorkspace_Commit_Good_ResultCopiesAggregatedMap(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("scroll-session")
assertNoError(t, err)
@@ -202,7 +202,7 @@ func TestWorkspace_Commit_Good_EmitsSummaryEvent(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
events := storeInstance.Watch(workspaceSummaryGroup("scroll-session"))
defer storeInstance.Unwatch(workspaceSummaryGroup("scroll-session"), events)
@@ -238,7 +238,7 @@ func TestWorkspace_Discard_Good_Idempotent(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("discard-session")
assertNoError(t, err)
@@ -254,7 +254,7 @@ func TestWorkspace_Close_Good_PreservesFileForRecovery(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("close-session")
assertNoError(t, err)
@@ -278,25 +278,25 @@ func TestWorkspace_Close_Good_PreservesFileForRecovery(t *testing.T) {
func TestWorkspace_Close_Good_ClosesDatabaseWithoutFilesystem(t *testing.T) {
databasePath := testPath(t, "workspace-no-filesystem.duckdb")
sqliteDatabase, err := openWorkspaceDatabase(databasePath)
database, err := openWorkspaceDatabase(databasePath)
assertNoError(t, err)
workspace := &Workspace{
name: "partial-workspace",
sqliteDatabase: sqliteDatabase,
databasePath: databasePath,
name: "partial-workspace",
db: database,
databasePath: databasePath,
}
assertNoError(t, workspace.Close())
_, execErr := sqliteDatabase.Exec("SELECT 1")
_, execErr := database.Exec("SELECT 1")
assertError(t, execErr)
assertContainsString(t, execErr.Error(), "closed")
assertTrue(t, testFilesystem().Exists(databasePath))
requireCoreOK(t, testFilesystem().Delete(databasePath))
_ = testFilesystem().Delete(databasePath + "-wal")
_ = testFilesystem().Delete(databasePath + "-shm")
for _, path := range workspaceDatabaseFilePaths(databasePath) {
_ = testFilesystem().Delete(path)
}
}
func TestWorkspace_RecoverOrphans_Good(t *testing.T) {
@@ -304,12 +304,12 @@ func TestWorkspace_RecoverOrphans_Good(t *testing.T) {
storeInstance, err := New(":memory:", WithJournal("http://127.0.0.1:8086", "core", "events"))
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
workspace, err := storeInstance.NewWorkspace("orphan-session")
assertNoError(t, err)
assertNoError(t, workspace.Put("like", map[string]any{"user": "@alice"}))
assertNoError(t, workspace.sqliteDatabase.Close())
assertNoError(t, workspace.db.Close())
orphans := storeInstance.RecoverOrphans(stateDirectory)
assertLen(t, orphans, 1)
@@ -339,7 +339,7 @@ func TestWorkspace_New_Good_LeavesOrphanedWorkspacesForRecovery(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
assertTrue(t, testFilesystem().Exists(orphanDatabasePath))
@@ -371,7 +371,7 @@ func TestWorkspace_New_Good_CachesOrphansDuringConstruction(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
requireCoreOK(t, testFilesystem().DeleteAll(stateDirectory))
assertFalse(t, testFilesystem().Exists(orphanDatabasePath))
@@ -404,7 +404,7 @@ func TestWorkspace_NewConfigured_Good_CachesOrphansFromConfiguredStateDirectory(t *testing.T) {
WorkspaceStateDirectory: stateDirectory,
})
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
requireCoreOK(t, testFilesystem().DeleteAll(stateDirectory))
assertFalse(t, testFilesystem().Exists(orphanDatabasePath))
@@ -428,7 +428,7 @@ func TestWorkspace_RecoverOrphans_Good_TrailingSlashUsesCache(t *testing.T) {
storeInstance, err := New(":memory:")
assertNoError(t, err)
defer storeInstance.Close()
defer func() { _ = storeInstance.Close() }()
requireCoreOK(t, testFilesystem().DeleteAll(stateDirectory))
assertFalse(t, testFilesystem().Exists(orphanDatabasePath))
@@ -458,7 +458,7 @@ func TestWorkspace_Close_Good_PreservesOrphansForRecovery(t *testing.T) {
recoveryStore, err := New(":memory:")
assertNoError(t, err)
defer recoveryStore.Close()
defer func() { _ = recoveryStore.Close() }()
orphans := recoveryStore.RecoverOrphans(stateDirectory)
assertLen(t, orphans, 1)