Compare commits
18 commits
| Author | SHA1 | Date |
|---|---|---|
| | `36ca98652b` | |
| | `92ecddaa69` | |
| | `8ffd10c2ac` | |
| | `209166507b` | |
| | `74084f37b9` | |
| | `e22f44c2c7` | |
| | `5c40e4c5a2` | |
| | `3b6972785d` | |
| | `897bef1c30` | |
| | `a83fafbde7` | |
| | `27dd3bbbb4` | |
| | `05f8a0050c` | |
| | `36c184e7dd` | |
| | `0ab8627447` | |
| | `3680aaf871` | |
| | `d9a63f1981` | |
| | `a7772087ae` | |
| | `af4e1d6ae2` | |
33 changed files with 2506 additions and 916 deletions
````diff
@@ -1,3 +1,5 @@
+version: "2"
+
 run:
   timeout: 5m
   go: "1.26"
@@ -8,15 +10,15 @@ linters:
     - errcheck
     - staticcheck
     - unused
-    - gosimple
     - ineffassign
-    - typecheck
     - gocritic
-    - gofmt
   disable:
     - exhaustive
     - wrapcheck
+
+formatters:
+  enable:
+    - gofmt
 
 issues:
   exclude-use-default: false
   max-same-issues: 0
````
````diff
@@ -2,7 +2,7 @@
 
 This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
 
-Claude Code JSONL transcript parser, analytics engine, and HTML/video renderer. Module: `dappco.re/go/core/session`
+Claude Code JSONL transcript parser, analytics engine, and HTML/video renderer. Module: `dappco.re/go/session`
 
 ## Commands
 
@@ -43,8 +43,12 @@ Coverage target: maintain ≥90.9%.
 
 - UK English throughout (colour, licence, initialise)
 - Explicit types on all function signatures and struct fields
 - Exported declarations must have Go doc comments beginning with the identifier name
 - `go test ./...` and `go vet ./...` must pass before commit
 - SPDX header on all source files: `// SPDX-Licence-Identifier: EUPL-1.2`
+- Error handling: all errors must use `coreerr.E(op, msg, err)` from `dappco.re/go/core/log`, never `fmt.Errorf` or `errors.New`
+- Banned imports in non-test Go files: `errors`, `github.com/pkg/errors`, and legacy `forge.lthn.ai/...` paths
 - Conventional commits: `type(scope): description`
 - Co-Author trailer: `Co-Authored-By: Virgil <virgil@lethean.io>`
+
+The conventions test suite enforces banned imports, exported usage comments, and test naming via `go test ./...`.
````
**CODEX.md** — new file, 54 lines

````diff
@@ -0,0 +1,54 @@
+# CODEX.md
+
+This file provides guidance to Codex when working in this repository.
+
+Claude Code JSONL transcript parser, analytics engine, and HTML/video renderer. Module: `dappco.re/go/session`
+
+## Commands
+
+```bash
+go test ./...                            # Run all tests
+go test -v -run TestFunctionName_Context # Run single test
+go test -race ./...                      # Race detector
+go test -bench=. -benchmem ./...         # Benchmarks
+go vet ./...                             # Vet
+golangci-lint run ./...                  # Lint (optional, config in .golangci.yml)
+```
+
+## Architecture
+
+Single-package library (`package session`) with five source files forming a pipeline:
+
+1. **parser.go** — Core JSONL parser. Reads Claude Code session files line-by-line (8 MiB scanner buffer), correlates `tool_use`/`tool_result` pairs via a `pendingTools` map keyed by tool ID, and produces `Session` with `[]Event`. Also handles session listing, fetching, and pruning.
+2. **analytics.go** — Pure computation over `[]Event`. `Analyse()` returns `SessionAnalytics` (per-tool counts, error rates, latency stats, token estimates). No I/O.
+3. **html.go** — `RenderHTML()` generates a self-contained HTML file (inline CSS/JS, dark theme, collapsible panels, client-side search). All user content is `html.EscapeString`-escaped.
+4. **video.go** — `RenderMP4()` generates a VHS `.tape` script and shells out to `vhs`. Requires `vhs` on PATH.
+5. **search.go** — `Search()`/`SearchSeq()` does cross-session case-insensitive substring search over tool event inputs and outputs.
+
+Both slice-returning and `iter.Seq` variants exist for `ListSessions`, `Search`, and `Session.EventsSeq`.
+
+### Adding a new tool type
+
+Touch all layers: add input struct in `parser.go` → case in `extractToolInput` → label in `html.go` `RenderHTML` → tape entry in `video.go` `generateTape` → tests in `parser_test.go`.
+
+## Testing
+
+Tests are white-box (`package session`). Test helpers in `parser_test.go` build synthetic JSONL in-memory — no fixture files. Use `writeJSONL(t, dir, name, lines...)` and the entry builders (`toolUseEntry`, `toolResultEntry`, `userTextEntry`, `assistantTextEntry`).
+
+Naming convention: `TestFile_Function_Good/Bad/Ugly` (group by file, collapse the specific behaviour into the function segment, and suffix with happy path / expected errors / extreme edge cases).
+
+Coverage target: maintain ≥90.9%.
+
+## Coding Standards
+
+- UK English throughout (colour, licence, initialise)
+- Explicit types on all function signatures and struct fields
+- Exported declarations must have Go doc comments beginning with the identifier name and include an `Example:` usage snippet
+- `go test ./...` and `go vet ./...` must pass before commit
+- SPDX header on all source files: `// SPDX-Licence-Identifier: EUPL-1.2`
+- Error handling: all package errors must use `core.E(op, msg, err)` from `dappco.re/go/core`; do not use `core.NewError`, `fmt.Errorf`, or `errors.New`
+- Banned imports in non-test Go files: `errors`, `github.com/pkg/errors`, and legacy `forge.lthn.ai/...` paths
+- Conventional commits: `type(scope): description`
+- Co-Author trailer: `Co-Authored-By: Virgil <virgil@lethean.io>`
+
+The conventions test suite enforces banned imports, exported usage comments, and test naming via `go test ./...`.
````
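The `pendingTools` correlation described for parser.go above can be sketched in miniature. The types and function below are illustrative only (the real parser streams JSONL lines and uses its own `Event` structs); the point is the map keyed by tool ID that pairs each `tool_result` with its earlier `tool_use`:

```go
package main

import "fmt"

// Hypothetical event shapes; the real parser's structs differ.
type toolUse struct{ ID, Name, Input string }

type toolResult struct {
	ToolUseID, Output string
	IsError           bool
}

// correlate pairs each tool_result with its tool_use via a map keyed
// by tool ID, mirroring the pendingTools approach described above.
// The real parser does this while streaming, removing entries as they
// are matched so unmatched uses can be detected.
func correlate(uses []toolUse, results []toolResult) []string {
	pending := make(map[string]toolUse)
	for _, u := range uses {
		pending[u.ID] = u
	}
	var events []string
	for _, r := range results {
		if u, ok := pending[r.ToolUseID]; ok {
			events = append(events, fmt.Sprintf("%s -> %s", u.Name, r.Output))
			delete(pending, r.ToolUseID)
		}
	}
	return events
}

func main() {
	uses := []toolUse{{ID: "t1", Name: "Bash", Input: "echo hi"}}
	results := []toolResult{{ToolUseID: "t1", Output: "hi"}}
	fmt.Println(correlate(uses, results)) // → [Bash -> hi]
}
```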
````diff
@@ -39,7 +39,7 @@ The input label adapts to the tool type:
 [go-session] Installation
 
 ```bash
-go get dappco.re/go/core/session@latest
+go get dappco.re/go/session@latest
 ```
 
 ### 5. go-session [convention] (score: -0.004)
````
**README.md** — 12 changed lines

````diff
@@ -1,4 +1,4 @@
-[](https://pkg.go.dev/dappco.re/go/core/session)
+[](https://pkg.go.dev/dappco.re/go/session)
 [](LICENSE.md)
 [](go.mod)
 
@@ -6,16 +6,20 @@
 
 Claude Code JSONL transcript parser, analytics engine, and HTML timeline renderer. Parses Claude Code session files into structured event arrays (tool calls with round-trip durations, user and assistant messages), computes per-tool analytics (call counts, error rates, average and peak latency, estimated token usage), renders self-contained HTML timelines with collapsible panels and client-side search, and generates VHS tape scripts for MP4 video output. No external runtime dependencies — stdlib only.
 
-**Module**: `dappco.re/go/core/session`
+**Module**: `dappco.re/go/session`
 **Licence**: EUPL-1.2
 **Language**: Go 1.26
 
 ## Quick Start
 
 ```go
-import "dappco.re/go/core/session"
+import (
+	"fmt"
+
+	"dappco.re/go/session"
+)
 
-sess, stats, err := session.ParseTranscript("/path/to/session.jsonl")
+sess, _, err := session.ParseTranscript("/path/to/session.jsonl")
 analytics := session.Analyse(sess)
 fmt.Println(session.FormatAnalytics(analytics))
 ```
````
**TODO.md** — 3 changed lines

````diff
@@ -3,7 +3,7 @@
 ## Task
 Update go.mod require lines from forge.lthn.ai to dappco.re paths. Update versions: core v0.5.0, log v0.1.0, io v0.2.0. Update all .go import paths. Run go mod tidy and go build ./...
 
-> **Status:** Complete. All module paths migrated to `dappco.re/go/core/...`.
+> **Status:** Complete. All module paths migrated to `dappco.re/go/...`.
 
 ## Checklist
 - [x] Read and understand the codebase
@@ -13,4 +13,3 @@ Update go.mod require lines from forge.lthn.ai to dappco.re paths. Update versio
 - [ ] Commit with conventional commit message
 
 ## Context
-
````
**analytics.go** — 43 changed lines

````diff
@@ -2,14 +2,17 @@
 package session
 
 import (
-	"fmt"
-	"maps"
-	"slices"
-	"strings"
-	"time"
+	"maps"   // Note: intrinsic — maps.Keys exposes tool names for deterministic analytics output; no core equivalent
+	"slices" // Note: intrinsic — slices.Sorted orders analytics rows deterministically; no core equivalent
+	"time"   // Note: intrinsic — time.Duration arithmetic for session, active-time, and latency metrics; no core equivalent
+
+	core "dappco.re/go/core"
 )
 
 // SessionAnalytics holds computed metrics for a parsed session.
+//
+// Example:
+//	analytics := session.Analyse(sess)
 type SessionAnalytics struct {
 	Duration   time.Duration
 	ActiveTime time.Duration
@@ -24,6 +27,9 @@ type SessionAnalytics struct {
 }
 
 // Analyse iterates session events and computes analytics. Pure function, no I/O.
+//
+// Example:
+//	analytics := session.Analyse(sess)
 func Analyse(sess *Session) *SessionAnalytics {
 	a := &SessionAnalytics{
 		ToolCounts: make(map[string]int),
@@ -97,32 +103,35 @@ func Analyse(sess *Session) *SessionAnalytics {
 }
 
 // FormatAnalytics returns a tabular text summary suitable for CLI display.
+//
+// Example:
+//	summary := session.FormatAnalytics(analytics)
 func FormatAnalytics(a *SessionAnalytics) string {
-	var b strings.Builder
+	b := core.NewBuilder()
 
 	b.WriteString("Session Analytics\n")
-	b.WriteString(strings.Repeat("=", 50) + "\n\n")
+	b.WriteString(repeatString("=", 50) + "\n\n")
 
-	b.WriteString(fmt.Sprintf(" Duration: %s\n", formatDuration(a.Duration)))
-	b.WriteString(fmt.Sprintf(" Active Time: %s\n", formatDuration(a.ActiveTime)))
-	b.WriteString(fmt.Sprintf(" Events: %d\n", a.EventCount))
-	b.WriteString(fmt.Sprintf(" Success Rate: %.1f%%\n", a.SuccessRate*100))
-	b.WriteString(fmt.Sprintf(" Est. Input Tk: %d\n", a.EstimatedInputTokens))
-	b.WriteString(fmt.Sprintf(" Est. Output Tk: %d\n", a.EstimatedOutputTokens))
+	b.WriteString(core.Sprintf(" Duration: %s\n", formatDuration(a.Duration)))
+	b.WriteString(core.Sprintf(" Active Time: %s\n", formatDuration(a.ActiveTime)))
+	b.WriteString(core.Sprintf(" Events: %d\n", a.EventCount))
+	b.WriteString(core.Sprintf(" Success Rate: %.1f%%\n", a.SuccessRate*100))
+	b.WriteString(core.Sprintf(" Est. Input Tk: %d\n", a.EstimatedInputTokens))
+	b.WriteString(core.Sprintf(" Est. Output Tk: %d\n", a.EstimatedOutputTokens))
 
 	if len(a.ToolCounts) > 0 {
 		b.WriteString("\n Tool Breakdown\n")
-		b.WriteString(" " + strings.Repeat("-", 48) + "\n")
-		b.WriteString(fmt.Sprintf(" %-14s %6s %6s %10s %10s\n",
+		b.WriteString(" " + repeatString("-", 48) + "\n")
+		b.WriteString(core.Sprintf(" %-14s %6s %6s %10s %10s\n",
 			"Tool", "Calls", "Errors", "Avg", "Max"))
-		b.WriteString(" " + strings.Repeat("-", 48) + "\n")
+		b.WriteString(" " + repeatString("-", 48) + "\n")
 
 		// Sort tools for deterministic output
 		for _, tool := range slices.Sorted(maps.Keys(a.ToolCounts)) {
 			errors := a.ErrorCounts[tool]
 			avg := a.AvgLatency[tool]
 			max := a.MaxLatency[tool]
-			b.WriteString(fmt.Sprintf(" %-14s %6d %6d %10s %10s\n",
+			b.WriteString(core.Sprintf(" %-14s %6d %6d %10s %10s\n",
 				tool, a.ToolCounts[tool], errors,
 				formatDuration(avg), formatDuration(max)))
 		}
````
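The `slices.Sorted(maps.Keys(...))` idiom retained in `FormatAnalytics` above is what makes the tool breakdown deterministic: Go map iteration order is randomised, so the keys are materialised and sorted before iterating. A minimal standalone illustration (Go 1.23+, where `maps.Keys` returns an iterator):

```go
package main

import (
	"fmt"
	"maps"
	"slices"
)

// sortedKeys returns the keys of m in ascending order, giving a stable
// iteration order across runs despite Go's randomised map ordering.
func sortedKeys(m map[string]int) []string {
	return slices.Sorted(maps.Keys(m))
}

func main() {
	counts := map[string]int{"Read": 2, "Bash": 3, "Edit": 1}
	for _, tool := range sortedKeys(counts) {
		fmt.Printf("%-6s %d\n", tool, counts[tool])
	}
	// → Bash   3
	//   Edit   1
	//   Read   2
}
```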
````diff
@@ -2,15 +2,12 @@
 package session
 
 import (
-	"strings"
 	"testing"
 	"time"
-
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
 )
 
-func TestAnalyse_EmptySession_Good(t *testing.T) {
+// TestAnalytics_AnalyseEmptySession_Good verifies the behaviour covered by this test case.
+func TestAnalytics_AnalyseEmptySession_Good(t *testing.T) {
 	sess := &Session{
 		ID:        "empty",
 		StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
@@ -19,25 +16,27 @@ func TestAnalyse_EmptySession_Good(t *testing.T) {
 	}
 
 	a := Analyse(sess)
-	require.NotNil(t, a)
+	requireNotNil(t, a)
 
-	assert.Equal(t, time.Duration(0), a.Duration)
-	assert.Equal(t, time.Duration(0), a.ActiveTime)
-	assert.Equal(t, 0, a.EventCount)
-	assert.Equal(t, 0.0, a.SuccessRate)
-	assert.Empty(t, a.ToolCounts)
-	assert.Empty(t, a.ErrorCounts)
-	assert.Equal(t, 0, a.EstimatedInputTokens)
-	assert.Equal(t, 0, a.EstimatedOutputTokens)
+	assertEqual(t, time.Duration(0), a.Duration)
+	assertEqual(t, time.Duration(0), a.ActiveTime)
+	assertEqual(t, 0, a.EventCount)
+	assertEqual(t, 0.0, a.SuccessRate)
+	assertEmpty(t, a.ToolCounts)
+	assertEmpty(t, a.ErrorCounts)
+	assertEqual(t, 0, a.EstimatedInputTokens)
+	assertEqual(t, 0, a.EstimatedOutputTokens)
 }
 
-func TestAnalyse_NilSession_Good(t *testing.T) {
+// TestAnalytics_AnalyseNilSession_Good verifies the behaviour covered by this test case.
+func TestAnalytics_AnalyseNilSession_Good(t *testing.T) {
 	a := Analyse(nil)
-	require.NotNil(t, a)
-	assert.Equal(t, 0, a.EventCount)
+	requireNotNil(t, a)
+	assertEqual(t, 0, a.EventCount)
 }
 
-func TestAnalyse_SingleToolCall_Good(t *testing.T) {
+// TestAnalytics_AnalyseSingleToolCall_Good verifies the behaviour covered by this test case.
+func TestAnalytics_AnalyseSingleToolCall_Good(t *testing.T) {
 	sess := &Session{
 		ID:        "single",
 		StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
@@ -57,17 +56,18 @@ func TestAnalyse_SingleToolCall_Good(t *testing.T) {
 
 	a := Analyse(sess)
 
-	assert.Equal(t, 5*time.Second, a.Duration)
-	assert.Equal(t, 2*time.Second, a.ActiveTime)
-	assert.Equal(t, 1, a.EventCount)
-	assert.Equal(t, 1.0, a.SuccessRate)
-	assert.Equal(t, 1, a.ToolCounts["Bash"])
-	assert.Equal(t, 0, a.ErrorCounts["Bash"])
-	assert.Equal(t, 2*time.Second, a.AvgLatency["Bash"])
-	assert.Equal(t, 2*time.Second, a.MaxLatency["Bash"])
+	assertEqual(t, 5*time.Second, a.Duration)
+	assertEqual(t, 2*time.Second, a.ActiveTime)
+	assertEqual(t, 1, a.EventCount)
+	assertEqual(t, 1.0, a.SuccessRate)
+	assertEqual(t, 1, a.ToolCounts["Bash"])
+	assertEqual(t, 0, a.ErrorCounts["Bash"])
+	assertEqual(t, 2*time.Second, a.AvgLatency["Bash"])
+	assertEqual(t, 2*time.Second, a.MaxLatency["Bash"])
 }
 
-func TestAnalyse_MixedToolsWithErrors_Good(t *testing.T) {
+// TestAnalytics_AnalyseMixedToolsWithErrors_Good verifies the behaviour covered by this test case.
+func TestAnalytics_AnalyseMixedToolsWithErrors_Good(t *testing.T) {
 	sess := &Session{
 		ID:        "mixed",
 		StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
@@ -128,27 +128,28 @@ func TestAnalyse_MixedToolsWithErrors_Good(t *testing.T) {
 
 	a := Analyse(sess)
 
-	assert.Equal(t, 5*time.Minute, a.Duration)
-	assert.Equal(t, 7, a.EventCount)
+	assertEqual(t, 5*time.Minute, a.Duration)
+	assertEqual(t, 7, a.EventCount)
 
 	// Tool counts
-	assert.Equal(t, 2, a.ToolCounts["Bash"])
-	assert.Equal(t, 2, a.ToolCounts["Read"])
-	assert.Equal(t, 1, a.ToolCounts["Edit"])
+	assertEqual(t, 2, a.ToolCounts["Bash"])
+	assertEqual(t, 2, a.ToolCounts["Read"])
+	assertEqual(t, 1, a.ToolCounts["Edit"])
 
 	// Error counts
-	assert.Equal(t, 1, a.ErrorCounts["Bash"])
-	assert.Equal(t, 1, a.ErrorCounts["Read"])
-	assert.Equal(t, 0, a.ErrorCounts["Edit"])
+	assertEqual(t, 1, a.ErrorCounts["Bash"])
+	assertEqual(t, 1, a.ErrorCounts["Read"])
+	assertEqual(t, 0, a.ErrorCounts["Edit"])
 
 	// Success rate: 3 successes out of 5 tool calls = 0.6
-	assert.InDelta(t, 0.6, a.SuccessRate, 0.001)
+	assertInDelta(t, 0.6, a.SuccessRate, 0.001)
 
 	// Active time: 1s + 500ms + 200ms + 100ms + 300ms = 2.1s
-	assert.Equal(t, 2100*time.Millisecond, a.ActiveTime)
+	assertEqual(t, 2100*time.Millisecond, a.ActiveTime)
 }
 
-func TestAnalyse_LatencyCalculations_Good(t *testing.T) {
+// TestAnalytics_AnalyseLatencyCalculations_Good verifies the behaviour covered by this test case.
+func TestAnalytics_AnalyseLatencyCalculations_Good(t *testing.T) {
 	sess := &Session{
 		ID:        "latency",
 		StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
@@ -184,15 +185,16 @@ func TestAnalyse_LatencyCalculations_Good(t *testing.T) {
 	a := Analyse(sess)
 
 	// Bash: avg = (1+3+5)/3 = 3s, max = 5s
-	assert.Equal(t, 3*time.Second, a.AvgLatency["Bash"])
-	assert.Equal(t, 5*time.Second, a.MaxLatency["Bash"])
+	assertEqual(t, 3*time.Second, a.AvgLatency["Bash"])
+	assertEqual(t, 5*time.Second, a.MaxLatency["Bash"])
 
 	// Read: avg = 200ms, max = 200ms
-	assert.Equal(t, 200*time.Millisecond, a.AvgLatency["Read"])
-	assert.Equal(t, 200*time.Millisecond, a.MaxLatency["Read"])
+	assertEqual(t, 200*time.Millisecond, a.AvgLatency["Read"])
+	assertEqual(t, 200*time.Millisecond, a.MaxLatency["Read"])
 }
 
-func TestAnalyse_TokenEstimation_Good(t *testing.T) {
+// TestAnalytics_AnalyseTokenEstimation_Good verifies the behaviour covered by this test case.
+func TestAnalytics_AnalyseTokenEstimation_Good(t *testing.T) {
 	// 4 chars = ~1 token
 	sess := &Session{
 		ID: "tokens",
@@ -201,19 +203,19 @@ func TestAnalyse_TokenEstimation_Good(t *testing.T) {
 		Events: []Event{
 			{
 				Type:  "user",
-				Input: strings.Repeat("a", 400), // 100 tokens
+				Input: repeatString("a", 400), // 100 tokens
 			},
 			{
 				Type:     "tool_use",
 				Tool:     "Bash",
-				Input:    strings.Repeat("b", 80),  // 20 tokens
-				Output:   strings.Repeat("c", 200), // 50 tokens
+				Input:    repeatString("b", 80),  // 20 tokens
+				Output:   repeatString("c", 200), // 50 tokens
 				Duration: time.Second,
 				Success:  true,
 			},
 			{
 				Type:  "assistant",
-				Input: strings.Repeat("d", 120), // 30 tokens
+				Input: repeatString("d", 120), // 30 tokens
 			},
 		},
 	}
@@ -221,12 +223,13 @@ func TestAnalyse_TokenEstimation_Good(t *testing.T) {
 	a := Analyse(sess)
 
 	// Input tokens: 400/4 + 80/4 + 120/4 = 100 + 20 + 30 = 150
-	assert.Equal(t, 150, a.EstimatedInputTokens)
+	assertEqual(t, 150, a.EstimatedInputTokens)
 	// Output tokens: 0 + 200/4 + 0 = 50
-	assert.Equal(t, 50, a.EstimatedOutputTokens)
+	assertEqual(t, 50, a.EstimatedOutputTokens)
 }
 
-func TestFormatAnalytics_Output_Good(t *testing.T) {
+// TestAnalytics_FormatAnalyticsOutput_Good verifies the behaviour covered by this test case.
+func TestAnalytics_FormatAnalyticsOutput_Good(t *testing.T) {
 	a := &SessionAnalytics{
 		Duration:   5 * time.Minute,
 		ActiveTime: 2 * time.Minute,
@@ -256,20 +259,21 @@ func TestFormatAnalytics_Output_Good(t *testing.T) {
 
 	output := FormatAnalytics(a)
 
-	assert.Contains(t, output, "Session Analytics")
-	assert.Contains(t, output, "5m0s")
-	assert.Contains(t, output, "2m0s")
-	assert.Contains(t, output, "42")
-	assert.Contains(t, output, "85.0%")
-	assert.Contains(t, output, "1500")
-	assert.Contains(t, output, "3000")
-	assert.Contains(t, output, "Bash")
-	assert.Contains(t, output, "Read")
-	assert.Contains(t, output, "Edit")
-	assert.Contains(t, output, "Tool Breakdown")
+	assertContains(t, output, "Session Analytics")
+	assertContains(t, output, "5m0s")
+	assertContains(t, output, "2m0s")
+	assertContains(t, output, "42")
+	assertContains(t, output, "85.0%")
+	assertContains(t, output, "1500")
+	assertContains(t, output, "3000")
+	assertContains(t, output, "Bash")
+	assertContains(t, output, "Read")
+	assertContains(t, output, "Edit")
+	assertContains(t, output, "Tool Breakdown")
 }
 
-func TestFormatAnalytics_EmptyAnalytics_Good(t *testing.T) {
+// TestAnalytics_FormatAnalyticsEmptyAnalytics_Good verifies the behaviour covered by this test case.
+func TestAnalytics_FormatAnalyticsEmptyAnalytics_Good(t *testing.T) {
 	a := &SessionAnalytics{
 		ToolCounts:  make(map[string]int),
 		ErrorCounts: make(map[string]int),
@@ -279,8 +283,8 @@ func TestFormatAnalytics_EmptyAnalytics_Good(t *testing.T) {
 
 	output := FormatAnalytics(a)
 
-	assert.Contains(t, output, "Session Analytics")
-	assert.Contains(t, output, "0.0%")
+	assertContains(t, output, "Session Analytics")
+	assertContains(t, output, "0.0%")
 	// No tool breakdown section when no tools
-	assert.NotContains(t, output, "Tool Breakdown")
+	assertNotContains(t, output, "Tool Breakdown")
 }
````
````diff
@@ -2,11 +2,11 @@
 package session
 
 import (
-	"fmt"
-	"os"
-	"path/filepath"
-	"strings"
+	"io/fs"
+	"path"
 	"testing"
+
+	core "dappco.re/go/core"
 )
 
 // BenchmarkParseTranscript benchmarks parsing a ~1MB+ JSONL file.
@@ -92,44 +92,44 @@ func BenchmarkSearch(b *testing.B) {
 func generateBenchJSONL(b testing.TB, dir string, numTools int) string {
 	b.Helper()
 
-	var sb strings.Builder
+	sb := core.NewBuilder()
 	baseTS := "2026-02-20T10:00:00Z"
 
 	// Opening user message
-	sb.WriteString(fmt.Sprintf(`{"type":"user","timestamp":"%s","sessionId":"bench","message":{"role":"user","content":[{"type":"text","text":"Start benchmark session"}]}}`, baseTS))
+	sb.WriteString(core.Sprintf(`{"type":"user","timestamp":"%s","sessionId":"bench","message":{"role":"user","content":[{"type":"text","text":"Start benchmark session"}]}}`, baseTS))
 	sb.WriteByte('\n')
 
 	for i := range numTools {
-		toolID := fmt.Sprintf("tool-%d", i)
+		toolID := core.Sprintf("tool-%d", i)
 		offset := i * 2
 
 		// Alternate between different tool types for realistic distribution
 		var toolUse, toolResult string
 		switch i % 5 {
 		case 0: // Bash
-			toolUse = fmt.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"assistant","content":[{"type":"tool_use","name":"Bash","id":"%s","input":{"command":"echo iteration %d","description":"echo test"}}]}}`,
+			toolUse = core.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"assistant","content":[{"type":"tool_use","name":"Bash","id":"%s","input":{"command":"echo iteration %d","description":"echo test"}}]}}`,
 				offset/60, offset%60, toolID, i)
-			toolResult = fmt.Sprintf(`{"type":"user","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"%s","content":"iteration %d output line one\niteration %d output line two","is_error":false}]}}`,
+			toolResult = core.Sprintf(`{"type":"user","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"%s","content":"iteration %d output line one\niteration %d output line two","is_error":false}]}}`,
 				(offset+1)/60, (offset+1)%60, toolID, i, i)
 		case 1: // Read
-			toolUse = fmt.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"assistant","content":[{"type":"tool_use","name":"Read","id":"%s","input":{"file_path":"/tmp/bench/file-%d.go"}}]}}`,
+			toolUse = core.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"assistant","content":[{"type":"tool_use","name":"Read","id":"%s","input":{"file_path":"/tmp/bench/file-%d.go"}}]}}`,
 				offset/60, offset%60, toolID, i)
-			toolResult = fmt.Sprintf(`{"type":"user","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"%s","content":"package main\n\nfunc main() {\n\tfmt.Println(%d)\n}","is_error":false}]}}`,
+			toolResult = core.Sprintf(`{"type":"user","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"%s","content":"package main\n\nfunc main() {\n\tfmt.Println(%d)\n}","is_error":false}]}}`,
 				(offset+1)/60, (offset+1)%60, toolID, i)
 		case 2: // Edit
-			toolUse = fmt.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"assistant","content":[{"type":"tool_use","name":"Edit","id":"%s","input":{"file_path":"/tmp/bench/file-%d.go","old_string":"old","new_string":"new"}}]}}`,
+			toolUse = core.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"assistant","content":[{"type":"tool_use","name":"Edit","id":"%s","input":{"file_path":"/tmp/bench/file-%d.go","old_string":"old","new_string":"new"}}]}}`,
 				offset/60, offset%60, toolID, i)
-			toolResult = fmt.Sprintf(`{"type":"user","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"%s","content":"ok","is_error":false}]}}`,
+			toolResult = core.Sprintf(`{"type":"user","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"%s","content":"ok","is_error":false}]}}`,
 				(offset+1)/60, (offset+1)%60, toolID)
 		case 3: // Grep
-			toolUse = fmt.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"assistant","content":[{"type":"tool_use","name":"Grep","id":"%s","input":{"pattern":"TODO","path":"/tmp/bench"}}]}}`,
+			toolUse = core.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"assistant","content":[{"type":"tool_use","name":"Grep","id":"%s","input":{"pattern":"TODO","path":"/tmp/bench"}}]}}`,
 				offset/60, offset%60, toolID)
-			toolResult = fmt.Sprintf(`{"type":"user","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"%s","content":"/tmp/bench/file.go:10: // TODO fix this","is_error":false}]}}`,
+			toolResult = core.Sprintf(`{"type":"user","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"%s","content":"/tmp/bench/file.go:10: // TODO fix this","is_error":false}]}}`,
 				(offset+1)/60, (offset+1)%60, toolID)
 		case 4: // Glob
-			toolUse = fmt.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"assistant","content":[{"type":"tool_use","name":"Glob","id":"%s","input":{"pattern":"**/*.go"}}]}}`,
+			toolUse = core.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"assistant","content":[{"type":"tool_use","name":"Glob","id":"%s","input":{"pattern":"**/*.go"}}]}}`,
 				offset/60, offset%60, toolID)
-			toolResult = fmt.Sprintf(`{"type":"user","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"%s","content":"/tmp/a.go\n/tmp/b.go\n/tmp/c.go","is_error":false}]}}`,
+			toolResult = core.Sprintf(`{"type":"user","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"%s","content":"/tmp/a.go\n/tmp/b.go\n/tmp/c.go","is_error":false}]}}`,
 				(offset+1)/60, (offset+1)%60, toolID)
 		}
 
@@ -140,16 +140,24 @@ func generateBenchJSONL(b testing.TB, dir string, numTools int) string {
 	}
 
 	// Closing assistant message
-	sb.WriteString(fmt.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T12:00:00Z","sessionId":"bench","message":{"role":"assistant","content":[{"type":"text","text":"Benchmark session complete."}]}}%s`, "\n"))
+	sb.WriteString(core.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T12:00:00Z","sessionId":"bench","message":{"role":"assistant","content":[{"type":"text","text":"Benchmark session complete."}]}}%s`, "\n"))
 
-	name := fmt.Sprintf("bench-%d.jsonl", numTools)
-	path := filepath.Join(dir, name)
-	if err := os.WriteFile(path, []byte(sb.String()), 0644); err != nil {
-		b.Fatal(err)
+	name := core.Sprintf("bench-%d.jsonl", numTools)
+	filePath := path.Join(dir, name)
+	writeResult := hostFS.Write(filePath, sb.String())
+	if !writeResult.OK {
+		b.Fatal(resultError(writeResult))
 	}
 
-	info, _ := os.Stat(path)
+	statResult := hostFS.Stat(filePath)
+	if !statResult.OK {
+		b.Fatal(resultError(statResult))
+	}
+	info, ok := statResult.Value.(fs.FileInfo)
+	if !ok {
+		b.Fatal("expected fs.FileInfo from Stat")
+	}
 	b.Logf("Generated %s: %d bytes, %d tool pairs", name, info.Size(), numTools)
 
-	return path
+	return filePath
 }
````
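The `offset/60, offset%60` arithmetic in `generateBenchJSONL` spaces each tool_use/tool_result pair two seconds apart and folds the running offset into the minute and second fields of the timestamp. Extracted as a helper (illustrative only, not part of the actual file):

```go
package main

import "fmt"

// benchTimestamp places event i at 2*i seconds past 10:00, encoding
// the offset as minutes and seconds with the same offset/60, offset%60
// split used by the generator above.
func benchTimestamp(i int) string {
	offset := i * 2
	return fmt.Sprintf("2026-02-20T10:%02d:%02dZ", offset/60, offset%60)
}

func main() {
	fmt.Println(benchTimestamp(0))  // → 2026-02-20T10:00:00Z
	fmt.Println(benchTimestamp(31)) // → 2026-02-20T10:01:02Z
}
```

The `%02d` padding keeps the strings valid RFC 3339, so the parser's timestamp handling sees realistic input.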
|||
457
conventions_test.go
Normal file
457
conventions_test.go
Normal file
|
|
@ -0,0 +1,457 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session

import (
	"go/ast"
	"go/parser"
	"go/token"
	"path"
	"regexp"
	"slices"
	"testing"

	core "dappco.re/go/core"
)

var testNamePattern = regexp.MustCompile(`^Test[A-Za-z0-9]+_[A-Za-z0-9]+_(Good|Bad|Ugly)$`)

// TestConventions_BannedImports_Good verifies the behaviour covered by this test case.
func TestConventions_BannedImports_Good(t *testing.T) {
	files := parseGoFiles(t, ".")

	banned := map[string]string{
		core.Concat("encoding", "/json"): "use dappco.re/go/core JSON helpers instead",
		core.Concat("error", "s"):        "use core.E/op-aware errors instead",
		core.Concat("f", "mt"):           "use dappco.re/go/core formatting helpers instead",
		"github.com/pkg/errors":          "use coreerr.E(op, msg, err) for package errors",
		core.Concat("o", "s"):            "use dappco.re/go/core filesystem helpers instead",
		core.Concat("o", "s/exec"):       "use session command helpers or core process abstractions instead",
		core.Concat("path", "/filepath"): "use path or dappco.re/go/core path helpers instead",
		core.Concat("string", "s"):       "use dappco.re/go/core string helpers or local helpers instead",
	}

	for _, file := range files {
		for _, spec := range file.ast.Imports {
			importPath := trimQuotes(spec.Path.Value)
			if core.HasPrefix(importPath, "forge.lthn.ai/") {
				t.Errorf("%s imports %q; use dappco.re/go/... paths instead", file.path, importPath)
				continue
			}
			if reason, ok := banned[importPath]; ok {
				t.Errorf("%s imports %q; %s", file.path, importPath, reason)
			}
		}
	}
}

// TestConventions_ErrorHandling_Good verifies the behaviour covered by this test case.
func TestConventions_ErrorHandling_Good(t *testing.T) {
	files := parseGoFiles(t, ".")

	for _, file := range files {
		if core.HasSuffix(file.path, "_test.go") {
			continue
		}

		ast.Inspect(file.ast, func(node ast.Node) bool {
			call, ok := node.(*ast.CallExpr)
			if !ok {
				return true
			}

			sel, ok := call.Fun.(*ast.SelectorExpr)
			if !ok {
				return true
			}

			pkg, ok := sel.X.(*ast.Ident)
			if !ok {
				return true
			}

			switch {
			case pkg.Name == "core" && sel.Sel.Name == "NewError":
				t.Errorf("%s uses core.NewError; use core.E(op, msg, err)", file.path)
			case pkg.Name == "fmt" && sel.Sel.Name == "Errorf":
				t.Errorf("%s uses fmt.Errorf; use core.E(op, msg, err)", file.path)
			case pkg.Name == "errors" && sel.Sel.Name == "New":
				t.Errorf("%s uses errors.New; use core.E(op, msg, err)", file.path)
			}

			return true
		})
	}
}

// TestConventions_TestNaming_Good verifies the behaviour covered by this test case.
func TestConventions_TestNaming_Good(t *testing.T) {
	files := parseGoFiles(t, ".")

	for _, file := range files {
		if !core.HasSuffix(file.path, "_test.go") {
			continue
		}

		for _, decl := range file.ast.Decls {
			fn, ok := decl.(*ast.FuncDecl)
			if !ok || fn.Recv != nil {
				continue
			}
			if !core.HasPrefix(fn.Name.Name, "Test") || fn.Name.Name == "TestMain" {
				continue
			}
			if !isTestingTFunc(file, fn) {
				continue
			}
			expectedPrefix := core.Concat("Test", testFileToken(file.path), "_")
			if !core.HasPrefix(fn.Name.Name, expectedPrefix) {
				t.Errorf("%s contains %s; expected prefix %s", file.path, fn.Name.Name, expectedPrefix)
				continue
			}
			if !testNamePattern.MatchString(fn.Name.Name) {
				t.Errorf("%s contains %s; expected TestFile_Function_Good/Bad/Ugly", file.path, fn.Name.Name)
			}
		}
	}
}

// TestConventions_UsageComments_Good verifies the behaviour covered by this test case.
func TestConventions_UsageComments_Good(t *testing.T) {
	files := parseGoFiles(t, ".")

	for _, file := range files {
		if core.HasSuffix(file.path, "_test.go") {
			continue
		}

		for _, decl := range file.ast.Decls {
			switch d := decl.(type) {
			case *ast.FuncDecl:
				if d.Recv != nil || !d.Name.IsExported() {
					continue
				}
				text := commentText(d.Doc)
				if !hasDocPrefix(text, d.Name.Name) || !hasUsageExample(text) {
					t.Errorf("%s: exported function %s needs a usage comment starting with %s and containing Example:", file.path, d.Name.Name, d.Name.Name)
				}
			case *ast.GenDecl:
				for i, spec := range d.Specs {
					switch s := spec.(type) {
					case *ast.TypeSpec:
						if !s.Name.IsExported() {
							continue
						}
						text := commentText(typeDocGroup(d, s, i))
						if !hasDocPrefix(text, s.Name.Name) || !hasUsageExample(text) {
							t.Errorf("%s: exported type %s needs a usage comment starting with %s and containing Example:", file.path, s.Name.Name, s.Name.Name)
						}
					case *ast.ValueSpec:
						doc := valueDocGroup(d, s, i)
						for _, name := range s.Names {
							if !name.IsExported() {
								continue
							}
							text := commentText(doc)
							if !hasDocPrefix(text, name.Name) || !hasUsageExample(text) {
								t.Errorf("%s: exported declaration %s needs a usage comment starting with %s and containing Example:", file.path, name.Name, name.Name)
							}
						}
					}
				}
			}
		}
	}
}

type parsedFile struct {
	path                string
	ast                 *ast.File
	testingImportNames  map[string]struct{}
	hasTestingDotImport bool
}

// parseGoFiles supports the session test suite.
func parseGoFiles(t *testing.T, dir string) []parsedFile {
	t.Helper()

	paths := collectGoPaths(dir)
	if len(paths) == 0 {
		t.Fatalf("no Go files found in %s", dir)
	}

	slices.Sort(paths)

	fset := token.NewFileSet()
	files := make([]parsedFile, 0, len(paths))
	for _, filePath := range paths {
		fileAST, err := parser.ParseFile(fset, filePath, nil, parser.ParseComments)
		if err != nil {
			t.Fatalf("parse %s: %v", filePath, err)
		}

		testingImportNames, hasTestingDotImport := testingImports(fileAST)
		files = append(files, parsedFile{
			path:                relativeGoPath(dir, filePath),
			ast:                 fileAST,
			testingImportNames:  testingImportNames,
			hasTestingDotImport: hasTestingDotImport,
		})
	}
	return files
}

// collectGoPaths supports the session test suite.
func collectGoPaths(dir string) []string {
	var paths []string
	for _, entryPath := range core.PathGlob(path.Join(dir, "*")) {
		if hostFS.IsDir(entryPath) {
			paths = append(paths, collectGoPaths(entryPath)...)
			continue
		}
		if core.HasSuffix(entryPath, ".go") {
			paths = append(paths, entryPath)
		}
	}
	return paths
}

// relativeGoPath supports the session test suite.
func relativeGoPath(root, filePath string) string {
	prefix := core.TrimSuffix(root, "/")
	if prefix == "." || prefix == "" {
		return filePath
	}
	prefix += "/"
	if core.HasPrefix(filePath, prefix) {
		return filePath[len(prefix):]
	}
	return path.Base(filePath)
}

// TestConventions_ParseGoFilesMultiplePackages_Good verifies the behaviour covered by this test case.
func TestConventions_ParseGoFilesMultiplePackages_Good(t *testing.T) {
	dir := t.TempDir()

	writeTestFile(t, path.Join(dir, "session.go"), "package session\n")
	writeTestFile(t, path.Join(dir, "session_external_test.go"), "package session_test\n")
	writeTestFile(t, path.Join(dir, "nested", "worker.go"), "package nested\n")
	writeTestFile(t, path.Join(dir, "README.md"), "# ignored\n")

	files := parseGoFiles(t, dir)
	if len(files) != 3 {
		t.Fatalf("expected 3 Go files, got %d", len(files))
	}

	names := []string{files[0].path, files[1].path}
	names = append(names, files[2].path)
	slices.Sort(names)
	if names[0] != "nested/worker.go" || names[1] != "session.go" || names[2] != "session_external_test.go" {
		t.Fatalf("unexpected files: %v", names)
	}
}

// TestConventions_IsTestingTFuncAliasedImport_Good verifies the behaviour covered by this test case.
func TestConventions_IsTestingTFuncAliasedImport_Good(t *testing.T) {
	fileAST, fn := parseTestFunc(t, `
package session_test

import t "testing"

// TestConventions_AliasedImportContext_Good verifies the behaviour covered by this test case.
func TestConventions_AliasedImportContext_Good(testcase *t.T) {}
`, "TestConventions_AliasedImportContext_Good")

	names, hasDotImport := testingImports(fileAST)
	file := parsedFile{
		ast:                 fileAST,
		testingImportNames:  names,
		hasTestingDotImport: hasDotImport,
	}

	if !isTestingTFunc(file, fn) {
		t.Fatal("expected aliased *testing.T signature to be recognised")
	}
}

// TestConventions_IsTestingTFuncDotImport_Good verifies the behaviour covered by this test case.
func TestConventions_IsTestingTFuncDotImport_Good(t *testing.T) {
	fileAST, fn := parseTestFunc(t, `
package session_test

import . "testing"

// TestConventions_DotImportContext_Good verifies the behaviour covered by this test case.
func TestConventions_DotImportContext_Good(testcase *T) {}
`, "TestConventions_DotImportContext_Good")

	names, hasDotImport := testingImports(fileAST)
	file := parsedFile{
		ast:                 fileAST,
		testingImportNames:  names,
		hasTestingDotImport: hasDotImport,
	}

	if !isTestingTFunc(file, fn) {
		t.Fatal("expected dot-imported *testing.T signature to be recognised")
	}
}

// TestConventions_TestHelpers_Good verifies the behaviour covered by this test case.
func TestConventions_TestHelpers_Good(t *testing.T) {
	requireEqual(t, "same", "same")
	assertNil(t, nil)
	assertNotNil(t, t)
}

// testingImports supports the session test suite.
func testingImports(file *ast.File) (map[string]struct{}, bool) {
	names := make(map[string]struct{})
	hasDotImport := false

	for _, spec := range file.Imports {
		importPath := trimQuotes(spec.Path.Value)
		if importPath != "testing" {
			continue
		}
		if spec.Name == nil {
			names["testing"] = struct{}{}
			continue
		}
		switch spec.Name.Name {
		case ".":
			hasDotImport = true
		case "_":
			continue
		default:
			names[spec.Name.Name] = struct{}{}
		}
	}

	return names, hasDotImport
}

// isTestingTFunc supports the session test suite.
func isTestingTFunc(file parsedFile, fn *ast.FuncDecl) bool {
	if fn.Type == nil || fn.Type.Params == nil || len(fn.Type.Params.List) != 1 {
		return false
	}

	param := fn.Type.Params.List[0]
	star, ok := param.Type.(*ast.StarExpr)
	if !ok {
		return false
	}

	switch expr := star.X.(type) {
	case *ast.Ident:
		return file.hasTestingDotImport && expr.Name == "T"
	case *ast.SelectorExpr:
		pkg, ok := expr.X.(*ast.Ident)
		if !ok {
			return false
		}
		if expr.Sel.Name != "T" {
			return false
		}
		_, ok = file.testingImportNames[pkg.Name]
		return ok
	default:
		return false
	}
}

// typeDocGroup supports the session test suite.
func typeDocGroup(decl *ast.GenDecl, spec *ast.TypeSpec, index int) *ast.CommentGroup {
	if spec.Doc != nil {
		return spec.Doc
	}
	if len(decl.Specs) == 1 && index == 0 {
		return decl.Doc
	}
	return nil
}

// valueDocGroup supports the session test suite.
func valueDocGroup(decl *ast.GenDecl, spec *ast.ValueSpec, index int) *ast.CommentGroup {
	if spec.Doc != nil {
		return spec.Doc
	}
	if len(decl.Specs) == 1 && index == 0 {
		return decl.Doc
	}
	return nil
}

// commentText supports the session test suite.
func commentText(group *ast.CommentGroup) string {
	if group == nil {
		return ""
	}
	return core.Trim(group.Text())
}

// hasDocPrefix supports the session test suite.
func hasDocPrefix(text, name string) bool {
	if text == "" || !core.HasPrefix(text, name) {
		return false
	}
	if len(text) == len(name) {
		return true
	}

	next := text[len(name)]
	return (next < 'A' || next > 'Z') && (next < 'a' || next > 'z') && (next < '0' || next > '9') && next != '_'
}

// hasUsageExample supports the session test suite.
func hasUsageExample(text string) bool {
	if text == "" {
		return false
	}
	return core.HasPrefix(text, "Example:") || core.Contains(text, "\nExample:")
}

// testFileToken supports the session test suite.
func testFileToken(filePath string) string {
	stem := core.TrimSuffix(path.Base(filePath), "_test.go")
	switch stem {
	case "html":
		return "HTML"
	default:
		if stem == "" {
			return ""
		}
		return core.Concat(core.Upper(stem[:1]), stem[1:])
	}
}

// writeTestFile supports the session test suite.
func writeTestFile(t *testing.T, path, content string) {
	t.Helper()

	writeResult := hostFS.Write(path, content)
	if !writeResult.OK {
		t.Fatalf("write %s: %v", path, resultError(writeResult))
	}
}

// parseTestFunc supports the session test suite.
func parseTestFunc(t *testing.T, src, name string) (*ast.File, *ast.FuncDecl) {
	t.Helper()

	fset := token.NewFileSet()
	fileAST, err := parser.ParseFile(fset, "test.go", src, parser.ParseComments)
	if err != nil {
		t.Fatalf("parse test source: %v", err)
	}

	for _, decl := range fileAST.Decls {
		fn, ok := decl.(*ast.FuncDecl)
		if ok && fn.Name.Name == name {
			return fileAST, fn
		}
	}

	t.Fatalf("function %s not found", name)
	return nil, nil
}
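The naming rule the suite compiles above can be exercised in isolation. A stdlib-only sketch (the sample test names are invented for illustration):

```go
package main

import (
	"fmt"
	"regexp"
)

// testNamePattern mirrors the pattern compiled in conventions_test.go:
// Test<File>_<Function>_ followed by Good, Bad, or Ugly.
var testNamePattern = regexp.MustCompile(`^Test[A-Za-z0-9]+_[A-Za-z0-9]+_(Good|Bad|Ugly)$`)

func main() {
	names := []string{
		"TestParser_ParseTranscript_Good", // conforms
		"TestParser_ParseTranscript",      // missing Good/Bad/Ugly suffix
		"TestHTMLRenderHTML",              // missing underscore-separated segments
	}
	for _, name := range names {
		fmt.Printf("%-35s %v\n", name, testNamePattern.MatchString(name))
	}
}
```

The anchored `^...$` pattern is what forces exactly three segments; without the anchors, a longer name with extra segments would still match.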
core_helpers.go (new file, 99 lines)
@@ -0,0 +1,99 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session

import (
	"bytes" // Note: intrinsic — byte-slice helpers implement local string primitives without strings import; no core equivalent
	"context"

	core "dappco.re/go/core"
)

var hostCore = core.New()
var hostFS = (&core.Fs{}).NewUnrestricted()

// sessionCore returns the shared core instance, initialising it if needed.
func sessionCore(c *core.Core) *core.Core {
	if c == nil {
		c = hostCore
	}
	if c == nil {
		c = core.New()
		hostCore = c
	}
	return c
}

// hostContext returns the context associated with the shared core instance.
func hostContext(c *core.Core) context.Context {
	c = sessionCore(c)
	return c.Context()
}

// hostProcess returns the process runner associated with the shared core instance.
func hostProcess(c *core.Core) *core.Process {
	return sessionCore(c).Process()
}

type rawJSON []byte

// UnmarshalJSON stores raw JSON bytes without decoding their nested structure.
func (m *rawJSON) UnmarshalJSON(data []byte) error {
	if m == nil {
		return core.E("rawJSON.UnmarshalJSON", "nil receiver", nil)
	}
	*m = append((*m)[:0], data...)
	return nil
}

// MarshalJSON returns the stored raw JSON bytes or null for a nil value.
func (m rawJSON) MarshalJSON() ([]byte, error) {
	if m == nil {
		return []byte("null"), nil
	}
	return m, nil
}

// resultError extracts an error from a failed core result.
func resultError(result core.Result) error {
	if result.OK {
		return nil
	}
	if err, ok := result.Value.(error); ok && err != nil {
		return err
	}
	return core.E("resultError", "unexpected core result failure", nil)
}

// repeatString repeats a string without importing strings.
func repeatString(s string, count int) string {
	if s == "" || count <= 0 {
		return ""
	}
	return string(bytes.Repeat([]byte(s), count))
}

// containsAny reports whether s contains any rune from chars.
func containsAny(s, chars string) bool {
	for _, ch := range chars {
		if bytes.ContainsRune([]byte(s), ch) {
			return true
		}
	}
	return false
}

// indexOf returns the byte index of substr within s.
func indexOf(s, substr string) int {
	return bytes.Index([]byte(s), []byte(substr))
}

// trimQuotes removes matching single-token quote delimiters from s.
func trimQuotes(s string) string {
	if len(s) < 2 {
		return s
	}
	if (s[0] == '"' && s[len(s)-1] == '"') || (s[0] == '`' && s[len(s)-1] == '`') {
		return s[1 : len(s)-1]
	}
	return s
}
@@ -5,7 +5,7 @@ description: Internals of go-session -- JSONL format, parsing pipeline, event mo

 # Architecture

-Module: `dappco.re/go/core/session`
+Module: `dappco.re/go/session`

 ## Overview
@@ -239,10 +239,11 @@ Success or failure of a `tool_use` event is indicated by a Unicode check mark (U

 Each event is rendered as a `<div class="event">` containing:

-- `.event-header`: always visible; shows timestamp, tool label, truncated input (120 chars), duration, and status icon.
+- `.event-header`: always visible; shows timestamp, tool label, truncated input (120 chars), duration, status icon, and a permalink anchor.
 - `.event-body`: hidden by default; shown on click via the `toggle(i)` JavaScript function which toggles the `open` class.

 The arrow indicator rotates 90 degrees (CSS `transform: rotate(90deg)`) when the panel is open. Output text in `.event-body` is capped at 400px height with `overflow-y: auto`.
+If the page loads with an `#evt-N` fragment, that event is opened automatically and scrolled into view.

 Input label semantics vary per tool:
@@ -8,7 +8,6 @@ description: How to build, test, lint, and contribute to go-session.
 ## Prerequisites

 - **Go 1.26 or later** -- the module requires Go 1.26 (`go.mod`). The benchmark suite uses `b.Loop()`, introduced in Go 1.25.
-- **`github.com/stretchr/testify`** -- test-only dependency, fetched automatically by `go test`.
 - **`vhs`** (`github.com/charmbracelet/vhs`) -- optional, required only for `RenderMP4`. Install with `go install github.com/charmbracelet/vhs@latest`.
 - **`golangci-lint`** -- optional, for running the full lint suite. Configuration is in `.golangci.yml`.
@@ -138,6 +137,17 @@ Both `go vet ./...` and `golangci-lint run ./...` must be clean before committin
 - Use explicit types on struct fields and function signatures.
 - Avoid `interface{}` in public APIs; use typed parameters where possible.
 - Handle all errors explicitly; do not use blank `_` for error returns in non-test code.
+- Exported declarations must have Go doc comments beginning with the identifier name.
+
+### Imports and Error Handling
+
+- Do not import `errors` or `github.com/pkg/errors` in non-test Go files; use `coreerr.E(op, msg, err)` from `dappco.re/go/core/log`.
+- Do not reintroduce legacy `forge.lthn.ai/...` module paths; use `dappco.re/go/core/...` imports.
+
+### Test Naming
+
+Test functions should follow `TestFunctionName_Context_Good/Bad/Ugly`.
+The conventions test suite checks test naming, banned imports, and exported usage comments during `go test ./...`.

 ### File Headers
@@ -210,7 +220,7 @@ Co-Authored-By: Virgil <virgil@lethean.io>

 ## Module Path and Go Workspace

-The module path is `dappco.re/go/core/session`. If this package is used within a Go workspace, add it with:
+The module path is `dappco.re/go/session`. If this package is used within a Go workspace, add it with:

 ```bash
 go work use ./go-session
@@ -76,5 +76,5 @@ The following have been identified as potential improvements but are not current
 - **Parallel search**: fan out `ParseTranscript` calls across goroutines with a result channel to reduce wall time for large directories.
 - **Persistent index**: a lightweight SQLite index or binary cache per session file to avoid re-parsing on every `Search` or `ListSessions` call.
 - **Additional tool types**: the parser's `extractToolInput` fallback handles any unknown tool by listing its JSON keys. Dedicated handling could be added for `WebFetch`, `WebSearch`, `NotebookEdit`, and other tools that appear in Claude Code sessions.
-- **HTML export options**: configurable truncation limits, optional full-output display, and per-event direct links (anchor IDs already exist as `evt-{i}`).
+- **HTML export options**: configurable truncation limits and optional full-output display remain open; per-event direct links are now available via `#evt-{i}` permalinks.
 - **VHS alternative**: a pure-Go terminal animation renderer to eliminate the `vhs` dependency for MP4 output.
@@ -7,14 +7,14 @@ description: Claude Code JSONL transcript parser, analytics engine, and HTML tim

 `go-session` parses Claude Code JSONL session transcripts into structured event arrays, computes per-tool analytics, renders self-contained HTML timelines with client-side search, and generates VHS tape scripts for MP4 video output. It has no external runtime dependencies -- stdlib only.

-**Module path:** `dappco.re/go/core/session`
+**Module path:** `dappco.re/go/session`
 **Go version:** 1.26
 **Licence:** EUPL-1.2

 ## Quick Start

 ```go
-import "dappco.re/go/core/session"
+import "dappco.re/go/session"

 // Parse a single session file
 sess, stats, err := session.ParseTranscript("/path/to/session.jsonl")
@@ -58,10 +58,9 @@ Test files mirror the source files (`parser_test.go`, `analytics_test.go`, `html
 | Dependency | Scope | Purpose |
 |------------|-------|---------|
 | Go standard library | Runtime | All parsing, HTML rendering, file I/O, JSON decoding |
-| `github.com/stretchr/testify` | Test only | Assertions and requirements in test files |
 | `vhs` (charmbracelet) | Optional external binary | Required only by `RenderMP4` for MP4 video generation |

-The package has **zero runtime dependencies** beyond the Go standard library. `testify` is fetched automatically by `go test` and is never imported outside test files.
+The package has **zero runtime dependencies** beyond the Go standard library and uses local stdlib-backed test helpers instead of third-party assertion packages.

 ## Supported Tool Types
go.mod (13 changed lines)
@@ -1,15 +1,8 @@
-module dappco.re/go/core/session
+module dappco.re/go/session

 go 1.26.0

 require (
-	dappco.re/go/core/log v0.1.0
-	github.com/stretchr/testify v1.11.1
-)
-
-require (
-	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
-	github.com/kr/text v0.2.0 // indirect
-	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
-	gopkg.in/yaml.v3 v3.0.1 // indirect
+	dappco.re/go/core v0.8.0-alpha.1
+	dappco.re/go/core/log v0.1.2
 )
go.sum (18 changed lines)
@@ -1,20 +1,10 @@
dappco.re/go/core/log v0.1.0 h1:pa71Vq2TD2aoEUQWFKwNcaJ3GBY8HbaNGqtE688Unyc=
dappco.re/go/core/log v0.1.0/go.mod h1:Nkqb8gsXhZAO8VLpx7B8i1iAmohhzqA20b9Zr8VUcJs=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
dappco.re/go/core v0.8.0-alpha.1 h1:gj7+Scv+L63Z7wMxbJYHhaRFkHJo2u4MMPuUSv/Dhtk=
dappco.re/go/core v0.8.0-alpha.1/go.mod h1:f2/tBZ3+3IqDrg2F5F598llv0nmb/4gJVCFzM5geE4A=
dappco.re/go/core/log v0.1.2 h1:pQSZxKD8VycdvjNJmatXbPSq2OxcP2xHbF20zgFIiZI=
dappco.re/go/core/log v0.1.2/go.mod h1:Nkqb8gsXhZAO8VLpx7B8i1iAmohhzqA20b9Zr8VUcJs=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
html.go (100 changed lines)
@@ -2,22 +2,21 @@
 package session

 import (
-	"fmt"
-	"html"
-	"os"
-	"strings"
-	"time"
+	"html" // Note: intrinsic — escaping transcript content for generated HTML; stdlib encoder is the output contract
+	"path" // Note: intrinsic — output parent directory derivation for slash-separated paths; no core equivalent
+	"time" // Note: intrinsic — duration formatting thresholds for rendered summaries; no core equivalent

-	coreerr "dappco.re/go/core/log"
+	core "dappco.re/go/core"
 )

 // RenderHTML generates a self-contained HTML timeline from a session.
 //
 // Example:
 //	err := session.RenderHTML(sess, "/tmp/session.html")
 func RenderHTML(sess *Session, outputPath string) error {
-	f, err := os.Create(outputPath)
-	if err != nil {
-		return coreerr.E("RenderHTML", "create html", err)
+	if !hostFS.IsDir(path.Dir(outputPath)) {
+		return core.E("RenderHTML", "parent directory does not exist", nil)
 	}
-	defer f.Close()

 	duration := sess.EndTime.Sub(sess.StartTime)
 	toolCount := 0

@@ -31,7 +30,8 @@ func RenderHTML(sess *Session, outputPath string) error {
 		}
 	}

-	fmt.Fprintf(f, `<!DOCTYPE html>
+	b := core.NewBuilder()
+	b.WriteString(core.Sprintf(`<!DOCTYPE html>
 <html lang="en">
 <head>
 <meta charset="utf-8">

@@ -71,6 +71,8 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
 .event-header .input { flex: 1; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }
 .event-header .dur { color: var(--dim); font-size: 11px; min-width: 50px; text-align: right; }
 .event-header .status { font-size: 14px; min-width: 20px; text-align: center; }
+.event-header .permalink { color: var(--dim); font-size: 12px; min-width: 16px; text-align: center; text-decoration: none; }
+.event-header .permalink:hover { color: var(--accent); }
 .event-header .arrow { color: var(--dim); font-size: 10px; transition: transform 0.15s; min-width: 16px; }
 .event.open .arrow { transform: rotate(90deg); }
 .event-body { display: none; padding: 12px; background: var(--bg); border-top: 1px solid var(--border); }

@@ -93,14 +95,14 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
 	shortID(sess.ID), shortID(sess.ID),
 	sess.StartTime.Format("2006-01-02 15:04:05"),
 	formatDuration(duration),
-	toolCount)
+	toolCount))

 	if errorCount > 0 {
-		fmt.Fprintf(f, `
-<span class="err">%d errors</span>`, errorCount)
+		b.WriteString(core.Sprintf(`
+<span class="err">%d errors</span>`, errorCount))
 	}

-	fmt.Fprintf(f, `
+	b.WriteString(`
 </div>
 </div>
 <div class="search">

@@ -108,7 +110,7 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
 <select id="filter" onchange="filterEvents()">
 <option value="all">All events</option>
 <option value="tool_use">Tool calls only</option>
-<option value="errors">Errors only</option>
+<option value='errors'>Errors only</option>
 <option value="Bash">Bash only</option>
 <option value="user">User messages</option>
 </select>

@@ -119,10 +121,11 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s

 	var i int
 	for evt := range sess.EventsSeq() {
-		toolClass := strings.ToLower(evt.Tool)
-		if evt.Type == "user" {
+		toolClass := core.Lower(evt.Tool)
+		switch evt.Type {
+		case "user":
 			toolClass = "user"
-		} else if evt.Type == "assistant" {
+		case "assistant":
 			toolClass = "assistant"
 		}

@@ -141,9 +144,10 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
 	}

 	toolLabel := evt.Tool
-	if evt.Type == "user" {
+	switch evt.Type {
+	case "user":
 		toolLabel = "User"
-	} else if evt.Type == "assistant" {
+	case "assistant":
 		toolLabel = "Claude"
 	}

@@ -152,7 +156,7 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
 		durStr = formatDuration(evt.Duration)
 	}

-	fmt.Fprintf(f, `<div class="event%s" data-type="%s" data-tool="%s" data-text="%s" id="evt-%d">
+	b.WriteString(core.Sprintf(`<div class="event%s" data-type="%s" data-tool="%s" data-text="%s" id="evt-%d">
 <div class="event-header" onclick="toggle(%d)">
 <span class="arrow">▶</span>
 <span class="time">%s</span>

@@ -160,13 +164,14 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
 <span class="input">%s</span>
 <span class="dur">%s</span>
 <span class="status">%s</span>
+<a class="permalink" href="#evt-%d" aria-label="Direct link to this event" onclick="event.stopPropagation()">#</a>
 </div>
 <div class="event-body">
 `,
 	errorClass,
 	evt.Type,
 	evt.Tool,
-	html.EscapeString(strings.ToLower(evt.Input+" "+evt.Output)),
+	html.EscapeString(core.Lower(core.Concat(evt.Input, " ", evt.Output))),
 	i,
 	i,
 	evt.Timestamp.Format("15:04:05"),

@@ -174,21 +179,23 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
 	html.EscapeString(toolLabel),
 	html.EscapeString(truncate(evt.Input, 120)),
 	durStr,
-	statusIcon)
+	statusIcon,
+	i))

 	if evt.Input != "" {
 		label := "Command"
-		if evt.Type == "user" {
+		switch {
+		case evt.Type == "user":
 			label = "Message"
-		} else if evt.Type == "assistant" {
+		case evt.Type == "assistant":
 			label = "Response"
-		} else if evt.Tool == "Read" || evt.Tool == "Glob" || evt.Tool == "Grep" {
+		case evt.Tool == "Read" || evt.Tool == "Glob" || evt.Tool == "Grep":
 			label = "Target"
-		} else if evt.Tool == "Edit" || evt.Tool == "Write" {
+		case evt.Tool == "Edit" || evt.Tool == "Write":
 			label = "File"
 		}
-		fmt.Fprintf(f, ` <div class="section"><div class="label">%s</div><pre>%s</pre></div>
-`, label, html.EscapeString(evt.Input))
+		b.WriteString(core.Sprintf(` <div class="section"><div class="label">%s</div><pre>%s</pre></div>
+`, label, html.EscapeString(evt.Input)))
 	}

 	if evt.Output != "" {

@@ -196,17 +203,17 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
 		if !evt.Success {
 			outClass = "output err"
 		}
-		fmt.Fprintf(f, ` <div class="section"><div class="label">Output</div><pre class="%s">%s</pre></div>
-`, outClass, html.EscapeString(evt.Output))
+		b.WriteString(core.Sprintf(` <div class="section"><div class="label">Output</div><pre class="%s">%s</pre></div>
+`, outClass, html.EscapeString(evt.Output)))
 	}

-	fmt.Fprint(f, ` </div>
+	b.WriteString(` </div>
 </div>
 `)
 	i++
 }

-	fmt.Fprint(f, `</div>
+	b.WriteString(`</div>
 <script>
 function toggle(i) {
 	document.getElementById('evt-'+i).classList.toggle('open');

@@ -227,20 +234,36 @@ function filterEvents() {
 	el.classList.toggle('hidden', !show);
 	});
 }
+function openHashEvent() {
+	const hash = window.location.hash;
+	if (!hash || !hash.startsWith('#evt-')) return;
+	const el = document.getElementById(hash.slice(1));
+	if (!el) return;
+	el.classList.add('open');
+	el.scrollIntoView({block: 'start'});
+}
 document.addEventListener('keydown', e => {
 	if (e.key === '/' && document.activeElement.tagName !== 'INPUT') {
 		e.preventDefault();
 		document.getElementById('search').focus();
 	}
 });
+window.addEventListener('hashchange', openHashEvent);
+document.addEventListener('DOMContentLoaded', openHashEvent);
 </script>
 </body>
 </html>
 `)

+	writeResult := hostFS.Write(outputPath, b.String())
+	if !writeResult.OK {
+		return core.E("RenderHTML", "write html", resultError(writeResult))
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// shortID returns the abbreviated identifier used by rendered summaries.
func shortID(id string) string {
if len(id) > 8 {
return id[:8]

@@ -248,15 +271,16 @@ func shortID(id string) string {
return id
}
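The eight-character cut-off is what the `test-ses` assertion in the test file relies on. A minimal standalone sketch of the same truncation:

```go
package main

import "fmt"

// shortID truncates identifiers longer than eight characters for
// display; shorter identifiers pass through unchanged.
func shortID(id string) string {
	if len(id) > 8 {
		return id[:8]
	}
	return id
}

func main() {
	fmt.Println(shortID("test-session-12345678")) // test-ses
	fmt.Println(shortID("abc"))                   // abc
}
```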
||||
|
||||
// formatDuration formats a duration for compact timeline and analytics output.
func formatDuration(d time.Duration) string {
if d < time.Second {
return fmt.Sprintf("%dms", d.Milliseconds())
return core.Sprintf("%dms", d.Milliseconds())
}
if d < time.Minute {
return fmt.Sprintf("%.1fs", d.Seconds())
return core.Sprintf("%.1fs", d.Seconds())
}
if d < time.Hour {
return fmt.Sprintf("%dm%ds", int(d.Minutes()), int(d.Seconds())%60)
return core.Sprintf("%dm%ds", int(d.Minutes()), int(d.Seconds())%60)
}
return fmt.Sprintf("%dh%dm", int(d.Hours()), int(d.Minutes())%60)
return core.Sprintf("%dh%dm", int(d.Hours()), int(d.Minutes())%60)
}
|
|
|
|||
128
html_test.go
|
|
@@ -2,16 +2,14 @@
package session

import (
"os"
"strings"
"testing"
"time"

"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
core "dappco.re/go/core"
)
|
||||
|
||||
func TestRenderHTML_BasicSession_Good(t *testing.T) {
|
||||
// TestHTML_RenderHTMLBasicSession_Good verifies the behaviour covered by this test case.
|
||||
func TestHTML_RenderHTMLBasicSession_Good(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
outputPath := dir + "/output.html"
|
||||
|
||||
|
|
@ -55,32 +53,34 @@ func TestRenderHTML_BasicSession_Good(t *testing.T) {
|
|||
}
|
||||
|
||||
err := RenderHTML(sess, outputPath)
|
||||
require.NoError(t, err)
|
||||
requireNoError(t, err)
|
||||
|
||||
content, err := os.ReadFile(outputPath)
|
||||
require.NoError(t, err)
|
||||
|
||||
html := string(content)
|
||||
readResult := hostFS.Read(outputPath)
|
||||
requireTrue(t, readResult.OK)
|
||||
html := readResult.Value.(string)
|
||||
|
||||
// Basic structure checks
|
||||
assert.Contains(t, html, "<!DOCTYPE html>")
|
||||
assert.Contains(t, html, "test-ses") // shortID of "test-session-12345678"
|
||||
assert.Contains(t, html, "2026-02-20 10:00:00")
|
||||
assert.Contains(t, html, "5m30s") // duration
|
||||
assert.Contains(t, html, "2 tool calls")
|
||||
assert.Contains(t, html, "ls -la")
|
||||
assert.Contains(t, html, "total 42")
|
||||
assert.Contains(t, html, "/tmp/file.go")
|
||||
assert.Contains(t, html, "User") // user event label
|
||||
assert.Contains(t, html, "Claude") // assistant event label
|
||||
assert.Contains(t, html, "Bash")
|
||||
assert.Contains(t, html, "Read")
|
||||
assertContains(t, html, "<!DOCTYPE html>")
|
||||
assertContains(t, html, "test-ses") // shortID of "test-session-12345678"
|
||||
assertContains(t, html, "2026-02-20 10:00:00")
|
||||
assertContains(t, html, "5m30s") // duration
|
||||
assertContains(t, html, "2 tool calls")
|
||||
assertContains(t, html, "ls -la")
|
||||
assertContains(t, html, "total 42")
|
||||
assertContains(t, html, "/tmp/file.go")
|
||||
assertContains(t, html, "User") // user event label
|
||||
assertContains(t, html, "Claude") // assistant event label
|
||||
assertContains(t, html, "Bash")
|
||||
assertContains(t, html, "Read")
|
||||
assertContains(t, html, `href="#evt-0"`)
|
||||
assertContains(t, html, "openHashEvent")
|
||||
// Should contain JS for toggle and filter
|
||||
assert.Contains(t, html, "function toggle")
|
||||
assert.Contains(t, html, "function filterEvents")
|
||||
assertContains(t, html, "function toggle")
|
||||
assertContains(t, html, "function filterEvents")
|
||||
}
|
||||
|
||||
func TestRenderHTML_EmptySession_Good(t *testing.T) {
|
||||
// TestHTML_RenderHTMLEmptySession_Good verifies the behaviour covered by this test case.
|
||||
func TestHTML_RenderHTMLEmptySession_Good(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
outputPath := dir + "/empty.html"
|
||||
|
||||
|
|
@ -93,19 +93,19 @@ func TestRenderHTML_EmptySession_Good(t *testing.T) {
|
|||
}
|
||||
|
||||
err := RenderHTML(sess, outputPath)
|
||||
require.NoError(t, err)
|
||||
requireNoError(t, err)
|
||||
|
||||
content, err := os.ReadFile(outputPath)
|
||||
require.NoError(t, err)
|
||||
|
||||
html := string(content)
|
||||
assert.Contains(t, html, "<!DOCTYPE html>")
|
||||
assert.Contains(t, html, "0 tool calls")
|
||||
readResult := hostFS.Read(outputPath)
|
||||
requireTrue(t, readResult.OK)
|
||||
html := readResult.Value.(string)
|
||||
assertContains(t, html, "<!DOCTYPE html>")
|
||||
assertContains(t, html, "0 tool calls")
|
||||
// Should NOT contain error span
|
||||
assert.NotContains(t, html, "errors</span>")
|
||||
assertNotContains(t, html, "errors</span>")
|
||||
}
|
||||
|
||||
func TestRenderHTML_WithErrors_Good(t *testing.T) {
|
||||
// TestHTML_RenderHTMLWithErrors_Good verifies the behaviour covered by this test case.
|
||||
func TestHTML_RenderHTMLWithErrors_Good(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
outputPath := dir + "/errors.html"
|
||||
|
||||
|
|
@ -138,19 +138,19 @@ func TestRenderHTML_WithErrors_Good(t *testing.T) {
|
|||
}
|
||||
|
||||
err := RenderHTML(sess, outputPath)
|
||||
require.NoError(t, err)
|
||||
requireNoError(t, err)
|
||||
|
||||
content, err := os.ReadFile(outputPath)
|
||||
require.NoError(t, err)
|
||||
|
||||
html := string(content)
|
||||
assert.Contains(t, html, "1 errors")
|
||||
assert.Contains(t, html, `class="event error"`)
|
||||
assert.Contains(t, html, "✗") // cross mark for failed
|
||||
assert.Contains(t, html, "✓") // check mark for success
|
||||
readResult := hostFS.Read(outputPath)
|
||||
requireTrue(t, readResult.OK)
|
||||
html := readResult.Value.(string)
|
||||
assertContains(t, html, "1 errors")
|
||||
assertContains(t, html, `class="event error"`)
|
||||
assertContains(t, html, "✗") // cross mark for failed
|
||||
assertContains(t, html, "✓") // check mark for success
|
||||
}
|
||||
|
||||
func TestRenderHTML_SpecialCharacters_Good(t *testing.T) {
|
||||
// TestHTML_RenderHTMLSpecialCharacters_Good verifies the behaviour covered by this test case.
|
||||
func TestHTML_RenderHTMLSpecialCharacters_Good(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
outputPath := dir + "/special.html"
|
||||
|
||||
|
|
@ -178,31 +178,32 @@ func TestRenderHTML_SpecialCharacters_Good(t *testing.T) {
|
|||
}
|
||||
|
||||
err := RenderHTML(sess, outputPath)
|
||||
require.NoError(t, err)
|
||||
requireNoError(t, err)
|
||||
|
||||
content, err := os.ReadFile(outputPath)
|
||||
require.NoError(t, err)
|
||||
|
||||
html := string(content)
|
||||
readResult := hostFS.Read(outputPath)
|
||||
requireTrue(t, readResult.OK)
|
||||
html := readResult.Value.(string)
|
||||
|
||||
// Script tags should be escaped, never raw
|
||||
assert.NotContains(t, html, "<script>alert")
|
||||
assert.Contains(t, html, "<script>")
|
||||
assert.Contains(t, html, "&")
|
||||
assertNotContains(t, html, "<script>alert")
|
||||
assertContains(t, html, "<script>")
|
||||
assertContains(t, html, "&")
|
||||
}
|
||||
|
||||
func TestRenderHTML_InvalidPath_Ugly(t *testing.T) {
|
||||
// TestHTML_RenderHTMLInvalidPath_Ugly verifies the behaviour covered by this test case.
|
||||
func TestHTML_RenderHTMLInvalidPath_Ugly(t *testing.T) {
|
||||
sess := &Session{
|
||||
ID: "test",
|
||||
Events: nil,
|
||||
}
|
||||
|
||||
err := RenderHTML(sess, "/nonexistent/dir/output.html")
|
||||
require.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "create html")
|
||||
requireError(t, err)
|
||||
assertContains(t, err.Error(), "parent directory does not exist")
|
||||
}
|
||||
|
||||
func TestRenderHTML_LabelsByToolType_Good(t *testing.T) {
|
||||
// TestHTML_RenderHTMLLabelsByToolType_Good verifies the behaviour covered by this test case.
|
||||
func TestHTML_RenderHTMLLabelsByToolType_Good(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
outputPath := dir + "/labels.html"
|
||||
|
||||
|
|
@ -222,17 +223,16 @@ func TestRenderHTML_LabelsByToolType_Good(t *testing.T) {
|
|||
}
|
||||
|
||||
err := RenderHTML(sess, outputPath)
|
||||
require.NoError(t, err)
|
||||
requireNoError(t, err)
|
||||
|
||||
content, err := os.ReadFile(outputPath)
|
||||
require.NoError(t, err)
|
||||
|
||||
html := string(content)
|
||||
readResult := hostFS.Read(outputPath)
|
||||
requireTrue(t, readResult.OK)
|
||||
html := readResult.Value.(string)
|
||||
|
||||
// Bash gets "Command" label
|
||||
assert.True(t, strings.Contains(html, "Command"), "Bash events should use 'Command' label")
|
||||
assertTrue(t, core.Contains(html, "Command"), "Bash events should use 'Command' label")
|
||||
// Read, Glob, Grep get "Target" label
|
||||
assert.True(t, strings.Contains(html, "Target"), "Read/Glob/Grep events should use 'Target' label")
|
||||
assertTrue(t, core.Contains(html, "Target"), "Read/Glob/Grep events should use 'Target' label")
|
||||
// Edit, Write get "File" label
|
||||
assert.True(t, strings.Contains(html, "File"), "Edit/Write events should use 'File' label")
|
||||
assertTrue(t, core.Contains(html, "File"), "Edit/Write events should use 'File' label")
|
||||
}
|
||||
|
|
|
|||
|
|
@@ -1,13 +1,13 @@
# go-session

`dappco.re/go/core/session` -- Claude Code session parser and visualiser.
`dappco.re/go/session` -- Claude Code session parser and visualiser.

Reads JSONL transcript files produced by Claude Code, extracts structured events, and renders them as interactive HTML timelines or MP4 videos. Zero external dependencies (stdlib only).

## Installation

```bash
go get dappco.re/go/core/session@latest
go get dappco.re/go/session@latest
```

## Core Types

@@ -45,15 +45,16 @@ import (
"fmt"
"log"

"dappco.re/go/core/session"
"dappco.re/go/session"
)

func main() {
// Parse a single transcript
sess, err := session.ParseTranscript("~/.claude/projects/abc123.jsonl")
sess, stats, err := session.ParseTranscript("~/.claude/projects/abc123.jsonl")
if err != nil {
log.Fatal(err)
}
fmt.Printf("Skipped lines: %d\n", stats.SkippedLines)
fmt.Printf("Session %s: %d events over %s\n",
sess.ID, len(sess.Events), sess.EndTime.Sub(sess.StartTime))
|
||||
|
||||
|
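The transcript files the README describes are plain JSONL. Independent of the module, a small stdlib sketch shows the entry shape `ParseTranscript` consumes; the field names mirror the `rawEntry` struct in this diff, and the sample line is hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// entry mirrors the rawEntry shape from parser.go: one JSON object
// per line, with the message payload left raw for a second decode.
type entry struct {
	Type      string          `json:"type"`
	Timestamp string          `json:"timestamp"`
	SessionID string          `json:"sessionId"`
	Message   json.RawMessage `json:"message"`
}

// parseLine decodes a single JSONL transcript line.
func parseLine(line string) (entry, error) {
	var e entry
	err := json.Unmarshal([]byte(line), &e)
	return e, err
}

func main() {
	line := `{"type":"user","timestamp":"2026-02-20T10:00:00Z","sessionId":"abc123","message":{"role":"user","content":[]}}`
	e, err := parseLine(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(e.Type, e.SessionID) // user abc123
}
```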
|
|
|||
|
|
@@ -15,6 +15,7 @@ go-session provides two output formats for visualising parsed sessions: a self-c
- Yellow: User messages
- Grey: Assistant responses
- Red border: Failed tool calls
- **Permalinks** on each event card for direct `#evt-N` links

### Usage
|
||||
|
||||
|
|
|
|||
389
parser.go
|
|
@@ -2,18 +2,13 @@
package session

import (
"bufio"
"encoding/json"
"fmt"
"io"
"iter"
"maps"
"os"
"path/filepath"
"slices"
"strings"
"time"
"io" // Note: intrinsic — Reader, ReadCloser, and EOF contracts for transcript streams and hostFS handles; no core equivalent
"io/fs" // Note: intrinsic — fs.FileInfo metadata returned from hostFS.Stat; no core equivalent
"iter" // Note: intrinsic — public lazy sequence API for sessions and events; no core equivalent
"slices" // Note: intrinsic — iterator collection, sorted keys, and session ordering; no core equivalent
"time" // Note: intrinsic — RFC3339 transcript timestamps and session age calculations; no core equivalent

core "dappco.re/go/core"
coreerr "dappco.re/go/core/log"
)
|
||||
|
||||
|
|
@ -21,7 +16,13 @@ import (
|
|||
// Set to 8 MiB to handle very large tool outputs without truncation.
|
||||
const maxScannerBuffer = 8 * 1024 * 1024
|
||||
|
||||
// maxPendingToolCalls bounds unmatched tool_use entries retained while parsing.
|
||||
const maxPendingToolCalls = 4096
|
||||
|
||||
// Event represents a single action in a session timeline.
|
||||
//
|
||||
// Example:
|
||||
// evt := session.Event{Type: "tool_use", Tool: "Bash"}
|
||||
type Event struct {
|
||||
Timestamp time.Time
|
||||
Type string // "tool_use", "user", "assistant", "error"
|
||||
|
|
@ -35,6 +36,9 @@ type Event struct {
|
|||
}
|
||||
|
||||
// Session holds parsed session metadata and events.
|
||||
//
|
||||
// Example:
|
||||
// sess := &session.Session{ID: "abc123", Events: []session.Event{}}
|
||||
type Session struct {
|
||||
ID string
|
||||
Path string
|
||||
|
|
@ -44,33 +48,39 @@ type Session struct {
|
|||
}
|
||||
|
||||
// EventsSeq returns an iterator over the session's events.
|
||||
//
|
||||
// Example:
|
||||
//
|
||||
// for evt := range sess.EventsSeq() {
|
||||
// _ = evt
|
||||
// }
|
||||
func (s *Session) EventsSeq() iter.Seq[Event] {
|
||||
return slices.Values(s.Events)
|
||||
}
|
||||
|
||||
// rawEntry is the top-level structure of a Claude Code JSONL line.
|
||||
type rawEntry struct {
|
||||
Type string `json:"type"`
|
||||
Timestamp string `json:"timestamp"`
|
||||
SessionID string `json:"sessionId"`
|
||||
Message json.RawMessage `json:"message"`
|
||||
UserType string `json:"userType"`
|
||||
Type string `json:"type"`
|
||||
Timestamp string `json:"timestamp"`
|
||||
SessionID string `json:"sessionId"`
|
||||
Message rawJSON `json:"message"`
|
||||
UserType string `json:"userType"`
|
||||
}
|
||||
|
||||
type rawMessage struct {
|
||||
Role string `json:"role"`
|
||||
Content []json.RawMessage `json:"content"`
|
||||
Role string `json:"role"`
|
||||
Content []rawJSON `json:"content"`
|
||||
}
|
||||
|
||||
type contentBlock struct {
|
||||
Type string `json:"type"`
|
||||
Name string `json:"name,omitempty"`
|
||||
ID string `json:"id,omitempty"`
|
||||
Text string `json:"text,omitempty"`
|
||||
Input json.RawMessage `json:"input,omitempty"`
|
||||
ToolUseID string `json:"tool_use_id,omitempty"`
|
||||
Content any `json:"content,omitempty"`
|
||||
IsError *bool `json:"is_error,omitempty"`
|
||||
Type string `json:"type"`
|
||||
Name string `json:"name,omitempty"`
|
||||
ID string `json:"id,omitempty"`
|
||||
Text string `json:"text,omitempty"`
|
||||
Input rawJSON `json:"input,omitempty"`
|
||||
ToolUseID string `json:"tool_use_id,omitempty"`
|
||||
Content any `json:"content,omitempty"`
|
||||
IsError *bool `json:"is_error,omitempty"`
|
||||
}
|
||||
|
||||
type bashInput struct {
|
||||
|
|
@ -113,6 +123,9 @@ type taskInput struct {
|
|||
}
|
||||
|
||||
// ParseStats reports diagnostic information from a parse run.
|
||||
//
|
||||
// Example:
|
||||
// stats := &session.ParseStats{TotalLines: 42}
|
||||
type ParseStats struct {
|
||||
TotalLines int
|
||||
SkippedLines int
|
||||
|
|
@ -121,56 +134,67 @@ type ParseStats struct {
|
|||
}
|
||||
|
||||
// ListSessions returns all sessions found in the Claude projects directory.
|
||||
//
|
||||
// Example:
|
||||
// sessions, err := session.ListSessions("/tmp/projects")
|
||||
func ListSessions(projectsDir string) ([]Session, error) {
|
||||
return slices.Collect(ListSessionsSeq(projectsDir)), nil
|
||||
}
|
||||
|
||||
// ListSessionsSeq returns an iterator over all sessions found in the Claude projects directory.
|
||||
//
|
||||
// Example:
|
||||
//
|
||||
// for sess := range session.ListSessionsSeq("/tmp/projects") {
|
||||
// _ = sess
|
||||
// }
|
||||
func ListSessionsSeq(projectsDir string) iter.Seq[Session] {
|
||||
return func(yield func(Session) bool) {
|
||||
matches, err := filepath.Glob(filepath.Join(projectsDir, "*.jsonl"))
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
const op = "ListSessionsSeq"
|
||||
|
||||
matches := core.PathGlob(transcriptPath(projectsDir, "*.jsonl"))
|
||||
|
||||
var sessions []Session
|
||||
for _, path := range matches {
|
||||
base := filepath.Base(path)
|
||||
id := strings.TrimSuffix(base, ".jsonl")
|
||||
for _, filePath := range matches {
|
||||
base := core.PathBase(filePath)
|
||||
id := core.TrimSuffix(base, ".jsonl")
|
||||
|
||||
info, err := os.Stat(path)
|
||||
f, err := openTranscriptNoFollow(filePath)
|
||||
if err != nil {
|
||||
coreerr.Warn("skip unreadable transcript", "op", op, "path", filePath, "err", err)
|
||||
continue
|
||||
}
|
||||
|
||||
s := Session{
|
||||
ID: id,
|
||||
Path: path,
|
||||
Path: filePath,
|
||||
}
|
||||
|
||||
// Quick scan for first and last timestamps
|
||||
f, err := os.Open(path)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
scanner := bufio.NewScanner(f)
|
||||
scanner.Buffer(make([]byte, 1024*1024), 1024*1024)
|
||||
var firstTS, lastTS string
|
||||
for scanner.Scan() {
|
||||
scanErr := scanTranscriptLines(f, maxScannerBuffer, func(line []byte) bool {
|
||||
var entry rawEntry
|
||||
if json.Unmarshal(scanner.Bytes(), &entry) != nil {
|
||||
continue
|
||||
if !core.JSONUnmarshal(line, &entry).OK {
|
||||
return true
|
||||
}
|
||||
if entry.Timestamp == "" {
|
||||
continue
|
||||
return true
|
||||
}
|
||||
if firstTS == "" {
|
||||
firstTS = entry.Timestamp
|
||||
}
|
||||
lastTS = entry.Timestamp
|
||||
return true
|
||||
})
|
||||
closeErr := f.Close()
|
||||
if scanErr != nil {
|
||||
coreerr.Warn("skip unreadable transcript", "op", op, "path", filePath, "err", scanErr)
|
||||
continue
|
||||
}
|
||||
if closeErr != nil {
|
||||
coreerr.Warn("skip unreadable transcript", "op", op, "path", filePath, "err", closeErr)
|
||||
continue
|
||||
}
|
||||
f.Close()
|
||||
|
||||
if firstTS != "" {
|
||||
if t, err := time.Parse(time.RFC3339Nano, firstTS); err == nil {
|
||||
|
|
@ -183,7 +207,12 @@ func ListSessionsSeq(projectsDir string) iter.Seq[Session] {
|
|||
}
|
||||
}
|
||||
if s.StartTime.IsZero() {
|
||||
s.StartTime = info.ModTime()
|
||||
infoResult := hostFS.Stat(filePath)
|
||||
if infoResult.OK {
|
||||
if info, ok := infoResult.Value.(fs.FileInfo); ok {
|
||||
s.StartTime = info.ModTime()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
sessions = append(sessions, s)
|
||||
|
|
@ -203,22 +232,26 @@ func ListSessionsSeq(projectsDir string) iter.Seq[Session] {
|
|||
|
||||
// PruneSessions deletes session files in the projects directory that were last
|
||||
// modified more than maxAge ago. Returns the number of files deleted.
|
||||
//
|
||||
// Example:
|
||||
// deleted, err := session.PruneSessions("/tmp/projects", 24*time.Hour)
|
||||
func PruneSessions(projectsDir string, maxAge time.Duration) (int, error) {
|
||||
matches, err := filepath.Glob(filepath.Join(projectsDir, "*.jsonl"))
|
||||
if err != nil {
|
||||
return 0, coreerr.E("PruneSessions", "list sessions", err)
|
||||
}
|
||||
matches := core.PathGlob(transcriptPath(projectsDir, "*.jsonl"))
|
||||
|
||||
var deleted int
|
||||
now := time.Now()
|
||||
for _, path := range matches {
|
||||
info, err := os.Stat(path)
|
||||
if err != nil {
|
||||
for _, filePath := range matches {
|
||||
infoResult := hostFS.Stat(filePath)
|
||||
if !infoResult.OK {
|
||||
continue
|
||||
}
|
||||
info, ok := infoResult.Value.(fs.FileInfo)
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
|
||||
if now.Sub(info.ModTime()) > maxAge {
|
||||
if err := os.Remove(path); err == nil {
|
||||
if deleteResult := hostFS.Delete(filePath); deleteResult.OK {
|
||||
deleted++
|
||||
}
|
||||
}
|
||||
|
|
@ -228,6 +261,9 @@ func PruneSessions(projectsDir string, maxAge time.Duration) (int, error) {
|
|||
|
||||
// IsExpired returns true if the session's end time is older than the given maxAge
|
||||
// relative to now.
|
||||
//
|
||||
// Example:
|
||||
// expired := sess.IsExpired(24 * time.Hour)
|
||||
func (s *Session) IsExpired(maxAge time.Duration) bool {
|
||||
if s.EndTime.IsZero() {
|
||||
return false
|
||||
|
|
@ -237,44 +273,81 @@ func (s *Session) IsExpired(maxAge time.Duration) bool {
|
|||
|
||||
// FetchSession retrieves a session by ID from the projects directory.
|
||||
// It ensures the ID does not contain path traversal characters.
|
||||
//
|
||||
// Example:
|
||||
// sess, stats, err := session.FetchSession("/tmp/projects", "abc123")
|
||||
func FetchSession(projectsDir, id string) (*Session, *ParseStats, error) {
|
||||
if strings.Contains(id, "..") || strings.ContainsAny(id, `/\`) {
|
||||
if core.Contains(id, "..") || containsAny(id, `/\`) {
|
||||
return nil, nil, coreerr.E("FetchSession", "invalid session id", nil)
|
||||
}
|
||||
|
||||
path := filepath.Join(projectsDir, id+".jsonl")
|
||||
return ParseTranscript(path)
|
||||
filePath := transcriptPath(projectsDir, id+".jsonl")
|
||||
f, err := openTranscriptNoFollow(filePath)
|
||||
if err != nil {
|
||||
if isTranscriptMissing(err) {
|
||||
return nil, nil, coreerr.E("FetchSession", "open transcript", err)
|
||||
}
|
||||
return nil, nil, coreerr.E("FetchSession", "invalid session path", err)
|
||||
}
|
||||
defer func() {
|
||||
_ = f.Close()
|
||||
}()
|
||||
return parseTranscriptFile(filePath, f)
|
||||
}
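The guard at the top of `FetchSession` rejects traversal before any path is built. A minimal sketch of the same check, using stdlib `strings` on the assumption that `core.Contains` matches `strings.Contains` semantics:

```go
package main

import (
	"fmt"
	"strings"
)

// isValidSessionID mirrors the FetchSession guard: reject IDs that
// contain ".." or either path separator, so a caller-supplied ID can
// never escape the projects directory.
func isValidSessionID(id string) bool {
	return !strings.Contains(id, "..") && !strings.ContainsAny(id, `/\`)
}

func main() {
	fmt.Println(isValidSessionID("abc123"))         // true
	fmt.Println(isValidSessionID("../etc/passwd")) // false
}
```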
|
||||
|
||||
// ParseTranscript reads a JSONL session file and returns structured events.
|
||||
// Malformed or truncated lines are skipped; diagnostics are reported in ParseStats.
|
||||
func ParseTranscript(path string) (*Session, *ParseStats, error) {
|
||||
f, err := os.Open(path)
|
||||
if err != nil {
|
||||
return nil, nil, coreerr.E("ParseTranscript", "open transcript", err)
|
||||
//
|
||||
// Example:
|
||||
// sess, stats, err := session.ParseTranscript("/tmp/projects/abc123.jsonl")
|
||||
func ParseTranscript(filePath string) (*Session, *ParseStats, error) {
|
||||
openResult := hostFS.Open(filePath)
|
||||
if !openResult.OK {
|
||||
return nil, nil, coreerr.E("ParseTranscript", "open transcript", resultError(openResult))
|
||||
}
|
||||
defer f.Close()
|
||||
f, ok := openResult.Value.(io.ReadCloser)
|
||||
if !ok {
|
||||
return nil, nil, coreerr.E("ParseTranscript", "unexpected file handle type", nil)
|
||||
}
|
||||
defer func() {
|
||||
_ = f.Close()
|
||||
}()
|
||||
|
||||
base := filepath.Base(path)
|
||||
id := strings.TrimSuffix(base, ".jsonl")
|
||||
return parseTranscriptFile(filePath, f)
|
||||
}
|
||||
|
||||
sess, stats, err := parseFromReader(f, id)
|
||||
// parseTranscriptFile parses an already-open transcript reader and assigns path metadata.
|
||||
func parseTranscriptFile(filePath string, r io.Reader) (*Session, *ParseStats, error) {
|
||||
base := core.PathBase(filePath)
|
||||
id := core.TrimSuffix(base, ".jsonl")
|
||||
|
||||
sess, stats, err := parseFromReader(r, id)
|
||||
if sess != nil {
|
||||
sess.Path = path
|
||||
sess.Path = filePath
|
||||
}
|
||||
return sess, stats, err
|
||||
if err != nil {
|
||||
return sess, stats, coreerr.E("ParseTranscript", "parse transcript", err)
|
||||
}
|
||||
return sess, stats, nil
|
||||
}
|
||||
|
||||
// ParseTranscriptReader parses a JSONL session from an io.Reader, enabling
|
||||
// streaming parse without needing a file on disc. The id parameter sets
|
||||
// the session ID (since there is no file name to derive it from).
|
||||
//
|
||||
// Example:
|
||||
// sess, stats, err := session.ParseTranscriptReader(reader, "abc123")
|
||||
func ParseTranscriptReader(r io.Reader, id string) (*Session, *ParseStats, error) {
|
||||
return parseFromReader(r, id)
|
||||
sess, stats, err := parseFromReader(r, id)
|
||||
if err != nil {
|
||||
return sess, stats, coreerr.E("ParseTranscriptReader", "parse transcript", err)
|
||||
}
|
||||
return sess, stats, nil
|
||||
}
|
||||
|
||||
// parseFromReader is the shared implementation for both file-based and
|
||||
// reader-based parsing. It scans line-by-line using bufio.Scanner with
|
||||
// an 8 MiB buffer, gracefully skipping malformed lines.
|
||||
// reader-based parsing. It scans line-by-line with an 8 MiB buffer,
|
||||
// gracefully skipping malformed lines.
|
||||
func parseFromReader(r io.Reader, id string) (*Session, *ParseStats, error) {
|
||||
sess := &Session{
|
||||
ID: id,
|
||||
|
|
@ -290,42 +363,39 @@ func parseFromReader(r io.Reader, id string) (*Session, *ParseStats, error) {
|
|||
}
|
||||
pendingTools := make(map[string]toolUse)
|
||||
|
||||
scanner := bufio.NewScanner(r)
|
||||
scanner.Buffer(make([]byte, maxScannerBuffer), maxScannerBuffer)
|
||||
|
||||
var lineNum int
|
||||
var lastRaw string
|
||||
var lastLineFailed bool
|
||||
|
||||
for scanner.Scan() {
|
||||
scanErr := scanTranscriptLines(r, maxScannerBuffer, func(line []byte) bool {
|
||||
lineNum++
|
||||
stats.TotalLines++
|
||||
|
||||
raw := scanner.Text()
|
||||
if strings.TrimSpace(raw) == "" {
|
||||
continue
|
||||
raw := string(line)
|
||||
if core.Trim(raw) == "" {
|
||||
return true
|
||||
}
|
||||
|
||||
lastRaw = raw
|
||||
lastLineFailed = false
|
||||
|
||||
var entry rawEntry
|
||||
if err := json.Unmarshal([]byte(raw), &entry); err != nil {
|
||||
if !core.JSONUnmarshalString(raw, &entry).OK {
|
||||
stats.SkippedLines++
|
||||
preview := raw
|
||||
if len(preview) > 100 {
|
||||
preview = preview[:100]
|
||||
}
|
||||
stats.Warnings = append(stats.Warnings,
|
||||
fmt.Sprintf("line %d: skipped (bad JSON): %s", lineNum, preview))
|
||||
core.Sprintf("line %d: skipped (bad JSON): %s", lineNum, preview))
|
||||
lastLineFailed = true
|
||||
continue
|
||||
return true
|
||||
}
|
||||
|
||||
ts, err := time.Parse(time.RFC3339Nano, entry.Timestamp)
|
||||
if err != nil {
|
||||
stats.Warnings = append(stats.Warnings, fmt.Sprintf("line %d: bad timestamp %q: %v", lineNum, entry.Timestamp, err))
|
||||
continue
|
||||
stats.Warnings = append(stats.Warnings, core.Sprintf("line %d: bad timestamp %q: %v", lineNum, entry.Timestamp, err))
|
||||
return true
|
||||
}
|
||||
|
||||
if sess.StartTime.IsZero() && !ts.IsZero() {
|
||||
|
|
@ -338,20 +408,20 @@ func parseFromReader(r io.Reader, id string) (*Session, *ParseStats, error) {
|
|||
switch entry.Type {
|
||||
case "assistant":
|
||||
var msg rawMessage
|
||||
if err := json.Unmarshal(entry.Message, &msg); err != nil {
|
||||
stats.Warnings = append(stats.Warnings, fmt.Sprintf("line %d: failed to unmarshal assistant message: %v", lineNum, err))
|
||||
continue
|
||||
if !core.JSONUnmarshal(entry.Message, &msg).OK {
|
||||
stats.Warnings = append(stats.Warnings, core.Sprintf("line %d: failed to unmarshal assistant message", lineNum))
|
||||
return true
|
||||
}
|
||||
for i, raw := range msg.Content {
|
||||
var block contentBlock
|
||||
if err := json.Unmarshal(raw, &block); err != nil {
|
||||
stats.Warnings = append(stats.Warnings, fmt.Sprintf("line %d block %d: failed to unmarshal content: %v", lineNum, i, err))
|
||||
if !core.JSONUnmarshal(raw, &block).OK {
|
||||
stats.Warnings = append(stats.Warnings, core.Sprintf("line %d block %d: failed to unmarshal content", lineNum, i))
|
||||
continue
|
||||
}
|
||||
|
||||
switch block.Type {
|
||||
case "text":
|
||||
if text := strings.TrimSpace(block.Text); text != "" {
|
||||
if text := core.Trim(block.Text); text != "" {
|
||||
sess.Events = append(sess.Events, Event{
|
||||
Timestamp: ts,
|
||||
Type: "assistant",
|
||||
|
|
@ -360,25 +430,33 @@ func parseFromReader(r io.Reader, id string) (*Session, *ParseStats, error) {
|
|||
}
|
||||
|
||||
case "tool_use":
|
||||
if block.ID == "" {
|
||||
continue
|
||||
}
|
||||
if _, exists := pendingTools[block.ID]; !exists && len(pendingTools) >= maxPendingToolCalls {
|
||||
stats.Warnings = append(stats.Warnings,
|
||||
core.Sprintf("line %d: skipped tool_use %q (pending tool limit reached)", lineNum, block.ID))
|
||||
continue
|
||||
}
|
||||
inputStr := extractToolInput(block.Name, block.Input)
|
||||
pendingTools[block.ID] = toolUse{
|
||||
timestamp: ts,
|
||||
tool: block.Name,
|
||||
input: inputStr,
|
||||
input: truncate(inputStr, 500),
|
||||
}
|
||||
}
|
||||
}
|
||||
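The `maxPendingToolCalls` guard above bounds memory on transcripts whose `tool_use` entries never receive a matching `tool_result`. A miniature of the admission rule, with a hypothetical limit of 2 in place of the parser's 4096:

```go
package main

import "fmt"

// admit applies the pending-map rule from the tool_use case: an ID
// already in the map is always refreshed, but a new ID is dropped
// once the map has reached the limit.
func admit(pending map[string]bool, id string, limit int) bool {
	if _, exists := pending[id]; !exists && len(pending) >= limit {
		return false
	}
	pending[id] = true
	return true
}

func main() {
	pending := map[string]bool{}
	for _, id := range []string{"a", "b", "c"} {
		if !admit(pending, id, 2) {
			fmt.Println("skipped", id)
		}
	}
	fmt.Println("pending", len(pending)) // pending 2
}
```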
|
||||
case "user":
|
||||
var msg rawMessage
|
||||
if err := json.Unmarshal(entry.Message, &msg); err != nil {
|
||||
stats.Warnings = append(stats.Warnings, fmt.Sprintf("line %d: failed to unmarshal user message: %v", lineNum, err))
|
||||
continue
|
||||
		if !core.JSONUnmarshal(entry.Message, &msg).OK {
			stats.Warnings = append(stats.Warnings, core.Sprintf("line %d: failed to unmarshal user message", lineNum))
			return true
		}
		for i, raw := range msg.Content {
			var block contentBlock
			if err := json.Unmarshal(raw, &block); err != nil {
				stats.Warnings = append(stats.Warnings, fmt.Sprintf("line %d block %d: failed to unmarshal content: %v", lineNum, i, err))
			if !core.JSONUnmarshal(raw, &block).OK {
				stats.Warnings = append(stats.Warnings, core.Sprintf("line %d block %d: failed to unmarshal content", lineNum, i))
				continue
			}
@ -405,7 +483,7 @@ func parseFromReader(r io.Reader, id string) (*Session, *ParseStats, error) {
			}
			case "text":
				if text := strings.TrimSpace(block.Text); text != "" {
				if text := core.Trim(block.Text); text != "" {
					sess.Events = append(sess.Events, Event{
						Timestamp: ts,
						Type:      "user",
@ -415,15 +493,15 @@ func parseFromReader(r io.Reader, id string) (*Session, *ParseStats, error) {
				}
			}
		}
	}
	return true
})

// Detect truncated final line.
if lastLineFailed && lastRaw != "" {
    stats.Warnings = append(stats.Warnings, "truncated final line")
}

// Check for scanner buffer errors.
if scanErr := scanner.Err(); scanErr != nil {
if scanErr != nil {
    return nil, stats, scanErr
}
@ -432,14 +510,15 @@ func parseFromReader(r io.Reader, id string) (*Session, *ParseStats, error) {
if stats.OrphanedToolCalls > 0 {
    for id := range pendingTools {
        stats.Warnings = append(stats.Warnings,
            fmt.Sprintf("orphaned tool call: %s", id))
            core.Sprintf("orphaned tool call: %s", id))
    }
}

return sess, stats, nil
}

func extractToolInput(toolName string, raw json.RawMessage) string {
// extractToolInput converts raw Claude tool input into a concise display string.
func extractToolInput(toolName string, raw rawJSON) string {
    if raw == nil {
        return ""
    }
@ -447,7 +526,7 @@ func extractToolInput(toolName string, raw json.RawMessage) string {
    switch toolName {
    case "Bash":
        var inp bashInput
        if json.Unmarshal(raw, &inp) == nil {
        if core.JSONUnmarshal(raw, &inp).OK {
            desc := inp.Description
            if desc != "" {
                desc = " # " + desc
@ -456,54 +535,59 @@ func extractToolInput(toolName string, raw json.RawMessage) string {
        }
    case "Read":
        var inp readInput
        if json.Unmarshal(raw, &inp) == nil {
        if core.JSONUnmarshal(raw, &inp).OK {
            return inp.FilePath
        }
    case "Edit":
        var inp editInput
        if json.Unmarshal(raw, &inp) == nil {
            return fmt.Sprintf("%s (edit)", inp.FilePath)
        if core.JSONUnmarshal(raw, &inp).OK {
            return core.Sprintf("%s (edit)", inp.FilePath)
        }
    case "Write":
        var inp writeInput
        if json.Unmarshal(raw, &inp) == nil {
            return fmt.Sprintf("%s (%d bytes)", inp.FilePath, len(inp.Content))
        if core.JSONUnmarshal(raw, &inp).OK {
            return core.Sprintf("%s (%d bytes)", inp.FilePath, len(inp.Content))
        }
    case "Grep":
        var inp grepInput
        if json.Unmarshal(raw, &inp) == nil {
        if core.JSONUnmarshal(raw, &inp).OK {
            path := inp.Path
            if path == "" {
                path = "."
            }
            return fmt.Sprintf("/%s/ in %s", inp.Pattern, path)
            return core.Sprintf("/%s/ in %s", inp.Pattern, path)
        }
    case "Glob":
        var inp globInput
        if json.Unmarshal(raw, &inp) == nil {
        if core.JSONUnmarshal(raw, &inp).OK {
            return inp.Pattern
        }
    case "Task":
        var inp taskInput
        if json.Unmarshal(raw, &inp) == nil {
        if core.JSONUnmarshal(raw, &inp).OK {
            desc := inp.Description
            if desc == "" {
                desc = truncate(inp.Prompt, 80)
            }
            return fmt.Sprintf("[%s] %s", inp.SubagentType, desc)
            return core.Sprintf("[%s] %s", inp.SubagentType, desc)
        }
    }

    // Fallback: show raw JSON keys
    var m map[string]any
    if json.Unmarshal(raw, &m) == nil {
        parts := slices.Sorted(maps.Keys(m))
        return strings.Join(parts, ", ")
    if core.JSONUnmarshal(raw, &m).OK {
        parts := make([]string, 0, len(m))
        for key := range m {
            parts = append(parts, key)
        }
        slices.Sort(parts)
        return core.Join(", ", parts...)
    }

    return ""
}

// extractResultContent converts Claude tool_result content into plain text.
func extractResultContent(content any) string {
    switch v := content.(type) {
    case string:
@ -517,18 +601,87 @@ func extractResultContent(content any) string {
            }
        }
    }
        return strings.Join(parts, "\n")
        return core.Join("\n", parts...)
    case map[string]any:
        if text, ok := v["text"].(string); ok {
            return text
        }
    }
    return fmt.Sprintf("%v", content)
    return core.Sprint(content)
}

// truncate returns s capped to max bytes with an ellipsis marker.
func truncate(s string, max int) string {
    if len(s) <= max {
        return s
    }
    return s[:max] + "..."
}

// scanTranscriptLines streams newline-delimited records with a per-line size limit.
func scanTranscriptLines(r io.Reader, maxLineSize int, handle func([]byte) bool) error {
    const op = "scanTranscriptLines"

    if maxLineSize <= 0 {
        maxLineSize = maxScannerBuffer
    }

    readBuffer := make([]byte, 64*1024)
    line := make([]byte, 0, 64*1024)

    for {
        n, readErr := r.Read(readBuffer)
        if n > 0 {
            chunk := readBuffer[:n]
            start := 0
            for i, b := range chunk {
                if b != '\n' {
                    continue
                }
                if len(line)+i-start > maxLineSize {
                    return coreerr.E(op, core.Sprintf("line exceeds %d bytes", maxLineSize), nil)
                }
                line = append(line, chunk[start:i]...)
                if !handle(trimLineBreak(line)) {
                    return nil
                }
                line = line[:0]
                start = i + 1
            }
            if start < len(chunk) {
                if len(line)+len(chunk)-start > maxLineSize {
                    return coreerr.E(op, core.Sprintf("line exceeds %d bytes", maxLineSize), nil)
                }
                line = append(line, chunk[start:]...)
            }
        }

        if readErr == io.EOF {
            if len(line) > 0 {
                if !handle(trimLineBreak(line)) {
                    return nil
                }
            }
            return nil
        }
        if readErr != nil {
            return coreerr.E(op, "read error", readErr)
        }
    }
}

// trimLineBreak removes a trailing carriage return from a scanned line.
func trimLineBreak(line []byte) []byte {
    if len(line) > 0 && line[len(line)-1] == '\r' {
        return line[:len(line)-1]
    }
    return line
}

// transcriptPath joins a projects directory and transcript file name.
func transcriptPath(projectsDir, name string) string {
    if projectsDir == "" {
        return core.CleanPath(name, "/")
    }
    return core.CleanPath(core.JoinPath(projectsDir, name), "/")
}
20 parser_other.go Normal file
@ -0,0 +1,20 @@
//go:build !unix

// SPDX-Licence-Identifier: EUPL-1.2
package session

import (
    "io" // Note: intrinsic — keeps the platform stub signature aligned with the Unix io.ReadCloser implementation; no core equivalent

    coreerr "dappco.re/go/core/log"
)

// openTranscriptNoFollow reports that secure no-follow opens are unavailable on this platform.
func openTranscriptNoFollow(filePath string) (io.ReadCloser, error) {
    return nil, coreerr.E("openTranscriptNoFollow", "secure no-follow transcript opens are unsupported on this platform: "+filePath, nil)
}

// isTranscriptMissing reports whether err wraps a missing transcript path error.
func isTranscriptMissing(error) bool {
    return false
}
1043 parser_test.go
File diff suppressed because it is too large
83 parser_unix.go Normal file
@ -0,0 +1,83 @@
//go:build unix

// SPDX-Licence-Identifier: EUPL-1.2
package session

import (
    "io"      // Note: intrinsic — io.ReadCloser contract and EOF signalling for descriptor-backed transcript reads; no core equivalent
    "syscall" // Note: intrinsic — O_NOFOLLOW descriptor opens and fstat checks are platform syscalls; no core equivalent

    coreerr "dappco.re/go/core/log"
)

type noFollowFile struct {
    fd int
}

// Read reads bytes from a descriptor opened without following symlinks.
func (f *noFollowFile) Read(p []byte) (int, error) {
    n, err := syscall.Read(f.fd, p)
    if err != nil {
        return n, coreerr.E("noFollowFile.Read", "read transcript descriptor", err)
    }
    if n == 0 {
        return 0, io.EOF
    }
    return n, nil
}

// Close closes a descriptor opened without following symlinks.
func (f *noFollowFile) Close() error {
    if err := syscall.Close(f.fd); err != nil {
        return coreerr.E("noFollowFile.Close", "close transcript descriptor", err)
    }
    return nil
}

// openTranscriptNoFollow opens a regular transcript file without following symlinks.
func openTranscriptNoFollow(filePath string) (io.ReadCloser, error) {
    const op = "openTranscriptNoFollow"

    fd, err := syscall.Open(filePath, syscall.O_RDONLY|syscall.O_NOFOLLOW, 0)
    if err != nil {
        return nil, coreerr.E(op, "open transcript without following symlinks", err)
    }

    var st syscall.Stat_t
    if err := syscall.Fstat(fd, &st); err != nil {
        if closeErr := closeNoFollowFD(fd); closeErr != nil {
            return nil, closeErr
        }
        return nil, coreerr.E(op, "stat transcript descriptor", err)
    }
    if st.Mode&syscall.S_IFMT != syscall.S_IFREG {
        if closeErr := closeNoFollowFD(fd); closeErr != nil {
            return nil, closeErr
        }
        return nil, coreerr.E(op, "not a regular file", nil)
    }
    return &noFollowFile{fd: fd}, nil
}

// closeNoFollowFD closes a raw descriptor after a failed secure-open check.
func closeNoFollowFD(fd int) error {
    if err := syscall.Close(fd); err != nil {
        return coreerr.E("openTranscriptNoFollow", "close rejected transcript descriptor", err)
    }
    return nil
}

// isTranscriptMissing reports whether err wraps a missing transcript path error.
func isTranscriptMissing(err error) bool {
    for err != nil {
        if err == syscall.ENOENT {
            return true
        }
        unwrapper, ok := err.(interface{ Unwrap() error })
        if !ok {
            return false
        }
        err = unwrapper.Unwrap()
    }
    return false
}
38 search.go
@ -2,14 +2,18 @@
package session

import (
    "iter"
    "path/filepath"
    "slices"
    "strings"
    "time"
    "iter"   // Note: intrinsic — public lazy sequence API for search results; no core equivalent
    "path"   // Note: intrinsic — slash-separated transcript glob path construction; no core equivalent
    "slices" // Note: intrinsic — slices.Collect materialises search iterator results; no core equivalent
    "time"   // Note: intrinsic — search result timestamps mirror parsed transcript event times; no core equivalent

    core "dappco.re/go/core"
)

// SearchResult represents a match found in a session transcript.
//
// Example:
// result := session.SearchResult{SessionID: "abc123", Tool: "Bash"}
type SearchResult struct {
    SessionID string
    Timestamp time.Time
@ -18,22 +22,28 @@ type SearchResult struct {
}

// Search finds events matching the query across all sessions in the directory.
//
// Example:
// results, err := session.Search("/tmp/projects", "go test")
func Search(projectsDir, query string) ([]SearchResult, error) {
    return slices.Collect(SearchSeq(projectsDir, query)), nil
}

// SearchSeq returns an iterator over search results matching the query across all sessions.
//
// Example:
//
// for result := range session.SearchSeq("/tmp/projects", "go test") {
//     _ = result
// }
func SearchSeq(projectsDir, query string) iter.Seq[SearchResult] {
    return func(yield func(SearchResult) bool) {
        matches, err := filepath.Glob(filepath.Join(projectsDir, "*.jsonl"))
        if err != nil {
            return
        }
        matches := core.PathGlob(path.Join(projectsDir, "*.jsonl"))

        query = strings.ToLower(query)
        query = core.Lower(query)

        for _, path := range matches {
            sess, _, err := ParseTranscript(path)
        for _, filePath := range matches {
            sess, _, err := ParseTranscript(filePath)
            if err != nil {
                continue
            }
@ -42,8 +52,8 @@ func SearchSeq(projectsDir, query string) iter.Seq[SearchResult] {
            if evt.Type != "tool_use" {
                continue
            }
            text := strings.ToLower(evt.Input + " " + evt.Output)
            if strings.Contains(text, query) {
            text := core.Lower(core.Concat(evt.Input, " ", evt.Output))
            if core.Contains(text, query) {
                matchCtx := evt.Input
                if matchCtx == "" {
                    matchCtx = truncate(evt.Output, 120)
@ -2,23 +2,21 @@
package session

import (
    "os"
    "path/filepath"
    "path"
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

func TestSearch_EmptyDir_Good(t *testing.T) {
// TestSearch_SearchEmptyDir_Good verifies the behaviour covered by this test case.
func TestSearch_SearchEmptyDir_Good(t *testing.T) {
    dir := t.TempDir()

    results, err := Search(dir, "anything")
    require.NoError(t, err)
    assert.Empty(t, results)
    requireNoError(t, err)
    assertEmpty(t, results)
}

func TestSearch_NoMatches_Good(t *testing.T) {
// TestSearch_SearchNoMatches_Good verifies the behaviour covered by this test case.
func TestSearch_SearchNoMatches_Good(t *testing.T) {
    dir := t.TempDir()
    writeJSONL(t, dir, "session.jsonl",
        toolUseEntry(ts(0), "Bash", "tool-1", map[string]any{
@ -28,11 +26,12 @@ func TestSearch_NoMatches_Good(t *testing.T) {
    )

    results, err := Search(dir, "nonexistent-query")
    require.NoError(t, err)
    assert.Empty(t, results)
    requireNoError(t, err)
    assertEmpty(t, results)
}

func TestSearch_SingleMatch_Good(t *testing.T) {
// TestSearch_SearchSingleMatch_Good verifies the behaviour covered by this test case.
func TestSearch_SearchSingleMatch_Good(t *testing.T) {
    dir := t.TempDir()
    writeJSONL(t, dir, "session.jsonl",
        toolUseEntry(ts(0), "Bash", "tool-1", map[string]any{
@ -42,15 +41,16 @@ func TestSearch_SingleMatch_Good(t *testing.T) {
    )

    results, err := Search(dir, "go test")
    require.NoError(t, err)
    require.Len(t, results, 1)
    requireNoError(t, err)
    requireLen(t, results, 1)

    assert.Equal(t, "session", results[0].SessionID)
    assert.Equal(t, "Bash", results[0].Tool)
    assert.Contains(t, results[0].Match, "go test")
    assertEqual(t, "session", results[0].SessionID)
    assertEqual(t, "Bash", results[0].Tool)
    assertContains(t, results[0].Match, "go test")
}

func TestSearchSeq_SingleMatch_Good(t *testing.T) {
// TestSearch_SearchSeqSingleMatch_Good verifies the behaviour covered by this test case.
func TestSearch_SearchSeqSingleMatch_Good(t *testing.T) {
    dir := t.TempDir()
    writeJSONL(t, dir, "session.jsonl",
        toolUseEntry(ts(0), "Bash", "tool-1", map[string]any{
@ -64,12 +64,13 @@ func TestSearchSeq_SingleMatch_Good(t *testing.T) {
        results = append(results, r)
    }

    require.Len(t, results, 1)
    assert.Equal(t, "session", results[0].SessionID)
    assert.Equal(t, "Bash", results[0].Tool)
    requireLen(t, results, 1)
    assertEqual(t, "session", results[0].SessionID)
    assertEqual(t, "Bash", results[0].Tool)
}

func TestSearch_MultipleMatches_Good(t *testing.T) {
// TestSearch_SearchMultipleMatches_Good verifies the behaviour covered by this test case.
func TestSearch_SearchMultipleMatches_Good(t *testing.T) {
    dir := t.TempDir()
    writeJSONL(t, dir, "session1.jsonl",
        toolUseEntry(ts(0), "Bash", "t1", map[string]any{
@ -89,11 +90,12 @@ func TestSearch_MultipleMatches_Good(t *testing.T) {
    )

    results, err := Search(dir, "go test")
    require.NoError(t, err)
    assert.Len(t, results, 3, "should find matches across both sessions")
    requireNoError(t, err)
    assertLen(t, results, 3, "should find matches across both sessions")
}

func TestSearch_CaseInsensitive_Good(t *testing.T) {
// TestSearch_SearchCaseInsensitive_Good verifies the behaviour covered by this test case.
func TestSearch_SearchCaseInsensitive_Good(t *testing.T) {
    dir := t.TempDir()
    writeJSONL(t, dir, "session.jsonl",
        toolUseEntry(ts(0), "Bash", "t1", map[string]any{
@ -103,11 +105,12 @@ func TestSearch_CaseInsensitive_Good(t *testing.T) {
    )

    results, err := Search(dir, "go test")
    require.NoError(t, err)
    assert.Len(t, results, 1, "search should be case-insensitive")
    requireNoError(t, err)
    assertLen(t, results, 1, "search should be case-insensitive")
}

func TestSearch_MatchesInOutput_Good(t *testing.T) {
// TestSearch_SearchMatchesInOutput_Good verifies the behaviour covered by this test case.
func TestSearch_SearchMatchesInOutput_Good(t *testing.T) {
    dir := t.TempDir()
    writeJSONL(t, dir, "session.jsonl",
        toolUseEntry(ts(0), "Bash", "t1", map[string]any{
@ -117,13 +120,14 @@ func TestSearch_MatchesInOutput_Good(t *testing.T) {
    )

    results, err := Search(dir, "connection refused")
    require.NoError(t, err)
    require.Len(t, results, 1, "should match against output text")
    requireNoError(t, err)
    requireLen(t, results, 1, "should match against output text")
    // Match field should contain the input (command) since it's non-empty
    assert.Contains(t, results[0].Match, "cat log.txt")
    assertContains(t, results[0].Match, "cat log.txt")
}

func TestSearch_SkipsNonToolEvents_Good(t *testing.T) {
// TestSearch_SearchSkipsNonToolEvents_Good verifies the behaviour covered by this test case.
func TestSearch_SearchSkipsNonToolEvents_Good(t *testing.T) {
    dir := t.TempDir()
    writeJSONL(t, dir, "session.jsonl",
        userTextEntry(ts(0), "Please search for something"),
@ -132,20 +136,23 @@ func TestSearch_SkipsNonToolEvents_Good(t *testing.T) {

    // "search" appears in user and assistant text, but Search only checks tool_use events
    results, err := Search(dir, "search")
    require.NoError(t, err)
    assert.Empty(t, results, "should only match tool_use events, not user/assistant text")
    requireNoError(t, err)
    assertEmpty(t, results, "should only match tool_use events, not user/assistant text")
}

func TestSearch_NonJSONLIgnored_Good(t *testing.T) {
// TestSearch_SearchNonJSONLIgnored_Good verifies the behaviour covered by this test case.
func TestSearch_SearchNonJSONLIgnored_Good(t *testing.T) {
    dir := t.TempDir()
    require.NoError(t, os.WriteFile(filepath.Join(dir, "readme.md"), []byte("go test"), 0644))
    writeResult := hostFS.Write(path.Join(dir, "readme.md"), "go test")
    requireTrue(t, writeResult.OK)

    results, err := Search(dir, "go test")
    require.NoError(t, err)
    assert.Empty(t, results, "non-JSONL files should be ignored")
    requireNoError(t, err)
    assertEmpty(t, results, "non-JSONL files should be ignored")
}

func TestSearch_MalformedSessionSkipped_Bad(t *testing.T) {
// TestSearch_SearchMalformedSessionSkipped_Bad verifies the behaviour covered by this test case.
func TestSearch_SearchMalformedSessionSkipped_Bad(t *testing.T) {
    dir := t.TempDir()

    // One broken session and one valid session
@ -160,6 +167,6 @@ func TestSearch_MalformedSessionSkipped_Bad(t *testing.T) {
    )

    results, err := Search(dir, "go test")
    require.NoError(t, err)
    assert.Len(t, results, 1, "should still find matches in valid sessions")
    requireNoError(t, err)
    assertLen(t, results, 1, "should still find matches in valid sessions")
}
200 test_helpers_test.go Normal file
@ -0,0 +1,200 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session

import (
    "reflect"
    "testing"

    core "dappco.re/go/core"
)

// testContext supports the session test suite.
func testContext(msgAndArgs []any) string {
    if len(msgAndArgs) == 0 {
        return ""
    }
    return core.Sprintf("%v: ", msgAndArgs[0])
}

// isNil supports the session test suite.
func isNil(v any) bool {
    if v == nil {
        return true
    }
    rv := reflect.ValueOf(v)
    switch rv.Kind() {
    case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
        return rv.IsNil()
    default:
        return false
    }
}

// isEmpty supports the session test suite.
func isEmpty(v any) bool {
    if isNil(v) {
        return true
    }
    rv := reflect.ValueOf(v)
    switch rv.Kind() {
    case reflect.Array, reflect.Chan, reflect.Map, reflect.Slice, reflect.String:
        return rv.Len() == 0
    default:
        return rv.IsZero()
    }
}

// valueLen supports the session test suite.
func valueLen(v any) (int, bool) {
    if v == nil {
        return 0, true
    }
    rv := reflect.ValueOf(v)
    switch rv.Kind() {
    case reflect.Array, reflect.Chan, reflect.Map, reflect.Slice, reflect.String:
        return rv.Len(), true
    default:
        return 0, false
    }
}

// requireNoError stops the current test case when its condition is not met.
func requireNoError(t *testing.T, err error, msgAndArgs ...any) {
    t.Helper()
    if err != nil {
        t.Fatalf("%sunexpected error: %v", testContext(msgAndArgs), err)
    }
}

// requireError stops the current test case when its condition is not met.
func requireError(t *testing.T, err error, msgAndArgs ...any) {
    t.Helper()
    if err == nil {
        t.Fatalf("%sexpected error, got nil", testContext(msgAndArgs))
    }
}

// requireEqual stops the current test case when its condition is not met.
func requireEqual(t *testing.T, want, got any, msgAndArgs ...any) {
    t.Helper()
    if !reflect.DeepEqual(want, got) {
        t.Fatalf("%swant %v, got %v", testContext(msgAndArgs), want, got)
    }
}

// requireTrue stops the current test case when its condition is not met.
func requireTrue(t *testing.T, cond bool, msgAndArgs ...any) {
    t.Helper()
    if !cond {
        t.Fatalf("%sexpected true", testContext(msgAndArgs))
    }
}

// requireNotNil stops the current test case when its condition is not met.
func requireNotNil(t *testing.T, v any, msgAndArgs ...any) {
    t.Helper()
    if isNil(v) {
        t.Fatalf("%sexpected non-nil", testContext(msgAndArgs))
    }
}

// requireLen stops the current test case when its condition is not met.
func requireLen(t *testing.T, v any, want int, msgAndArgs ...any) {
    t.Helper()
    got, ok := valueLen(v)
    if !ok {
        t.Fatalf("%sexpected value with length, got %T", testContext(msgAndArgs), v)
    }
    if want != got {
        t.Fatalf("%swant length %v, got %v", testContext(msgAndArgs), want, got)
    }
}

// assertEqual records a test failure when its condition is not met.
func assertEqual(t *testing.T, want, got any, msgAndArgs ...any) {
    t.Helper()
    if !reflect.DeepEqual(want, got) {
        t.Errorf("%swant %v, got %v", testContext(msgAndArgs), want, got)
    }
}

// assertTrue records a test failure when its condition is not met.
func assertTrue(t *testing.T, cond bool, msgAndArgs ...any) {
    t.Helper()
    if !cond {
        t.Errorf("%sexpected true", testContext(msgAndArgs))
    }
}

// assertFalse records a test failure when its condition is not met.
func assertFalse(t *testing.T, cond bool, msgAndArgs ...any) {
    t.Helper()
    if cond {
        t.Errorf("%sexpected false", testContext(msgAndArgs))
    }
}

// assertNil records a test failure when its condition is not met.
func assertNil(t *testing.T, v any, msgAndArgs ...any) {
    t.Helper()
    if !isNil(v) {
        t.Errorf("%sexpected nil, got %v", testContext(msgAndArgs), v)
    }
}

// assertNotNil records a test failure when its condition is not met.
func assertNotNil(t *testing.T, v any, msgAndArgs ...any) {
    t.Helper()
    if isNil(v) {
        t.Errorf("%sexpected non-nil", testContext(msgAndArgs))
    }
}

// assertEmpty records a test failure when its condition is not met.
func assertEmpty(t *testing.T, v any, msgAndArgs ...any) {
    t.Helper()
    if !isEmpty(v) {
        t.Errorf("%sexpected empty, got %v", testContext(msgAndArgs), v)
    }
}

// assertLen records a test failure when its condition is not met.
func assertLen(t *testing.T, v any, want int, msgAndArgs ...any) {
    t.Helper()
    got, ok := valueLen(v)
    if !ok {
        t.Errorf("%sexpected value with length, got %T", testContext(msgAndArgs), v)
        return
    }
    if want != got {
        t.Errorf("%swant length %v, got %v", testContext(msgAndArgs), want, got)
    }
}

// assertContains records a test failure when its condition is not met.
func assertContains(t *testing.T, s, substr string, msgAndArgs ...any) {
    t.Helper()
    if !core.Contains(s, substr) {
        t.Errorf("%sexpected %q to contain %q", testContext(msgAndArgs), s, substr)
    }
}

// assertNotContains records a test failure when its condition is not met.
func assertNotContains(t *testing.T, s, substr string, msgAndArgs ...any) {
    t.Helper()
    if core.Contains(s, substr) {
        t.Errorf("%sexpected %q not to contain %q", testContext(msgAndArgs), s, substr)
    }
}

// assertInDelta records a test failure when its condition is not met.
func assertInDelta(t *testing.T, want, got, delta float64, msgAndArgs ...any) {
    t.Helper()
    diff := want - got
    if diff < 0 {
        diff = -diff
    }
    if diff > delta {
        t.Errorf("%swant %v within %v, got %v", testContext(msgAndArgs), want, delta, got)
    }
}
18 tests/cli/session/Taskfile.yaml Normal file
@ -0,0 +1,18 @@
version: "3"

env:
  GOWORK: off
  GOPATH: /tmp/gopath-gosession
  GOMODCACHE: /tmp/gomodcache-gosession
  GOCACHE: /tmp/go-session-go-build-cache

tasks:
  default:
    deps: [test]

  test:
    dir: ../../..
    cmds:
      - go vet ./...
      - go test ./...
      - go run ./tests/cli/session
109 tests/cli/session/main.go Normal file
@ -0,0 +1,109 @@
// SPDX-Licence-Identifier: EUPL-1.2
|
||||
package main
|
||||
|
||||
import (
|
||||
"time"
|
||||
|
||||
core "dappco.re/go/core"
|
||||
session "dappco.re/go/session"
|
||||
)
|
||||
|
||||
const transcript = `{"type":"user","timestamp":"2026-02-20T10:00:00Z","sessionId":"ax10-session","message":{"role":"user","content":[{"type":"text","text":"Run the AX-10 smoke test"}]}}
|
||||
{"type":"assistant","timestamp":"2026-02-20T10:00:01Z","sessionId":"ax10-session","message":{"role":"assistant","content":[{"type":"tool_use","name":"Bash","id":"tool-bash-1","input":{"command":"echo ax10","description":"smoke test"}}]}}
|
||||
{"type":"user","timestamp":"2026-02-20T10:00:02Z","sessionId":"ax10-session","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"tool-bash-1","content":"ax10\n","is_error":false}]}}
|
||||
{"type":"assistant","timestamp":"2026-02-20T10:00:03Z","sessionId":"ax10-session","message":{"role":"assistant","content":[{"type":"text","text":"AX-10 complete"}]}}
|
||||
`
|
||||
|
||||
// main runs the CLI session smoke test.
|
||||
func main() {
|
||||
fs := (&core.Fs{}).NewUnrestricted()
|
||||
dir := fs.TempDir("go-session-ax10-")
|
||||
require(dir != "", "create temporary directory")
|
||||
defer func() {
|
||||
_ = fs.DeleteAll(dir)
|
||||
}()
|
||||
|
||||
transcriptPath := core.Path(dir, "ax10-session.jsonl")
|
||||
writeResult := fs.WriteMode(transcriptPath, transcript, 0o600)
|
||||
require(writeResult.OK, "write transcript")
|
||||
|
||||
sess, stats, err := session.ParseTranscript(transcriptPath)
|
||||
requireNoError(err, "parse transcript")
|
||||
require(sess.ID == "ax10-session", "session ID should come from the file name")
|
||||
require(sess.Path == transcriptPath, "session path should match the parsed file")
|
||||
require(len(sess.Events) == 3, "expected user, tool, and assistant events")
|
||||
require(stats.TotalLines == 4, "expected all transcript lines to be scanned")
|
||||
require(stats.SkippedLines == 0, "expected no skipped transcript lines")
|
||||
require(stats.OrphanedToolCalls == 0, "expected no orphaned tool calls")
|
||||
|
||||
tool := sess.Events[1]
|
||||
require(tool.Type == "tool_use", "expected second event to be the tool call")
|
||||
require(tool.Tool == "Bash", "expected Bash tool call")
|
||||
require(tool.Input == "echo ax10 # smoke test", "expected Bash input to include command and description")
|
||||
require(tool.Output == "ax10\n", "expected Bash output to be preserved")
|
||||
expectedDuration := time.Second
|
||||
require(tool.Duration == expectedDuration, "expected tool duration to match transcript timestamps")
|
||||
require(tool.Success, "expected successful tool call")
|
||||
|
||||
analytics := session.Analyse(sess)
|
||||
require(analytics.EventCount == 3, "expected analytics event count")
|
||||
	require(analytics.ToolCounts["Bash"] == 1, "expected analytics Bash count")
	expectedSuccessRate := successfulToolRate(sess)
	require(analytics.SuccessRate == expectedSuccessRate, "expected analytics success rate")
	require(core.Contains(session.FormatAnalytics(analytics), "Bash"), "expected formatted analytics to include Bash")

	results, err := session.Search(dir, "ax10")
	requireNoError(err, "search sessions")
	require(len(results) == 1, "expected one search result")
	require(results[0].SessionID == "ax10-session", "expected search result session ID")

	sessions, err := session.ListSessions(dir)
	requireNoError(err, "list sessions")
	require(len(sessions) == 1, "expected one listed session")
	require(sessions[0].ID == "ax10-session", "expected listed session ID")

	fetched, _, err := session.FetchSession(dir, "ax10-session")
	requireNoError(err, "fetch session")
	require(fetched.ID == sess.ID, "expected fetched session to match parsed session")

	htmlPath := core.Path(dir, "timeline.html")
	requireNoError(session.RenderHTML(sess, htmlPath), "render HTML")
	readResult := fs.Read(htmlPath)
	require(readResult.OK, "read rendered HTML")
	html, ok := readResult.Value.(string)
	require(ok, "read rendered HTML as string")
	require(core.Contains(html, "Session ax10"), "expected rendered HTML session title")
	require(core.Contains(html, "echo ax10"), "expected rendered HTML tool input")
}

// successfulToolRate calculates the same tool-call success ratio as session.Analyse.
func successfulToolRate(sess *session.Session) float64 {
	var successful, total int
	for _, evt := range sess.Events {
		if evt.Type != "tool_use" {
			continue
		}
		total++
		if evt.Success {
			successful++
		}
	}
	if total == 0 {
		return 0
	}
	return float64(successful) / float64(total)
}

// require stops the current test case when its condition is not met.
func require(ok bool, msg string) {
	if !ok {
		panic(msg)
	}
}

// requireNoError stops the current test case when err is non-nil.
func requireNoError(err error, msg string) {
	if err != nil {
		panic(msg + ": " + err.Error())
	}
}
45	threats.md	Normal file
@@ -0,0 +1,45 @@
## 1. Parser DoS

Status: Findings landed

Question: Can an attacker force unbounded parser memory with many large JSONL lines or unmatched tool calls?

Finding: Partially, yes. The scanner is bounded to 8 MiB per token, and it now starts with a 64 KiB buffer instead of allocating 8 MiB up front (`parser.go:18`, `parser.go:357-358`). It does not retain N scanner buffers for N lines. However, unmatched `tool_use` records were previously retained in `pendingTools` until EOF with no count limit; this is now capped at 4096 pending calls (`parser.go:22-23`, `parser.go:430-433`). Tool inputs are now truncated before they are stored in `pendingTools`, so an unmatched Bash command cannot keep an entire scanner-sized line resident (`parser.go:435-439`).

Severity: Medium before the fix. Exploitation requires attacker-controlled transcript content, but memory growth was linear in the unmatched `tool_use` count and input size.

Coverage: Added `TestParser_ParseTranscriptToolUseInputTruncated_Bad` and `TestParser_ParseTranscriptPendingToolLimit_Bad` (`parser_test.go:1099`, `parser_test.go:1115`).

## 2. Malformed JSONL

Status: No exploitable finding; coverage added

Question: Do malformed or adversarial JSONL records panic or bypass type handling?

Finding: No exploitable parser bug found. Bad top-level JSON is skipped with stats (`parser.go:376-386`), malformed assistant/user messages and content blocks are skipped (`parser.go:404-413`, `parser.go:445-454`), and unexpected tool result/input types fall through type switches without panicking (`parser.go:568-576`, `parser.go:579-598`). Deeply nested JSON is handled through `encoding/json` via core helpers and returned as a normal unmarshal failure, not a panic.

Severity: Low. The remaining cost is bounded by the per-line scanner maximum and the JSON decoder's own validation.

Coverage: Added tests for deeply nested JSON, unexpected tool input/result types, and lone UTF-16 surrogate halves (`parser_test.go:1133`, `parser_test.go:1147`, `parser_test.go:1161`).
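The skip-don't-fail policy the finding describes can be sketched like this — a standalone illustration using `encoding/json`, where the struct and counter names are hypothetical rather than the `parser.go` ones:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// stats mirrors the idea of counting skips instead of aborting the parse.
type stats struct {
	TotalLines   int
	SkippedLines int
	Warnings     []string
}

// parseLines decodes each JSONL line; malformed lines are counted and
// recorded as warnings rather than terminating the whole transcript.
func parseLines(input string) stats {
	var st stats
	for _, line := range strings.Split(strings.TrimSpace(input), "\n") {
		st.TotalLines++
		var rec map[string]any
		if err := json.Unmarshal([]byte(line), &rec); err != nil {
			st.SkippedLines++
			st.Warnings = append(st.Warnings, "skipped malformed line: "+err.Error())
		}
	}
	return st
}

func main() {
	in := `{"type":"user"}` + "\n" + `{"type":` + "\n" + `{"type":"assistant"}`
	st := parseLines(in)
	fmt.Println(st.TotalLines, st.SkippedLines, len(st.Warnings)) // 3 1 1
}
```

The attacker's worst case is then a transcript full of skipped lines: bounded work per line, no state carried across them.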

## 3. Path traversal

Status: Finding landed

Question: Can FetchSession or ListSessions escape projectsDir through encoded traversal, symlinks, case-insensitive paths, or Windows-style paths?

Finding: Yes for symlinks, before the fix. `FetchSession` already rejected literal `..`, `/`, and `\` in IDs (`parser.go:284-286`), so a URL-encoded `..` remains a literal filename unless a caller decodes it first; if decoded first, the existing check rejects it. The real gap was that a `linked.jsonl` symlink inside projectsDir could point outside it and still be opened or listed, because normal stat/open operations follow symlinks. FetchSession now rejects symlink targets (`parser.go:289-292`, `parser.go:616-617`), and ListSessions skips symlink matches before stat/open (`parser.go:156-162`). The local path style is still POSIX-oriented via `path.Join`; Windows UNC behaviour is not fully addressed in this package.

Severity: Medium before the fix. Exploitation requires the ability to place a symlink in projectsDir, but reads can then escape the intended session directory.

Coverage: Added URL-encoded traversal, FetchSession symlink traversal, and ListSessions symlink traversal tests (`parser_test.go:1508`, `parser_test.go:1516`, `parser_test.go:1562`).
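A minimal sketch of the two checks the finding relies on — literal-traversal rejection in the ID, plus symlink refusal via `os.Lstat`, which does not follow links. The function name and layout are illustrative; the real checks live in `parser.go`:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// resolveSessionPath rejects traversal characters in the ID, then refuses
// to return a path that is itself a symlink. Because Lstat inspects the
// link rather than its target, a linked.jsonl pointing anywhere is caught.
func resolveSessionPath(projectsDir, id string) (string, error) {
	if strings.Contains(id, "..") || strings.ContainsAny(id, `/\`) {
		return "", fmt.Errorf("invalid session ID %q", id)
	}
	p := filepath.Join(projectsDir, id+".jsonl")
	info, err := os.Lstat(p)
	if err != nil {
		return "", err
	}
	if info.Mode()&os.ModeSymlink != 0 {
		return "", fmt.Errorf("refusing symlink %q", p)
	}
	return p, nil
}

func main() {
	dir, err := os.MkdirTemp("", "sessions")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	os.WriteFile(filepath.Join(dir, "ok.jsonl"), []byte("{}\n"), 0o644)
	os.Symlink(filepath.Join(dir, "ok.jsonl"), filepath.Join(dir, "linked.jsonl"))

	if _, err := resolveSessionPath(dir, "ok"); err != nil {
		panic(err)
	}
	_, err = resolveSessionPath(dir, "linked")
	fmt.Println(err != nil) // true: rejected even though this link's target is inside dir
}
```

Rejecting any symlink, rather than resolving and comparing targets, trades a little flexibility for a check with no TOCTOU window on the target path.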

## 4. Mantis #669 ParseStats RFC audit

Status: NOTABUG

Question: Does `ParseStats` match RFC §3 field-for-field, especially `Warnings` and `OrphanedToolCalls`?

Finding: Yes. RFC §3 specifies `TotalLines int`, `SkippedLines int`, `OrphanedToolCalls int`, and `Warnings []string`; `parser.go` defines exactly those fields and types. `Warnings` is a string slice, not a plain string, and `OrphanedToolCalls` is an integer counter, not a boolean or string.

Coverage: `TestParser_ParseStatsOrphanedToolCalls_Ugly` covers unmatched `tool_use` records without matching `tool_result` records and asserts `ParseStats.OrphanedToolCalls > 0`.
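For reference, the RFC §3 shape the audit confirmed looks like this — a sketch of the field names and types quoted above, not a copy of `parser.go`:

```go
package main

import "fmt"

// ParseStats sketches the RFC §3 shape: integer counters plus a warning
// slice — never a boolean flag or a single concatenated string.
type ParseStats struct {
	TotalLines        int
	SkippedLines      int
	OrphanedToolCalls int
	Warnings          []string
}

func main() {
	st := ParseStats{TotalLines: 10, SkippedLines: 2, OrphanedToolCalls: 1}
	st.Warnings = append(st.Warnings, "orphaned tool_use at line 7")
	fmt.Println(st.TotalLines, st.OrphanedToolCalls, len(st.Warnings)) // 10 1 1
}
```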
110	video.go
@@ -2,48 +2,48 @@
 package session
 
 import (
-	"fmt"
-	"os"
-	"os/exec"
-	"strings"
+	"io/fs" // Note: intrinsic — fs.FileInfo metadata for executable checks from hostFS.Stat; no core equivalent
+	"path"  // Note: intrinsic — PATH candidate and temporary tape path construction; no core equivalent
 
-	coreerr "dappco.re/go/core/log"
+	core "dappco.re/go/core"
 )
 
 // RenderMP4 generates an MP4 video from session events using VHS (charmbracelet).
 //
 // Example:
 //	err := session.RenderMP4(sess, "/tmp/session.mp4")
 func RenderMP4(sess *Session, outputPath string) error {
-	if _, err := exec.LookPath("vhs"); err != nil {
-		return coreerr.E("RenderMP4", "vhs not installed (go install github.com/charmbracelet/vhs@latest)", nil)
+	vhsPath := lookupExecutable("vhs")
+	if vhsPath == "" {
+		return core.E("RenderMP4", "vhs not installed (go install github.com/charmbracelet/vhs@latest)", nil)
 	}
 
 	tape := generateTape(sess, outputPath)
 
-	tmpFile, err := os.CreateTemp("", "session-*.tape")
-	if err != nil {
-		return coreerr.E("RenderMP4", "create tape", err)
+	tmpDir := hostFS.TempDir("session-")
+	if tmpDir == "" {
+		return core.E("RenderMP4", "failed to create temp dir", nil)
 	}
-	defer os.Remove(tmpFile.Name())
+	defer hostFS.DeleteAll(tmpDir)
 
-	if _, err := tmpFile.WriteString(tape); err != nil {
-		tmpFile.Close()
-		return coreerr.E("RenderMP4", "write tape", err)
+	tapePath := path.Join(tmpDir, core.Concat(core.ID(), ".tape"))
+	writeResult := hostFS.Write(tapePath, tape)
+	if !writeResult.OK {
+		return core.E("RenderMP4", "write tape", resultError(writeResult))
 	}
-	tmpFile.Close()
 
-	cmd := exec.Command("vhs", tmpFile.Name())
-	cmd.Stdout = os.Stdout
-	cmd.Stderr = os.Stderr
-	if err := cmd.Run(); err != nil {
-		return coreerr.E("RenderMP4", "vhs render", err)
+	if err := runCommand(vhsPath, tapePath); err != nil {
+		return core.E("RenderMP4", "vhs render", err)
 	}
 
 	return nil
 }
 
 // generateTape builds the VHS script used to render a session video.
 func generateTape(sess *Session, outputPath string) string {
-	var b strings.Builder
+	b := core.NewBuilder()
 
-	b.WriteString(fmt.Sprintf("Output %s\n", outputPath))
+	b.WriteString(core.Sprintf("Output %s\n", outputPath))
 	b.WriteString("Set FontSize 16\n")
 	b.WriteString("Set Width 1400\n")
 	b.WriteString("Set Height 800\n")
@@ -57,7 +57,7 @@ func generateTape(sess *Session, outputPath string) string {
 	if len(id) > 8 {
 		id = id[:8]
 	}
-	b.WriteString(fmt.Sprintf("Type \"# Session %s | %s\"\n",
+	b.WriteString(core.Sprintf("Type \"# Session %s | %s\"\n",
 		id, sess.StartTime.Format("2006-01-02 15:04")))
 	b.WriteString("Enter\n")
 	b.WriteString("Sleep 2s\n")
@@ -75,7 +75,7 @@ func generateTape(sess *Session, outputPath string) string {
 			continue
 		}
 		// Show the command
-		b.WriteString(fmt.Sprintf("Type %q\n", "$ "+cmd))
+		b.WriteString(core.Sprintf("Type %q\n", "$ "+cmd))
 		b.WriteString("Enter\n")
 
 		// Show abbreviated output
@@ -84,11 +84,11 @@ func generateTape(sess *Session, outputPath string) string {
 			output = output[:200] + "..."
 		}
 		if output != "" {
-			for line := range strings.SplitSeq(output, "\n") {
+			for _, line := range core.Split(output, "\n") {
 				if line == "" {
 					continue
 				}
-				b.WriteString(fmt.Sprintf("Type %q\n", line))
+				b.WriteString(core.Sprintf("Type %q\n", line))
 				b.WriteString("Enter\n")
 			}
 		}
@@ -104,14 +104,14 @@ func generateTape(sess *Session, outputPath string) string {
 		b.WriteString("\n")
 
 	case "Read", "Edit", "Write":
-		b.WriteString(fmt.Sprintf("Type %q\n",
-			fmt.Sprintf("# %s: %s", evt.Tool, truncate(evt.Input, 80))))
+		b.WriteString(core.Sprintf("Type %q\n",
+			core.Sprintf("# %s: %s", evt.Tool, truncate(evt.Input, 80))))
 		b.WriteString("Enter\n")
 		b.WriteString("Sleep 500ms\n")
 
 	case "Task":
-		b.WriteString(fmt.Sprintf("Type %q\n",
-			fmt.Sprintf("# Agent: %s", truncate(evt.Input, 80))))
+		b.WriteString(core.Sprintf("Type %q\n",
+			core.Sprintf("# Agent: %s", truncate(evt.Input, 80))))
 		b.WriteString("Enter\n")
 		b.WriteString("Sleep 1s\n")
 	}
@@ -121,10 +121,58 @@ func generateTape(sess *Session, outputPath string) string {
 	return b.String()
 }
 
 // extractCommand removes a human description suffix from a Bash tool input.
 func extractCommand(input string) string {
 	// Remove description suffix (after " # ")
-	if idx := strings.Index(input, " # "); idx > 0 {
+	if idx := indexOf(input, " # "); idx > 0 {
 		return input[:idx]
 	}
 	return input
 }
+
+// lookupExecutable resolves an executable name from PATH or validates a direct path.
+func lookupExecutable(name string) string {
+	if name == "" {
+		return ""
+	}
+	if containsAny(name, `/\`) {
+		if isExecutablePath(name) {
+			return name
+		}
+		return ""
+	}
+
+	for _, dir := range core.Split(core.Env("PATH"), ":") {
+		if dir == "" {
+			dir = "."
+		}
+		candidate := path.Join(dir, name)
+		if isExecutablePath(candidate) {
+			return candidate
+		}
+	}
+	return ""
+}
+
+// isExecutablePath reports whether filePath is an executable regular file.
+func isExecutablePath(filePath string) bool {
+	statResult := hostFS.Stat(filePath)
+	if !statResult.OK {
+		return false
+	}
+	info, ok := statResult.Value.(fs.FileInfo)
+	if !ok || info.IsDir() {
+		return false
+	}
+	return info.Mode()&0111 != 0
+}
+
+// runCommand executes an external command through the core process abstraction.
+func runCommand(command string, args ...string) error {
+	c := sessionCore(nil)
+	runResult := hostProcess(c).Run(hostContext(c), command, args...)
+	if runResult.OK {
+		return nil
+	}
+	return core.E("runCommand", "run command", resultError(runResult))
+}
105	video_test.go
@@ -2,16 +2,14 @@
 package session
 
 import (
-	"os/exec"
-	"strings"
 	"testing"
 	"time"
 
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
+	core "dappco.re/go/core"
 )
 
-func TestGenerateTape_BasicSession_Good(t *testing.T) {
+// TestVideo_GenerateTapeBasicSession_Good verifies the behaviour covered by this test case.
+func TestVideo_GenerateTapeBasicSession_Good(t *testing.T) {
 	sess := &Session{
 		ID:        "tape-test-12345678",
 		StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
@@ -35,17 +33,18 @@ func TestGenerateTape_BasicSession_Good(t *testing.T) {
 
 	tape := generateTape(sess, "/tmp/output.mp4")
 
-	assert.Contains(t, tape, "Output /tmp/output.mp4")
-	assert.Contains(t, tape, "Set FontSize 16")
-	assert.Contains(t, tape, "tape-tes") // shortID
-	assert.Contains(t, tape, "2026-02-20 10:00")
-	assert.Contains(t, tape, `"$ go test ./..."`)
-	assert.Contains(t, tape, "PASS")
-	assert.Contains(t, tape, `"# ✓ OK"`)
-	assert.Contains(t, tape, "# Read: /tmp/file.go")
+	assertContains(t, tape, "Output /tmp/output.mp4")
+	assertContains(t, tape, "Set FontSize 16")
+	assertContains(t, tape, "tape-tes") // shortID
+	assertContains(t, tape, "2026-02-20 10:00")
+	assertContains(t, tape, `"$ go test ./..."`)
+	assertContains(t, tape, "PASS")
+	assertContains(t, tape, `"# ✓ OK"`)
+	assertContains(t, tape, "# Read: /tmp/file.go")
 }
 
-func TestGenerateTape_SkipsNonToolEvents_Good(t *testing.T) {
+// TestVideo_GenerateTapeSkipsNonToolEvents_Good verifies the behaviour covered by this test case.
+func TestVideo_GenerateTapeSkipsNonToolEvents_Good(t *testing.T) {
 	sess := &Session{
 		ID:        "skip-test",
 		StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
@@ -59,13 +58,14 @@ func TestGenerateTape_SkipsNonToolEvents_Good(t *testing.T) {
 	tape := generateTape(sess, "/tmp/out.mp4")
 
 	// User and assistant events should NOT appear in the tape
-	assert.NotContains(t, tape, "Hello")
-	assert.NotContains(t, tape, "Hi there")
+	assertNotContains(t, tape, "Hello")
+	assertNotContains(t, tape, "Hi there")
 	// Bash command should appear
-	assert.Contains(t, tape, "echo hi")
+	assertContains(t, tape, "echo hi")
 }
 
-func TestGenerateTape_FailedCommand_Good(t *testing.T) {
+// TestVideo_GenerateTapeFailedCommand_Good verifies the behaviour covered by this test case.
+func TestVideo_GenerateTapeFailedCommand_Good(t *testing.T) {
 	sess := &Session{
 		ID:        "fail-test",
 		StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
@@ -81,10 +81,11 @@ func TestGenerateTape_FailedCommand_Good(t *testing.T) {
 	}
 
 	tape := generateTape(sess, "/tmp/out.mp4")
-	assert.Contains(t, tape, `"# ✗ FAILED"`)
+	assertContains(t, tape, `"# ✗ FAILED"`)
 }
 
-func TestGenerateTape_LongOutput_Good(t *testing.T) {
+// TestVideo_GenerateTapeLongOutput_Good verifies the behaviour covered by this test case.
+func TestVideo_GenerateTapeLongOutput_Good(t *testing.T) {
 	sess := &Session{
 		ID:        "long-test",
 		StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
@@ -93,7 +94,7 @@ func TestGenerateTape_LongOutput_Good(t *testing.T) {
 				Type:    "tool_use",
 				Tool:    "Bash",
 				Input:   "cat huge.log",
-				Output:  strings.Repeat("x", 300),
+				Output:  repeatString("x", 300),
 				Success: true,
 			},
 		},
@@ -101,10 +102,11 @@ func TestGenerateTape_LongOutput_Good(t *testing.T) {
 
 	tape := generateTape(sess, "/tmp/out.mp4")
 	// Output should be truncated to 200 chars + "..."
-	assert.Contains(t, tape, "...")
+	assertContains(t, tape, "...")
 }
 
-func TestGenerateTape_TaskEvent_Good(t *testing.T) {
+// TestVideo_GenerateTapeTaskEvent_Good verifies the behaviour covered by this test case.
+func TestVideo_GenerateTapeTaskEvent_Good(t *testing.T) {
 	sess := &Session{
 		ID:        "task-test",
 		StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
@@ -118,10 +120,11 @@ func TestGenerateTape_TaskEvent_Good(t *testing.T) {
 	}
 
 	tape := generateTape(sess, "/tmp/out.mp4")
-	assert.Contains(t, tape, "# Agent: [research] Analyse code structure")
+	assertContains(t, tape, "# Agent: [research] Analyse code structure")
 }
 
-func TestGenerateTape_EditWriteEvents_Good(t *testing.T) {
+// TestVideo_GenerateTapeEditWriteEvents_Good verifies the behaviour covered by this test case.
+func TestVideo_GenerateTapeEditWriteEvents_Good(t *testing.T) {
 	sess := &Session{
 		ID:        "edit-test",
 		StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
@@ -132,11 +135,12 @@ func TestGenerateTape_EditWriteEvents_Good(t *testing.T) {
 	}
 
 	tape := generateTape(sess, "/tmp/out.mp4")
-	assert.Contains(t, tape, "# Edit: /tmp/app.go (edit)")
-	assert.Contains(t, tape, "# Write: /tmp/new.go (50 bytes)")
+	assertContains(t, tape, "# Edit: /tmp/app.go (edit)")
+	assertContains(t, tape, "# Write: /tmp/new.go (50 bytes)")
 }
 
-func TestGenerateTape_EmptySession_Good(t *testing.T) {
+// TestVideo_GenerateTapeEmptySession_Good verifies the behaviour covered by this test case.
+func TestVideo_GenerateTapeEmptySession_Good(t *testing.T) {
 	sess := &Session{
 		ID:        "empty-test",
 		StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
@@ -146,21 +150,22 @@ func TestGenerateTape_EmptySession_Good(t *testing.T) {
 	tape := generateTape(sess, "/tmp/out.mp4")
 
 	// Should still have the header and trailer
-	assert.Contains(t, tape, "Output /tmp/out.mp4")
-	assert.Contains(t, tape, "Sleep 3s")
+	assertContains(t, tape, "Output /tmp/out.mp4")
+	assertContains(t, tape, "Sleep 3s")
 	// No tool events
-	lines := strings.Split(tape, "\n")
+	lines := core.Split(tape, "\n")
 	var toolLines int
 	for _, line := range lines {
-		if strings.Contains(line, "$ ") || strings.Contains(line, "# Read:") ||
-			strings.Contains(line, "# Edit:") || strings.Contains(line, "# Write:") {
+		if core.Contains(line, "$ ") || core.Contains(line, "# Read:") ||
+			core.Contains(line, "# Edit:") || core.Contains(line, "# Write:") {
 			toolLines++
 		}
 	}
-	assert.Equal(t, 0, toolLines)
+	assertEqual(t, 0, toolLines)
 }
 
-func TestGenerateTape_BashEmptyCommand_Bad(t *testing.T) {
+// TestVideo_GenerateTapeBashEmptyCommand_Bad verifies the behaviour covered by this test case.
+func TestVideo_GenerateTapeBashEmptyCommand_Bad(t *testing.T) {
 	sess := &Session{
 		ID:        "empty-cmd",
 		StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
@@ -171,28 +176,32 @@ func TestGenerateTape_BashEmptyCommand_Bad(t *testing.T) {
 
 	tape := generateTape(sess, "/tmp/out.mp4")
 	// Empty command should be skipped (extractCommand returns "")
-	assert.NotContains(t, tape, `"$ "`)
+	assertNotContains(t, tape, `"$ "`)
 }
 
-func TestExtractCommand_Good(t *testing.T) {
-	assert.Equal(t, "ls -la", extractCommand("ls -la # list files"))
-	assert.Equal(t, "go test ./...", extractCommand("go test ./..."))
-	assert.Equal(t, "echo hello", extractCommand("echo hello"))
+// TestVideo_ExtractCommandStripsDescriptionSuffix_Good verifies the behaviour covered by this test case.
+func TestVideo_ExtractCommandStripsDescriptionSuffix_Good(t *testing.T) {
+	assertEqual(t, "ls -la", extractCommand("ls -la # list files"))
+	assertEqual(t, "go test ./...", extractCommand("go test ./..."))
+	assertEqual(t, "echo hello", extractCommand("echo hello"))
 }
 
-func TestExtractCommand_NoDescription_Good(t *testing.T) {
-	assert.Equal(t, "plain command", extractCommand("plain command"))
+// TestVideo_ExtractCommandNoDescription_Good verifies the behaviour covered by this test case.
+func TestVideo_ExtractCommandNoDescription_Good(t *testing.T) {
+	assertEqual(t, "plain command", extractCommand("plain command"))
 }
 
-func TestExtractCommand_DescriptionAtStart_Good(t *testing.T) {
+// TestVideo_ExtractCommandDescriptionAtStart_Good verifies the behaviour covered by this test case.
+func TestVideo_ExtractCommandDescriptionAtStart_Good(t *testing.T) {
 	// " # " at position 0 means idx <= 0, so it returns the whole input
 	result := extractCommand(" # description only")
-	assert.Equal(t, " # description only", result)
+	assertEqual(t, " # description only", result)
 }
 
-func TestRenderMP4_NoVHS_Ugly(t *testing.T) {
+// TestVideo_RenderMP4NoVHS_Ugly verifies the behaviour covered by this test case.
+func TestVideo_RenderMP4NoVHS_Ugly(t *testing.T) {
 	// Skip if vhs is actually installed (this tests the error path)
-	if _, err := exec.LookPath("vhs"); err == nil {
+	if lookupExecutable("vhs") != "" {
 		t.Skip("vhs is installed; skipping missing-vhs test")
 	}
 
@@ -202,6 +211,6 @@ func TestRenderMP4_NoVHS_Ugly(t *testing.T) {
 	}
 
 	err := RenderMP4(sess, "/tmp/test.mp4")
-	require.Error(t, err)
-	assert.Contains(t, err.Error(), "vhs not installed")
+	requireError(t, err)
+	assertContains(t, err.Error(), "vhs not installed")
 }