Compare commits


24 commits
main ... dev

Author SHA1 Message Date
Snider
1e965de24f chore(repo): refresh submodules + go.work hygiene (Phase 2 cascade unblock)
Some checks are pending
Security Scan / security (push) Waiting to run
Test / test (push) Waiting to run
- git submodule update on external/* to current dev tips
- go.work paths fixed for Phase 1 /go/ subtree layout where stale
- go.work go-version bumped 1.26.0 → 1.26.2 to match submodule floor

Workspace-mode build (`go build ./...`) is the verification path. Some
repos may surface transitive dep issues (api/go.sum checksum drift, etc.)
which are separate cascade tickets — not blocking this metadata refresh.

Co-Authored-By: Cladius Maximus <cladius@lethean.io>
2026-05-01 09:41:20 +01:00
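For orientation, a minimal `go.work` matching the layout this commit describes might look like the sketch below. The `use` paths follow the AGENTS.md description elsewhere in this diff (`./go` plus the core dependency under `./external/go`) and the toolchain line is the 1.26.2 floor named in the commit; the real file may list additional modules.

```
go 1.26.2

use (
	./go
	./external/go
)
```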
Snider
c9f7d971c8 chore: add EUPL-1.2 LICENCE file (UK English canonical)
Reference: core/api/LICENCE.

Co-Authored-By: Cladius Maximus <cladius@lethean.io>
2026-05-01 08:35:03 +01:00
Snider
e1b0fb152a refactor(go): restructure to /go/ subtree + audit COMPLIANT (Mantis #1249)
audit.sh verdict: COMPLIANT. Build/vet/test all clean.

Closes tasks.lthn.sh/view.php?id=1249

Co-authored-by: Codex <noreply@openai.com>
2026-05-01 01:43:32 +01:00
Snider
76913cbc58 ci: woodpecker pipeline (Go) — golangci-lint/eslint/phpstan + sonar.lthn.sh 2026-04-29 00:03:20 +01:00
Snider
325454e9ea ci: woodpecker pipeline (Go) — golangci-lint/eslint/phpstan + sonar.lthn.sh 2026-04-28 23:33:35 +01:00
Snider
e661e275c1 refactor(core): full v0.9.0 compliance against core/go reference
bash /tmp/v090/audit.sh . → verdict: COMPLIANT (all 7 dimensions zero).

Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-28 18:55:05 +01:00
Snider
36ca98652b fix(session): r3 — coreerr.E error wrapping in parser scanner + ListSessionsSeq logging on PR #5
Some checks failed
Security Scan / security (push) Has been cancelled
Test / test (push) Has been cancelled
Round 3 follow-up to 92ecdda.

Code:
- parser.go: scanTranscriptLines uses coreerr.E for line-size errors;
  read failures now wrapped with coreerr.E (was returning raw)
- parser.go: FetchSession preserves openTranscriptNoFollow cause
- parser.go: ListSessionsSeq logs skipped open/scan/close failures
  (was silently discarding)

Verification: gofmt clean, golangci-lint v2 0 issues, GOWORK=off
go vet + go test -count=1 ./... pass with explicit cache paths.

Closes residual r3 findings on https://github.com/dAppCore/go-session/pull/5

Co-authored-by: Codex <noreply@openai.com>
2026-04-27 19:10:34 +01:00
Snider
92ecddaa69 fix(session): r2 — platform-split no-follow + recursive convention scan + doc fixes on PR #5
Round 2 follow-up to 8ffd10c.

Code:
- parser_unix.go (new): Unix O_NOFOLLOW implementation
- parser_other.go (new): non-Unix fallback
- parser.go: removed syscall import; syscall failures wrapped via
  coreerr.E
- tests/cli/session/main.go: smoke driver uses core path/fs/string
  helpers (was using direct os + filepath + strings)

Tests:
- conventions_test.go: recursive Go file collection + nested-file test
  case (was non-recursive, missing nested files)

Doc:
- README.md: quick-start compile fix (fmt import + discard unused
  parse stats)
- kb/Home.md: ParseTranscript signature aligned to current API
  (captures and uses stats)

Verification: gofmt clean, golangci-lint v2 0 issues, GOWORK=off
go vet + go test -count=1 ./... pass with explicit cache paths.
AX-6 clean: no testify references; smoke driver uses core helpers.

Closes residual findings on https://github.com/dAppCore/go-session/pull/5

Co-authored-by: Codex <noreply@openai.com>
2026-04-27 18:48:40 +01:00
Snider
8ffd10c2ac fix(session): address all CodeRabbit findings on PR #5
6+ findings dispositioned. AX-6 maintained (stale testify refs removed).

Code:
- parser_test.go: fixed EOF-truncated JSONL fixtures
- parser.go: ListSessionsSeq skips transcripts when quick scan fails;
  added oversized-line coverage
- parser.go: symlink pre-check replaced with O_NOFOLLOW descriptor
  opens + Fstat for FetchSession and ListSessionsSeq (TOCTOU-safe)
- test_helpers_test.go: assert* helpers changed from fatal to
  non-fatal reporting
- tests/cli/session/main.go: derived expectations from current code
  (CodeRabbit's suggested literals were incorrect for current impl)
  + filepath.Join nit; preserved correct behaviour

CI / config:
- .golangci.yml: migrated to v2 schema
- tests/cli/session/Taskfile.yaml: 'test' broadened to run go vet +
  go test + CLI smoke
- PR title: made specific

Doc:
- AX-2 docstring coverage: comments added to all Go funcs in touched
  files (closes pre-merge docstring warning)
- README + CLAUDE.md + CODEX.md + CONTEXT.md + TODO.md +
  docs/{architecture,development,index}.md + kb/Home.md: removed
  stale testify references, aligned to stdlib testing

Disposition:
- SonarCloud / GHAS: no separate PR comments/checks; gh pr checks
  only reports CodeRabbit. RESOLVED-COMMENT.

Verification: gofmt clean, golangci-lint v2 0 issues, GOWORK=off
go vet + go test -count=1 ./... pass with explicit cache paths,
task -d tests/cli/session clean.

Closes findings on https://github.com/dAppCore/go-session/pull/5

Co-authored-by: Codex <noreply@openai.com>
2026-04-27 18:17:50 +01:00
Snider
209166507b fix(go-session): replace syscall.ForkExec with c.Process() in video.go
Replace syscall.ForkExec/Wait4 invocation of vhs with c.Process().Run(ctx,
vhsPath, tapePath) per Core v0.8 process primitive. Threads *core.Core
dependency through core_helpers.go Process accessor. Removes syscall
import.

GOWORK=off go test passes. Live MP4 render not validated in sandbox (no
vhs on PATH).

Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=399
2026-04-25 21:20:20 +01:00
Snider
74084f37b9 fix(session): AX-6 sweep on parser.go (#398) — bufio/maps/path → core
Removed bufio (replaced with local streaming line reader capped at 8 MiB),
maps (replaced with explicit for range), and path (replaced with
core.CleanPath / core.JoinPath). Preserves transcript line handling.

Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=398
2026-04-25 15:10:48 +01:00
Snider
e22f44c2c7 docs(session): confirm ParseStats matches RFC §3 (#669, audit NOTABUG)
ParseStats audit complete: RFC §3 specifies TotalLines int, SkippedLines
int, OrphanedToolCalls int, Warnings []string. parser.go defines those
exact fields + types. parser.go unchanged.

parser_test.go: TestParser_ParseStatsOrphanedToolCalls_Ugly now
classified _Ugly explicitly (covers tool_use without matching
tool_result, asserts OrphanedToolCalls > 0).

threats.md gains NOTABUG audit note for #669.

Race PASS.

Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=669
2026-04-25 07:28:26 +01:00
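The field set the audit confirms is stated exactly above (RFC §3), so it can be reproduced as a stand-alone struct for reference. This is a copy of the audited shape, not an import from the repo; the values in `main` are illustrative.

```go
// ParseStats as audited: counters for processed and skipped lines, orphaned
// tool_use entries that never saw a matching tool_result, and free-form
// warnings collected during parsing.
package main

import "fmt"

type ParseStats struct {
	TotalLines        int
	SkippedLines      int
	OrphanedToolCalls int
	Warnings          []string
}

func main() {
	stats := ParseStats{
		TotalLines:        120,
		SkippedLines:      3,
		OrphanedToolCalls: 1,
		Warnings:          []string{"line 57: unknown entry type"},
	}
	// The _Ugly test described above asserts exactly this property: a
	// transcript with tool_use but no tool_result reports orphans > 0.
	fmt.Println(stats.OrphanedToolCalls > 0)
}
```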
Snider
5c40e4c5a2 fix(session): close threat-model audit findings (Cerberus #921)
- Bounded pendingTools map at 4096 entries to cap memory growth
- Reduced scanner initial allocation 8MiB→64KiB (max stays 8MiB)
- Truncated tool input before storing in pendingTools
- Rejected/skipped symlinks in FetchSession + ListSessions
- Added _Bad/_Ugly tests: deeply-nested JSON, unexpected tool input/result types, lone UTF-16 surrogate halves, URL-encoded path-traversal, FetchSession + ListSessions symlink traversal

Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=921
2026-04-25 03:37:30 +01:00
Codex
3b6972785d feat(go-session): scaffold tests/cli/session Taskfile + test driver per AX-10
tests/cli/session/Taskfile.yaml + tests/cli/session/main.go — driver
builds a synthetic JSONL session and exercises parse, analytics,
search, list, fetch, and HTML rendering paths.

Verification: task -d tests/cli/session + go test + go vet all pass.

Closes tasks.lthn.sh/view.php?id=670

Co-authored-by: Codex <noreply@openai.com>
2026-04-24 22:44:55 +01:00
Codex
897bef1c30 chore(go-session): minor stale path cleanup per AX-6
Module line already migrated in #805 (a83fafb). conventions_test.go
had one stale doc reference `dappco.re/go/core/...` — rewritten to
`dappco.re/go/...`. go.mod clean; go test ./... passes.

Closes tasks.lthn.sh/view.php?id=666

Co-authored-by: Codex <noreply@openai.com>
2026-04-24 21:58:01 +01:00
Codex
a83fafbde7 chore(go-session): migrate module path to dappco.re/go/session
Dropped the stale `core` segment per RFC, aligning with graduated
repos: dappco.re/go/{name}. No *.go self-imports existed — go.mod
single-line change. `go build ./...` passes.

Closes tasks.lthn.sh/view.php?id=805

Co-authored-by: Codex <noreply@openai.com>
2026-04-24 20:18:03 +01:00
Codex
27dd3bbbb4 fix(go-session): annotate intrinsic banned imports per AX-6
Closes tasks.lthn.sh/view.php?id=668

Co-authored-by: Codex <noreply@openai.com>
2026-04-24 19:26:24 +01:00
Codex
05f8a0050c fix(go-session): replace testify with stdlib testing patterns (AX-6)
Removes github.com/stretchr/testify from go.mod/go.sum; rewrites
assert/require calls across root _test.go files to stdlib-backed
local helpers. Adds test_helpers_test.go for shared assertion
helpers. go mod tidy + go vet + go test all clean.

Closes tasks.lthn.sh/view.php?id=806

Co-authored-by: Codex <noreply@openai.com>
Via-codex-lane: Cladius-solo dispatch (Mac codex CLI)
2026-04-24 17:37:40 +01:00
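The stdlib-backed helpers this commit introduces might look like the sketch below. The helper name and the `failer` interface are illustrative (the repo's versions live in `test_helpers_test.go` and take `*testing.T` directly); `recorder` exists only so the sketch is runnable outside a test binary. Reporting via `Errorf` keeps the assertion non-fatal.

```go
// Sketch: a testify-free assertion helper backed by reflect.DeepEqual,
// demonstrated against a recorder that stands in for *testing.T.
package main

import (
	"fmt"
	"reflect"
)

// failer is the slice of *testing.T the helper needs; real tests pass t.
type failer interface {
	Helper()
	Errorf(format string, args ...any)
}

// assertEqual reports a mismatch without stopping the test.
func assertEqual(t failer, got, want any) {
	t.Helper()
	if !reflect.DeepEqual(got, want) {
		t.Errorf("got %v, want %v", got, want)
	}
}

// recorder collects failures so the sketch can run under main.
type recorder struct{ failures []string }

func (r *recorder) Helper() {}
func (r *recorder) Errorf(format string, args ...any) {
	r.failures = append(r.failures, fmt.Sprintf(format, args...))
}

func main() {
	r := &recorder{}
	assertEqual(r, 1+1, 2)          // passes, records nothing
	assertEqual(r, "colour", "color") // fails, records one message
	fmt.Println(len(r.failures))
}
```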
Virgil
36c184e7dd feat(html): add event permalinks
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m10s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-01 04:47:53 +00:00
Virgil
0ab8627447 chore(session): record verification pass
All checks were successful
Security Scan / security (push) Successful in 8s
Test / test (push) Successful in 43s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-27 03:34:03 +00:00
Virgil
3680aaf871 chore(session): enforce AX v0.8.0 conventions
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 18:59:53 +00:00
Virgil
d9a63f1981 chore(session): align with core v0.8.0-alpha.1
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 15:50:44 +00:00
Virgil
a7772087ae test(conventions): harden import and test checks
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 11:26:45 +00:00
Virgil
af4e1d6ae2 test(conventions): enforce AX review rules
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 11:14:35 +00:00
52 changed files with 2505 additions and 3353 deletions

.golangci.yml
@@ -1,3 +1,5 @@
+version: "2"
 run:
   timeout: 5m
   go: "1.26"
@@ -8,15 +10,15 @@ linters:
     - errcheck
     - staticcheck
     - unused
-    - gosimple
     - ineffassign
-    - typecheck
     - gocritic
-    - gofmt
   disable:
     - exhaustive
     - wrapcheck
+formatters:
+  enable:
+    - gofmt
 issues:
-  exclude-use-default: false
   max-same-issues: 0

.woodpecker.yml (new file)
@@ -0,0 +1,37 @@
# Woodpecker CI pipeline.
# Server: ci.lthn.sh. Lint + sonar in parallel, both depend only on clone.
# sonar_token is admin-scoped on the Woodpecker server.
when:
  - event: push
    branch: [dev, main]

steps:
  - name: golangci-lint
    image: golangci/golangci-lint:latest-alpine
    depends_on: []
    environment:
      GOFLAGS: -buildvcs=false
      GOWORK: "off"
    commands:
      - golangci-lint run --timeout=5m ./...

  - name: go-test
    image: golang:1.26-alpine
    depends_on: []
    environment:
      GOFLAGS: -buildvcs=false
      GOWORK: "off"
      CGO_ENABLED: "1"
    commands:
      - apk add --no-cache git build-base
      - go test -race -coverprofile=coverage.out -covermode=atomic -count=1 ./...

  - name: sonar
    image: sonarsource/sonar-scanner-cli:latest
    depends_on: [go-test]
    environment:
      SONAR_HOST_URL: https://sonar.lthn.sh
      SONAR_TOKEN:
        from_secret: sonar_token
    commands:
      - sonar-scanner

AGENTS.md (new file)
@@ -0,0 +1,8 @@
# AGENTS.md
This repository follows the v0.9.0 core/go audit contract. Go source lives in
the `go/` subtree, and local development uses `go.work` with `./go` plus the
core dependency under `./external/go`.
Use core/go primitives directly instead of banned stdlib imports. Public
symbols require file-local Good, Bad, and Ugly tests plus examples.

CLAUDE.md
@@ -2,7 +2,7 @@
 This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
-Claude Code JSONL transcript parser, analytics engine, and HTML/video renderer. Module: `dappco.re/go/core/session`
+Claude Code JSONL transcript parser, analytics engine, and HTML/video renderer. Module: `dappco.re/go/session`
 ## Commands
@@ -43,8 +43,12 @@ Coverage target: maintain ≥90.9%.
 - UK English throughout (colour, licence, initialise)
 - Explicit types on all function signatures and struct fields
+- Exported declarations must have Go doc comments beginning with the identifier name
 - `go test ./...` and `go vet ./...` must pass before commit
 - SPDX header on all source files: `// SPDX-Licence-Identifier: EUPL-1.2`
 - Error handling: all errors must use `coreerr.E(op, msg, err)` from `dappco.re/go/core/log`, never `fmt.Errorf` or `errors.New`
+- Banned imports in non-test Go files: `errors`, `github.com/pkg/errors`, and legacy `forge.lthn.ai/...` paths
 - Conventional commits: `type(scope): description`
 - Co-Author trailer: `Co-Authored-By: Virgil <virgil@lethean.io>`
+The conventions test suite enforces banned imports, exported usage comments, and test naming via `go test ./...`.

CODEX.md (new file)
@@ -0,0 +1,54 @@
# CODEX.md
This file provides guidance to Codex when working in this repository.
Claude Code JSONL transcript parser, analytics engine, and HTML/video renderer. Module: `dappco.re/go/session`
## Commands
```bash
go test ./... # Run all tests
go test -v -run TestFunctionName_Context # Run single test
go test -race ./... # Race detector
go test -bench=. -benchmem ./... # Benchmarks
go vet ./... # Vet
golangci-lint run ./... # Lint (optional, config in .golangci.yml)
```
## Architecture
Single-package library (`package session`) with five source files forming a pipeline:
1. **parser.go** — Core JSONL parser. Reads Claude Code session files line-by-line (8 MiB scanner buffer), correlates `tool_use`/`tool_result` pairs via a `pendingTools` map keyed by tool ID, and produces `Session` with `[]Event`. Also handles session listing, fetching, and pruning.
2. **analytics.go** — Pure computation over `[]Event`. `Analyse()` returns `SessionAnalytics` (per-tool counts, error rates, latency stats, token estimates). No I/O.
3. **html.go** — `RenderHTML()` generates a self-contained HTML file (inline CSS/JS, dark theme, collapsible panels, client-side search). All user content is `html.EscapeString`-escaped.
4. **video.go** — `RenderMP4()` generates a VHS `.tape` script and shells out to `vhs`. Requires `vhs` on PATH.
5. **search.go** — `Search()`/`SearchSeq()` perform cross-session case-insensitive substring search over tool event inputs and outputs.
Both slice-returning and `iter.Seq` variants exist for `ListSessions`, `Search`, and `Session.EventsSeq`.
### Adding a new tool type
Touch all layers: add input struct in `parser.go` → case in `extractToolInput` → label in `html.go` `RenderHTML` → tape entry in `video.go` `generateTape` → tests in `parser_test.go`.
## Testing
Tests are white-box (`package session`). Test helpers in `parser_test.go` build synthetic JSONL in-memory — no fixture files. Use `writeJSONL(t, dir, name, lines...)` and the entry builders (`toolUseEntry`, `toolResultEntry`, `userTextEntry`, `assistantTextEntry`).
Naming convention: `TestFile_Function_Good/Bad/Ugly` (group by file, collapse the specific behaviour into the function segment, and suffix with happy path / expected errors / extreme edge cases).
Coverage target: maintain ≥90.9%.
## Coding Standards
- UK English throughout (colour, licence, initialise)
- Explicit types on all function signatures and struct fields
- Exported declarations must have Go doc comments beginning with the identifier name and include an `Example:` usage snippet
- `go test ./...` and `go vet ./...` must pass before commit
- SPDX header on all source files: `// SPDX-Licence-Identifier: EUPL-1.2`
- Error handling: all package errors must use `core.E(op, msg, err)` from `dappco.re/go/core`; do not use `core.NewError`, `fmt.Errorf`, or `errors.New`
- Banned imports in non-test Go files: `errors`, `github.com/pkg/errors`, and legacy `forge.lthn.ai/...` paths
- Conventional commits: `type(scope): description`
- Co-Author trailer: `Co-Authored-By: Virgil <virgil@lethean.io>`
The conventions test suite enforces banned imports, exported usage comments, and test naming via `go test ./...`.

@@ -39,7 +39,7 @@ The input label adapts to the tool type:
 [go-session] Installation
 ```bash
-go get dappco.re/go/core/session@latest
+go get dappco.re/go/session@latest
 ```
 ### 5. go-session [convention] (score: -0.004)

LICENCE (new file)
@@ -0,0 +1,287 @@
EUROPEAN UNION PUBLIC LICENCE v. 1.2
EUPL © the European Union 2007, 2016
This European Union Public Licence (the EUPL) applies to the Work (as defined
below) which is provided under the terms of this Licence. Any use of the Work,
other than as authorised under this Licence is prohibited (to the extent such
use is covered by a right of the copyright holder of the Work).
The Work is provided under the terms of this Licence when the Licensor (as
defined below) has placed the following notice immediately following the
copyright notice for the Work:
Licensed under the EUPL
or has expressed by any other means his willingness to license under the EUPL.
1. Definitions
In this Licence, the following terms have the following meaning:
- The Licence: this Licence.
- The Original Work: the work or software distributed or communicated by the
Licensor under this Licence, available as Source Code and also as Executable
Code as the case may be.
- Derivative Works: the works or software that could be created by the
Licensee, based upon the Original Work or modifications thereof. This Licence
does not define the extent of modification or dependence on the Original Work
required in order to classify a work as a Derivative Work; this extent is
determined by copyright law applicable in the country mentioned in Article 15.
- The Work: the Original Work or its Derivative Works.
- The Source Code: the human-readable form of the Work which is the most
convenient for people to study and modify.
- The Executable Code: any code which has generally been compiled and which is
meant to be interpreted by a computer as a program.
- The Licensor: the natural or legal person that distributes or communicates
the Work under the Licence.
- Contributor(s): any natural or legal person who modifies the Work under the
Licence, or otherwise contributes to the creation of a Derivative Work.
- The Licensee or You: any natural or legal person who makes any usage of
the Work under the terms of the Licence.
- Distribution or Communication: any act of selling, giving, lending,
renting, distributing, communicating, transmitting, or otherwise making
available, online or offline, copies of the Work or providing access to its
essential functionalities at the disposal of any other natural or legal
person.
2. Scope of the rights granted by the Licence
The Licensor hereby grants You a worldwide, royalty-free, non-exclusive,
sublicensable licence to do the following, for the duration of copyright vested
in the Original Work:
- use the Work in any circumstance and for all usage,
- reproduce the Work,
- modify the Work, and make Derivative Works based upon the Work,
- communicate to the public, including the right to make available or display
the Work or copies thereof to the public and perform publicly, as the case may
be, the Work,
- distribute the Work or copies thereof,
- lend and rent the Work or copies thereof,
- sublicense rights in the Work or copies thereof.
Those rights can be exercised on any media, supports and formats, whether now
known or later invented, as far as the applicable law permits so.
In the countries where moral rights apply, the Licensor waives his right to
exercise his moral right to the extent allowed by law in order to make effective
the licence of the economic rights here above listed.
The Licensor grants to the Licensee royalty-free, non-exclusive usage rights to
any patents held by the Licensor, to the extent necessary to make use of the
rights granted on the Work under this Licence.
3. Communication of the Source Code
The Licensor may provide the Work either in its Source Code form, or as
Executable Code. If the Work is provided as Executable Code, the Licensor
provides in addition a machine-readable copy of the Source Code of the Work
along with each copy of the Work that the Licensor distributes or indicates, in
a notice following the copyright notice attached to the Work, a repository where
the Source Code is easily and freely accessible for as long as the Licensor
continues to distribute or communicate the Work.
4. Limitations on copyright
Nothing in this Licence is intended to deprive the Licensee of the benefits from
any exception or limitation to the exclusive rights of the rights owners in the
Work, of the exhaustion of those rights or of other applicable limitations
thereto.
5. Obligations of the Licensee
The grant of the rights mentioned above is subject to some restrictions and
obligations imposed on the Licensee. Those obligations are the following:
Attribution right: The Licensee shall keep intact all copyright, patent or
trademarks notices and all notices that refer to the Licence and to the
disclaimer of warranties. The Licensee must include a copy of such notices and a
copy of the Licence with every copy of the Work he/she distributes or
communicates. The Licensee must cause any Derivative Work to carry prominent
notices stating that the Work has been modified and the date of modification.
Copyleft clause: If the Licensee distributes or communicates copies of the
Original Works or Derivative Works, this Distribution or Communication will be
done under the terms of this Licence or of a later version of this Licence
unless the Original Work is expressly distributed only under this version of the
Licence — for example by communicating EUPL v. 1.2 only. The Licensee
(becoming Licensor) cannot offer or impose any additional terms or conditions on
the Work or Derivative Work that alter or restrict the terms of the Licence.
Compatibility clause: If the Licensee Distributes or Communicates Derivative
Works or copies thereof based upon both the Work and another work licensed under
a Compatible Licence, this Distribution or Communication can be done under the
terms of this Compatible Licence. For the sake of this clause, Compatible
Licence refers to the licences listed in the appendix attached to this Licence.
Should the Licensee's obligations under the Compatible Licence conflict with
his/her obligations under this Licence, the obligations of the Compatible
Licence shall prevail.
Provision of Source Code: When distributing or communicating copies of the Work,
the Licensee will provide a machine-readable copy of the Source Code or indicate
a repository where this Source will be easily and freely available for as long
as the Licensee continues to distribute or communicate the Work.
Legal Protection: This Licence does not grant permission to use the trade names,
trademarks, service marks, or names of the Licensor, except as required for
reasonable and customary use in describing the origin of the Work and
reproducing the content of the copyright notice.
6. Chain of Authorship
The original Licensor warrants that the copyright in the Original Work granted
hereunder is owned by him/her or licensed to him/her and that he/she has the
power and authority to grant the Licence.
Each Contributor warrants that the copyright in the modifications he/she brings
to the Work are owned by him/her or licensed to him/her and that he/she has the
power and authority to grant the Licence.
Each time You accept the Licence, the original Licensor and subsequent
Contributors grant You a licence to their contributions to the Work, under the
terms of this Licence.
7. Disclaimer of Warranty
The Work is a work in progress, which is continuously improved by numerous
Contributors. It is not a finished work and may therefore contain defects or
bugs inherent to this type of development.
For the above reason, the Work is provided under the Licence on an as is basis
and without warranties of any kind concerning the Work, including without
limitation merchantability, fitness for a particular purpose, absence of defects
or errors, accuracy, non-infringement of intellectual property rights other than
copyright as stated in Article 6 of this Licence.
This disclaimer of warranty is an essential part of the Licence and a condition
for the grant of any rights to the Work.
8. Disclaimer of Liability
Except in the cases of wilful misconduct or damages directly caused to natural
persons, the Licensor will in no event be liable for any direct or indirect,
material or moral, damages of any kind, arising out of the Licence or of the use
of the Work, including without limitation, damages for loss of goodwill, work
stoppage, computer failure or malfunction, loss of data or any commercial
damage, even if the Licensor has been advised of the possibility of such damage.
However, the Licensor will be liable under statutory product liability laws as
far such laws apply to the Work.
9. Additional agreements
While distributing the Work, You may choose to conclude an additional agreement,
defining obligations or services consistent with this Licence. However, if
accepting obligations, You may act only on your own behalf and on your sole
responsibility, not on behalf of the original Licensor or any other Contributor,
and only if You agree to indemnify, defend, and hold each Contributor harmless
for any liability incurred by, or claims asserted against such Contributor by
the fact You have accepted any warranty or additional liability.
10. Acceptance of the Licence
The provisions of this Licence can be accepted by clicking on an icon I agree
placed under the bottom of a window displaying the text of this Licence or by
affirming consent in any other similar way, in accordance with the rules of
applicable law. Clicking on that icon indicates your clear and irrevocable
acceptance of this Licence and all of its terms and conditions.
Similarly, you irrevocably accept this Licence and all of its terms and
conditions by exercising any rights granted to You by Article 2 of this Licence,
such as the use of the Work, the creation by You of a Derivative Work or the
Distribution or Communication by You of the Work or copies thereof.
11. Information to the public
In case of any Distribution or Communication of the Work by means of electronic
communication by You (for example, by offering to download the Work from a
remote location) the distribution channel or media (for example, a website) must
at least provide to the public the information requested by the applicable law
regarding the Licensor, the Licence and the way it may be accessible, concluded,
stored and reproduced by the Licensee.
12. Termination of the Licence
The Licence and the rights granted hereunder will terminate automatically upon
any breach by the Licensee of the terms of the Licence.
Such a termination will not terminate the licences of any person who has
received the Work from the Licensee under the Licence, provided such persons
remain in full compliance with the Licence.
13. Miscellaneous
Without prejudice of Article 9 above, the Licence represents the complete
agreement between the Parties as to the Work.
If any provision of the Licence is invalid or unenforceable under applicable
law, this will not affect the validity or enforceability of the Licence as a
whole. Such provision will be construed or reformed so as necessary to make it
valid and enforceable.
The European Commission may publish other linguistic versions or new versions of
this Licence or updated versions of the Appendix, so far this is required and
reasonable, without reducing the scope of the rights granted by the Licence. New
versions of the Licence will be published with a unique version number.
All linguistic versions of this Licence, approved by the European Commission,
have identical value. Parties can take advantage of the linguistic version of
their choice.
14. Jurisdiction
Without prejudice to specific agreement between parties,
- any litigation resulting from the interpretation of this License, arising
between the European Union institutions, bodies, offices or agencies, as a
Licensor, and any Licensee, will be subject to the jurisdiction of the Court
of Justice of the European Union, as laid down in article 272 of the Treaty on
the Functioning of the European Union,
- any litigation arising between other parties and resulting from the
interpretation of this License, will be subject to the exclusive jurisdiction
of the competent court where the Licensor resides or conducts its primary
business.
15. Applicable Law
Without prejudice to specific agreement between parties,
- this Licence shall be governed by the law of the European Union Member State
where the Licensor has his seat, resides or has his registered office,
- this licence shall be governed by Belgian law if the Licensor has no seat,
residence or registered office inside a European Union Member State.
Appendix
Compatible Licences according to Article 5 EUPL are:
- GNU General Public License (GPL) v. 2, v. 3
- GNU Affero General Public License (AGPL) v. 3
- Open Software License (OSL) v. 2.1, v. 3.0
- Eclipse Public License (EPL) v. 1.0
- CeCILL v. 2.0, v. 2.1
- Mozilla Public Licence (MPL) v. 2
- GNU Lesser General Public Licence (LGPL) v. 2.1, v. 3
- Creative Commons Attribution-ShareAlike v. 3.0 Unported (CC BY-SA 3.0) for
works other than software
- European Union Public Licence (EUPL) v. 1.1, v. 1.2
- Québec Free and Open-Source Licence — Reciprocity (LiLiQ-R) or Strong
Reciprocity (LiLiQ-R+).
The European Commission may update this Appendix to later versions of the above
licences without producing a new version of the EUPL, as long as they provide
the rights granted in Article 2 of this Licence and protect the covered Source
Code from exclusive appropriation.
All other changes or additions to this Appendix require the production of a new
EUPL version.

View file

@ -1,4 +1,4 @@
[![Go Reference](https://pkg.go.dev/badge/dappco.re/go/core/session.svg)](https://pkg.go.dev/dappco.re/go/core/session) [![Go Reference](https://pkg.go.dev/badge/dappco.re/go/session.svg)](https://pkg.go.dev/dappco.re/go/session)
[![License: EUPL-1.2](https://img.shields.io/badge/License-EUPL--1.2-blue.svg)](LICENSE.md)
[![Go Version](https://img.shields.io/badge/Go-1.26-00ADD8?style=flat&logo=go)](go.mod)
@ -6,16 +6,20 @@
Claude Code JSONL transcript parser, analytics engine, and HTML timeline renderer. Parses Claude Code session files into structured event arrays (tool calls with round-trip durations, user and assistant messages), computes per-tool analytics (call counts, error rates, average and peak latency, estimated token usage), renders self-contained HTML timelines with collapsible panels and client-side search, and generates VHS tape scripts for MP4 video output. No external runtime dependencies — stdlib only.
**Module**: `dappco.re/go/core/session` **Module**: `dappco.re/go/session`
**Licence**: EUPL-1.2
**Language**: Go 1.26
## Quick Start
```diff
-import "dappco.re/go/core/session"
+import (
+	"fmt"
+
+	"dappco.re/go/session"
+)

-sess, stats, err := session.ParseTranscript("/path/to/session.jsonl")
+sess, _, err := session.ParseTranscript("/path/to/session.jsonl")
 analytics := session.Analyse(sess)
 fmt.Println(session.FormatAnalytics(analytics))
```

View file

@ -3,7 +3,7 @@
## Task
Update go.mod require lines from forge.lthn.ai to dappco.re paths. Update versions: core v0.5.0, log v0.1.0, io v0.2.0. Update all .go import paths. Run go mod tidy and go build ./...
> **Status:** Complete. All module paths migrated to `dappco.re/go/core/...`. > **Status:** Complete. All module paths migrated to `dappco.re/go/...`.
## Checklist
- [x] Read and understand the codebase
@ -13,4 +13,3 @@ Update go.mod require lines from forge.lthn.ai to dappco.re paths. Update versio
- [ ] Commit with conventional commit message
## Context

View file

@ -1,286 +0,0 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"strings"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestAnalyse_EmptySession_Good(t *testing.T) {
sess := &Session{
ID: "empty",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
EndTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
Events: nil,
}
a := Analyse(sess)
require.NotNil(t, a)
assert.Equal(t, time.Duration(0), a.Duration)
assert.Equal(t, time.Duration(0), a.ActiveTime)
assert.Equal(t, 0, a.EventCount)
assert.Equal(t, 0.0, a.SuccessRate)
assert.Empty(t, a.ToolCounts)
assert.Empty(t, a.ErrorCounts)
assert.Equal(t, 0, a.EstimatedInputTokens)
assert.Equal(t, 0, a.EstimatedOutputTokens)
}
func TestAnalyse_NilSession_Good(t *testing.T) {
a := Analyse(nil)
require.NotNil(t, a)
assert.Equal(t, 0, a.EventCount)
}
func TestAnalyse_SingleToolCall_Good(t *testing.T) {
sess := &Session{
ID: "single",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
EndTime: time.Date(2026, 2, 20, 10, 0, 5, 0, time.UTC),
Events: []Event{
{
Timestamp: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
Type: "tool_use",
Tool: "Bash",
Input: "go test ./...",
Output: "PASS",
Duration: 2 * time.Second,
Success: true,
},
},
}
a := Analyse(sess)
assert.Equal(t, 5*time.Second, a.Duration)
assert.Equal(t, 2*time.Second, a.ActiveTime)
assert.Equal(t, 1, a.EventCount)
assert.Equal(t, 1.0, a.SuccessRate)
assert.Equal(t, 1, a.ToolCounts["Bash"])
assert.Equal(t, 0, a.ErrorCounts["Bash"])
assert.Equal(t, 2*time.Second, a.AvgLatency["Bash"])
assert.Equal(t, 2*time.Second, a.MaxLatency["Bash"])
}
func TestAnalyse_MixedToolsWithErrors_Good(t *testing.T) {
sess := &Session{
ID: "mixed",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
EndTime: time.Date(2026, 2, 20, 10, 5, 0, 0, time.UTC),
Events: []Event{
{
Type: "user",
Input: "Please help",
},
{
Type: "tool_use",
Tool: "Bash",
Input: "ls -la",
Output: "total 42",
Duration: 1 * time.Second,
Success: true,
},
{
Type: "tool_use",
Tool: "Bash",
Input: "cat /missing",
Output: "No such file",
Duration: 500 * time.Millisecond,
Success: false,
ErrorMsg: "No such file",
},
{
Type: "tool_use",
Tool: "Read",
Input: "/tmp/file.go",
Output: "package main",
Duration: 200 * time.Millisecond,
Success: true,
},
{
Type: "tool_use",
Tool: "Read",
Input: "/tmp/missing.go",
Output: "file not found",
Duration: 100 * time.Millisecond,
Success: false,
ErrorMsg: "file not found",
},
{
Type: "tool_use",
Tool: "Edit",
Input: "/tmp/file.go (edit)",
Output: "ok",
Duration: 300 * time.Millisecond,
Success: true,
},
{
Type: "assistant",
Input: "All done.",
},
},
}
a := Analyse(sess)
assert.Equal(t, 5*time.Minute, a.Duration)
assert.Equal(t, 7, a.EventCount)
// Tool counts
assert.Equal(t, 2, a.ToolCounts["Bash"])
assert.Equal(t, 2, a.ToolCounts["Read"])
assert.Equal(t, 1, a.ToolCounts["Edit"])
// Error counts
assert.Equal(t, 1, a.ErrorCounts["Bash"])
assert.Equal(t, 1, a.ErrorCounts["Read"])
assert.Equal(t, 0, a.ErrorCounts["Edit"])
// Success rate: 3 successes out of 5 tool calls = 0.6
assert.InDelta(t, 0.6, a.SuccessRate, 0.001)
// Active time: 1s + 500ms + 200ms + 100ms + 300ms = 2.1s
assert.Equal(t, 2100*time.Millisecond, a.ActiveTime)
}
func TestAnalyse_LatencyCalculations_Good(t *testing.T) {
sess := &Session{
ID: "latency",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
EndTime: time.Date(2026, 2, 20, 10, 1, 0, 0, time.UTC),
Events: []Event{
{
Type: "tool_use",
Tool: "Bash",
Duration: 1 * time.Second,
Success: true,
},
{
Type: "tool_use",
Tool: "Bash",
Duration: 3 * time.Second,
Success: true,
},
{
Type: "tool_use",
Tool: "Bash",
Duration: 5 * time.Second,
Success: true,
},
{
Type: "tool_use",
Tool: "Read",
Duration: 200 * time.Millisecond,
Success: true,
},
},
}
a := Analyse(sess)
// Bash: avg = (1+3+5)/3 = 3s, max = 5s
assert.Equal(t, 3*time.Second, a.AvgLatency["Bash"])
assert.Equal(t, 5*time.Second, a.MaxLatency["Bash"])
// Read: avg = 200ms, max = 200ms
assert.Equal(t, 200*time.Millisecond, a.AvgLatency["Read"])
assert.Equal(t, 200*time.Millisecond, a.MaxLatency["Read"])
}
func TestAnalyse_TokenEstimation_Good(t *testing.T) {
// 4 chars = ~1 token
sess := &Session{
ID: "tokens",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
EndTime: time.Date(2026, 2, 20, 10, 0, 1, 0, time.UTC),
Events: []Event{
{
Type: "user",
Input: strings.Repeat("a", 400), // 100 tokens
},
{
Type: "tool_use",
Tool: "Bash",
Input: strings.Repeat("b", 80), // 20 tokens
Output: strings.Repeat("c", 200), // 50 tokens
Duration: time.Second,
Success: true,
},
{
Type: "assistant",
Input: strings.Repeat("d", 120), // 30 tokens
},
},
}
a := Analyse(sess)
// Input tokens: 400/4 + 80/4 + 120/4 = 100 + 20 + 30 = 150
assert.Equal(t, 150, a.EstimatedInputTokens)
// Output tokens: 0 + 200/4 + 0 = 50
assert.Equal(t, 50, a.EstimatedOutputTokens)
}
func TestFormatAnalytics_Output_Good(t *testing.T) {
a := &SessionAnalytics{
Duration: 5 * time.Minute,
ActiveTime: 2 * time.Minute,
EventCount: 42,
SuccessRate: 0.85,
EstimatedInputTokens: 1500,
EstimatedOutputTokens: 3000,
ToolCounts: map[string]int{
"Bash": 20,
"Read": 15,
"Edit": 7,
},
ErrorCounts: map[string]int{
"Bash": 3,
},
AvgLatency: map[string]time.Duration{
"Bash": 2 * time.Second,
"Read": 500 * time.Millisecond,
"Edit": 300 * time.Millisecond,
},
MaxLatency: map[string]time.Duration{
"Bash": 10 * time.Second,
"Read": 1 * time.Second,
"Edit": 800 * time.Millisecond,
},
}
output := FormatAnalytics(a)
assert.Contains(t, output, "Session Analytics")
assert.Contains(t, output, "5m0s")
assert.Contains(t, output, "2m0s")
assert.Contains(t, output, "42")
assert.Contains(t, output, "85.0%")
assert.Contains(t, output, "1500")
assert.Contains(t, output, "3000")
assert.Contains(t, output, "Bash")
assert.Contains(t, output, "Read")
assert.Contains(t, output, "Edit")
assert.Contains(t, output, "Tool Breakdown")
}
func TestFormatAnalytics_EmptyAnalytics_Good(t *testing.T) {
a := &SessionAnalytics{
ToolCounts: make(map[string]int),
ErrorCounts: make(map[string]int),
AvgLatency: make(map[string]time.Duration),
MaxLatency: make(map[string]time.Duration),
}
output := FormatAnalytics(a)
assert.Contains(t, output, "Session Analytics")
assert.Contains(t, output, "0.0%")
// No tool breakdown section when no tools
assert.NotContains(t, output, "Tool Breakdown")
}

View file

@ -1,155 +0,0 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"fmt"
"os"
"path/filepath"
"strings"
"testing"
)
// BenchmarkParseTranscript benchmarks parsing a ~1MB+ JSONL file.
func BenchmarkParseTranscript(b *testing.B) {
dir := b.TempDir()
path := generateBenchJSONL(b, dir, 5000) // ~1MB+ of JSONL
b.ResetTimer()
b.ReportAllocs()
for b.Loop() {
sess, _, err := ParseTranscript(path)
if err != nil {
b.Fatal(err)
}
if len(sess.Events) == 0 {
b.Fatal("expected events")
}
}
}
// BenchmarkParseTranscript_Large benchmarks a larger ~5MB file.
func BenchmarkParseTranscript_Large(b *testing.B) {
dir := b.TempDir()
path := generateBenchJSONL(b, dir, 25000) // ~5MB
b.ResetTimer()
b.ReportAllocs()
for b.Loop() {
_, _, err := ParseTranscript(path)
if err != nil {
b.Fatal(err)
}
}
}
// BenchmarkListSessions benchmarks listing sessions in a directory.
func BenchmarkListSessions(b *testing.B) {
dir := b.TempDir()
// Create 20 session files
for range 20 {
generateBenchJSONL(b, dir, 100)
}
b.ResetTimer()
b.ReportAllocs()
for b.Loop() {
sessions, err := ListSessions(dir)
if err != nil {
b.Fatal(err)
}
if len(sessions) == 0 {
b.Fatal("expected sessions")
}
}
}
// BenchmarkSearch benchmarks searching across multiple sessions.
func BenchmarkSearch(b *testing.B) {
dir := b.TempDir()
// Create 10 session files with varied content
for range 10 {
generateBenchJSONL(b, dir, 500)
}
b.ResetTimer()
b.ReportAllocs()
for b.Loop() {
_, err := Search(dir, "echo")
if err != nil {
b.Fatal(err)
}
}
}
// generateBenchJSONL creates a synthetic JSONL file with the given number of tool pairs.
// Returns the file path.
func generateBenchJSONL(b testing.TB, dir string, numTools int) string {
b.Helper()
var sb strings.Builder
baseTS := "2026-02-20T10:00:00Z"
// Opening user message
sb.WriteString(fmt.Sprintf(`{"type":"user","timestamp":"%s","sessionId":"bench","message":{"role":"user","content":[{"type":"text","text":"Start benchmark session"}]}}`, baseTS))
sb.WriteByte('\n')
for i := range numTools {
toolID := fmt.Sprintf("tool-%d", i)
offset := i * 2
// Alternate between different tool types for realistic distribution
var toolUse, toolResult string
switch i % 5 {
case 0: // Bash
toolUse = fmt.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"assistant","content":[{"type":"tool_use","name":"Bash","id":"%s","input":{"command":"echo iteration %d","description":"echo test"}}]}}`,
offset/60, offset%60, toolID, i)
toolResult = fmt.Sprintf(`{"type":"user","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"%s","content":"iteration %d output line one\niteration %d output line two","is_error":false}]}}`,
(offset+1)/60, (offset+1)%60, toolID, i, i)
case 1: // Read
toolUse = fmt.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"assistant","content":[{"type":"tool_use","name":"Read","id":"%s","input":{"file_path":"/tmp/bench/file-%d.go"}}]}}`,
offset/60, offset%60, toolID, i)
toolResult = fmt.Sprintf(`{"type":"user","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"%s","content":"package main\n\nfunc main() {\n\tfmt.Println(%d)\n}","is_error":false}]}}`,
(offset+1)/60, (offset+1)%60, toolID, i)
case 2: // Edit
toolUse = fmt.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"assistant","content":[{"type":"tool_use","name":"Edit","id":"%s","input":{"file_path":"/tmp/bench/file-%d.go","old_string":"old","new_string":"new"}}]}}`,
offset/60, offset%60, toolID, i)
toolResult = fmt.Sprintf(`{"type":"user","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"%s","content":"ok","is_error":false}]}}`,
(offset+1)/60, (offset+1)%60, toolID)
case 3: // Grep
toolUse = fmt.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"assistant","content":[{"type":"tool_use","name":"Grep","id":"%s","input":{"pattern":"TODO","path":"/tmp/bench"}}]}}`,
offset/60, offset%60, toolID)
toolResult = fmt.Sprintf(`{"type":"user","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"%s","content":"/tmp/bench/file.go:10: // TODO fix this","is_error":false}]}}`,
(offset+1)/60, (offset+1)%60, toolID)
case 4: // Glob
toolUse = fmt.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"assistant","content":[{"type":"tool_use","name":"Glob","id":"%s","input":{"pattern":"**/*.go"}}]}}`,
offset/60, offset%60, toolID)
toolResult = fmt.Sprintf(`{"type":"user","timestamp":"2026-02-20T10:%02d:%02dZ","sessionId":"bench","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"%s","content":"/tmp/a.go\n/tmp/b.go\n/tmp/c.go","is_error":false}]}}`,
(offset+1)/60, (offset+1)%60, toolID)
}
sb.WriteString(toolUse)
sb.WriteByte('\n')
sb.WriteString(toolResult)
sb.WriteByte('\n')
}
// Closing assistant message
sb.WriteString(fmt.Sprintf(`{"type":"assistant","timestamp":"2026-02-20T12:00:00Z","sessionId":"bench","message":{"role":"assistant","content":[{"type":"text","text":"Benchmark session complete."}]}}%s`, "\n"))
name := fmt.Sprintf("bench-%d.jsonl", numTools)
path := filepath.Join(dir, name)
if err := os.WriteFile(path, []byte(sb.String()), 0644); err != nil {
b.Fatal(err)
}
info, _ := os.Stat(path)
b.Logf("Generated %s: %d bytes, %d tool pairs", name, info.Size(), numTools)
return path
}

View file

@ -5,7 +5,7 @@ description: Internals of go-session -- JSONL format, parsing pipeline, event mo
# Architecture
Module: `dappco.re/go/core/session` Module: `dappco.re/go/session`
## Overview
@ -239,10 +239,11 @@ Success or failure of a `tool_use` event is indicated by a Unicode check mark (U
Each event is rendered as a `<div class="event">` containing:
- `.event-header`: always visible; shows timestamp, tool label, truncated input (120 chars), duration, and status icon. - `.event-header`: always visible; shows timestamp, tool label, truncated input (120 chars), duration, status icon, and a permalink anchor.
- `.event-body`: hidden by default; shown on click via the `toggle(i)` JavaScript function which toggles the `open` class.
The arrow indicator rotates 90 degrees (CSS `transform: rotate(90deg)`) when the panel is open. Output text in `.event-body` is capped at 400px height with `overflow-y: auto`.
If the page loads with an `#evt-N` fragment, that event is opened automatically and scrolled into view.
Input label semantics vary per tool:

View file

@ -8,7 +8,6 @@ description: How to build, test, lint, and contribute to go-session.
## Prerequisites
- **Go 1.26 or later** -- the module requires Go 1.26 (`go.mod`). The benchmark suite uses `b.Loop()`, introduced in Go 1.24.
- **`github.com/stretchr/testify`** -- test-only dependency, fetched automatically by `go test`.
- **`vhs`** (`github.com/charmbracelet/vhs`) -- optional, required only for `RenderMP4`. Install with `go install github.com/charmbracelet/vhs@latest`.
- **`golangci-lint`** -- optional, for running the full lint suite. Configuration is in `.golangci.yml`.
@ -138,6 +137,17 @@ Both `go vet ./...` and `golangci-lint run ./...` must be clean before committin
- Use explicit types on struct fields and function signatures.
- Avoid `interface{}` in public APIs; use typed parameters where possible.
- Handle all errors explicitly; do not use blank `_` for error returns in non-test code.
- Exported declarations must have Go doc comments beginning with the identifier name.
### Imports and Error Handling
- Do not import `errors` or `github.com/pkg/errors` in non-test Go files; use `coreerr.E(op, msg, err)` from `dappco.re/go/core/log`.
- Do not reintroduce legacy `forge.lthn.ai/...` module paths; use `dappco.re/go/core/...` imports.
### Test Naming
Test functions should follow `TestFunctionName_Context_Good/Bad/Ugly`.
The conventions test suite checks test naming, banned imports, and exported usage comments during `go test ./...`.
### File Headers
@ -210,7 +220,7 @@ Co-Authored-By: Virgil <virgil@lethean.io>
## Module Path and Go Workspace
The module path is `dappco.re/go/core/session`. If this package is used within a Go workspace, add it with: The module path is `dappco.re/go/session`. If this package is used within a Go workspace, add it with:
```bash
go work use ./go-session
```

View file

@ -76,5 +76,5 @@ The following have been identified as potential improvements but are not current
- **Parallel search**: fan out `ParseTranscript` calls across goroutines with a result channel to reduce wall time for large directories.
- **Persistent index**: a lightweight SQLite index or binary cache per session file to avoid re-parsing on every `Search` or `ListSessions` call.
- **Additional tool types**: the parser's `extractToolInput` fallback handles any unknown tool by listing its JSON keys. Dedicated handling could be added for `WebFetch`, `WebSearch`, `NotebookEdit`, and other tools that appear in Claude Code sessions.
- **HTML export options**: configurable truncation limits, optional full-output display, and per-event direct links (anchor IDs already exist as `evt-{i}`). - **HTML export options**: configurable truncation limits and optional full-output display remain open; per-event direct links are now available via `#evt-{i}` permalinks.
- **VHS alternative**: a pure-Go terminal animation renderer to eliminate the `vhs` dependency for MP4 output.

View file

@ -7,14 +7,14 @@ description: Claude Code JSONL transcript parser, analytics engine, and HTML tim
`go-session` parses Claude Code JSONL session transcripts into structured event arrays, computes per-tool analytics, renders self-contained HTML timelines with client-side search, and generates VHS tape scripts for MP4 video output. It has no external runtime dependencies -- stdlib only.
**Module path:** `dappco.re/go/core/session` **Module path:** `dappco.re/go/session`
**Go version:** 1.26
**Licence:** EUPL-1.2
## Quick Start
```diff
-import "dappco.re/go/core/session"
+import "dappco.re/go/session"

 // Parse a single session file
 sess, stats, err := session.ParseTranscript("/path/to/session.jsonl")
```
@ -58,10 +58,9 @@ Test files mirror the source files (`parser_test.go`, `analytics_test.go`, `html
| Dependency | Scope | Purpose |
|------------|-------|---------|
| Go standard library | Runtime | All parsing, HTML rendering, file I/O, JSON decoding |
| `github.com/stretchr/testify` | Test only | Assertions and requirements in test files |
| `vhs` (charmbracelet) | Optional external binary | Required only by `RenderMP4` for MP4 video generation |
The package has **zero runtime dependencies** beyond the Go standard library. `testify` is fetched automatically by `go test` and is never imported outside test files. The package has **zero runtime dependencies** beyond the Go standard library and uses local stdlib-backed test helpers instead of third-party assertion packages.
## Supported Tool Types

external/go vendored Symbolic link
View file

@ -0,0 +1 @@
/Users/snider/Code/core/api/external/go

go.mod
View file

@ -1,15 +0,0 @@
module dappco.re/go/core/session
go 1.26.0
require (
dappco.re/go/core/log v0.1.0
github.com/stretchr/testify v1.11.1
)
require (
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/kr/text v0.2.0 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

go.sum
View file

@ -1,20 +0,0 @@
dappco.re/go/core/log v0.1.0 h1:pa71Vq2TD2aoEUQWFKwNcaJ3GBY8HbaNGqtE688Unyc=
dappco.re/go/core/log v0.1.0/go.mod h1:Nkqb8gsXhZAO8VLpx7B8i1iAmohhzqA20b9Zr8VUcJs=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

go.work Normal file
View file

@ -0,0 +1,6 @@
go 1.26.2
use (
./go
./external/go
)

View file

@ -2,14 +2,17 @@
 package session

 import (
-	"fmt"
-	"maps"
-	"slices"
-	"strings"
-	"time"
+	"maps" // Note: intrinsic — maps.Keys exposes tool names for deterministic analytics output; no core equivalent
+	"slices" // Note: intrinsic — slices.Sorted orders analytics rows deterministically; no core equivalent
+	"time" // Note: intrinsic — time.Duration arithmetic for session, active-time, and latency metrics; no core equivalent
+
+	core "dappco.re/go"
 )

 // SessionAnalytics holds computed metrics for a parsed session.
+//
+// Example:
+//	analytics := session.Analyse(sess)
 type SessionAnalytics struct {
 	Duration   time.Duration
 	ActiveTime time.Duration
@ -24,6 +27,9 @@
 }

 // Analyse iterates session events and computes analytics. Pure function, no I/O.
+//
+// Example:
+//	analytics := session.Analyse(sess)
 func Analyse(sess *Session) *SessionAnalytics {
 	a := &SessionAnalytics{
 		ToolCounts: make(map[string]int),
@ -97,32 +103,35 @@
 }

 // FormatAnalytics returns a tabular text summary suitable for CLI display.
+//
+// Example:
+//	summary := session.FormatAnalytics(analytics)
 func FormatAnalytics(a *SessionAnalytics) string {
-	var b strings.Builder
+	b := core.NewBuilder()
 	b.WriteString("Session Analytics\n")
-	b.WriteString(strings.Repeat("=", 50) + "\n\n")
-	b.WriteString(fmt.Sprintf(" Duration: %s\n", formatDuration(a.Duration)))
-	b.WriteString(fmt.Sprintf(" Active Time: %s\n", formatDuration(a.ActiveTime)))
-	b.WriteString(fmt.Sprintf(" Events: %d\n", a.EventCount))
-	b.WriteString(fmt.Sprintf(" Success Rate: %.1f%%\n", a.SuccessRate*100))
-	b.WriteString(fmt.Sprintf(" Est. Input Tk: %d\n", a.EstimatedInputTokens))
-	b.WriteString(fmt.Sprintf(" Est. Output Tk: %d\n", a.EstimatedOutputTokens))
+	b.WriteString(repeatString("=", 50) + "\n\n")
+	b.WriteString(core.Sprintf(" Duration: %s\n", formatDuration(a.Duration)))
+	b.WriteString(core.Sprintf(" Active Time: %s\n", formatDuration(a.ActiveTime)))
+	b.WriteString(core.Sprintf(" Events: %d\n", a.EventCount))
+	b.WriteString(core.Sprintf(" Success Rate: %.1f%%\n", a.SuccessRate*100))
+	b.WriteString(core.Sprintf(" Est. Input Tk: %d\n", a.EstimatedInputTokens))
+	b.WriteString(core.Sprintf(" Est. Output Tk: %d\n", a.EstimatedOutputTokens))
 	if len(a.ToolCounts) > 0 {
 		b.WriteString("\n Tool Breakdown\n")
-		b.WriteString(" " + strings.Repeat("-", 48) + "\n")
-		b.WriteString(fmt.Sprintf(" %-14s %6s %6s %10s %10s\n",
+		b.WriteString(" " + repeatString("-", 48) + "\n")
+		b.WriteString(core.Sprintf(" %-14s %6s %6s %10s %10s\n",
 			"Tool", "Calls", "Errors", "Avg", "Max"))
-		b.WriteString(" " + strings.Repeat("-", 48) + "\n")
+		b.WriteString(" " + repeatString("-", 48) + "\n")
 		// Sort tools for deterministic output
 		for _, tool := range slices.Sorted(maps.Keys(a.ToolCounts)) {
 			errors := a.ErrorCounts[tool]
 			avg := a.AvgLatency[tool]
 			max := a.MaxLatency[tool]
-			b.WriteString(fmt.Sprintf(" %-14s %6d %6d %10s %10s\n",
+			b.WriteString(core.Sprintf(" %-14s %6d %6d %10s %10s\n",
 				tool, a.ToolCounts[tool], errors,
 				formatDuration(avg), formatDuration(max)))
 		}

View file

@ -0,0 +1,14 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
import "time"
func ExampleAnalyse() {
sess := &Session{Events: []Event{{Type: "tool_use", Tool: "Bash", Duration: time.Second, Success: true}}}
_ = Analyse(sess)
}
func ExampleFormatAnalytics() {
analytics := &SessionAnalytics{ToolCounts: map[string]int{"Bash": 1}}
_ = FormatAnalytics(analytics)
}

go/analytics_test.go Normal file
View file

@ -0,0 +1,74 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"testing"
"time"
core "dappco.re/go"
)
func TestAnalytics_Analyse_Good(t *testing.T) {
sess := &Session{StartTime: time.Unix(0, 0), EndTime: time.Unix(4, 0), Events: []Event{
{Type: "tool_use", Tool: "Bash", Input: "abcd", Output: "abcdefgh", Duration: 2 * time.Second, Success: true},
}}
got := Analyse(sess)
core.AssertEqual(t, 1, got.EventCount)
core.AssertEqual(t, 1.0, got.SuccessRate)
core.AssertEqual(t, 2*time.Second, got.ActiveTime)
core.AssertEqual(t, 1, got.EstimatedInputTokens)
core.AssertEqual(t, 2, got.EstimatedOutputTokens)
}
func TestAnalytics_Analyse_Bad(t *testing.T) {
sess := &Session{Events: []Event{
{Type: "tool_use", Tool: "Read", Duration: time.Second, Success: false},
}}
got := Analyse(sess)
core.AssertEqual(t, 0.0, got.SuccessRate)
core.AssertEqual(t, 1, got.ErrorCounts["Read"])
core.AssertEqual(t, time.Second, got.MaxLatency["Read"])
}
func TestAnalytics_Analyse_Ugly(t *testing.T) {
got := Analyse(nil)
core.AssertNotNil(t, got)
core.AssertEqual(t, 0, got.EventCount)
core.AssertEmpty(t, got.ToolCounts)
}
func TestAnalytics_FormatAnalytics_Good(t *testing.T) {
text := FormatAnalytics(&SessionAnalytics{
Duration: time.Minute,
ActiveTime: time.Second,
EventCount: 2,
ToolCounts: map[string]int{"Bash": 1},
ErrorCounts: map[string]int{},
AvgLatency: map[string]time.Duration{"Bash": time.Second},
MaxLatency: map[string]time.Duration{"Bash": time.Second},
SuccessRate: 1,
})
core.AssertContains(t, text, "Session Analytics")
core.AssertContains(t, text, "Bash")
core.AssertContains(t, text, "100.0%")
}
func TestAnalytics_FormatAnalytics_Bad(t *testing.T) {
text := FormatAnalytics(&SessionAnalytics{ToolCounts: map[string]int{}, ErrorCounts: map[string]int{}, AvgLatency: map[string]time.Duration{}, MaxLatency: map[string]time.Duration{}})
core.AssertContains(t, text, "Events:")
core.AssertNotContains(t, text, "Tool Breakdown")
}
func TestAnalytics_FormatAnalytics_Ugly(t *testing.T) {
text := FormatAnalytics(&SessionAnalytics{SuccessRate: 0.333})
core.AssertContains(t, text, "33.3%")
core.AssertContains(t, text, "0ms")
}

go/core_helpers.go Normal file
View file

@ -0,0 +1,123 @@
// SPDX-License-Identifier: EUPL-1.2
package session
import (
"context"
core "dappco.re/go"
)
var hostCore = core.New()
var hostFS = (&core.Fs{}).NewUnrestricted()
// sessionCore returns the shared core instance, initialising it if needed.
func sessionCore(c *core.Core) *core.Core {
if c == nil {
c = hostCore
}
if c == nil {
c = core.New()
hostCore = c
}
return c
}
// hostContext returns the context associated with the shared core instance.
func hostContext(c *core.Core) context.Context {
c = sessionCore(c)
return c.Context()
}
// hostProcess returns the process runner associated with the shared core instance.
func hostProcess(c *core.Core) *core.Process {
return sessionCore(c).Process()
}
type rawjson []byte
// UnmarshalJSON stores raw JSON bytes without decoding their nested structure.
func (m *rawjson) UnmarshalJSON(data []byte) (
err error,
) {
if m == nil {
return core.E("rawjson.UnmarshalJSON", "nil receiver", nil)
}
*m = append((*m)[:0], data...)
return nil
}
// MarshalJSON returns the stored raw JSON bytes or null for a nil value.
func (m rawjson) MarshalJSON() (
[]byte,
error,
) {
if m == nil {
return []byte("null"), nil
}
return m, nil
}
// resultError extracts an error from a failed core result.
func resultError(result core.Result) (
err error,
) {
if result.OK {
return nil
}
if err, ok := result.Value.(error); ok && err != nil {
return err
}
return core.E("resultError", "unexpected core result failure", nil)
}
// repeatString repeats a string without importing strings.
func repeatString(s string, count int) string {
if s == "" || count <= 0 {
return ""
}
b := core.NewBuilder()
for range count {
b.WriteString(s)
}
return b.String()
}
// containsAny reports whether s contains any rune from chars.
func containsAny(s, chars string) bool {
for _, ch := range chars {
for _, candidate := range s {
if candidate == ch {
return true
}
}
}
return false
}
// indexOf returns the byte index of substr within s.
func indexOf(s, substr string) int {
if substr == "" {
return 0
}
if len(substr) > len(s) {
return -1
}
limit := len(s) - len(substr)
for i := 0; i <= limit; i++ {
if s[i:i+len(substr)] == substr {
return i
}
}
return -1
}
// trimQuotes removes matching single-token quote delimiters from s.
func trimQuotes(s string) string {
if len(s) < 2 {
return s
}
if (s[0] == '"' && s[len(s)-1] == '"') || (s[0] == '`' && s[len(s)-1] == '`') {
return s[1 : len(s)-1]
}
return s
}
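These helpers re-derive behavior the strings package already provides, presumably to keep the import surface minimal. A standalone sketch of the same scans, checked against the stdlib equivalents (here `strings.Builder` would stand in for `core.NewBuilder`, which is assumed to behave the same way):

```go
package main

import (
	"fmt"
	"strings"
)

// indexOf mirrors strings.Index with a naive byte-window scan.
func indexOf(s, substr string) int {
	if substr == "" {
		return 0
	}
	if len(substr) > len(s) {
		return -1
	}
	for i := 0; i <= len(s)-len(substr); i++ {
		if s[i:i+len(substr)] == substr {
			return i
		}
	}
	return -1
}

// containsAny mirrors strings.ContainsAny by comparing runes pairwise.
func containsAny(s, chars string) bool {
	for _, ch := range chars {
		for _, candidate := range s {
			if candidate == ch {
				return true
			}
		}
	}
	return false
}

// trimQuotes strips one matching pair of double or backtick quotes.
func trimQuotes(s string) string {
	if len(s) < 2 {
		return s
	}
	if (s[0] == '"' && s[len(s)-1] == '"') || (s[0] == '`' && s[len(s)-1] == '`') {
		return s[1 : len(s)-1]
	}
	return s
}

func main() {
	fmt.Println(indexOf("abc.jsonl", ".jsonl") == strings.Index("abc.jsonl", ".jsonl")) // true
	fmt.Println(containsAny("abc123", `/\`))                                            // false
	fmt.Println(trimQuotes(`"quoted"`))                                                 // quoted
}
```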

go/go.mod Normal file

@ -0,0 +1,5 @@
module dappco.re/go/session
go 1.26.0
require dappco.re/go v0.9.0

go/go.sum Normal file

@ -0,0 +1,2 @@
dappco.re/go v0.9.0 h1:4ruZRNqKDDva8o6g65tYggjGVe42E6/lMZfVKXtr3p0=
dappco.re/go v0.9.0/go.mod h1:xapr7fLK4/9Pu2iSCr4qZuIuatmtx1j56zS/oPDbGyQ=


@ -2,22 +2,20 @@
package session
import (
"html" // Note: intrinsic — escaping transcript content for generated HTML; stdlib encoder is the output contract
"time" // Note: intrinsic — duration formatting thresholds for rendered summaries; no core equivalent
core "dappco.re/go"
)
// RenderHTML generates a self-contained HTML timeline from a session.
//
// Example:
// result := session.RenderHTML(sess, "/tmp/session.html")
func RenderHTML(sess *Session, outputPath string) core.Result {
if !hostFS.IsDir(core.PathDir(outputPath)) {
return core.Fail(core.E("RenderHTML", "parent directory does not exist", nil))
}
duration := sess.EndTime.Sub(sess.StartTime)
toolCount := 0
@ -31,7 +29,8 @@ func RenderHTML(sess *Session, outputPath string) error {
}
}
b := core.NewBuilder()
b.WriteString(core.Sprintf(`<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
@ -71,6 +70,8 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
.event-header .input { flex: 1; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }
.event-header .dur { color: var(--dim); font-size: 11px; min-width: 50px; text-align: right; }
.event-header .status { font-size: 14px; min-width: 20px; text-align: center; }
.event-header .permalink { color: var(--dim); font-size: 12px; min-width: 16px; text-align: center; text-decoration: none; }
.event-header .permalink:hover { color: var(--accent); }
.event-header .arrow { color: var(--dim); font-size: 10px; transition: transform 0.15s; min-width: 16px; }
.event.open .arrow { transform: rotate(90deg); }
.event-body { display: none; padding: 12px; background: var(--bg); border-top: 1px solid var(--border); }
@ -93,14 +94,14 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
shortID(sess.ID), shortID(sess.ID),
sess.StartTime.Format("2006-01-02 15:04:05"),
formatDuration(duration),
toolCount))
if errorCount > 0 {
b.WriteString(core.Sprintf(`
<span class="err">%d errors</span>`, errorCount))
}
b.WriteString(`
</div>
</div>
<div class="search">
@ -108,7 +109,7 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
<select id="filter" onchange="filterEvents()">
<option value="all">All events</option>
<option value="tool_use">Tool calls only</option>
<option value="errors">Errors only</option>
<option value="Bash">Bash only</option>
<option value="user">User messages</option>
</select>
@ -119,10 +120,11 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
var i int
for evt := range sess.EventsSeq() {
toolClass := core.Lower(evt.Tool)
switch evt.Type {
case "user":
toolClass = "user"
case "assistant":
toolClass = "assistant"
}
@ -141,9 +143,10 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
}
toolLabel := evt.Tool
switch evt.Type {
case "user":
toolLabel = "User"
case "assistant":
toolLabel = "Claude"
}
@ -152,7 +155,7 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
durStr = formatDuration(evt.Duration)
}
b.WriteString(core.Sprintf(`<div class="event%s" data-type="%s" data-tool="%s" data-text="%s" id="evt-%d">
<div class="event-header" onclick="toggle(%d)">
<span class="arrow">&#9654;</span>
<span class="time">%s</span>
@ -160,13 +163,14 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
<span class="input">%s</span>
<span class="dur">%s</span>
<span class="status">%s</span>
<a class="permalink" href="#evt-%d" aria-label="Direct link to this event" onclick="event.stopPropagation()">#</a>
</div>
<div class="event-body">
`,
errorClass,
evt.Type,
evt.Tool,
html.EscapeString(core.Lower(core.Concat(evt.Input, " ", evt.Output))),
i,
i,
evt.Timestamp.Format("15:04:05"),
@ -174,21 +178,23 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
html.EscapeString(toolLabel),
html.EscapeString(truncate(evt.Input, 120)),
durStr,
statusIcon,
i))
if evt.Input != "" {
label := "Command"
switch {
case evt.Type == "user":
label = "Message"
case evt.Type == "assistant":
label = "Response"
case evt.Tool == "Read" || evt.Tool == "Glob" || evt.Tool == "Grep":
label = "Target"
case evt.Tool == "Edit" || evt.Tool == "Write":
label = "File"
}
b.WriteString(core.Sprintf(` <div class="section"><div class="label">%s</div><pre>%s</pre></div>
`, label, html.EscapeString(evt.Input)))
}
if evt.Output != "" {
@ -196,17 +202,17 @@ body { background: var(--bg); color: var(--fg); font-family: var(--font); font-s
if !evt.Success {
outClass = "output err"
}
b.WriteString(core.Sprintf(` <div class="section"><div class="label">Output</div><pre class="%s">%s</pre></div>
`, outClass, html.EscapeString(evt.Output)))
}
b.WriteString(` </div>
</div>
`)
i++
}
b.WriteString(`</div>
<script>
function toggle(i) {
document.getElementById('evt-'+i).classList.toggle('open');
@ -227,20 +233,36 @@ function filterEvents() {
el.classList.toggle('hidden', !show);
});
}
function openHashEvent() {
const hash = window.location.hash;
if (!hash || !hash.startsWith('#evt-')) return;
const el = document.getElementById(hash.slice(1));
if (!el) return;
el.classList.add('open');
el.scrollIntoView({block: 'start'});
}
document.addEventListener('keydown', e => {
if (e.key === '/' && document.activeElement.tagName !== 'INPUT') {
e.preventDefault();
document.getElementById('search').focus();
}
});
window.addEventListener('hashchange', openHashEvent);
document.addEventListener('DOMContentLoaded', openHashEvent);
</script>
</body>
</html>
`)
writeResult := hostFS.Write(outputPath, b.String())
if !writeResult.OK {
return core.Fail(core.E("RenderHTML", "write html", resultError(writeResult)))
}
return core.Ok(nil)
}
// shortID returns the abbreviated identifier used by rendered summaries.
func shortID(id string) string {
if len(id) > 8 {
return id[:8]
@ -248,15 +270,16 @@ func shortID(id string) string {
return id
}
// formatDuration formats a duration for compact timeline and analytics output.
func formatDuration(d time.Duration) string {
if d < time.Second {
return core.Sprintf("%dms", d.Milliseconds())
}
if d < time.Minute {
return core.Sprintf("%.1fs", d.Seconds())
}
if d < time.Hour {
return core.Sprintf("%dm%ds", int(d.Minutes()), int(d.Seconds())%60)
}
return core.Sprintf("%dh%dm", int(d.Hours()), int(d.Minutes())%60)
}

go/html_example_test.go Normal file

@ -0,0 +1,7 @@
// SPDX-License-Identifier: EUPL-1.2
package session
func ExampleRenderHTML() {
sess := &Session{ID: "example"}
_ = RenderHTML(sess, "/tmp/example-session.html")
}

go/html_test.go Normal file

@ -0,0 +1,41 @@
// SPDX-License-Identifier: EUPL-1.2
package session
import (
"testing"
"time"
core "dappco.re/go"
)
func TestHtml_RenderHTML_Good(t *testing.T) {
out := core.PathJoin(t.TempDir(), "session.html")
sess := &Session{ID: "abcdefghi", StartTime: time.Unix(0, 0), EndTime: time.Unix(60, 0), Events: []Event{{Type: "tool_use", Tool: "Bash", Input: "go test", Output: "PASS", Success: true}}}
result := RenderHTML(sess, out)
core.RequireTrue(t, result.OK)
readResult := hostFS.Read(out)
core.RequireTrue(t, readResult.OK)
html := readResult.Value.(string)
core.AssertContains(t, html, "abcdefg")
core.AssertContains(t, html, "go test")
}
func TestHtml_RenderHTML_Bad(t *testing.T) {
result := RenderHTML(&Session{}, core.PathJoin(t.TempDir(), "missing", "session.html"))
core.AssertFalse(t, result.OK)
core.AssertContains(t, result.Error(), "parent directory does not exist")
}
func TestHtml_RenderHTML_Ugly(t *testing.T) {
out := core.PathJoin(t.TempDir(), "empty.html")
result := RenderHTML(&Session{ID: "empty"}, out)
core.AssertTrue(t, result.OK)
readResult := hostFS.Read(out)
core.RequireTrue(t, readResult.OK)
core.AssertContains(t, readResult.Value.(string), "0 tool calls")
}

go/parser.go Normal file

@ -0,0 +1,700 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"io" // Note: intrinsic — Reader, ReadCloser, and EOF contracts for transcript streams and hostFS handles; no core equivalent
"io/fs" // Note: intrinsic — fs.FileInfo metadata returned from hostFS.Stat; no core equivalent
"iter" // Note: intrinsic — public lazy sequence API for sessions and events; no core equivalent
"slices" // Note: intrinsic — iterator collection, sorted keys, and session ordering; no core equivalent
"time" // Note: intrinsic — RFC3339 transcript timestamps and session age calculations; no core equivalent
core "dappco.re/go"
coreerr "dappco.re/go"
)
// maxScannerBuffer is the maximum line length the scanner will accept.
// Set to 8 MiB to handle very large tool outputs without truncation.
const maxScannerBuffer = 8 * 1024 * 1024
// maxPendingToolCalls bounds unmatched tool_use entries retained while parsing.
const maxPendingToolCalls = 4096
// Event represents a single action in a session timeline.
//
// Example:
// evt := session.Event{Type: "tool_use", Tool: "Bash"}
type Event struct {
Timestamp time.Time
Type string // "tool_use", "user", "assistant", "error"
Tool string // "Bash", "Read", "Edit", "Write", "Grep", "Glob", etc.
ToolID string
Input string // Command, file path, or message text
Output string // Result text
Duration time.Duration
Success bool
ErrorMsg string
}
// Session holds parsed session metadata and events.
//
// Example:
// sess := &session.Session{ID: "abc123", Events: []session.Event{}}
type Session struct {
ID string
Path string
StartTime time.Time
EndTime time.Time
Events []Event
}
// EventsSeq returns an iterator over the session's events.
//
// Example:
//
// for evt := range sess.EventsSeq() {
// _ = evt
// }
func (s *Session) EventsSeq() iter.Seq[Event] {
return slices.Values(s.Events)
}
// rawEntry is the top-level structure of a Claude Code JSONL line.
type rawEntry struct {
Type string `json:"type"`
Timestamp string `json:"timestamp"`
SessionID string `json:"sessionId"`
Message rawjson `json:"message"`
UserType string `json:"userType"`
}
type rawMessage struct {
Role string `json:"role"`
Content []rawjson `json:"content"`
}
type contentBlock struct {
Type string `json:"type"`
Name string `json:"name,omitempty"`
ID string `json:"id,omitempty"`
Text string `json:"text,omitempty"`
Input rawjson `json:"input,omitempty"`
ToolUseID string `json:"tool_use_id,omitempty"`
Content any `json:"content,omitempty"`
IsError *bool `json:"is_error,omitempty"`
}
type bashInput struct {
Command string `json:"command"`
Description string `json:"description"`
Timeout int `json:"timeout"`
}
type readInput struct {
FilePath string `json:"file_path"`
Offset int `json:"offset"`
Limit int `json:"limit"`
}
type editInput struct {
FilePath string `json:"file_path"`
OldString string `json:"old_string"`
NewString string `json:"new_string"`
}
type writeInput struct {
FilePath string `json:"file_path"`
Content string `json:"content"`
}
type grepInput struct {
Pattern string `json:"pattern"`
Path string `json:"path,omitempty"`
}
type globInput struct {
Pattern string `json:"pattern"`
Path string `json:"path,omitempty"`
}
type taskInput struct {
Prompt string `json:"prompt"`
Description string `json:"description"`
SubagentType string `json:"subagent_type"`
}
// ParseStats reports diagnostic information from a parse run.
//
// Example:
// stats := &session.ParseStats{TotalLines: 42}
type ParseStats struct {
TotalLines int
SkippedLines int
OrphanedToolCalls int
Warnings []string
}
// ParsedSession pairs a parsed Session with the diagnostics from its parse run.
type ParsedSession struct {
Session *Session
Stats *ParseStats
}
// ListSessions returns all sessions found in the Claude projects directory.
//
// Example:
// result := session.ListSessions("/tmp/projects")
func ListSessions(projectsDir string) core.Result {
return core.Ok(slices.Collect(ListSessionsSeq(projectsDir)))
}
// ListSessionsSeq returns an iterator over all sessions found in the Claude projects directory.
//
// Example:
//
// for sess := range session.ListSessionsSeq("/tmp/projects") {
// _ = sess
// }
func ListSessionsSeq(projectsDir string) iter.Seq[Session] {
return func(yield func(Session) bool) {
const op = "ListSessionsSeq"
matches := core.PathGlob(transcriptPath(projectsDir, "*.jsonl"))
var sessions []Session
for _, filePath := range matches {
base := core.PathBase(filePath)
id := core.TrimSuffix(base, ".jsonl")
openResult := openTranscriptNoFollow(filePath)
if !openResult.OK {
coreerr.Warn("skip unreadable transcript", "op", op, "file", filePath, "err", openResult.Error())
continue
}
f := openResult.Value.(io.ReadCloser)
s := Session{
ID: id,
Path: filePath,
}
// Quick scan for first and last timestamps
var firstTS, lastTS string
scanResult := scanTranscriptLines(f, maxScannerBuffer, func(line []byte) bool {
var entry rawEntry
if !core.JSONUnmarshal(line, &entry).OK {
return true
}
if entry.Timestamp == "" {
return true
}
if firstTS == "" {
firstTS = entry.Timestamp
}
lastTS = entry.Timestamp
return true
})
closeErr := f.Close()
if !scanResult.OK {
coreerr.Warn("skip unreadable transcript", "op", op, "file", filePath, "err", scanResult.Error())
continue
}
if closeErr != nil {
coreerr.Warn("skip unreadable transcript", "op", op, "file", filePath, "err", closeErr)
continue
}
if firstTS != "" {
if t, err := time.Parse(time.RFC3339Nano, firstTS); err == nil {
s.StartTime = t
}
}
if lastTS != "" {
if t, err := time.Parse(time.RFC3339Nano, lastTS); err == nil {
s.EndTime = t
}
}
if s.StartTime.IsZero() {
infoResult := hostFS.Stat(filePath)
if infoResult.OK {
if info, ok := infoResult.Value.(fs.FileInfo); ok {
s.StartTime = info.ModTime()
}
}
}
sessions = append(sessions, s)
}
slices.SortFunc(sessions, func(i, j Session) int {
return j.StartTime.Compare(i.StartTime)
})
for _, s := range sessions {
if !yield(s) {
return
}
}
}
}
// PruneSessions deletes session files in the projects directory that were last
// modified more than maxAge ago. Returns the number of files deleted.
//
// Example:
// result := session.PruneSessions("/tmp/projects", 24*time.Hour)
func PruneSessions(projectsDir string, maxAge time.Duration) core.Result {
matches := core.PathGlob(transcriptPath(projectsDir, "*.jsonl"))
var deleted int
now := time.Now()
for _, filePath := range matches {
infoResult := hostFS.Stat(filePath)
if !infoResult.OK {
continue
}
info, ok := infoResult.Value.(fs.FileInfo)
if !ok {
continue
}
if now.Sub(info.ModTime()) > maxAge {
if deleteResult := hostFS.Delete(filePath); deleteResult.OK {
deleted++
}
}
}
return core.Ok(deleted)
}
// IsExpired returns true if the session's end time is older than the given maxAge
// relative to now.
//
// Example:
// expired := sess.IsExpired(24 * time.Hour)
func (s *Session) IsExpired(maxAge time.Duration) bool {
if s.EndTime.IsZero() {
return false
}
return time.Since(s.EndTime) > maxAge
}
// FetchSession retrieves a session by ID from the projects directory.
// It ensures the ID does not contain path traversal characters.
//
// Example:
// result := session.FetchSession("/tmp/projects", "abc123")
func FetchSession(projectsDir, id string) core.Result {
if core.Contains(id, "..") || containsAny(id, `/\`) {
return core.Fail(coreerr.E("FetchSession", "invalid session id", nil))
}
filePath := transcriptPath(projectsDir, id+".jsonl")
openResult := openTranscriptNoFollow(filePath)
if !openResult.OK {
err := resultError(openResult)
if isTranscriptMissing(err) {
return core.Fail(coreerr.E("FetchSession", "open transcript", err))
}
return core.Fail(coreerr.E("FetchSession", "invalid session path", err))
}
f := openResult.Value.(io.ReadCloser)
defer func() {
if err := f.Close(); err != nil {
coreerr.Warn("close transcript", "op", "FetchSession", "file", filePath, "err", err)
}
}()
return parseTranscriptFile(filePath, f)
}
// ParseTranscript reads a JSONL session file and returns structured events.
// Malformed or truncated lines are skipped; diagnostics are reported in ParseStats.
//
// Example:
// result := session.ParseTranscript("/tmp/projects/abc123.jsonl")
func ParseTranscript(filePath string) core.Result {
openResult := hostFS.Open(filePath)
if !openResult.OK {
return core.Fail(coreerr.E("ParseTranscript", "open transcript", resultError(openResult)))
}
f, ok := openResult.Value.(io.ReadCloser)
if !ok {
return core.Fail(coreerr.E("ParseTranscript", "unexpected file handle type", nil))
}
defer func() {
if err := f.Close(); err != nil {
coreerr.Warn("close transcript", "op", "ParseTranscript", "file", filePath, "err", err)
}
}()
return parseTranscriptFile(filePath, f)
}
// parseTranscriptFile parses an already-open transcript reader and assigns path metadata.
func parseTranscriptFile(filePath string, r io.Reader) core.Result {
base := core.PathBase(filePath)
id := core.TrimSuffix(base, ".jsonl")
parseResult := parseFromReader(r, id)
if !parseResult.OK {
return core.Fail(coreerr.E("ParseTranscript", "parse transcript", resultError(parseResult)))
}
parsed := parseResult.Value.(ParsedSession)
if parsed.Session != nil {
parsed.Session.Path = filePath
}
return core.Ok(parsed)
}
// ParseTranscriptReader parses a JSONL session from an io.Reader, enabling
// streaming parse without needing a file on disc. The id parameter sets
// the session ID (since there is no file name to derive it from).
//
// Example:
// result := session.ParseTranscriptReader(reader, "abc123")
func ParseTranscriptReader(r io.Reader, id string) core.Result {
parseResult := parseFromReader(r, id)
if !parseResult.OK {
return core.Fail(coreerr.E("ParseTranscriptReader", "parse transcript", resultError(parseResult)))
}
return parseResult
}
// parseFromReader is the shared implementation for both file-based and
// reader-based parsing. It scans line-by-line with an 8 MiB buffer,
// gracefully skipping malformed lines.
func parseFromReader(r io.Reader, id string) core.Result {
sess := &Session{
ID: id,
}
stats := &ParseStats{}
// Collect tool_use entries keyed by ID.
type toolUse struct {
timestamp time.Time
tool string
input string
}
pendingTools := make(map[string]toolUse)
var lineNum int
var lastRaw string
var lastLineFailed bool
scanResult := scanTranscriptLines(r, maxScannerBuffer, func(line []byte) bool {
lineNum++
stats.TotalLines++
raw := string(line)
if core.Trim(raw) == "" {
return true
}
lastRaw = raw
lastLineFailed = false
var entry rawEntry
if !core.JSONUnmarshalString(raw, &entry).OK {
stats.SkippedLines++
preview := raw
if len(preview) > 100 {
preview = preview[:100]
}
stats.Warnings = append(stats.Warnings,
core.Sprintf("line %d: skipped (bad JSON): %s", lineNum, preview))
lastLineFailed = true
return true
}
ts, err := time.Parse(time.RFC3339Nano, entry.Timestamp)
if err != nil {
stats.Warnings = append(stats.Warnings, core.Sprintf("line %d: bad timestamp %q: %v", lineNum, entry.Timestamp, err))
return true
}
if sess.StartTime.IsZero() && !ts.IsZero() {
sess.StartTime = ts
}
if !ts.IsZero() {
sess.EndTime = ts
}
switch entry.Type {
case "assistant":
var msg rawMessage
if !core.JSONUnmarshal(entry.Message, &msg).OK {
stats.Warnings = append(stats.Warnings, core.Sprintf("line %d: failed to unmarshal assistant message", lineNum))
return true
}
for i, raw := range msg.Content {
var block contentBlock
if !core.JSONUnmarshal(raw, &block).OK {
stats.Warnings = append(stats.Warnings, core.Sprintf("line %d block %d: failed to unmarshal content", lineNum, i))
continue
}
switch block.Type {
case "text":
if text := core.Trim(block.Text); text != "" {
sess.Events = append(sess.Events, Event{
Timestamp: ts,
Type: "assistant",
Input: truncate(text, 500),
})
}
case "tool_use":
if block.ID == "" {
continue
}
if _, exists := pendingTools[block.ID]; !exists && len(pendingTools) >= maxPendingToolCalls {
stats.Warnings = append(stats.Warnings,
core.Sprintf("line %d: skipped tool_use %q (pending tool limit reached)", lineNum, block.ID))
continue
}
inputStr := extractToolInput(block.Name, block.Input)
pendingTools[block.ID] = toolUse{
timestamp: ts,
tool: block.Name,
input: truncate(inputStr, 500),
}
}
}
case "user":
var msg rawMessage
if !core.JSONUnmarshal(entry.Message, &msg).OK {
stats.Warnings = append(stats.Warnings, core.Sprintf("line %d: failed to unmarshal user message", lineNum))
return true
}
for i, raw := range msg.Content {
var block contentBlock
if !core.JSONUnmarshal(raw, &block).OK {
stats.Warnings = append(stats.Warnings, core.Sprintf("line %d block %d: failed to unmarshal content", lineNum, i))
continue
}
switch block.Type {
case "tool_result":
if tu, ok := pendingTools[block.ToolUseID]; ok {
output := extractResultContent(block.Content)
isError := block.IsError != nil && *block.IsError
evt := Event{
Timestamp: tu.timestamp,
Type: "tool_use",
Tool: tu.tool,
ToolID: block.ToolUseID,
Input: tu.input,
Output: truncate(output, 2000),
Duration: ts.Sub(tu.timestamp),
Success: !isError,
}
if isError {
evt.ErrorMsg = truncate(output, 500)
}
sess.Events = append(sess.Events, evt)
delete(pendingTools, block.ToolUseID)
}
case "text":
if text := core.Trim(block.Text); text != "" {
sess.Events = append(sess.Events, Event{
Timestamp: ts,
Type: "user",
Input: truncate(text, 500),
})
}
}
}
}
return true
})
// Detect truncated final line.
if lastLineFailed && lastRaw != "" {
stats.Warnings = append(stats.Warnings, "truncated final line")
}
if !scanResult.OK {
return core.Fail(resultError(scanResult))
}
// Track orphaned tool calls (tool_use with no matching result).
stats.OrphanedToolCalls = len(pendingTools)
if stats.OrphanedToolCalls > 0 {
for id := range pendingTools {
stats.Warnings = append(stats.Warnings,
core.Sprintf("orphaned tool call: %s", id))
}
}
return core.Ok(ParsedSession{Session: sess, Stats: stats})
}
// extractToolInput converts raw Claude tool input into a concise display string.
func extractToolInput(toolName string, raw rawjson) string {
if raw == nil {
return ""
}
switch toolName {
case "Bash":
var inp bashInput
if core.JSONUnmarshal(raw, &inp).OK {
desc := inp.Description
if desc != "" {
desc = " # " + desc
}
return inp.Command + desc
}
case "Read":
var inp readInput
if core.JSONUnmarshal(raw, &inp).OK {
return inp.FilePath
}
case "Edit":
var inp editInput
if core.JSONUnmarshal(raw, &inp).OK {
return core.Sprintf("%s (edit)", inp.FilePath)
}
case "Write":
var inp writeInput
if core.JSONUnmarshal(raw, &inp).OK {
return core.Sprintf("%s (%d bytes)", inp.FilePath, len(inp.Content))
}
case "Grep":
var inp grepInput
if core.JSONUnmarshal(raw, &inp).OK {
path := inp.Path
if path == "" {
path = "."
}
return core.Sprintf("/%s/ in %s", inp.Pattern, path)
}
case "Glob":
var inp globInput
if core.JSONUnmarshal(raw, &inp).OK {
return inp.Pattern
}
case "Task":
var inp taskInput
if core.JSONUnmarshal(raw, &inp).OK {
desc := inp.Description
if desc == "" {
desc = truncate(inp.Prompt, 80)
}
return core.Sprintf("[%s] %s", inp.SubagentType, desc)
}
}
// Fallback: show raw JSON keys
var m map[string]any
if core.JSONUnmarshal(raw, &m).OK {
parts := make([]string, 0, len(m))
for key := range m {
parts = append(parts, key)
}
slices.Sort(parts)
return core.Join(", ", parts...)
}
return ""
}
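For tools with no dedicated input struct, extractToolInput falls back to listing the payload's sorted top-level JSON keys rather than dumping the whole blob. A standalone sketch of that fallback with stdlib decoding:

```go
package main

import (
	"encoding/json"
	"fmt"
	"slices"
	"strings"
)

// fallbackKeys mirrors extractToolInput's last resort: show the input's
// top-level JSON key names, sorted for deterministic output.
func fallbackKeys(raw []byte) string {
	var m map[string]any
	if err := json.Unmarshal(raw, &m); err != nil {
		return ""
	}
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	slices.Sort(keys) // map iteration order is random; sort for stability
	return strings.Join(keys, ", ")
}

func main() {
	fmt.Println(fallbackKeys([]byte(`{"url":"https://example.com","prompt":"hi"}`))) // prompt, url
}
```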
// extractResultContent converts Claude tool_result content into plain text.
func extractResultContent(content any) string {
switch v := content.(type) {
case string:
return v
case []any:
var parts []string
for _, item := range v {
if m, ok := item.(map[string]any); ok {
if text, ok := m["text"].(string); ok {
parts = append(parts, text)
}
}
}
return core.Join("\n", parts...)
case map[string]any:
if text, ok := v["text"].(string); ok {
return text
}
}
return core.Sprint(content)
}
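tool_result content arrives in three shapes: a plain string, a list of `{"text": ...}` blocks, or a single such map. A self-contained sketch of the same type switch, with `strings.Join`/`fmt.Sprint` standing in for the core equivalents:

```go
package main

import (
	"fmt"
	"strings"
)

// resultText mirrors extractResultContent's shape handling.
func resultText(content any) string {
	switch v := content.(type) {
	case string:
		return v
	case []any:
		var parts []string
		for _, item := range v {
			if m, ok := item.(map[string]any); ok {
				if text, ok := m["text"].(string); ok {
					parts = append(parts, text)
				}
			}
		}
		return strings.Join(parts, "\n")
	case map[string]any:
		if text, ok := v["text"].(string); ok {
			return text
		}
	}
	// unknown shape: fall back to Go's default formatting
	return fmt.Sprint(content)
}

func main() {
	blocks := []any{map[string]any{"text": "line 1"}, map[string]any{"text": "line 2"}}
	fmt.Println(resultText(blocks))
}
```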
// truncate returns s capped to max bytes with an ellipsis marker.
func truncate(s string, max int) string {
if len(s) <= max {
return s
}
return s[:max] + "..."
}
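Note that truncate caps bytes, not runes, so a multi-byte UTF-8 character straddling the boundary gets split. A sketch contrasting the byte-capped version above with a rune-safe alternative (the variant is illustrative, not what parser.go currently does):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// truncate is the byte-capped version used by the parser.
func truncate(s string, max int) string {
	if len(s) <= max {
		return s
	}
	return s[:max] + "..."
}

// truncateRunes backs the cut off any partial UTF-8 sequence first —
// a rune-safe alternative.
func truncateRunes(s string, max int) string {
	if len(s) <= max {
		return s
	}
	cut := max
	for cut > 0 && !utf8.RuneStart(s[cut]) {
		cut-- // step back over continuation bytes
	}
	return s[:cut] + "..."
}

func main() {
	s := "héllo" // 'é' occupies bytes 1-2
	fmt.Println(len(truncate(s, 2)))  // 5: byte cap split 'é' mid-sequence
	fmt.Println(truncateRunes(s, 2)) // h...
}
```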
// scanTranscriptLines streams newline-delimited records with a per-line size limit.
func scanTranscriptLines(r io.Reader, maxLineSize int, handle func([]byte) bool) core.Result {
const op = "scanTranscriptLines"
if maxLineSize <= 0 {
maxLineSize = maxScannerBuffer
}
readBuffer := make([]byte, 64*1024)
line := make([]byte, 0, 64*1024)
for {
n, readErr := r.Read(readBuffer)
if n > 0 {
chunk := readBuffer[:n]
start := 0
for i, b := range chunk {
if b != '\n' {
continue
}
if len(line)+i-start > maxLineSize {
return core.Fail(coreerr.E(op, core.Sprintf("line exceeds %d bytes", maxLineSize), nil))
}
line = append(line, chunk[start:i]...)
if !handle(trimLineBreak(line)) {
return core.Ok(nil)
}
line = line[:0]
start = i + 1
}
if start < len(chunk) {
if len(line)+len(chunk)-start > maxLineSize {
return core.Fail(coreerr.E(op, core.Sprintf("line exceeds %d bytes", maxLineSize), nil))
}
line = append(line, chunk[start:]...)
}
}
if readErr == io.EOF {
if len(line) > 0 {
if !handle(trimLineBreak(line)) {
return core.Ok(nil)
}
}
return core.Ok(nil)
}
if readErr != nil {
return core.Fail(coreerr.E(op, "read error", readErr))
}
}
}
// trimLineBreak removes a trailing carriage return from a scanned line.
func trimLineBreak(line []byte) []byte {
if len(line) > 0 && line[len(line)-1] == '\r' {
return line[:len(line)-1]
}
return line
}
// transcriptPath joins a projects directory and transcript file name.
func transcriptPath(projectsDir, name string) string {
if projectsDir == "" {
return core.CleanPath(name, "/")
}
return core.CleanPath(core.JoinPath(projectsDir, name), "/")
}

go/parser_example_test.go Normal file

@ -0,0 +1,46 @@
// SPDX-License-Identifier: EUPL-1.2
package session
import (
"time"
core "dappco.re/go"
)
func ExampleSession_EventsSeq() {
sess := &Session{Events: []Event{{Type: "user"}}}
for event := range sess.EventsSeq() {
_ = event
}
}
func ExampleListSessions() {
_ = ListSessions("/tmp/claude-projects")
}
func ExampleListSessionsSeq() {
for sess := range ListSessionsSeq("/tmp/claude-projects") {
_ = sess
}
}
func ExamplePruneSessions() {
_ = PruneSessions("/tmp/claude-projects", 24*time.Hour)
}
func ExampleSession_IsExpired() {
sess := &Session{EndTime: time.Now().Add(-48 * time.Hour)}
_ = sess.IsExpired(24 * time.Hour)
}
func ExampleFetchSession() {
_ = FetchSession("/tmp/claude-projects", "abc123")
}
func ExampleParseTranscript() {
_ = ParseTranscript("/tmp/claude-projects/abc123.jsonl")
}
func ExampleParseTranscriptReader() {
_ = ParseTranscriptReader(core.NewReader(""), "abc123")
}

20
go/parser_other.go Normal file

@ -0,0 +1,20 @@
//go:build !unix
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"io" // Note: intrinsic — keeps the platform stub signature aligned with the Unix io.ReadCloser implementation; no core equivalent
coreerr "dappco.re/go"
)
// openTranscriptNoFollow reports that secure no-follow opens are unavailable on this platform.
func openTranscriptNoFollow(filePath string) coreerr.Result {
return coreerr.Fail(coreerr.E("openTranscriptNoFollow", "secure no-follow transcript opens are unsupported on this platform: "+filePath, nil))
}
// isTranscriptMissing reports whether err wraps a missing transcript path error.
func isTranscriptMissing(error) bool {
return false
}

278
go/parser_test.go Normal file

@ -0,0 +1,278 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"syscall"
"testing"
"time"
core "dappco.re/go"
)
func ts() string {
return time.Unix(1, 0).UTC().Format(time.RFC3339Nano)
}
func jsonLine(t *testing.T, m map[string]any) string {
t.Helper()
result := core.JSONMarshal(m)
core.RequireTrue(t, result.OK)
return string(result.Value.([]byte))
}
func userTextEntry(text string) string {
result := core.JSONMarshal(map[string]any{
"type": "user",
"timestamp": ts(),
"sessionId": "test-session",
"message": map[string]any{"role": "user", "content": []map[string]any{{"type": "text", "text": text}}},
})
return string(result.Value.([]byte))
}
func toolUseEntry(tool, id string, input map[string]any) string {
result := core.JSONMarshal(map[string]any{
"type": "assistant",
"timestamp": ts(),
"sessionId": "test-session",
"message": map[string]any{"role": "assistant", "content": []map[string]any{{"type": "tool_use", "name": tool, "id": id, "input": input}}},
})
return string(result.Value.([]byte))
}
func toolResultEntry(id string, content any, failed bool) string {
result := core.JSONMarshal(map[string]any{
"type": "user",
"timestamp": ts(),
"sessionId": "test-session",
"message": map[string]any{"role": "user", "content": []map[string]any{{"type": "tool_result", "tool_use_id": id, "content": content, "is_error": failed}}},
})
return string(result.Value.([]byte))
}
func writeJSONL(t *testing.T, dir, name string, lines ...string) string {
t.Helper()
file := core.PathJoin(dir, name)
result := hostFS.Write(file, core.Concat(core.Join("\n", lines...), "\n"))
core.RequireTrue(t, result.OK)
return file
}
func parsedValue(t *testing.T, result core.Result) ParsedSession {
t.Helper()
core.RequireTrue(t, result.OK, result.Error())
return result.Value.(ParsedSession)
}
func TestParser_Session_EventsSeq_Good(t *testing.T) {
sess := &Session{Events: []Event{{Type: "user"}, {Type: "assistant"}}}
count := 0
for range sess.EventsSeq() {
count++
}
core.AssertEqual(t, 2, count)
}
func TestParser_Session_EventsSeq_Bad(t *testing.T) {
sess := &Session{}
for range sess.EventsSeq() {
t.Fatal("empty EventsSeq yielded an event")
}
}
func TestParser_Session_EventsSeq_Ugly(t *testing.T) {
sess := &Session{Events: []Event{{Type: "tool_use", Tool: "Bash"}}}
var last Event
for item := range sess.EventsSeq() {
last = item
}
core.AssertEqual(t, "Bash", last.Tool)
}
func TestParser_ListSessions_Good(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "b.jsonl", userTextEntry("second"))
writeJSONL(t, dir, "a.jsonl", userTextEntry("first"))
result := ListSessions(dir)
core.RequireTrue(t, result.OK)
core.AssertLen(t, result.Value.([]Session), 2)
}
func TestParser_ListSessions_Bad(t *testing.T) {
result := ListSessions(t.TempDir())
core.RequireTrue(t, result.OK)
core.AssertEmpty(t, result.Value.([]Session))
}
func TestParser_ListSessions_Ugly(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "bad.jsonl", "{")
result := ListSessions(dir)
core.RequireTrue(t, result.OK)
core.AssertLen(t, result.Value.([]Session), 1)
}
func TestParser_ListSessionsSeq_Good(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "one.jsonl", userTextEntry("hi"))
var sessions []Session
for sess := range ListSessionsSeq(dir) {
sessions = append(sessions, sess)
}
core.AssertLen(t, sessions, 1)
}
func TestParser_ListSessionsSeq_Bad(t *testing.T) {
for range ListSessionsSeq(t.TempDir()) {
t.Fatal("empty ListSessionsSeq yielded a session")
}
}
func TestParser_ListSessionsSeq_Ugly(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "one.jsonl", "")
var sessions []Session
for sess := range ListSessionsSeq(dir) {
sessions = append(sessions, sess)
}
core.AssertLen(t, sessions, 1)
}
func TestParser_PruneSessions_Good(t *testing.T) {
dir := t.TempDir()
file := writeJSONL(t, dir, "old.jsonl", userTextEntry("old"))
past := time.Now().Add(-48 * time.Hour)
core.RequireNoError(t, syscall.UtimesNano(file, []syscall.Timespec{
syscall.NsecToTimespec(past.UnixNano()),
syscall.NsecToTimespec(past.UnixNano()),
}))
result := PruneSessions(dir, time.Hour)
core.RequireTrue(t, result.OK)
core.AssertEqual(t, 1, result.Value.(int))
}
func TestParser_PruneSessions_Bad(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "new.jsonl", userTextEntry("new"))
result := PruneSessions(dir, 48*time.Hour)
core.RequireTrue(t, result.OK)
core.AssertEqual(t, 0, result.Value.(int))
}
func TestParser_PruneSessions_Ugly(t *testing.T) {
result := PruneSessions(t.TempDir(), -time.Hour)
core.RequireTrue(t, result.OK)
core.AssertEqual(t, 0, result.Value.(int))
}
func TestParser_Session_IsExpired_Good(t *testing.T) {
sess := &Session{EndTime: time.Now().Add(-2 * time.Hour)}
expired := sess.IsExpired(time.Hour)
core.AssertTrue(t, expired)
core.AssertFalse(t, sess.EndTime.IsZero())
}
func TestParser_Session_IsExpired_Bad(t *testing.T) {
sess := &Session{EndTime: time.Now()}
expired := sess.IsExpired(time.Hour)
core.AssertFalse(t, expired)
core.AssertFalse(t, sess.EndTime.IsZero())
}
func TestParser_Session_IsExpired_Ugly(t *testing.T) {
sess := &Session{}
expired := sess.IsExpired(time.Hour)
core.AssertFalse(t, expired)
core.AssertTrue(t, sess.EndTime.IsZero())
}
func TestParser_FetchSession_Good(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "abc.jsonl", userTextEntry("hello"))
parsed := parsedValue(t, FetchSession(dir, "abc"))
core.AssertEqual(t, "abc", parsed.Session.ID)
core.AssertEqual(t, 1, parsed.Stats.TotalLines)
}
func TestParser_FetchSession_Bad(t *testing.T) {
result := FetchSession(t.TempDir(), "missing")
core.AssertFalse(t, result.OK)
core.AssertContains(t, result.Error(), "open transcript")
}
func TestParser_FetchSession_Ugly(t *testing.T) {
result := FetchSession(t.TempDir(), "../escape")
core.AssertFalse(t, result.OK)
core.AssertContains(t, result.Error(), "invalid session id")
}
func TestParser_ParseTranscript_Good(t *testing.T) {
file := writeJSONL(t, t.TempDir(), "ok.jsonl", userTextEntry("hello"))
parsed := parsedValue(t, ParseTranscript(file))
core.AssertEqual(t, "ok", parsed.Session.ID)
core.AssertLen(t, parsed.Session.Events, 1)
}
func TestParser_ParseTranscript_Bad(t *testing.T) {
result := ParseTranscript(core.PathJoin(t.TempDir(), "missing.jsonl"))
core.AssertFalse(t, result.OK)
core.AssertContains(t, result.Error(), "open transcript")
}
func TestParser_ParseTranscript_Ugly(t *testing.T) {
file := writeJSONL(t, t.TempDir(), "bad.jsonl", "{")
parsed := parsedValue(t, ParseTranscript(file))
core.AssertEqual(t, 1, parsed.Stats.SkippedLines)
core.AssertContains(t, core.Join("\n", parsed.Stats.Warnings...), "bad JSON")
}
func TestParser_ParseTranscriptReader_Good(t *testing.T) {
parsed := parsedValue(t, ParseTranscriptReader(core.NewReader(userTextEntry("hi")), "reader"))
core.AssertEqual(t, "reader", parsed.Session.ID)
core.AssertLen(t, parsed.Session.Events, 1)
}
func TestParser_ParseTranscriptReader_Bad(t *testing.T) {
parsed := parsedValue(t, ParseTranscriptReader(core.NewReader("{"), "reader"))
core.AssertEqual(t, 1, parsed.Stats.SkippedLines)
}
func TestParser_ParseTranscriptReader_Ugly(t *testing.T) {
parsed := parsedValue(t, ParseTranscriptReader(core.NewReader(""), "empty"))
core.AssertEqual(t, "empty", parsed.Session.ID)
core.AssertEmpty(t, parsed.Session.Events)
}

90
go/parser_unix.go Normal file

@ -0,0 +1,90 @@
//go:build unix
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"io" // Note: intrinsic — io.ReadCloser contract and EOF signalling for descriptor-backed transcript reads; no core equivalent
"syscall" // Note: intrinsic — O_NOFOLLOW descriptor opens and fstat checks are platform syscalls; no core equivalent
coreerr "dappco.re/go"
)
type nofollowfile struct {
fd int
}
// Read reads bytes from a descriptor opened without following symlinks.
func (f *nofollowfile) Read(p []byte) (int, error) {
n, err := syscall.Read(f.fd, p)
if err != nil {
return n, coreerr.E("noFollowFile.Read", "read transcript descriptor", err)
}
if n == 0 {
return 0, io.EOF
}
return n, nil
}
// Close closes a descriptor opened without following symlinks.
func (f *nofollowfile) Close() error {
if err := syscall.Close(f.fd); err != nil {
return coreerr.E("noFollowFile.Close", "close transcript descriptor", err)
}
return nil
}
// openTranscriptNoFollow opens a regular transcript file without following symlinks.
func openTranscriptNoFollow(filePath string) coreerr.Result {
const op = "openTranscriptNoFollow"
fd, err := syscall.Open(filePath, syscall.O_RDONLY|syscall.O_NOFOLLOW, 0)
if err != nil {
return coreerr.Fail(coreerr.E(op, "open transcript without following symlinks", err))
}
var st syscall.Stat_t
if err := syscall.Fstat(fd, &st); err != nil {
if closeErr := closeNoFollowFD(fd); closeErr != nil {
return coreerr.Fail(closeErr)
}
return coreerr.Fail(coreerr.E(op, "stat transcript descriptor", err))
}
if st.Mode&syscall.S_IFMT != syscall.S_IFREG {
if closeErr := closeNoFollowFD(fd); closeErr != nil {
return coreerr.Fail(closeErr)
}
return coreerr.Fail(coreerr.E(op, "not a regular file", nil))
}
return coreerr.Ok(io.ReadCloser(&nofollowfile{fd: fd}))
}
// closeNoFollowFD closes a raw descriptor after a failed secure-open check.
func closeNoFollowFD(fd int) error {
if err := syscall.Close(fd); err != nil {
return coreerr.E("openTranscriptNoFollow", "close rejected transcript descriptor", err)
}
return nil
}
// isTranscriptMissing reports whether err wraps a missing transcript path error.
func isTranscriptMissing(err error) bool {
for err != nil {
if err == syscall.ENOENT {
return true
}
unwrapper, ok := err.(interface{ Unwrap() error })
if !ok {
return false
}
err = unwrapper.Unwrap()
}
return false
}

74
go/search.go Normal file

@ -0,0 +1,74 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"iter" // Note: intrinsic — public lazy sequence API for search results; no core equivalent
"slices" // Note: intrinsic — slices.Collect materialises search iterator results; no core equivalent
"time" // Note: intrinsic — search result timestamps mirror parsed transcript event times; no core equivalent
core "dappco.re/go"
)
// SearchResult represents a match found in a session transcript.
//
// Example:
// result := session.SearchResult{SessionID: "abc123", Tool: "Bash"}
type SearchResult struct {
SessionID string
Timestamp time.Time
Tool string
Match string
}
// Search finds events matching the query across all sessions in the directory.
//
// Example:
// result := session.Search("/tmp/projects", "go test")
func Search(projectsDir, query string) core.Result {
return core.Ok(slices.Collect(SearchSeq(projectsDir, query)))
}
// SearchSeq returns an iterator over search results matching the query across all sessions.
//
// Example:
//
// for result := range session.SearchSeq("/tmp/projects", "go test") {
// _ = result
// }
func SearchSeq(projectsDir, query string) iter.Seq[SearchResult] {
return func(yield func(SearchResult) bool) {
matches := core.PathGlob(core.PathJoin(projectsDir, "*.jsonl"))
query = core.Lower(query)
for _, filePath := range matches {
parseResult := ParseTranscript(filePath)
if !parseResult.OK {
continue
}
sess := parseResult.Value.(ParsedSession).Session
for evt := range sess.EventsSeq() {
if evt.Type != "tool_use" {
continue
}
text := core.Lower(core.Concat(evt.Input, " ", evt.Output))
if core.Contains(text, query) {
matchCtx := evt.Input
if matchCtx == "" {
matchCtx = truncate(evt.Output, 120)
}
res := SearchResult{
SessionID: sess.ID,
Timestamp: evt.Timestamp,
Tool: evt.Tool,
Match: matchCtx,
}
if !yield(res) {
return
}
}
}
}
}
}
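`SearchSeq` follows Go's push-iterator contract: the producer calls `yield` per element and must stop as soon as `yield` returns false. A self-contained sketch of that contract using a locally defined `Seq` type (so it does not depend on the `iter` package; all names here are illustrative):

```go
package main

import "fmt"

// Seq mirrors iter.Seq: a push iterator that calls yield once per element
// and stops producing when yield returns false.
type Seq[T any] func(yield func(T) bool)

// fromSlice adapts a slice into a Seq.
func fromSlice[T any](items []T) Seq[T] {
	return func(yield func(T) bool) {
		for _, v := range items {
			if !yield(v) {
				return // consumer broke out early
			}
		}
	}
}

// firstMatch drains seq only until pred matches, demonstrating the early
// termination that returning false from yield buys you.
func firstMatch[T any](seq Seq[T], pred func(T) bool) (T, bool) {
	var found T
	ok := false
	seq(func(v T) bool {
		if pred(v) {
			found, ok = v, true
			return false // stop the producer
		}
		return true
	})
	return found, ok
}

func main() {
	tools := fromSlice([]string{"Bash", "Read", "Edit"})
	v, ok := firstMatch(tools, func(t string) bool { return t == "Read" })
	fmt.Println(v, ok) // Read true
}
```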

12
go/search_example_test.go Normal file

@ -0,0 +1,12 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
func ExampleSearch() {
_ = Search("/tmp/claude-projects", "go test")
}
func ExampleSearchSeq() {
for item := range SearchSeq("/tmp/claude-projects", "go test") {
_ = item
}
}

71
go/search_test.go Normal file

@ -0,0 +1,71 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"testing"
core "dappco.re/go"
)
func TestSearch_Search_Good(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "one.jsonl", toolUseEntry("Bash", "tool-1", map[string]any{"command": "go test"}), toolResultEntry("tool-1", "PASS", false))
result := Search(dir, "go test")
core.RequireTrue(t, result.OK)
matches := result.Value.([]SearchResult)
core.AssertLen(t, matches, 1)
core.AssertEqual(t, "one", matches[0].SessionID)
}
func TestSearch_Search_Bad(t *testing.T) {
result := Search(t.TempDir(), "absent")
core.RequireTrue(t, result.OK)
core.AssertEmpty(t, result.Value.([]SearchResult))
}
func TestSearch_Search_Ugly(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "broken.jsonl", "{")
writeJSONL(t, dir, "valid.jsonl", toolUseEntry("Bash", "tool-1", map[string]any{"command": "GO TEST"}), toolResultEntry("tool-1", "PASS", false))
result := Search(dir, "go test")
core.RequireTrue(t, result.OK)
core.AssertLen(t, result.Value.([]SearchResult), 1)
}
func TestSearch_SearchSeq_Good(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "one.jsonl", toolUseEntry("Bash", "tool-1", map[string]any{"command": "go vet"}), toolResultEntry("tool-1", "PASS", false))
var matches []SearchResult
for item := range SearchSeq(dir, "go vet") {
matches = append(matches, item)
}
core.AssertLen(t, matches, 1)
}
func TestSearch_SearchSeq_Bad(t *testing.T) {
var matches []SearchResult
for item := range SearchSeq(t.TempDir(), "nothing") {
matches = append(matches, item)
}
core.AssertEmpty(t, matches)
}
func TestSearch_SearchSeq_Ugly(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "text.jsonl", userTextEntry("please run go test"))
var matches []SearchResult
for item := range SearchSeq(dir, "go test") {
matches = append(matches, item)
}
core.AssertEmpty(t, matches)
}


@ -0,0 +1,18 @@
version: "3"
env:
GOWORK: off
GOPATH: /tmp/gopath-gosession
GOMODCACHE: /tmp/gomodcache-gosession
GOCACHE: /tmp/go-session-go-build-cache
tasks:
default:
deps: [test]
test:
dir: ../../..
cmds:
- go vet ./...
- go test ./...
- go run ./tests/cli/session


@ -0,0 +1,116 @@
// SPDX-Licence-Identifier: EUPL-1.2
package main
import (
"time"
core "dappco.re/go"
session "dappco.re/go/session"
)
const transcript = `{"type":"user","timestamp":"2026-02-20T10:00:00Z","sessionId":"ax10-session","message":{"role":"user","content":[{"type":"text","text":"Run the AX-10 smoke test"}]}}
{"type":"assistant","timestamp":"2026-02-20T10:00:01Z","sessionId":"ax10-session","message":{"role":"assistant","content":[{"type":"tool_use","name":"Bash","id":"tool-bash-1","input":{"command":"echo ax10","description":"smoke test"}}]}}
{"type":"user","timestamp":"2026-02-20T10:00:02Z","sessionId":"ax10-session","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"tool-bash-1","content":"ax10\n","is_error":false}]}}
{"type":"assistant","timestamp":"2026-02-20T10:00:03Z","sessionId":"ax10-session","message":{"role":"assistant","content":[{"type":"text","text":"AX-10 complete"}]}}
`
// main runs the CLI session smoke test.
func main() {
fs := (&core.Fs{}).NewUnrestricted()
dir := fs.TempDir("go-session-ax10-")
require(dir != "", "create temporary directory")
defer func() {
deleteResult := fs.DeleteAll(dir)
require(deleteResult.OK, "delete temporary directory")
}()
transcriptPath := core.Path(dir, "ax10-session.jsonl")
writeResult := fs.WriteMode(transcriptPath, transcript, 0o600)
require(writeResult.OK, "write transcript")
parseResult := session.ParseTranscript(transcriptPath)
requireResult(parseResult, "parse transcript")
parsed := parseResult.Value.(session.ParsedSession)
sess := parsed.Session
stats := parsed.Stats
require(sess.ID == "ax10-session", "session ID should come from the file name")
require(sess.Path == transcriptPath, "session path should match the parsed file")
require(len(sess.Events) == 3, "expected user, tool, and assistant events")
require(stats.TotalLines == 4, "expected all transcript lines to be scanned")
require(stats.SkippedLines == 0, "expected no skipped transcript lines")
require(stats.OrphanedToolCalls == 0, "expected no orphaned tool calls")
tool := sess.Events[1]
require(tool.Type == "tool_use", "expected second event to be the tool call")
require(tool.Tool == "Bash", "expected Bash tool call")
require(tool.Input == "echo ax10 # smoke test", "expected Bash input to include command and description")
require(tool.Output == "ax10\n", "expected Bash output to be preserved")
expectedDuration := time.Second
require(tool.Duration == expectedDuration, "expected tool duration to match transcript timestamps")
require(tool.Success, "expected successful tool call")
analytics := session.Analyse(sess)
require(analytics.EventCount == 3, "expected analytics event count")
require(analytics.ToolCounts["Bash"] == 1, "expected analytics Bash count")
expectedSuccessRate := successfulToolRate(sess)
require(analytics.SuccessRate == expectedSuccessRate, "expected analytics success rate")
require(core.Contains(session.FormatAnalytics(analytics), "Bash"), "expected formatted analytics to include Bash")
searchResult := session.Search(dir, "ax10")
requireResult(searchResult, "search sessions")
results := searchResult.Value.([]session.SearchResult)
require(len(results) == 1, "expected one search result")
require(results[0].SessionID == "ax10-session", "expected search result session ID")
listResult := session.ListSessions(dir)
requireResult(listResult, "list sessions")
sessions := listResult.Value.([]session.Session)
require(len(sessions) == 1, "expected one listed session")
require(sessions[0].ID == "ax10-session", "expected listed session ID")
fetchResult := session.FetchSession(dir, "ax10-session")
requireResult(fetchResult, "fetch session")
fetched := fetchResult.Value.(session.ParsedSession)
require(fetched.Session.ID == sess.ID, "expected fetched session to match parsed session")
htmlPath := core.Path(dir, "timeline.html")
requireResult(session.RenderHTML(sess, htmlPath), "render HTML")
readResult := fs.Read(htmlPath)
require(readResult.OK, "read rendered HTML")
html, ok := readResult.Value.(string)
require(ok, "read rendered HTML as string")
require(core.Contains(html, "Session ax10"), "expected rendered HTML session title")
require(core.Contains(html, "echo ax10"), "expected rendered HTML tool input")
}
// successfulToolRate calculates the same tool-call success ratio as session.Analyse.
func successfulToolRate(sess *session.Session) float64 {
var successful, total int
for _, evt := range sess.Events {
if evt.Type != "tool_use" {
continue
}
total++
if evt.Success {
successful++
}
}
if total == 0 {
return 0
}
return float64(successful) / float64(total)
}
// require panics with msg when the condition is not met, aborting the smoke test.
func require(ok bool, msg string) {
if !ok {
panic(msg)
}
}
// requireResult panics with msg and the result error when the result is not OK.
func requireResult(result core.Result, msg string) {
if !result.OK {
panic(msg + ": " + result.Error())
}
}

178
go/video.go Normal file

@ -0,0 +1,178 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"io/fs" // Note: intrinsic — fs.FileInfo metadata for executable checks from hostFS.Stat; no core equivalent
core "dappco.re/go"
)
// RenderMP4 generates an MP4 video from session events using VHS (charmbracelet).
//
// Example:
// result := session.RenderMP4(sess, "/tmp/session.mp4")
func RenderMP4(sess *Session, outputPath string) core.Result {
vhsPath := lookupExecutable("vhs")
if vhsPath == "" {
return core.Fail(core.E("RenderMP4", "vhs not installed (go install github.com/charmbracelet/vhs@latest)", nil))
}
tape := generateTape(sess, outputPath)
tmpDir := hostFS.TempDir("session-")
if tmpDir == "" {
return core.Fail(core.E("RenderMP4", "failed to create temp dir", nil))
}
defer hostFS.DeleteAll(tmpDir)
tapePath := core.PathJoin(tmpDir, core.Concat(core.ID(), ".tape"))
writeResult := hostFS.Write(tapePath, tape)
if !writeResult.OK {
return core.Fail(core.E("RenderMP4", "write tape", resultError(writeResult)))
}
runResult := runCommand(vhsPath, tapePath)
if !runResult.OK {
return core.Fail(core.E("RenderMP4", "vhs render", resultError(runResult)))
}
return core.Ok(nil)
}
// generateTape builds the VHS script used to render a session video.
func generateTape(sess *Session, outputPath string) string {
b := core.NewBuilder()
b.WriteString(core.Sprintf("Output %s\n", outputPath))
b.WriteString("Set FontSize 16\n")
b.WriteString("Set Width 1400\n")
b.WriteString("Set Height 800\n")
b.WriteString("Set TypingSpeed 30ms\n")
b.WriteString("Set Theme \"Catppuccin Mocha\"\n")
b.WriteString("Set Shell bash\n")
b.WriteString("\n")
// Title frame
id := sess.ID
if len(id) > 8 {
id = id[:8]
}
b.WriteString(core.Sprintf("Type \"# Session %s | %s\"\n",
id, sess.StartTime.Format("2006-01-02 15:04")))
b.WriteString("Enter\n")
b.WriteString("Sleep 2s\n")
b.WriteString("\n")
for _, evt := range sess.Events {
if evt.Type != "tool_use" {
continue
}
switch evt.Tool {
case "Bash":
cmd := extractCommand(evt.Input)
if cmd == "" {
continue
}
// Show the command
b.WriteString(core.Sprintf("Type %q\n", "$ "+cmd))
b.WriteString("Enter\n")
// Show abbreviated output
output := evt.Output
if len(output) > 200 {
output = output[:200] + "..."
}
if output != "" {
for _, line := range core.Split(output, "\n") {
if line == "" {
continue
}
b.WriteString(core.Sprintf("Type %q\n", line))
b.WriteString("Enter\n")
}
}
// Status indicator
if !evt.Success {
b.WriteString("Type \"# ✗ FAILED\"\n")
} else {
b.WriteString("Type \"# ✓ OK\"\n")
}
b.WriteString("Enter\n")
b.WriteString("Sleep 1s\n")
b.WriteString("\n")
case "Read", "Edit", "Write":
b.WriteString(core.Sprintf("Type %q\n",
core.Sprintf("# %s: %s", evt.Tool, truncate(evt.Input, 80))))
b.WriteString("Enter\n")
b.WriteString("Sleep 500ms\n")
case "Task":
b.WriteString(core.Sprintf("Type %q\n",
core.Sprintf("# Agent: %s", truncate(evt.Input, 80))))
b.WriteString("Enter\n")
b.WriteString("Sleep 1s\n")
}
}
b.WriteString("Sleep 3s\n")
return b.String()
}
// extractCommand removes a human description suffix from a Bash tool input.
func extractCommand(input string) string {
// Remove description suffix (after " # ")
if idx := indexOf(input, " # "); idx > 0 {
return input[:idx]
}
return input
}
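`extractCommand` assumes the parser composed Bash inputs as `command # description` (visible in the CLI smoke test's `echo ax10 # smoke test`). A sketch of that round trip with hypothetical helpers, not package API:

```go
package main

import (
	"fmt"
	"strings"
)

// joinInput mirrors how a Bash event Input is composed
// ("command # description"); splitInput inverts it like extractCommand.
func joinInput(command, description string) string {
	if description == "" {
		return command
	}
	return command + " # " + description
}

func splitInput(input string) (command, description string) {
	if idx := strings.Index(input, " # "); idx > 0 {
		return input[:idx], input[idx+3:]
	}
	return input, ""
}

func main() {
	in := joinInput("echo ax10", "smoke test")
	cmd, desc := splitInput(in)
	fmt.Println(cmd, "|", desc) // echo ax10 | smoke test
}
```

Note the inherent ambiguity: a command that itself contains a literal ` # ` splits at the first marker, so the recovered command is a best-effort prefix, matching `extractCommand`'s behaviour.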
// lookupExecutable resolves an executable name from PATH or validates a direct path.
func lookupExecutable(name string) string {
if name == "" {
return ""
}
if containsAny(name, `/\`) {
if isExecutablePath(name) {
return name
}
return ""
}
for _, dir := range core.Split(core.Env("PATH"), ":") {
if dir == "" {
dir = "."
}
candidate := core.PathJoin(dir, name)
if isExecutablePath(candidate) {
return candidate
}
}
return ""
}
// isExecutablePath reports whether filePath is an executable regular file.
func isExecutablePath(filePath string) bool {
statResult := hostFS.Stat(filePath)
if !statResult.OK {
return false
}
info, ok := statResult.Value.(fs.FileInfo)
if !ok || info.IsDir() {
return false
}
return info.Mode()&0111 != 0
}
// runCommand executes an external command through the core process abstraction.
func runCommand(command string, args ...string) core.Result {
c := sessionCore(nil)
runResult := hostProcess(c).Run(hostContext(c), command, args...)
if runResult.OK {
return core.Ok(nil)
}
return core.Fail(core.E("runCommand", "run command", resultError(runResult)))
}

7
go/video_example_test.go Normal file

@ -0,0 +1,7 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
func ExampleRenderMP4() {
sess := &Session{ID: "example"}
_ = RenderMP4(sess, "/tmp/example-session.mp4")
}

48
go/video_test.go Normal file

@ -0,0 +1,48 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"testing"
"time"
core "dappco.re/go"
)
func TestVideo_RenderMP4_Good(t *testing.T) {
if lookupExecutable("vhs") == "" {
t.Skip("RenderMP4 success branch requires vhs")
}
sess := &Session{ID: "video", StartTime: time.Unix(0, 0), Events: []Event{{Type: "tool_use", Tool: "Bash", Input: "echo ok", Output: "ok", Success: true}}}
result := RenderMP4(sess, core.PathJoin(t.TempDir(), "session.mp4"))
tape := generateTape(sess, "/tmp/session.mp4")
core.AssertTrue(t, result.OK)
core.AssertContains(t, tape, "Output /tmp/session.mp4")
core.AssertContains(t, tape, "echo ok")
}
func TestVideo_RenderMP4_Bad(t *testing.T) {
if lookupExecutable("vhs") != "" {
t.Skip("RenderMP4 missing-vhs branch requires vhs absent")
}
sess := &Session{ID: "video"}
result := RenderMP4(sess, "/tmp/session.mp4")
core.AssertFalse(t, result.OK)
core.AssertContains(t, result.Error(), "vhs not installed")
}
func TestVideo_RenderMP4_Ugly(t *testing.T) {
sess := &Session{ID: "video", Events: []Event{{Type: "tool_use", Tool: "Bash", Input: "", Success: true}}}
result := RenderMP4(sess, "/tmp/session.mp4")
tape := generateTape(sess, "/tmp/session.mp4")
if lookupExecutable("vhs") == "" {
core.AssertFalse(t, result.OK)
}
core.AssertNotContains(t, tape, "\"$ \"")
core.AssertContains(t, tape, "Sleep 3s")
}


@ -1,238 +0,0 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"os"
"strings"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestRenderHTML_BasicSession_Good(t *testing.T) {
dir := t.TempDir()
outputPath := dir + "/output.html"
sess := &Session{
ID: "test-session-12345678",
Path: "/tmp/test.jsonl",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
EndTime: time.Date(2026, 2, 20, 10, 5, 30, 0, time.UTC),
Events: []Event{
{
Timestamp: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
Type: "user",
Input: "Hello, please help me",
},
{
Timestamp: time.Date(2026, 2, 20, 10, 0, 1, 0, time.UTC),
Type: "assistant",
Input: "Sure, let me check.",
},
{
Timestamp: time.Date(2026, 2, 20, 10, 0, 2, 0, time.UTC),
Type: "tool_use",
Tool: "Bash",
ToolID: "t1",
Input: "ls -la",
Output: "total 42",
Duration: time.Second,
Success: true,
},
{
Timestamp: time.Date(2026, 2, 20, 10, 0, 4, 0, time.UTC),
Type: "tool_use",
Tool: "Read",
ToolID: "t2",
Input: "/tmp/file.go",
Output: "package main",
Duration: 500 * time.Millisecond,
Success: true,
},
},
}
err := RenderHTML(sess, outputPath)
require.NoError(t, err)
content, err := os.ReadFile(outputPath)
require.NoError(t, err)
html := string(content)
// Basic structure checks
assert.Contains(t, html, "<!DOCTYPE html>")
assert.Contains(t, html, "test-ses") // shortID of "test-session-12345678"
assert.Contains(t, html, "2026-02-20 10:00:00")
assert.Contains(t, html, "5m30s") // duration
assert.Contains(t, html, "2 tool calls")
assert.Contains(t, html, "ls -la")
assert.Contains(t, html, "total 42")
assert.Contains(t, html, "/tmp/file.go")
assert.Contains(t, html, "User") // user event label
assert.Contains(t, html, "Claude") // assistant event label
assert.Contains(t, html, "Bash")
assert.Contains(t, html, "Read")
// Should contain JS for toggle and filter
assert.Contains(t, html, "function toggle")
assert.Contains(t, html, "function filterEvents")
}
func TestRenderHTML_EmptySession_Good(t *testing.T) {
dir := t.TempDir()
outputPath := dir + "/empty.html"
sess := &Session{
ID: "empty",
Path: "/tmp/empty.jsonl",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
EndTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
Events: nil,
}
err := RenderHTML(sess, outputPath)
require.NoError(t, err)
content, err := os.ReadFile(outputPath)
require.NoError(t, err)
html := string(content)
assert.Contains(t, html, "<!DOCTYPE html>")
assert.Contains(t, html, "0 tool calls")
// Should NOT contain error span
assert.NotContains(t, html, "errors</span>")
}
func TestRenderHTML_WithErrors_Good(t *testing.T) {
dir := t.TempDir()
outputPath := dir + "/errors.html"
sess := &Session{
ID: "err-session",
Path: "/tmp/err.jsonl",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
EndTime: time.Date(2026, 2, 20, 10, 1, 0, 0, time.UTC),
Events: []Event{
{
Timestamp: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
Type: "tool_use",
Tool: "Bash",
Input: "cat /nonexistent",
Output: "No such file",
Duration: 100 * time.Millisecond,
Success: false,
ErrorMsg: "No such file",
},
{
Timestamp: time.Date(2026, 2, 20, 10, 0, 30, 0, time.UTC),
Type: "tool_use",
Tool: "Bash",
Input: "echo ok",
Output: "ok",
Duration: 50 * time.Millisecond,
Success: true,
},
},
}
err := RenderHTML(sess, outputPath)
require.NoError(t, err)
content, err := os.ReadFile(outputPath)
require.NoError(t, err)
html := string(content)
assert.Contains(t, html, "1 errors")
assert.Contains(t, html, `class="event error"`)
assert.Contains(t, html, "&#10007;") // cross mark for failed
assert.Contains(t, html, "&#10003;") // check mark for success
}
func TestRenderHTML_SpecialCharacters_Good(t *testing.T) {
dir := t.TempDir()
outputPath := dir + "/special.html"
sess := &Session{
ID: "special",
Path: "/tmp/special.jsonl",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
EndTime: time.Date(2026, 2, 20, 10, 0, 1, 0, time.UTC),
Events: []Event{
{
Timestamp: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
Type: "tool_use",
Tool: "Bash",
Input: `echo "<script>alert('xss')</script>"`,
Output: `<script>alert('xss')</script>`,
Duration: time.Second,
Success: true,
},
{
Timestamp: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
Type: "user",
Input: `User says: "quotes & <brackets>"`,
},
},
}
err := RenderHTML(sess, outputPath)
require.NoError(t, err)
content, err := os.ReadFile(outputPath)
require.NoError(t, err)
html := string(content)
// Script tags should be escaped, never raw
assert.NotContains(t, html, "<script>alert")
assert.Contains(t, html, "&lt;script&gt;")
assert.Contains(t, html, "&amp;")
}
func TestRenderHTML_InvalidPath_Ugly(t *testing.T) {
sess := &Session{
ID: "test",
Events: nil,
}
err := RenderHTML(sess, "/nonexistent/dir/output.html")
require.Error(t, err)
assert.Contains(t, err.Error(), "create html")
}
func TestRenderHTML_LabelsByToolType_Good(t *testing.T) {
dir := t.TempDir()
outputPath := dir + "/labels.html"
sess := &Session{
ID: "labels",
Path: "/tmp/labels.jsonl",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
EndTime: time.Date(2026, 2, 20, 10, 0, 5, 0, time.UTC),
Events: []Event{
{Type: "tool_use", Tool: "Bash", Input: "ls", Timestamp: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC), Success: true},
{Type: "tool_use", Tool: "Read", Input: "/file", Timestamp: time.Date(2026, 2, 20, 10, 0, 1, 0, time.UTC), Success: true},
{Type: "tool_use", Tool: "Glob", Input: "**/*.go", Timestamp: time.Date(2026, 2, 20, 10, 0, 2, 0, time.UTC), Success: true},
{Type: "tool_use", Tool: "Grep", Input: "/TODO/ in .", Timestamp: time.Date(2026, 2, 20, 10, 0, 3, 0, time.UTC), Success: true},
{Type: "tool_use", Tool: "Edit", Input: "/file (edit)", Timestamp: time.Date(2026, 2, 20, 10, 0, 4, 0, time.UTC), Success: true},
{Type: "tool_use", Tool: "Write", Input: "/file (100 bytes)", Timestamp: time.Date(2026, 2, 20, 10, 0, 5, 0, time.UTC), Success: true},
},
}
err := RenderHTML(sess, outputPath)
require.NoError(t, err)
content, err := os.ReadFile(outputPath)
require.NoError(t, err)
html := string(content)
// Bash gets "Command" label
assert.True(t, strings.Contains(html, "Command"), "Bash events should use 'Command' label")
// Read, Glob, Grep get "Target" label
assert.True(t, strings.Contains(html, "Target"), "Read/Glob/Grep events should use 'Target' label")
// Edit, Write get "File" label
assert.True(t, strings.Contains(html, "File"), "Edit/Write events should use 'File' label")
}


@@ -1,13 +1,13 @@
 # go-session
-`dappco.re/go/core/session` -- Claude Code session parser and visualiser.
+`dappco.re/go/session` -- Claude Code session parser and visualiser.
 Reads JSONL transcript files produced by Claude Code, extracts structured events, and renders them as interactive HTML timelines or MP4 videos. Zero external dependencies (stdlib only).
 ## Installation
 ```bash
-go get dappco.re/go/core/session@latest
+go get dappco.re/go/session@latest
 ```
 ## Core Types
@@ -45,15 +45,16 @@ import (
 "fmt"
 "log"
-"dappco.re/go/core/session"
+"dappco.re/go/session"
 )
 func main() {
 // Parse a single transcript
-sess, err := session.ParseTranscript("~/.claude/projects/abc123.jsonl")
+sess, stats, err := session.ParseTranscript("~/.claude/projects/abc123.jsonl")
 if err != nil {
 log.Fatal(err)
 }
+fmt.Printf("Skipped lines: %d\n", stats.SkippedLines)
 fmt.Printf("Session %s: %d events over %s\n",
 sess.ID, len(sess.Events), sess.EndTime.Sub(sess.StartTime))


@@ -15,6 +15,7 @@ go-session provides two output formats for visualising parsed sessions: a self-c
 - Yellow: User messages
 - Grey: Assistant responses
 - Red border: Failed tool calls
+- **Permalinks** on each event card for direct `#evt-N` links
 ### Usage

parser.go (534 lines removed)

@@ -1,534 +0,0 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"bufio"
"encoding/json"
"fmt"
"io"
"iter"
"maps"
"os"
"path/filepath"
"slices"
"strings"
"time"
coreerr "dappco.re/go/core/log"
)
// maxScannerBuffer is the maximum line length the scanner will accept.
// Set to 8 MiB to handle very large tool outputs without truncation.
const maxScannerBuffer = 8 * 1024 * 1024
// Event represents a single action in a session timeline.
type Event struct {
Timestamp time.Time
Type string // "tool_use", "user", "assistant", "error"
Tool string // "Bash", "Read", "Edit", "Write", "Grep", "Glob", etc.
ToolID string
Input string // Command, file path, or message text
Output string // Result text
Duration time.Duration
Success bool
ErrorMsg string
}
// Session holds parsed session metadata and events.
type Session struct {
ID string
Path string
StartTime time.Time
EndTime time.Time
Events []Event
}
// EventsSeq returns an iterator over the session's events.
func (s *Session) EventsSeq() iter.Seq[Event] {
return slices.Values(s.Events)
}
// rawEntry is the top-level structure of a Claude Code JSONL line.
type rawEntry struct {
Type string `json:"type"`
Timestamp string `json:"timestamp"`
SessionID string `json:"sessionId"`
Message json.RawMessage `json:"message"`
UserType string `json:"userType"`
}
type rawMessage struct {
Role string `json:"role"`
Content []json.RawMessage `json:"content"`
}
type contentBlock struct {
Type string `json:"type"`
Name string `json:"name,omitempty"`
ID string `json:"id,omitempty"`
Text string `json:"text,omitempty"`
Input json.RawMessage `json:"input,omitempty"`
ToolUseID string `json:"tool_use_id,omitempty"`
Content any `json:"content,omitempty"`
IsError *bool `json:"is_error,omitempty"`
}
type bashInput struct {
Command string `json:"command"`
Description string `json:"description"`
Timeout int `json:"timeout"`
}
type readInput struct {
FilePath string `json:"file_path"`
Offset int `json:"offset"`
Limit int `json:"limit"`
}
type editInput struct {
FilePath string `json:"file_path"`
OldString string `json:"old_string"`
NewString string `json:"new_string"`
}
type writeInput struct {
FilePath string `json:"file_path"`
Content string `json:"content"`
}
type grepInput struct {
Pattern string `json:"pattern"`
Path string `json:"path"`
}
type globInput struct {
Pattern string `json:"pattern"`
Path string `json:"path"`
}
type taskInput struct {
Prompt string `json:"prompt"`
Description string `json:"description"`
SubagentType string `json:"subagent_type"`
}
// ParseStats reports diagnostic information from a parse run.
type ParseStats struct {
TotalLines int
SkippedLines int
OrphanedToolCalls int
Warnings []string
}
// ListSessions returns all sessions found in the Claude projects directory.
func ListSessions(projectsDir string) ([]Session, error) {
return slices.Collect(ListSessionsSeq(projectsDir)), nil
}
// ListSessionsSeq returns an iterator over all sessions found in the Claude projects directory.
func ListSessionsSeq(projectsDir string) iter.Seq[Session] {
return func(yield func(Session) bool) {
matches, err := filepath.Glob(filepath.Join(projectsDir, "*.jsonl"))
if err != nil {
return
}
var sessions []Session
for _, path := range matches {
base := filepath.Base(path)
id := strings.TrimSuffix(base, ".jsonl")
info, err := os.Stat(path)
if err != nil {
continue
}
s := Session{
ID: id,
Path: path,
}
// Quick scan for first and last timestamps
f, err := os.Open(path)
if err != nil {
continue
}
scanner := bufio.NewScanner(f)
scanner.Buffer(make([]byte, 1024*1024), 1024*1024)
var firstTS, lastTS string
for scanner.Scan() {
var entry rawEntry
if json.Unmarshal(scanner.Bytes(), &entry) != nil {
continue
}
if entry.Timestamp == "" {
continue
}
if firstTS == "" {
firstTS = entry.Timestamp
}
lastTS = entry.Timestamp
}
f.Close()
if firstTS != "" {
if t, err := time.Parse(time.RFC3339Nano, firstTS); err == nil {
s.StartTime = t
}
}
if lastTS != "" {
if t, err := time.Parse(time.RFC3339Nano, lastTS); err == nil {
s.EndTime = t
}
}
if s.StartTime.IsZero() {
s.StartTime = info.ModTime()
}
sessions = append(sessions, s)
}
slices.SortFunc(sessions, func(i, j Session) int {
return j.StartTime.Compare(i.StartTime)
})
for _, s := range sessions {
if !yield(s) {
return
}
}
}
}
// PruneSessions deletes session files in the projects directory that were last
// modified more than maxAge ago. Returns the number of files deleted.
func PruneSessions(projectsDir string, maxAge time.Duration) (int, error) {
matches, err := filepath.Glob(filepath.Join(projectsDir, "*.jsonl"))
if err != nil {
return 0, coreerr.E("PruneSessions", "list sessions", err)
}
var deleted int
now := time.Now()
for _, path := range matches {
info, err := os.Stat(path)
if err != nil {
continue
}
if now.Sub(info.ModTime()) > maxAge {
if err := os.Remove(path); err == nil {
deleted++
}
}
}
return deleted, nil
}
// IsExpired returns true if the session's end time is older than the given maxAge
// relative to now.
func (s *Session) IsExpired(maxAge time.Duration) bool {
if s.EndTime.IsZero() {
return false
}
return time.Since(s.EndTime) > maxAge
}
// FetchSession retrieves a session by ID from the projects directory.
// It ensures the ID does not contain path traversal characters.
func FetchSession(projectsDir, id string) (*Session, *ParseStats, error) {
if strings.Contains(id, "..") || strings.ContainsAny(id, `/\`) {
return nil, nil, coreerr.E("FetchSession", "invalid session id", nil)
}
path := filepath.Join(projectsDir, id+".jsonl")
return ParseTranscript(path)
}
// ParseTranscript reads a JSONL session file and returns structured events.
// Malformed or truncated lines are skipped; diagnostics are reported in ParseStats.
func ParseTranscript(path string) (*Session, *ParseStats, error) {
f, err := os.Open(path)
if err != nil {
return nil, nil, coreerr.E("ParseTranscript", "open transcript", err)
}
defer f.Close()
base := filepath.Base(path)
id := strings.TrimSuffix(base, ".jsonl")
sess, stats, err := parseFromReader(f, id)
if sess != nil {
sess.Path = path
}
return sess, stats, err
}
// ParseTranscriptReader parses a JSONL session from an io.Reader, enabling
// streaming parse without needing a file on disc. The id parameter sets
// the session ID (since there is no file name to derive it from).
func ParseTranscriptReader(r io.Reader, id string) (*Session, *ParseStats, error) {
return parseFromReader(r, id)
}
// parseFromReader is the shared implementation for both file-based and
// reader-based parsing. It scans line-by-line using bufio.Scanner with
// an 8 MiB buffer, gracefully skipping malformed lines.
func parseFromReader(r io.Reader, id string) (*Session, *ParseStats, error) {
sess := &Session{
ID: id,
}
stats := &ParseStats{}
// Collect tool_use entries keyed by ID.
type toolUse struct {
timestamp time.Time
tool string
input string
}
pendingTools := make(map[string]toolUse)
scanner := bufio.NewScanner(r)
scanner.Buffer(make([]byte, maxScannerBuffer), maxScannerBuffer)
var lineNum int
var lastRaw string
var lastLineFailed bool
for scanner.Scan() {
lineNum++
stats.TotalLines++
raw := scanner.Text()
if strings.TrimSpace(raw) == "" {
continue
}
lastRaw = raw
lastLineFailed = false
var entry rawEntry
if err := json.Unmarshal([]byte(raw), &entry); err != nil {
stats.SkippedLines++
preview := raw
if len(preview) > 100 {
preview = preview[:100]
}
stats.Warnings = append(stats.Warnings,
fmt.Sprintf("line %d: skipped (bad JSON): %s", lineNum, preview))
lastLineFailed = true
continue
}
ts, err := time.Parse(time.RFC3339Nano, entry.Timestamp)
if err != nil {
stats.Warnings = append(stats.Warnings, fmt.Sprintf("line %d: bad timestamp %q: %v", lineNum, entry.Timestamp, err))
continue
}
if sess.StartTime.IsZero() && !ts.IsZero() {
sess.StartTime = ts
}
if !ts.IsZero() {
sess.EndTime = ts
}
switch entry.Type {
case "assistant":
var msg rawMessage
if err := json.Unmarshal(entry.Message, &msg); err != nil {
stats.Warnings = append(stats.Warnings, fmt.Sprintf("line %d: failed to unmarshal assistant message: %v", lineNum, err))
continue
}
for i, raw := range msg.Content {
var block contentBlock
if err := json.Unmarshal(raw, &block); err != nil {
stats.Warnings = append(stats.Warnings, fmt.Sprintf("line %d block %d: failed to unmarshal content: %v", lineNum, i, err))
continue
}
switch block.Type {
case "text":
if text := strings.TrimSpace(block.Text); text != "" {
sess.Events = append(sess.Events, Event{
Timestamp: ts,
Type: "assistant",
Input: truncate(text, 500),
})
}
case "tool_use":
inputStr := extractToolInput(block.Name, block.Input)
pendingTools[block.ID] = toolUse{
timestamp: ts,
tool: block.Name,
input: inputStr,
}
}
}
case "user":
var msg rawMessage
if err := json.Unmarshal(entry.Message, &msg); err != nil {
stats.Warnings = append(stats.Warnings, fmt.Sprintf("line %d: failed to unmarshal user message: %v", lineNum, err))
continue
}
for i, raw := range msg.Content {
var block contentBlock
if err := json.Unmarshal(raw, &block); err != nil {
stats.Warnings = append(stats.Warnings, fmt.Sprintf("line %d block %d: failed to unmarshal content: %v", lineNum, i, err))
continue
}
switch block.Type {
case "tool_result":
if tu, ok := pendingTools[block.ToolUseID]; ok {
output := extractResultContent(block.Content)
isError := block.IsError != nil && *block.IsError
evt := Event{
Timestamp: tu.timestamp,
Type: "tool_use",
Tool: tu.tool,
ToolID: block.ToolUseID,
Input: tu.input,
Output: truncate(output, 2000),
Duration: ts.Sub(tu.timestamp),
Success: !isError,
}
if isError {
evt.ErrorMsg = truncate(output, 500)
}
sess.Events = append(sess.Events, evt)
delete(pendingTools, block.ToolUseID)
}
case "text":
if text := strings.TrimSpace(block.Text); text != "" {
sess.Events = append(sess.Events, Event{
Timestamp: ts,
Type: "user",
Input: truncate(text, 500),
})
}
}
}
}
}
// Detect truncated final line.
if lastLineFailed && lastRaw != "" {
stats.Warnings = append(stats.Warnings, "truncated final line")
}
// Check for scanner buffer errors.
if scanErr := scanner.Err(); scanErr != nil {
return nil, stats, scanErr
}
// Track orphaned tool calls (tool_use with no matching result).
stats.OrphanedToolCalls = len(pendingTools)
if stats.OrphanedToolCalls > 0 {
for id := range pendingTools {
stats.Warnings = append(stats.Warnings,
fmt.Sprintf("orphaned tool call: %s", id))
}
}
return sess, stats, nil
}
func extractToolInput(toolName string, raw json.RawMessage) string {
if raw == nil {
return ""
}
switch toolName {
case "Bash":
var inp bashInput
if json.Unmarshal(raw, &inp) == nil {
desc := inp.Description
if desc != "" {
desc = " # " + desc
}
return inp.Command + desc
}
case "Read":
var inp readInput
if json.Unmarshal(raw, &inp) == nil {
return inp.FilePath
}
case "Edit":
var inp editInput
if json.Unmarshal(raw, &inp) == nil {
return fmt.Sprintf("%s (edit)", inp.FilePath)
}
case "Write":
var inp writeInput
if json.Unmarshal(raw, &inp) == nil {
return fmt.Sprintf("%s (%d bytes)", inp.FilePath, len(inp.Content))
}
case "Grep":
var inp grepInput
if json.Unmarshal(raw, &inp) == nil {
path := inp.Path
if path == "" {
path = "."
}
return fmt.Sprintf("/%s/ in %s", inp.Pattern, path)
}
case "Glob":
var inp globInput
if json.Unmarshal(raw, &inp) == nil {
return inp.Pattern
}
case "Task":
var inp taskInput
if json.Unmarshal(raw, &inp) == nil {
desc := inp.Description
if desc == "" {
desc = truncate(inp.Prompt, 80)
}
return fmt.Sprintf("[%s] %s", inp.SubagentType, desc)
}
}
// Fallback: show raw JSON keys
var m map[string]any
if json.Unmarshal(raw, &m) == nil {
parts := slices.Sorted(maps.Keys(m))
return strings.Join(parts, ", ")
}
return ""
}
func extractResultContent(content any) string {
switch v := content.(type) {
case string:
return v
case []any:
var parts []string
for _, item := range v {
if m, ok := item.(map[string]any); ok {
if text, ok := m["text"].(string); ok {
parts = append(parts, text)
}
}
}
return strings.Join(parts, "\n")
case map[string]any:
if text, ok := v["text"].(string); ok {
return text
}
}
return fmt.Sprintf("%v", content)
}
// truncate shortens s to at most max bytes, appending "..." when cut.
// Truncation is byte-based, so a multi-byte rune at the boundary may be split.
func truncate(s string, max int) string {
if len(s) <= max {
return s
}
return s[:max] + "..."
}

(file diff suppressed because it is too large)


@@ -1,64 +0,0 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"iter"
"path/filepath"
"slices"
"strings"
"time"
)
// SearchResult represents a match found in a session transcript.
type SearchResult struct {
SessionID string
Timestamp time.Time
Tool string
Match string
}
// Search finds events matching the query across all sessions in the directory.
func Search(projectsDir, query string) ([]SearchResult, error) {
return slices.Collect(SearchSeq(projectsDir, query)), nil
}
// SearchSeq returns an iterator over search results matching the query across all sessions.
func SearchSeq(projectsDir, query string) iter.Seq[SearchResult] {
return func(yield func(SearchResult) bool) {
matches, err := filepath.Glob(filepath.Join(projectsDir, "*.jsonl"))
if err != nil {
return
}
query = strings.ToLower(query)
for _, path := range matches {
sess, _, err := ParseTranscript(path)
if err != nil {
continue
}
for evt := range sess.EventsSeq() {
if evt.Type != "tool_use" {
continue
}
text := strings.ToLower(evt.Input + " " + evt.Output)
if strings.Contains(text, query) {
matchCtx := evt.Input
if matchCtx == "" {
matchCtx = truncate(evt.Output, 120)
}
res := SearchResult{
SessionID: sess.ID,
Timestamp: evt.Timestamp,
Tool: evt.Tool,
Match: matchCtx,
}
if !yield(res) {
return
}
}
}
}
}
}


@@ -1,165 +0,0 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestSearch_EmptyDir_Good(t *testing.T) {
dir := t.TempDir()
results, err := Search(dir, "anything")
require.NoError(t, err)
assert.Empty(t, results)
}
func TestSearch_NoMatches_Good(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "session.jsonl",
toolUseEntry(ts(0), "Bash", "tool-1", map[string]any{
"command": "ls -la",
}),
toolResultEntry(ts(1), "tool-1", "total 42", false),
)
results, err := Search(dir, "nonexistent-query")
require.NoError(t, err)
assert.Empty(t, results)
}
func TestSearch_SingleMatch_Good(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "session.jsonl",
toolUseEntry(ts(0), "Bash", "tool-1", map[string]any{
"command": "go test ./...",
}),
toolResultEntry(ts(1), "tool-1", "PASS ok mypackage 0.5s", false),
)
results, err := Search(dir, "go test")
require.NoError(t, err)
require.Len(t, results, 1)
assert.Equal(t, "session", results[0].SessionID)
assert.Equal(t, "Bash", results[0].Tool)
assert.Contains(t, results[0].Match, "go test")
}
func TestSearchSeq_SingleMatch_Good(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "session.jsonl",
toolUseEntry(ts(0), "Bash", "tool-1", map[string]any{
"command": "go test ./...",
}),
toolResultEntry(ts(1), "tool-1", "PASS ok mypackage 0.5s", false),
)
var results []SearchResult
for r := range SearchSeq(dir, "go test") {
results = append(results, r)
}
require.Len(t, results, 1)
assert.Equal(t, "session", results[0].SessionID)
assert.Equal(t, "Bash", results[0].Tool)
}
func TestSearch_MultipleMatches_Good(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "session1.jsonl",
toolUseEntry(ts(0), "Bash", "t1", map[string]any{
"command": "go test ./...",
}),
toolResultEntry(ts(1), "t1", "PASS", false),
toolUseEntry(ts(2), "Bash", "t2", map[string]any{
"command": "go test -race ./...",
}),
toolResultEntry(ts(3), "t2", "PASS", false),
)
writeJSONL(t, dir, "session2.jsonl",
toolUseEntry(ts(0), "Bash", "t3", map[string]any{
"command": "go test -bench=.",
}),
toolResultEntry(ts(1), "t3", "PASS", false),
)
results, err := Search(dir, "go test")
require.NoError(t, err)
assert.Len(t, results, 3, "should find matches across both sessions")
}
func TestSearch_CaseInsensitive_Good(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "session.jsonl",
toolUseEntry(ts(0), "Bash", "t1", map[string]any{
"command": "GO TEST ./...",
}),
toolResultEntry(ts(1), "t1", "PASS", false),
)
results, err := Search(dir, "go test")
require.NoError(t, err)
assert.Len(t, results, 1, "search should be case-insensitive")
}
func TestSearch_MatchesInOutput_Good(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "session.jsonl",
toolUseEntry(ts(0), "Bash", "t1", map[string]any{
"command": "cat log.txt",
}),
toolResultEntry(ts(1), "t1", "ERROR: connection refused to database", false),
)
results, err := Search(dir, "connection refused")
require.NoError(t, err)
require.Len(t, results, 1, "should match against output text")
// Match field should contain the input (command) since it's non-empty
assert.Contains(t, results[0].Match, "cat log.txt")
}
func TestSearch_SkipsNonToolEvents_Good(t *testing.T) {
dir := t.TempDir()
writeJSONL(t, dir, "session.jsonl",
userTextEntry(ts(0), "Please search for something"),
assistantTextEntry(ts(1), "I will search for something"),
)
// "search" appears in user and assistant text, but Search only checks tool_use events
results, err := Search(dir, "search")
require.NoError(t, err)
assert.Empty(t, results, "should only match tool_use events, not user/assistant text")
}
func TestSearch_NonJSONLIgnored_Good(t *testing.T) {
dir := t.TempDir()
require.NoError(t, os.WriteFile(filepath.Join(dir, "readme.md"), []byte("go test"), 0644))
results, err := Search(dir, "go test")
require.NoError(t, err)
assert.Empty(t, results, "non-JSONL files should be ignored")
}
func TestSearch_MalformedSessionSkipped_Bad(t *testing.T) {
dir := t.TempDir()
// One broken session and one valid session
writeJSONL(t, dir, "broken.jsonl",
`{not valid json at all`,
)
writeJSONL(t, dir, "valid.jsonl",
toolUseEntry(ts(0), "Bash", "t1", map[string]any{
"command": "go test ./...",
}),
toolResultEntry(ts(1), "t1", "PASS", false),
)
results, err := Search(dir, "go test")
require.NoError(t, err)
assert.Len(t, results, 1, "should still find matches in valid sessions")
}

sonar-project.properties (new file, 8 lines)

@@ -0,0 +1,8 @@
sonar.projectKey=core_go-session
sonar.projectName=core/go-session
sonar.sources=.
sonar.exclusions=**/vendor/**,**/third_party/**,**/.tmp/**,**/gomodcache/**,**/node_modules/**,**/dist/**,**/build/**,**/*_test.go,**/*.test.ts,**/*.test.js,**/*.spec.ts,**/*.spec.js
sonar.tests=.
sonar.test.inclusions=**/*_test.go,**/*.test.ts,**/*.test.js,**/*.spec.ts,**/*.spec.js
sonar.test.exclusions=**/vendor/**,**/third_party/**,**/.tmp/**,**/gomodcache/**,**/node_modules/**,**/dist/**,**/build/**
sonar.go.coverage.reportPaths=coverage.out

threats.md (new file, 45 lines)

@@ -0,0 +1,45 @@
## 1. Parser DoS
Status: Findings landed
Question: Can an attacker force unbounded parser memory with many large JSONL lines or unmatched tool calls?
Finding: Partial yes. The scanner is bounded to 8 MiB per token, and it now starts with a 64 KiB buffer instead of allocating 8 MiB up front (`parser.go:18`, `parser.go:357-358`). It does not retain N scanner buffers for N lines. However, unmatched `tool_use` records were previously retained in `pendingTools` until EOF and had no count limit; this is now capped at 4096 pending calls (`parser.go:22-23`, `parser.go:430-433`). Tool inputs are now truncated before they are stored in `pendingTools`, so an unmatched Bash command cannot keep an entire scanner-sized line resident (`parser.go:435-439`).
Severity: Medium before fix. Requires attacker-controlled transcript content, but memory growth was linear in unmatched tool_use count and input size.
Coverage: Added `TestParser_ParseTranscriptToolUseInputTruncated_Bad` and `TestParser_ParseTranscriptPendingToolLimit_Bad` (`parser_test.go:1099`, `parser_test.go:1115`).
## 2. Malformed JSONL
Status: No exploitable finding; coverage added
Question: Do malformed or adversarial JSONL records panic or bypass type handling?
Finding: No exploitable parser bug found. Bad top-level JSON is skipped with stats (`parser.go:376-386`), malformed assistant/user messages and content blocks are skipped (`parser.go:404-413`, `parser.go:445-454`), and unexpected tool result/input types fall through type switches without panicking (`parser.go:568-576`, `parser.go:579-598`). Deeply nested JSON is handled through `encoding/json` via core helpers and returned as a normal unmarshal failure, not a panic.
Severity: Low. The remaining cost is bounded by the per-line scanner maximum and the JSON decoder's own validation.
Coverage: Added tests for deeply nested JSON, unexpected tool input/result types, and lone UTF-16 surrogate halves (`parser_test.go:1133`, `parser_test.go:1147`, `parser_test.go:1161`).
## 3. Path traversal
Status: Finding landed
Question: Can FetchSession or ListSessions escape projectsDir through encoded traversal, symlinks, case-insensitive paths, or Windows-style paths?
Finding: Yes, via symlinks, before the fix. `FetchSession` rejected literal `..`, `/`, and `\` in IDs (`parser.go:284-286`), so a URL-encoded `..` stays a literal filename unless a caller decodes it before calling; if decoded first, the existing check rejects it. The real gap was that a `linked.jsonl` symlink inside projectsDir could point outside it and still be opened or listed, because ordinary stat/open operations follow symlinks. FetchSession now rejects symlink targets (`parser.go:289-292`, `parser.go:616-617`), and ListSessions skips symlink matches before stat/open (`parser.go:156-162`). Local path handling is still POSIX-oriented via `path.Join`; Windows UNC behaviour is not fully addressed in this package.
Severity: Medium before fix. Exploitation requires ability to place a symlink in projectsDir, but then reads can escape the intended session directory.
Coverage: Added URL-encoded traversal, FetchSession symlink traversal, and ListSessions symlink traversal tests (`parser_test.go:1508`, `parser_test.go:1516`, `parser_test.go:1562`).
## 4. Mantis #669 ParseStats RFC audit
Status: NOTABUG
Question: Does `ParseStats` match RFC §3 field-for-field, especially `Warnings` and `OrphanedToolCalls`?
Finding: Yes. RFC §3 specifies `TotalLines int`, `SkippedLines int`, `OrphanedToolCalls int`, and `Warnings []string`; `parser.go` defines those exact fields and types. `Warnings` is a string slice, not a plain string, and `OrphanedToolCalls` is an integer counter, not a boolean or string.
Coverage: `TestParser_ParseStatsOrphanedToolCalls_Ugly` covers unmatched `tool_use` records without matching `tool_result` records and asserts `ParseStats.OrphanedToolCalls > 0`.

video.go (130 lines removed)

@@ -1,130 +0,0 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"fmt"
"os"
"os/exec"
"strings"
coreerr "dappco.re/go/core/log"
)
// RenderMP4 generates an MP4 video from session events using VHS (charmbracelet).
func RenderMP4(sess *Session, outputPath string) error {
if _, err := exec.LookPath("vhs"); err != nil {
return coreerr.E("RenderMP4", "vhs not installed (go install github.com/charmbracelet/vhs@latest)", nil)
}
tape := generateTape(sess, outputPath)
tmpFile, err := os.CreateTemp("", "session-*.tape")
if err != nil {
return coreerr.E("RenderMP4", "create tape", err)
}
defer os.Remove(tmpFile.Name())
if _, err := tmpFile.WriteString(tape); err != nil {
tmpFile.Close()
return coreerr.E("RenderMP4", "write tape", err)
}
tmpFile.Close()
cmd := exec.Command("vhs", tmpFile.Name())
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
if err := cmd.Run(); err != nil {
return coreerr.E("RenderMP4", "vhs render", err)
}
return nil
}
func generateTape(sess *Session, outputPath string) string {
var b strings.Builder
b.WriteString(fmt.Sprintf("Output %s\n", outputPath))
b.WriteString("Set FontSize 16\n")
b.WriteString("Set Width 1400\n")
b.WriteString("Set Height 800\n")
b.WriteString("Set TypingSpeed 30ms\n")
b.WriteString("Set Theme \"Catppuccin Mocha\"\n")
b.WriteString("Set Shell bash\n")
b.WriteString("\n")
// Title frame
id := sess.ID
if len(id) > 8 {
id = id[:8]
}
b.WriteString(fmt.Sprintf("Type \"# Session %s | %s\"\n",
id, sess.StartTime.Format("2006-01-02 15:04")))
b.WriteString("Enter\n")
b.WriteString("Sleep 2s\n")
b.WriteString("\n")
for _, evt := range sess.Events {
if evt.Type != "tool_use" {
continue
}
switch evt.Tool {
case "Bash":
cmd := extractCommand(evt.Input)
if cmd == "" {
continue
}
// Show the command
b.WriteString(fmt.Sprintf("Type %q\n", "$ "+cmd))
b.WriteString("Enter\n")
// Show abbreviated output
output := evt.Output
if len(output) > 200 {
output = output[:200] + "..."
}
if output != "" {
for line := range strings.SplitSeq(output, "\n") {
if line == "" {
continue
}
b.WriteString(fmt.Sprintf("Type %q\n", line))
b.WriteString("Enter\n")
}
}
// Status indicator
if !evt.Success {
b.WriteString("Type \"# ✗ FAILED\"\n")
} else {
b.WriteString("Type \"# ✓ OK\"\n")
}
b.WriteString("Enter\n")
b.WriteString("Sleep 1s\n")
b.WriteString("\n")
case "Read", "Edit", "Write":
b.WriteString(fmt.Sprintf("Type %q\n",
fmt.Sprintf("# %s: %s", evt.Tool, truncate(evt.Input, 80))))
b.WriteString("Enter\n")
b.WriteString("Sleep 500ms\n")
case "Task":
b.WriteString(fmt.Sprintf("Type %q\n",
fmt.Sprintf("# Agent: %s", truncate(evt.Input, 80))))
b.WriteString("Enter\n")
b.WriteString("Sleep 1s\n")
}
}
b.WriteString("Sleep 3s\n")
return b.String()
}
func extractCommand(input string) string {
// Remove description suffix (after " # ")
if idx := strings.Index(input, " # "); idx > 0 {
return input[:idx]
}
return input
}


@@ -1,207 +0,0 @@
// SPDX-Licence-Identifier: EUPL-1.2
package session
import (
"os/exec"
"strings"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestGenerateTape_BasicSession_Good(t *testing.T) {
sess := &Session{
ID: "tape-test-12345678",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
Events: []Event{
{
Type: "tool_use",
Tool: "Bash",
Input: "go test ./...",
Output: "PASS",
Success: true,
},
{
Type: "tool_use",
Tool: "Read",
Input: "/tmp/file.go",
Output: "package main",
Success: true,
},
},
}
tape := generateTape(sess, "/tmp/output.mp4")
assert.Contains(t, tape, "Output /tmp/output.mp4")
assert.Contains(t, tape, "Set FontSize 16")
assert.Contains(t, tape, "tape-tes") // shortID
assert.Contains(t, tape, "2026-02-20 10:00")
assert.Contains(t, tape, `"$ go test ./..."`)
assert.Contains(t, tape, "PASS")
assert.Contains(t, tape, `"# ✓ OK"`)
assert.Contains(t, tape, "# Read: /tmp/file.go")
}
func TestGenerateTape_SkipsNonToolEvents_Good(t *testing.T) {
sess := &Session{
ID: "skip-test",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
Events: []Event{
{Type: "user", Input: "Hello"},
{Type: "assistant", Input: "Hi there"},
{Type: "tool_use", Tool: "Bash", Input: "echo hi", Output: "hi", Success: true},
},
}
tape := generateTape(sess, "/tmp/out.mp4")
// User and assistant events should NOT appear in the tape
assert.NotContains(t, tape, "Hello")
assert.NotContains(t, tape, "Hi there")
// Bash command should appear
assert.Contains(t, tape, "echo hi")
}
func TestGenerateTape_FailedCommand_Good(t *testing.T) {
sess := &Session{
ID: "fail-test",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
Events: []Event{
{
Type: "tool_use",
Tool: "Bash",
Input: "cat /missing",
Output: "No such file",
Success: false,
},
},
}
tape := generateTape(sess, "/tmp/out.mp4")
assert.Contains(t, tape, `"# ✗ FAILED"`)
}
func TestGenerateTape_LongOutput_Good(t *testing.T) {
sess := &Session{
ID: "long-test",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
Events: []Event{
{
Type: "tool_use",
Tool: "Bash",
Input: "cat huge.log",
Output: strings.Repeat("x", 300),
Success: true,
},
},
}
tape := generateTape(sess, "/tmp/out.mp4")
// Output should be truncated to 200 chars + "..."
assert.Contains(t, tape, "...")
}
func TestGenerateTape_TaskEvent_Good(t *testing.T) {
sess := &Session{
ID: "task-test",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
Events: []Event{
{
Type: "tool_use",
Tool: "Task",
Input: "[research] Analyse code structure",
},
},
}
tape := generateTape(sess, "/tmp/out.mp4")
assert.Contains(t, tape, "# Agent: [research] Analyse code structure")
}
func TestGenerateTape_EditWriteEvents_Good(t *testing.T) {
sess := &Session{
ID: "edit-test",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
Events: []Event{
{Type: "tool_use", Tool: "Edit", Input: "/tmp/app.go (edit)"},
{Type: "tool_use", Tool: "Write", Input: "/tmp/new.go (50 bytes)"},
},
}
tape := generateTape(sess, "/tmp/out.mp4")
assert.Contains(t, tape, "# Edit: /tmp/app.go (edit)")
assert.Contains(t, tape, "# Write: /tmp/new.go (50 bytes)")
}
func TestGenerateTape_EmptySession_Good(t *testing.T) {
sess := &Session{
ID: "empty-test",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
Events: nil,
}
tape := generateTape(sess, "/tmp/out.mp4")
// Should still have the header and trailer
assert.Contains(t, tape, "Output /tmp/out.mp4")
assert.Contains(t, tape, "Sleep 3s")
// No tool events
lines := strings.Split(tape, "\n")
var toolLines int
for _, line := range lines {
if strings.Contains(line, "$ ") || strings.Contains(line, "# Read:") ||
strings.Contains(line, "# Edit:") || strings.Contains(line, "# Write:") {
toolLines++
}
}
assert.Equal(t, 0, toolLines)
}
func TestGenerateTape_BashEmptyCommand_Bad(t *testing.T) {
sess := &Session{
ID: "empty-cmd",
StartTime: time.Date(2026, 2, 20, 10, 0, 0, 0, time.UTC),
Events: []Event{
{Type: "tool_use", Tool: "Bash", Input: "", Output: "", Success: true},
},
}
tape := generateTape(sess, "/tmp/out.mp4")
// Empty command should be skipped (extractCommand returns "")
assert.NotContains(t, tape, `"$ "`)
}
func TestExtractCommand_Good(t *testing.T) {
assert.Equal(t, "ls -la", extractCommand("ls -la # list files"))
assert.Equal(t, "go test ./...", extractCommand("go test ./..."))
assert.Equal(t, "echo hello", extractCommand("echo hello"))
}
func TestExtractCommand_NoDescription_Good(t *testing.T) {
assert.Equal(t, "plain command", extractCommand("plain command"))
}
func TestExtractCommand_DescriptionAtStart_Good(t *testing.T) {
// " # " at position 0 means idx <= 0, so it returns the whole input
result := extractCommand(" # description only")
assert.Equal(t, " # description only", result)
}
func TestRenderMP4_NoVHS_Ugly(t *testing.T) {
// Skip if vhs is actually installed (this tests the error path)
if _, err := exec.LookPath("vhs"); err == nil {
t.Skip("vhs is installed; skipping missing-vhs test")
}
sess := &Session{
ID: "no-vhs",
StartTime: time.Now(),
}
err := RenderMP4(sess, "/tmp/test.mp4")
require.Error(t, err)
assert.Contains(t, err.Error(), "vhs not installed")
}