Compare commits


164 commits
v0.4.5 ... dev

Author SHA1 Message Date
Snider
c2a1c4e007 feat(mcp): implement TransformerIn/Out AI protocol gateway (RFC §9)
New 5-file subsystem implementing the AI-native protocol gateway:

transformer.go:
- TransformerIn interface (Detect + Normalise)
- TransformerOut interface (Transform)
- MCPRequest / MCPResult neutral types
- NegotiateTransformer router walking priority list:
  Accept header → Path-based → Body inspection → MCP-native fallback
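
The priority walk can be sketched as follows. This is a minimal illustration, not the shipped code: the `Request` struct, detector types, and `negotiate` helper are hypothetical simplifications, the real `TransformerIn` also carries `Normalise`, and the body-inspection step is omitted for brevity.

```go
package main

import (
	"fmt"
	"strings"
)

// Request is a hypothetical, simplified view of an incoming HTTP request.
type Request struct {
	Accept string
	Path   string
}

// TransformerIn mirrors the interface named above: Detect decides whether
// this transformer can claim the request.
type TransformerIn interface {
	Name() string
	Detect(r Request) bool
}

type acceptDetector struct{ contentType, name string }

func (a acceptDetector) Name() string         { return a.name }
func (a acceptDetector) Detect(r Request) bool { return strings.Contains(r.Accept, a.contentType) }

type pathDetector struct{ prefix, name string }

func (p pathDetector) Name() string         { return p.name }
func (p pathDetector) Detect(r Request) bool { return strings.HasPrefix(r.Path, p.prefix) }

type fallback struct{ name string }

func (f fallback) Name() string        { return f.name }
func (f fallback) Detect(Request) bool { return true }

// negotiate walks the priority list in order and returns the first
// transformer whose Detect accepts the request; the last entry is the
// always-true MCP-native fallback.
func negotiate(r Request, chain []TransformerIn) TransformerIn {
	for _, t := range chain {
		if t.Detect(r) {
			return t
		}
	}
	return nil
}

func main() {
	chain := []TransformerIn{
		acceptDetector{"application/vnd.openai", "openai"},
		pathDetector{"/v1/messages", "anthropic"},
		fallback{"mcp-native"},
	}
	fmt.Println(negotiate(Request{Path: "/v1/messages"}, chain).Name())
}
```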

transformer_openai.go: OpenAI Chat Completions ↔ MCP mapping
including tool_calls → tools/call.

transformer_anthropic.go: Anthropic Messages ↔ MCP mapping
including tool_use → tools/call.

transformer_honeypot.go: §9.5 fake-response generator. Catches
malformed/probing input as fallback before MCP-native; returns
plausible canned response so attackers can't easily distinguish
the honeypot from real responses.

Tests cover priority-walking, OpenAI/Anthropic Normalise+Transform
round-trips, honeypot fallback on garbage input.

Ollama + LiteLLM transformers deferred to follow-up tickets per
ticket scope guidance — TODO note in transformer.go references
#197 for the follow-up.

Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=197
2026-04-25 20:08:51 +01:00
Snider
70e0c51dc5 fix(mcp/brain/client): add full-jitter to retry backoff + 30s cap
sleep() now computes exponential backoff as baseDelay * 2^N, caps at
maxBackoffDelay (30s), then sleeps a randomized delay in [0, cap)
using crypto/rand. Closes the thundering-herd vector where multiple
concurrent clients on the same upstream all retry at exactly the same
moments.

Retry-After still goes through sleepFor() unmodified — server-
specified delays are not jittered (the server told us when to come
back; respect that).

Cerberus #1055 from workspace-wide sniff.

Tests: distinct jittered delays across two clients (assert different),
Retry-After:2 sleeps exactly 2s, attempt=20 capped at maxBackoffDelay,
100ms+attempt=3 stays in [0, 800ms].

Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=1055
2026-04-25 19:41:18 +01:00
Snider
7d32ed8661 chore(mcp/brain/client): close stale-fixed Mantis #1057 + #1058
#1057 (brain.key 0600 perm check): client.go:615 already returns
"brain.key has insecure permissions, expected 0600" — landed in #998
(commit e2bc724).

#1058 (http:// apiURL allows cleartext bearer): client.go:218 already
rejects http:// unless CORE_BRAIN_INSECURE=true — landed in #1052
(commit cdb4bdb).

Both Cerberus findings closed-fixed by prior workspace-wide sniff
follow-ups.

Co-authored-by: Cladius <cladius@lthn.ai>
Closes tasks.lthn.sh/view.php?id=1057
Closes tasks.lthn.sh/view.php?id=1058
2026-04-25 19:23:09 +01:00
Snider
cdb4bdbc45 fix(mcp/brain/client): block absolute-URL bypass + apiURL scheme allowlist (MEDIUM)
requestURL() now returns (string, error) and rejects absolute or
host-bearing URLs BEFORE request construction and BEFORE Authorization
header is set. Closes the bearer-key leak vector: a path that ever
flows from upstream JSON, config, or a tool argument can no longer
spray the Bearer token at attacker-chosen URLs.

New() validates apiURL at construction:
- https://* always accepted
- http://* rejected unless CORE_BRAIN_INSECURE=true is set
  (explicit dev/test opt-in; production should always be TLS)

Cerberus #1052 from workspace-wide sniff. Today's call sites (Remember,
Recall, Forget, List) hardcode the path, so they are safe; this closes
the API shape that would have let future Call(ctx, method, untrustedPath,
body) patterns leak the bearer.

Tests: absolute http:// + https:// paths make zero HTTP calls, good
relative path construction works, http:// apiURL rejected by default
+ accepted with CORE_BRAIN_INSECURE=true. Existing test fixtures
converted to TLS to match the new default policy.

Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=1052
2026-04-25 19:19:36 +01:00
Snider
e73deea84b fix(mcp/brain/tools): cap org field at 128 runes (parity with PHP)
brainRemember, brainRecall, and brainList now validate the org field
against a 128-rune length cap before forwarding to the upstream Brain
service. Matches PHP-side maxLength=128 in BrainService — closes the
Go→PHP drift Cerberus #1006 flagged. coreerr.E typed error returned
on violation.

Note: Codex preflight checked project, agent_id, type — PHP schema
only exposes maxLength for org, so caps weren't added for the other
fields. If those need bounds, file separate tickets.

Tests cover: empty org accepted, "core" accepted, exactly 128 runes
accepted (boundary), 129 rejected on remember/recall/list.

Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=1006
2026-04-25 18:50:06 +01:00
Snider
cb62378a2b fix(mcp/agentic_watch): default 30m → 60s + per-call cap at 30m
defaultWatchTimeout reduced to 60s; new maxWatchTimeout 30m caps any
per-call WatchInput.Timeout. resolveWatchTimeout() honours the request
value (in seconds), clamps values above the cap, and treats <=0 as the
default.

Defends against connection-pool drain from idle long-poll subscribers
whether internal-only today or future external-facing.

Tests: default = 60s, 10s honoured, 10h clamps to 30m, 0 falls back.

Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=1005
2026-04-25 18:32:21 +01:00
Snider
1e3a5996fa chore(mcp/brain/client): close stale-fixed Mantis #997 — already implemented
CircuitBreaker.lock at client.go:132 already protects the half-open
check-and-set sequence at client.go:443 (read halfOpenInFlight, transition
to true) atomically. Reset writes in recordSuccess/recordFailure/recordIgnored
are also mutex-held. The race window described in the ticket doesn't exist
at the current code shape.

No code change required.

Co-authored-by: Cladius <cladius@lthn.ai>
Closes tasks.lthn.sh/view.php?id=997
2026-04-25 18:23:48 +01:00
Snider
2acf186925 fix(mcp/brain/client): retryableStatus 408+429 + honour Retry-After
retryableStatus now treats 408 (Request Timeout) and 429 (Too Many
Requests) as retryable, matching the PHP-side equivalent. Retry loop
parses Retry-After from the response (seconds form OR HTTP-date),
clamps to 60s max, treats past dates as zero delay.

Tests cover: 408 retried, 429 retried, numeric Retry-After honoured,
past-date Retry-After produces zero sleep, long Retry-After capped.

Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=996
2026-04-25 18:13:07 +01:00
Snider
e2bc724bb4 fix(mcp/brain/client): enforce 0600 on ~/.claude/brain.key
Refuse to load brain.key when its mode is more permissive than 0600 —
NewFromEnvironment carries the config error into Call() so callers
get a clear "brain.key has insecure permissions, expected 0600"
rather than a silent credential leak. Read path stats first; does not
auto-chmod (would mask the misconfiguration).

Write path uses coreio.Local.WriteMode and follows up with explicit
os.Chmod 0600, correcting any pre-existing 0644 file on next write.

Tests: write overwrites 0644 → 0600; read of 0644 fixture errors and
leaves the mode untouched.

Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=998
2026-04-25 17:58:13 +01:00
Snider
8b48b33622 merge(mcp): reconcile origin AX-6 sweep + brainclient refactor with homelab migration + features
Parent SHAs:
  origin/dev: 4420670 feat(mcp/brain): OpenBrain T1+T2 — shared client + direct/brain-seed adoption (#175 #176)
  homelab/dev: 95f8ad3 docs(security): document accepted ollama CVEs + operator runbook

Resolution strategy (29 conflicts: 26 UU + 3 AA):
- Took origin's body for every conflicting file (AX-6-clean, uses core.* helpers
  not banned stdlib; preserves the brainclient.New() refactor that landed in
  origin commit 4420670 and lifted inline HTTP code into pkg/mcp/brain/client/).
- Sed-rewrote `dappco.re/go/core/X` → `dappco.re/go/X` import paths inline so
  origin's body uses the migrated paths that homelab's go.mod declares.
- go.mod auto-merged toward homelab's NEW dep paths (dappco.re/go/{ai,api,cli,
  io,log,process,rag,webview,ws} v0.8.0-alpha.1) — correct outcome.
- Homelab's standalone-new files (cmd/openbrain-mcp/, ipc.go, tools_metrics.go,
  tools_ws_client.go, tools_webview_embed.go, etc.) preserved via git's
  non-conflicting auto-merge.

Followups (filed separately):
- Stale `replace dappco.re/go/core/process => ../go-process` directive remains
  in go.mod — pre-existing, doesn't match any current dep, will catch in
  future cleanup pass.
- Local build verification deferred: workspace-level go-proxy half-migration
  + missing go-ws entry in ~/Code/go.work block `go build ./...` from
  succeeding host-side; this merge resolved by per-file inspection.

Co-authored-by: Codex <noreply@openai.com>
2026-04-25 16:08:35 +01:00
Snider
44206708f9 feat(mcp/brain): OpenBrain T1+T2 — shared client + direct/brain-seed adoption (#175 #176)
#175 (T1/5 — shared Go HTTP client):
- pkg/mcp/brain/client/client.go: shared client with retry, circuit breaker, org propagation
- 30s default HTTP client, retry on network/5xx only, no retry on 4xx
- Minimal circuit breaker for repeated failures
- pkg/mcp/brain/client/client_test.go: httptest coverage

#176 (T2/5 — direct subsystem + binaries adopt shared client):
- pkg/mcp/brain/direct.go: refactored to delegate through *client.Client
- pkg/mcp/brain/direct_test.go: tests updated to use brainclient.New
- cmd/brain-seed/main.go: uses /v1/brain/remember via shared client with -org

Verification: go test ./pkg/mcp/brain/client/... passed; go build cmd/brain-seed passed.

Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=175
Closes tasks.lthn.sh/view.php?id=176
2026-04-25 15:42:11 +01:00
Snider
2910a0d588 feat(mcp): batch — org scoping + agentic_watch 30min default + progressToken helper
Codex 5.5 batch lane processed 5 open Mantis tickets. Sandbox blocked
in-lane commits; supervisor consolidated all implemented changes here.

Tickets implemented:
- #120 — Go brain subsystem org field added: brain tools/provider/direct path now scope by org per RFC §6 (drift from #107 PHP-side closed)
- #194 — agentic_watch default timeout corrected to 30 minutes per RFC §4.2
- #195 — new pkg/mcp/progress.go shared progress helper + wired dispatch/watch/process_run progress notifications per RFC §4.1

Tickets stale-fixed: #196 (MCP Portal discovery already implemented)
Tickets deferred: #197 (TransformerIn/Out — architecture pass)

Verification:
- go test ./pkg/mcp/brain passed
- go test ./pkg/mcp/agentic passed
- go test ./pkg/mcp -run 'TestProgress|TestToolsProcess_ProcessRun|TestProcess' passed
- Full ./pkg/mcp still fails one unrelated existing test (TestBridgeToAPI_Good_DescribableGroup expected 34, got 33 — predates this batch)

Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=120
Closes tasks.lthn.sh/view.php?id=194
Closes tasks.lthn.sh/view.php?id=195
2026-04-25 14:11:08 +01:00
Snider
494e05ada4 test(mcp): rename tests + add Bad/Ugly variants per AX-10 (#199)
Renamed pkg/mcp/mcp_test.go tests to TestMcp_<Function>_<Good|Bad|Ugly>
convention. Added _Bad and _Ugly siblings for: New, supported languages,
language detection, medium file ops, fileExists, listDirectory,
resolveWorkspacePath.

Closes tasks.lthn.sh/view.php?id=199

Co-authored-by: Codex <noreply@openai.com>
2026-04-25 12:06:27 +01:00
Snider
b29b8b5685 fix(mcp): AX-6 sweep on pkg/mcp/authz.go + cmd/brain-seed/main.go
authz.go: removed errors/fmt/strings → coreerr.E + core.Sprintf + core.*
helpers. brain-seed/main.go: removed bytes/fmt/strings → core.NewBuffer +
core.Print/Println + core.* helpers.

Co-authored-by: Codex <noreply@openai.com>
2026-04-25 11:18:36 +01:00
Snider
53bd7478a7 fix(mcp): AX-6 sweep on pkg/mcp/tools_webview.go
Annotated bytes (retained for bytes.NewReader image.Decode boundary).
Replaced bytes.Buffer with core.NewBuffer, sync.Mutex with core.Mutex.

Co-authored-by: Codex <noreply@openai.com>
2026-04-25 10:29:02 +01:00
Snider
9fd2185a86 fix(mcp): AX-6 banned-import purge + annotations in pkg/mcp/transport_http.go
Removed strings, replaced strings.TrimSpace with core.Trim. Annotated
encoding/json (HTTP boundary marshaling), net (dial primitives), net/http
(HTTP transport) as AX-6 structural.

Co-authored-by: Codex <noreply@openai.com>
2026-04-25 10:24:17 +01:00
Snider
95f8ad387c docs(security): document accepted ollama CVEs + operator runbook
Closes Mantis #323.

All 9 CVEs filed in #323 (govulncheck against the github.com/ollama/ollama
indirect dep) are unfixed upstream as of 2026-04-25. We are on v0.18.1
indirect via go-rag; ollama upstream is at v0.21.2 (3 days old). Pin-bump
resolves none of them.

Documents:
- CVE-by-CVE reachability assessment in our call graph
- 7 server-side CVEs (GZIP DoS, OOB, divzero, nullderef, server DoS) →
  unreachable; we are a client, not a server
- 1 conditional (GO-2025-3824 token exposure) → watch flag, reachable IF we
  ever add auth tokens
- 1 operator-side (GO-2025-4251 missing auth) → operator runbook required

Operator runbook covers:
- Network-level isolation (localhost-only or private-network binding)
- Reverse-proxy + auth for shared deployments
- CI-side govulncheck filter scoped to just these 9 CVE IDs

Surface in use: 3 symbols only (api.NewClient, api.Client, api.EmbedRequest)
imported from one file (go-rag/ollama.go). Vendor-fork would be
over-engineering for this scope; pin-bump is unavailable.

Argus filed; athena reviewed + documented.

Co-Authored-By: Argus <argus@lthn.ai>
Co-Authored-By: Athena <athena@lthn.ai>
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-25 01:40:43 +01:00
Snider
b2ed228b3f feat(ax-10): bring mcp to v0.8.0-alpha.1 + CLI test scaffold
- Migrate go.mod direct + indirect deps from dappco.re/go/core/X (pre-migration paths) to dappco.re/go/X at v0.8.0-alpha.1
- Update all Go source imports across 49 files: dappco.re/go/core/{ai,api,cli,io,log,process,rag,webview,ws,i18n,inference} -> dappco.re/go/{ai,api,cli,io,log,process,rag,webview,ws,i18n,inference}
- Add tests/cli/mcp/Taskfile.yaml AX-10 scaffold (build / vet / test under default deps), per RFC-CORE-008-AGENT-EXPERIENCE.md §10
- mcp is library + 4 binaries (brain-seed, core-mcp, mcpcmd, openbrain-mcp); the build target validates all of them

Closes tasks.lthn.sh/view.php?id=198

Co-Authored-By: Athena <athena@lthn.ai>
2026-04-24 23:35:37 +01:00
Codex
903aba4695 feat(mcp): implement HandleIPCEvents ChannelPush path (RFC §5.1)
Added handleChannelPushIPC in new pkg/mcp/ipc.go with Core-style
validation (rejects empty channel). Wired through from
register.go's HandleIPCEvents switch — ChannelPush now routes to the
new handler. Added AX-10 Good/Bad/Ugly tests in pkg/mcp/ipc_test.go
covering forwarding, empty-channel error, nil data payload.

Pre-existing unrelated failure TestBridgeToAPI_Good_DescribableGroup
(expects 34 descriptions, gets 33) is out of this ticket's scope.

Closes tasks.lthn.sh/view.php?id=193

Co-authored-by: Codex <noreply@openai.com>
2026-04-24 21:41:40 +01:00
Codex
96fd169239 fix(mcp): rename tests to AX-10 TestFilename_Function_{Good,Bad,Ugly} (AX-10)
Refactored pkg/mcp/mcp_test.go to match AX-10 naming: every test
function now ends in _Good (happy path), _Bad (error path / expected
failure), or _Ugly (edge case). Added missing variants across covered
groups: New, GetSupportedLanguages, DetectLanguageFromPath, Medium,
FileExists, ListDirectory, ResolveWorkspacePath.

Closes tasks.lthn.sh/view.php?id=199

Co-authored-by: Codex <noreply@openai.com>
2026-04-24 21:31:46 +01:00
Codex
633f295244 fix(mcp): purge banned stdlib imports from pkg/mcp/mcp.go (AX-6)
Removed banned os import; replaced:
- default cwd lookup → core.Env("DIR_CWD")
- os.DirEntry / os.ErrNotExist → annotated io/fs usage
- cleaned comment examples so broad banned-name grep is also clean

Verification: `grep -nE '"(fmt|os|errors|strings|encoding/json)"' pkg/mcp/mcp.go` empty. go vet passes with module drift (pre-existing).

Closes tasks.lthn.sh/view.php?id=191

Co-authored-by: Codex <noreply@openai.com>
2026-04-24 21:27:42 +01:00
Codex
09f786fb80 fix(mcp): purge/annotate banned imports in pkg/mcp/notify.go (AX-6)
fmt, errors, strings, encoding/json swapped to core.* equivalents.
os retained with `// Note:` annotation — stdout stdio writer has no
core.Fs/core.Env equivalent at this layer.

Closes tasks.lthn.sh/view.php?id=192

Co-authored-by: Codex <noreply@openai.com>
2026-04-24 21:24:23 +01:00
Snider
f26ae14222 feat(mcp): add cmd/openbrain-mcp stdio wrapper for Claude Code
Thin Go wrapper mounting pkg/mcp/brain tools onto Service.ServeStdio().
Proxies to the PHP BrainService via --brain-url so any Claude Code
session gains OpenBrain recall/remember via `claude mcp add openbrain`.

Closes tasks.lthn.sh/view.php?id=76
Co-authored-by: Codex <noreply@openai.com>

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-23 15:22:13 +01:00
Snider
62c1949458 fix(mcp): rewrite mcpcmd for new core/cli Command API + correct bridge test
The mcpcmd package was using the removed Cobra-style cli.Command API
(Use/Short/Long/RunE/StringFlag/AddCommand). This rewrites it to the current
core.Command{Description, Action, Flags} path-routed pattern so the
core-mcp binary compiles again. Registers both "mcp" and "mcp/serve"
for parity with the existing OnStartup service-mode flow.

Fixes the bridge DescribableGroup test that expected len == svc.Tools();
ToolBridge.Describe prepends the GET tool-listing entry, so the correct
expectation is len + 1.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-14 18:02:36 +01:00
Snider
e5caa8d32e feat(mcp): RFC §3 tools + §8 discovery alignment
New tools (RFC §3):
- webview_render / webview_update: embedded UI HTML + state broadcast
  via webview.render / webview.update channels with merge-or-replace
- ws_connect / ws_send / ws_close: outbound WebSocket client tools
  with stable ws-<hex> connection IDs
- process_run: blocking command executor returning ID/exit/output
- rag_search / rag_index: aliases for rag_query / rag_ingest per spec
- rag_retrieve: fetch chunks for a source, ordered by chunk index
- ide_dashboard_state / ide_dashboard_update: merge-or-replace state
  with activity feed entries and dashboard.state.updated broadcast
- agentic_issue_dispatch: spec-aligned name for agentic_dispatch_issue

Discovery (RFC §8.2):
- transport_http.go: /.well-known/mcp-servers.json advertises both
  core-agent and core-mcp with semantic use_when hints

Tool count: 25 → 33. Good/Bad/Ugly coverage added for every new tool.

Pre-existing cmd/mcpcmd Cobra-style build error flagged but untouched
— same cmd vs core.Command migration pattern seen in cmd/api and
cmd/build (which were migrated earlier this session).

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-14 18:02:36 +01:00
Snider
982a3b4b00 fix(mcp): transport addr default + progress notifications + auth timing safety
- transport_http.go: addr "" now defaults to 127.0.0.1:9101 per RFC
- pkg/mcp/agentic/dispatch.go: emits NotifyProgress milestones during
  validation, workspace prep, queue/slot, spawn, start completion
- pkg/mcp/agentic/watch.go: emits NotifyProgress per watched workspace
  completion/failure with running totals
- pkg/mcp/authz.go: restore crypto/subtle for constant-time token
  comparison (timing-attack resistance)
- pkg/mcp/registry.go: related touch-up for the auth path

Spark-medium pass. Unused net/http import cleaned after verify.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-14 18:02:36 +01:00
Snider
91803e32df refactor: AX compliance sweep — replace banned stdlib imports with core primitives
Replaced fmt, strings, sort, os, io, sync, encoding/json, path/filepath,
errors, log, reflect with core.Sprintf, core.E, core.Contains, core.Trim,
core.Split, core.Join, core.JoinPath, slices.Sort, c.Fs(), c.Lock(),
core.JSONMarshal, core.ReadAll and other CoreGO v0.8.0 primitives.

Framework boundary exceptions preserved where stdlib types are required
by external interfaces (Gin, net/http, CGo, Wails, bubbletea).

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-14 18:02:36 +01:00
Snider
65b686283f feat(mcp): export NotifySession for raw JSON-RPC notifications
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-14 18:02:36 +01:00
Snider
cbab302661 refactor(mcp): migrate stdlib imports to core/go primitives + upgrade go-sdk v1.5.0
- Replace fmt/errors/strings/path/filepath with core.Sprintf, core.E,
  core.Contains, core.Path etc. across 16 files
- Remove 'errors' import from bridge.go (core.Is/core.As)
- Remove 'fmt' from transport_tcp.go, ide.go (core.Print, inline interface)
- Remove 'strings' from notify.go, transport_http.go, tools_webview.go,
  process_notifications.go (core.Trim, core.HasPrefix, core.Lower etc.)
- Upgrade go-sdk from v1.4.1 to v1.5.0
- Keep encoding/json for json.NewDecoder/MarshalIndent (no core equivalent)
- Keep os/exec in agentic subsystem (needs go-process Action wiring)

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-14 18:02:36 +01:00
Snider
91297d733d fix(mcp): rewrite mcpcmd for new core/cli Command API + correct bridge test
The mcpcmd package was using the removed Cobra-style cli.Command API
(Use/Short/Long/RunE/StringFlag/AddCommand). This rewrites it to the current
core.Command{Description, Action, Flags} path-routed pattern so the
core-mcp binary compiles again. Registers both "mcp" and "mcp/serve"
for parity with the existing OnStartup service-mode flow.

Fixes the bridge DescribableGroup test that expected len == svc.Tools();
ToolBridge.Describe prepends the GET tool-listing entry, so the correct
expectation is len + 1.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-14 18:01:17 +01:00
Snider
ca07b6cd62 feat(mcp): RFC §3 tools + §8 discovery alignment
New tools (RFC §3):
- webview_render / webview_update: embedded UI HTML + state broadcast
  via webview.render / webview.update channels with merge-or-replace
- ws_connect / ws_send / ws_close: outbound WebSocket client tools
  with stable ws-<hex> connection IDs
- process_run: blocking command executor returning ID/exit/output
- rag_search / rag_index: aliases for rag_query / rag_ingest per spec
- rag_retrieve: fetch chunks for a source, ordered by chunk index
- ide_dashboard_state / ide_dashboard_update: merge-or-replace state
  with activity feed entries and dashboard.state.updated broadcast
- agentic_issue_dispatch: spec-aligned name for agentic_dispatch_issue

Discovery (RFC §8.2):
- transport_http.go: /.well-known/mcp-servers.json advertises both
  core-agent and core-mcp with semantic use_when hints

Tool count: 25 → 33. Good/Bad/Ugly coverage added for every new tool.

Pre-existing cmd/mcpcmd Cobra-style build error flagged but untouched
— same cmd vs core.Command migration pattern seen in cmd/api and
cmd/build (which were migrated earlier this session).

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-14 15:55:15 +01:00
Snider
92727025e7 fix(mcp): transport addr default + progress notifications + auth timing safety
- transport_http.go: addr "" now defaults to 127.0.0.1:9101 per RFC
- pkg/mcp/agentic/dispatch.go: emits NotifyProgress milestones during
  validation, workspace prep, queue/slot, spawn, start completion
- pkg/mcp/agentic/watch.go: emits NotifyProgress per watched workspace
  completion/failure with running totals
- pkg/mcp/authz.go: restore crypto/subtle for constant-time token
  comparison (timing-attack resistance)
- pkg/mcp/registry.go: related touch-up for the auth path

Spark-medium pass. Unused net/http import cleaned after verify.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-14 14:37:52 +01:00
Snider
d8144fde09 refactor: AX compliance sweep — replace banned stdlib imports with core primitives
Replaced fmt, strings, sort, os, io, sync, encoding/json, path/filepath,
errors, log, reflect with core.Sprintf, core.E, core.Contains, core.Trim,
core.Split, core.Join, core.JoinPath, slices.Sort, c.Fs(), c.Lock(),
core.JSONMarshal, core.ReadAll and other CoreGO v0.8.0 primitives.

Framework boundary exceptions preserved where stdlib types are required
by external interfaces (Gin, net/http, CGo, Wails, bubbletea).

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-13 09:32:00 +01:00
Snider
f9c5362151 feat(mcp): export NotifySession for raw JSON-RPC notifications
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-09 11:07:28 +01:00
Snider
8f3afaa42a refactor(mcp): migrate stdlib imports to core/go primitives + upgrade go-sdk v1.5.0
- Replace fmt/errors/strings/path/filepath with core.Sprintf, core.E,
  core.Contains, core.Path etc. across 16 files
- Remove 'errors' import from bridge.go (core.Is/core.As)
- Remove 'fmt' from transport_tcp.go, ide.go (core.Print, inline interface)
- Remove 'strings' from notify.go, transport_http.go, tools_webview.go,
  process_notifications.go (core.Trim, core.HasPrefix, core.Lower etc.)
- Upgrade go-sdk from v1.4.1 to v1.5.0
- Keep encoding/json for json.NewDecoder/MarshalIndent (no core equivalent)
- Keep os/exec in agentic subsystem (needs go-process Action wiring)

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-08 22:03:52 +01:00
Snider
7fde0c1c21 refactor(mcp): migrate stdlib imports to core/go primitives + upgrade go-sdk v1.5.0
- Replace fmt/errors/strings/path/filepath with core.Sprintf, core.E,
  core.Contains, core.Path etc. across 16 files
- Remove 'errors' import from bridge.go (core.Is/core.As)
- Remove 'fmt' from transport_tcp.go, ide.go (core.Print, inline interface)
- Remove 'strings' from notify.go, transport_http.go, tools_webview.go,
  process_notifications.go (core.Trim, core.HasPrefix, core.Lower etc.)
- Upgrade go-sdk from v1.4.1 to v1.5.0
- Keep encoding/json for json.NewDecoder/MarshalIndent (no core equivalent)
- Keep os/exec in agentic subsystem (needs go-process Action wiring)

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-08 22:00:20 +01:00
Snider
f8f137b465 fix(mcp): disable ListChanged to prevent premature stdio notifications
The go-sdk fires notifications/tools/list_changed and
notifications/resources/list_changed with a 10ms delay after AddTool/AddResource.
Since all registration happens before server.Run(), these hit stdout
before the client sends initialize, breaking the MCP handshake.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-08 20:50:46 +01:00
Snider
429f1c2b6c Revert "perf(mcp): gate extended built-in tools behind CORE_MCP_FULL"
This reverts commit 9f7dd84d4a.
2026-04-08 20:47:34 +01:00
Snider
9f7dd84d4a perf(mcp): gate extended built-in tools behind CORE_MCP_FULL
Metrics, RAG, and webview tools only register when CORE_MCP_FULL=1.
Process and WS tools always register (used by factory).
Reduces default tool count by 15.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-08 19:17:32 +01:00
Snider
9bd3084da4 fix(mcp): bridge test body + process dep resolution
- Fix TestBridgeToAPI_Good_EndToEnd: POST with empty JSON body instead of nil
- Add local replace for go-process to resolve API drift with core v0.8.0

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-08 16:39:28 +01:00
Snider
20e4a381cf fix: migrate module paths from forge.lthn.ai to dappco.re
Update all import paths and version pins:
- forge.lthn.ai/core/go-* → dappco.re/go/core/*
- forge.lthn.ai/core/api → dappco.re/go/core/api
- forge.lthn.ai/core/cli → dappco.re/go/core/cli
- Updated: api v0.3.0, cli v0.5.2, ai v0.2.2, io v0.4.1, log v0.1.2
- Updated: process v0.5.0, rag v0.1.13, ws v0.4.0, webview v0.2.1
- Updated: i18n v0.2.3, inference v0.3.0, scm v0.6.1

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-07 12:59:22 +01:00
Snider
cd305904e5 fix: migrate module paths from forge.lthn.ai to dappco.re
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 16:21:14 +01:00
Virgil
0b63f9cd22 fix(mcp): harden tool handlers 2026-04-02 18:50:20 +00:00
Virgil
ab1aa0cad0 refactor(mcp): align REST bridge errors with AX 2026-04-02 18:46:08 +00:00
Virgil
c1d3db1ad3 fix(mcp): ensure CLI shutdown cleanup 2026-04-02 18:42:03 +00:00
Virgil
1b3d102684 refactor(agentic): canonicalize PR input DTO 2026-04-02 18:38:03 +00:00
Virgil
23063599df docs(mcp): clarify channel capability helpers 2026-04-02 18:34:15 +00:00
Virgil
476a699b96 fix(mcp): type REST bridge input errors 2026-04-02 18:29:51 +00:00
Virgil
0bd3f70e20 docs(mcp): align public API comments with AX conventions 2026-04-02 18:26:17 +00:00
Virgil
6e73fb6e8d fix(agentic): preserve issue labels on unlock 2026-04-02 18:23:35 +00:00
Virgil
cf0885389a feat(brain): emit complete bridge notifications 2026-04-02 18:20:25 +00:00
Virgil
6a5a177bec refactor(mcp): default CLI to sandboxed workspace 2026-04-02 18:16:46 +00:00
Virgil
8a1efa8f12 refactor(agentic): surface workspace persistence failures 2026-04-02 18:12:16 +00:00
Virgil
583abea788 refactor(agentic): write workspace files atomically 2026-04-02 18:07:53 +00:00
Virgil
9f68a74491 docs(mcp): add AX-style usage examples to agentic DTOs 2026-04-02 18:04:32 +00:00
Virgil
954a5e1e98 refactor(mcp): document directory entry paths 2026-04-02 18:01:16 +00:00
Virgil
1373a6d296 refactor(mcp): centralize session broadcast iteration 2026-04-02 17:50:49 +00:00
Virgil
d4de2b4cd7 fix(mcp): make BridgeToAPI nil-safe 2026-04-02 17:45:55 +00:00
Virgil
1c3fe69cbc refactor(mcp): centralise channel capability name 2026-04-02 17:40:08 +00:00
Virgil
c42e7ad050 fix(mcp): harden subsystem interface contracts 2026-04-02 17:35:43 +00:00
Virgil
4732e31b74 refactor(mcp): broadcast notifications from session snapshots 2026-04-02 17:32:23 +00:00
Virgil
6e1a7d7d2a refactor(mcp): expose notification method constants 2026-04-02 17:28:30 +00:00
Virgil
5edaa7ead1 fix(mcp): align webview error handling 2026-04-02 17:23:49 +00:00
Virgil
a3c39ccae7 fix(agentic): release issue lock on dispatch failure 2026-04-02 17:18:55 +00:00
Virgil
1873adb6ae Fix MCP bridge test wiring 2026-04-02 17:14:36 +00:00
Virgil
da30f3144a refactor(mcp): record subsystem tools centrally
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 17:11:16 +00:00
Virgil
a215428df8 feat(mcp): add client notification aliases 2026-04-02 17:03:11 +00:00
Virgil
b6aa33a8e0 feat(mcp): improve tool schema generation 2026-04-02 16:58:53 +00:00
Virgil
c83df5f113 feat(agentic): surface workspace metadata in status output 2026-04-02 16:50:50 +00:00
Virgil
6b78f0c137 feat(mcp): add server resource listing 2026-04-02 16:47:03 +00:00
Virgil
4ab909f391 docs(mcp): add usage examples to agentic DTOs 2026-04-02 14:54:37 +00:00
Virgil
ffcd05ea1f chore(mcp): restore SPDX headers in public Go files
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 14:48:46 +00:00
Virgil
555f9ec614 refactor(mcp): expose channel callback subsystem contract
Make the func-based channel wiring contract explicit instead of relying on an anonymous interface inside New(). This keeps the extension point discoverable and aligned with the repository's AX-style API clarity.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 14:43:06 +00:00
Virgil
94cf1c0ba7 fix(mcp): harden notification metadata and logging
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 14:36:33 +00:00
Virgil
fa9a5eed28 refactor(mcp): add typed channel capability helper
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 14:30:04 +00:00
Virgil
af3cf3c8e3 docs(mcp): add usage examples to remaining public DTOs
Align the IDE bridge and brain subsystem public types with the repo's AX-style comment convention by adding concrete usage examples for bridge messages, DTOs, and helper callbacks.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 14:17:47 +00:00
Virgil
2a4e8b7ba3 fix(mcp): snapshot exposed service slices
Return copies from service accessors and ignore nil subsystems during construction to keep the MCP service API stable and AX-friendly.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 14:10:53 +00:00
Virgil
aae824a4d0 docs(mcp): align notification API docs with AX conventions
Reorder subsystem notification wiring so channel callbacks are available before tool registration, and add usage-example comments to the public notification DTOs and helpers.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 14:06:21 +00:00
Virgil
072a36cb73 test(mcp): cover multi-session notification fanout 2026-04-02 13:57:33 +00:00
Virgil
8b7e0c40a6 fix(mcp): harden session notification dispatch 2026-04-02 13:54:12 +00:00
Virgil
8bc44d83a4 fix(mcp): expose channel capability helpers
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 13:49:35 +00:00
Virgil
d9d452b941 refactor(mcp): share notifier contract in brain subsystem
Remove the duplicate brain-local notifier interface and use the shared pkg/mcp Notifier type directly. Also align stale test comments with the current Options constructor API.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 13:44:59 +00:00
Virgil
12346208cc fix(mcp): wire notifier before subsystem registration
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 13:40:22 +00:00
Virgil
cd60d9030c fix(mcp): normalize tcp defaults and notification context
Make notification broadcasting tolerant of a nil context and make TCP transport fall back to DefaultTCPAddr when no address is supplied.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 13:19:24 +00:00
Virgil
a0caa6918c fix(mcp): make notifications nil-safe
Guard the notification broadcast helpers against nil Service and nil server values so they degrade to no-ops instead of panicking. Add coverage for zero-value and nil-server use.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 13:13:04 +00:00
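The nil-guard pattern this commit describes can be sketched as follows; `Service`, `server`, and `Notify` here are illustrative stand-ins, not the actual pkg/mcp types:

```go
package main

import "fmt"

// server stands in for the underlying MCP server; only its presence matters here.
type server struct{}

func (s *server) broadcast(channel, msg string) { fmt.Printf("[%s] %s\n", channel, msg) }

// Service is a hypothetical stand-in for the MCP service in the commit.
type Service struct {
	srv *server
}

// Notify degrades to a no-op when the receiver or its server is nil, so
// zero-value and partially constructed services never panic.
func (s *Service) Notify(channel, msg string) bool {
	if s == nil || s.srv == nil {
		return false // silently dropped
	}
	s.srv.broadcast(channel, msg)
	return true
}

func main() {
	var nilSvc *Service
	fmt.Println(nilSvc.Notify("claude/channel", "ignored")) // false — no panic
	svc := &Service{srv: &server{}}
	fmt.Println(svc.Notify("claude/channel", "hello")) // true
}
```

Calling a method through a nil pointer receiver is legal in Go as long as the body guards before dereferencing, which is what makes this degrade-to-no-op shape work.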
Virgil
d498b2981a docs(mcp): add usage examples to constructors
Core MCP already implements the RFC-shape functionality; this pass tightens the AX surface by adding usage-example comments to public constructors and config helpers.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 13:06:28 +00:00
Virgil
41f83b52f6 refactor(mcp): align service descriptions with runtime behavior
Update stale MCP service and CLI descriptions so they match the current runtime shape, and log the actual bound TCP address when listening on an ephemeral port.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 13:01:41 +00:00
Virgil
8e77c5e58d feat(mcp): auto-select unix socket transport
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 12:56:30 +00:00
Virgil
e09f3518e0 feat(mcp): enrich process notifications with runtime context
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 12:51:27 +00:00
Virgil
ad6ccd09bb feat(mcp): honor webview screenshot format
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 12:46:03 +00:00
Virgil
c7b317402b fix(mcp): align brain list notifications and cleanup test runtime
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 12:41:37 +00:00
Virgil
91e41615d1 feat(mcp): implement HTTP resource reads
Add support for plans://, sessions://, and content:// resources via the MCP API controller instead of returning 501 for every request.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 12:30:07 +00:00
Virgil
981ad9f7da feat(mcp): fan out bridge observers for brain recall
Allow the IDE bridge to register multiple observers so the IDE and brain subsystems can both react to inbound Laravel messages. Brain recall notifications now fire from the bridge callback with the real result count instead of the request path, and the brain provider follows the same async notification flow.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 12:22:13 +00:00
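The observer fan-out described above can be sketched as a callback slice; `Bridge` and `Observer` are hypothetical names for illustration, not the real IDE bridge API:

```go
package main

import "fmt"

// Observer is a callback invoked for each inbound bridge message.
type Observer func(msg string)

// Bridge fans each inbound message out to every registered observer,
// so the IDE and brain subsystems can both react independently.
type Bridge struct{ observers []Observer }

// Observe registers an additional observer; earlier registrations are kept.
func (b *Bridge) Observe(o Observer) { b.observers = append(b.observers, o) }

func (b *Bridge) dispatch(msg string) {
	for _, o := range b.observers {
		o(msg)
	}
}

func main() {
	b := &Bridge{}
	b.Observe(func(m string) { fmt.Println("ide:", m) })
	b.Observe(func(m string) { fmt.Println("brain:", m) })
	b.dispatch("laravel: chat.created")
}
```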
Virgil
e138af6635 feat(agentic): add watch, mirror, and review queue tools
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 12:16:17 +00:00
Virgil
e40b05c900 feat(mcp): emit test result channel notifications
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 12:09:41 +00:00
Virgil
f62c9c924d fix(mcp): close TCP sessions on handler exit
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 12:03:20 +00:00
Virgil
ca9d879b21 feat(mcp): forward process output notifications
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 11:59:05 +00:00
Virgil
2df8866404 feat(mcp): forward process lifecycle actions to channel notifications
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 11:53:44 +00:00
Virgil
d57f9d4039 refactor(mcp): remove empty runtime options DTO
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 11:46:09 +00:00
Virgil
dcd3187aed feat(agentic): add plan status alias and safer reads
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 11:36:20 +00:00
Virgil
dd33bfb691 refactor(mcp): centralize notification channel names
Keep channel emitters, provider metadata, and capability advertising in sync by sharing the same constants across the MCP subsystems.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 11:28:49 +00:00
Virgil
d5a76bf2c7 feat(agentic): emit harvest completion notifications
Tighten MCP session notification dispatch and add coverage for harvest.complete payloads.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 11:21:02 +00:00
Virgil
ed4efcbd55 feat(ide): emit build start lifecycle notifications
Advertise and emit build.start when a bridged build enters a running state, alongside the existing complete/fail lifecycle events.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 11:14:17 +00:00
Virgil
dd48cc16f8 feat(ide): emit build lifecycle notifications
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 11:09:55 +00:00
Virgil
dd01f366f2 feat(agentic): add plan checkpoint tool
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 11:05:37 +00:00
Virgil
e62f4ab654 fix(mcp): resolve workspace paths for tools
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 11:01:11 +00:00
Virgil
45d439926f feat(ide): add local build and dashboard state
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 10:54:31 +00:00
Virgil
b96b05ab0b fix(mcp): stabilise file existence checks
Use Stat() for file_exists and sort directory listings for deterministic output.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 10:48:34 +00:00
Virgil
599d0b6298 feat(brain): add direct list support
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 10:43:54 +00:00
Virgil
b82d399349 refactor(mcp): centralize channel capability names
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 10:40:40 +00:00
Virgil
0516e51db8 feat(agentic): emit workspace lifecycle notifications
Wire the agentic subsystem into the shared MCP notifier so workspace
completion, blocked, and status transitions are broadcast to clients.
Also keep resumed workspaces from leaving stale running state after the
process exits.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 10:35:58 +00:00
Virgil
bf3ef9f595 fix(mcp): broadcast notifications without log gating
Use the underlying MCP notification path for session broadcasts so fresh sessions receive notifications without requiring a prior log-level handshake.

Add a regression test that verifies broadcast delivery on a connected session.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 10:31:39 +00:00
Virgil
a34115266c fix(mcp): return local IDE state without bridge
Headless IDE sessions now keep working for chat history, session listing, session creation, plan status, dashboard activity, dashboard metrics, and build lookups. The Laravel bridge is still used when available, but it is no longer a hard dependency for the local cache-backed tools.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 10:25:29 +00:00
Virgil
76d351d8a4 feat(mcp): persist IDE session state locally
Generate real session IDs and retain chat/activity/session state in the IDE subsystem so session_create, chat_history, session_list, and dashboard tools return useful local data before the backend responds.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 10:19:29 +00:00
Virgil
c3a449e678 fix(mcp): add session notification helper
Use the Core context for forwarded IPC channel events and expose a session-level notification helper that the broadcast path reuses. This keeps the notification API more symmetric and avoids dropping context during IPC forwarding.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 10:12:35 +00:00
Virgil
a541c95dc8 feat(mcp): align language catalog with detector
Expose all languages already recognized by lang_detect in lang_list, and keep the two paths synchronized through a shared catalog.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 10:09:04 +00:00
Virgil
116df41200 feat(mcp): honor graceful process stop and webview wait timeout
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 10:05:07 +00:00
Virgil
53fffbd96a feat(agentic): add issue dispatch tools
Implement agentic_dispatch_issue and agentic_pr for Forge issue-driven workflows, and track issue-specific branch metadata in agent workspaces.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 07:58:29 +00:00
Virgil
faf9490e7f feat(mcp): wire optional core services in Register
Auto-discovers process.Service and ws.Hub instances already registered in Core so the MCP service factory exposes the matching tool groups without extra manual wiring.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 07:53:06 +00:00
Virgil
aa1146807e fix(mcp): emit claude/channel notifications directly
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 07:47:38 +00:00
Virgil
27107cd75e fix(mcp): enforce REST bridge body limit
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 07:39:19 +00:00
Virgil
6adf61e593 fix(mcp): allow unauthenticated HTTP transport when token is unset
Treat an empty MCP_AUTH_TOKEN as local development mode and pass requests through to /mcp. Add tests for the no-token path and update the empty-token unit case accordingly.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 07:35:35 +00:00
Virgil
7b22fd3141 refactor(mcp): remove unused registry iterator helper
Keeps the registry surface tighter by removing the unused splitTagSeq helper and updating the test to cover the production splitTag helper directly.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 07:29:59 +00:00
Virgil
bfb5bf84e2 feat(mcp): register built-in tool groups
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-02 07:25:48 +00:00
Virgil
20caaebc21 Add describe_table MCP tool 2026-04-01 09:55:50 +00:00
Virgil
a45bc388b5 feat(mcp): advertise brain list completion channel 2026-04-01 07:37:23 +00:00
Virgil
6d45f87e26 refactor(mcp): assert notifier conformance
Aligns the service with the AX preference for explicit interface contracts.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 18:18:16 +00:00
Virgil
9bdd44d9b6 fix(mcp): advertise process.start in channel capability
Sync the claude/channel capability list with the process lifecycle events emitted by the service.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 18:13:27 +00:00
Virgil
102097222d refactor(mcp): remove dead stdio transport state
Drop the unused stdioMode field, route Run() through ServeStdio(), and keep startup logging on the service logger.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 18:08:03 +00:00
Virgil
17280b0aed refactor(mcp): remove deprecated coreRef state
Align the MCP service runtime with the AX service model by dropping the unused coreRef escape hatch and relying on ServiceRuntime for Core access.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 18:04:50 +00:00
Virgil
985bc2017f refactor(mcp): align AX DTO defaults and IDE config
Normalize the MCP constructor workspace defaulting to use the actual working directory, and replace the IDE subsystem's functional options with a Config DTO so the codebase stays aligned with AX-style configuration objects.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-31 18:02:16 +00:00
Virgil
5177dc391b docs(mcp): refresh AX migration notes and options references 2026-03-30 07:52:58 +00:00
Virgil
ea8478b776 feat(mcp): align channel notifications with AX notifier flow 2026-03-30 05:48:11 +00:00
Snider
014c18e563 fix(mcp): set stdioMode in ServeStdio + use shared locked writer
ServeStdio never set stdioMode=true, so ChannelSend always returned
early. Also switched from StdioTransport to IOTransport with a shared
lockedWriter that both the SDK and ChannelSend write through.

This fixes channel notifications not arriving in Claude Code sessions.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 12:22:53 +00:00
Snider
c8089fd597 fix(mcp): use shared locked writer for channel notifications
ChannelSend was writing to os.Stdout directly while the SDK's
StdioTransport also writes to os.Stdout — causing interleaved
JSON-RPC messages. Now both use a shared lockedWriter via IOTransport.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 12:22:53 +00:00
Snider
3ece6656d2 feat(mcp): add ChannelPush IPC message + HandleIPCEvents
Services can now push channel events to Claude Code by sending a
ChannelPush message via Core IPC. The MCP service catches it in
HandleIPCEvents and calls ChannelSend to the stdio transport.

- ChannelPush{Channel, Data} message type in subsystem.go
- HandleIPCEvents on Service catches ChannelPush → ChannelSend
- Enables runner→mcp→Claude Code notification pipeline

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 12:22:53 +00:00
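A rough sketch of the ChannelPush → ChannelSend routing; the `Service` here only records calls and stands in for the real MCP service:

```go
package main

import "fmt"

// ChannelPush mirrors the IPC message shape named in the commit:
// a channel name plus an opaque payload.
type ChannelPush struct {
	Channel string
	Data    any
}

// Service is a minimal stand-in; ChannelSend just records what it was given.
type Service struct{ sent []string }

func (s *Service) ChannelSend(channel string, data any) {
	s.sent = append(s.sent, fmt.Sprintf("%s=%v", channel, data))
}

// HandleIPCEvents routes ChannelPush messages to ChannelSend and ignores
// everything else — the runner→mcp→Claude Code pipeline in miniature.
func (s *Service) HandleIPCEvents(msg any) bool {
	push, ok := msg.(ChannelPush)
	if !ok {
		return false
	}
	s.ChannelSend(push.Channel, push.Data)
	return true
}

func main() {
	s := &Service{}
	s.HandleIPCEvents(ChannelPush{Channel: "build.start", Data: "web"})
	s.HandleIPCEvents("unrelated")
	fmt.Println(s.sent) // [build.start=web]
}
```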
Snider
c68698fc5c deps: bump dappco.re/go/core to v0.8.0-alpha.1
Resolves build failures from using Core primitives (JSONMarshalString,
JSONUnmarshal, ReadAll, ServiceRuntime, etc.) that didn't exist in v0.4.7.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 12:22:46 +00:00
Snider
63dab254bf fix: ide/bridge.go — remove encoding/json
json.Marshal → core.JSONMarshalString
json.Unmarshal → core.JSONUnmarshal

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 12:22:46 +00:00
Snider
83bcb1e5e1 fix: bridge.go — remove encoding/json entirely
Error classification uses string match on unmarshal errors instead of
stdlib json.SyntaxError/UnmarshalTypeError type assertions. No exceptions.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 12:22:46 +00:00
Snider
3d3deb1bf5 fix: bridge.go — io.ReadAll → core.ReadAll, errors.As → core.As
encoding/json stays for SyntaxError/UnmarshalTypeError type assertions.
net/http stays — transport boundary.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 12:22:46 +00:00
Snider
0e38e3a7f0 feat: AX quality gate — eliminate disallowed imports from production code
- mcp.go: os/filepath/strings → core.Env, core.JoinPath, core.PathExt,
  core.PathBase, core.Contains, core.Replace + local helpers
- registry.go: encoding/json/strings → core.JSONUnmarshal, core.Split
- notify.go: encoding/json → core.JSONMarshalString
- tools_metrics.go: fmt/strings → core.Sprintf, core.Trim
- tools_rag.go: fmt → core.Sprintf
- tools_webview.go: fmt → core.Sprintf
- tools_ws.go: fmt → core.Sprintf

Transport files (http/tcp/unix/stdio) retain os/net/http — boundary code.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 12:22:27 +00:00
Snider
403a15f391 feat: migrate registry.go + notify.go to Core primitives
- registry.go: encoding/json → core.JSONUnmarshal, strings → core.Split
- notify.go: encoding/json → core.JSONMarshalString
- Both files now import dappco.re/go/core instead of stdlib

Remaining: mcp.go has os/strings/filepath (transport boundary),
tests need forge.lthn.ai → dappco.re module path migration.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 12:22:27 +00:00
Snider
66527165d0 feat: add ServiceRuntime to MCP Service + use s.Core()
MCP Service now embeds *core.ServiceRuntime[McpOptions] like all v0.8.0
services. OnStartup uses s.Core() instead of casting coreRef.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 12:22:27 +00:00
Snider
96f46e53cb fix: OnStartup/OnShutdown return core.Result (v0.8.0 Startable interface)
Was returning error — Core didn't recognise it as Startable so mcp/serve
commands were never registered. One-line fix per method.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-26 12:22:27 +00:00
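Why the return type matters: Go interface satisfaction is exact, so a method returning `error` cannot satisfy an interface whose method returns `core.Result`. A sketch with stand-in types (`Result` and `Startable` here are reconstructions for illustration, not the real Core API):

```go
package main

import "fmt"

// Result stands in for core.Result; the real type lives in dappco.re/go/core.
type Result struct{ Err error }

// Startable is a hypothetical reconstruction of the v0.8.0 lifecycle
// interface: Core only wires OnStartup if the signature matches exactly.
type Startable interface {
	OnStartup() Result
}

type oldService struct{}

// Returns error — does NOT satisfy Startable, so Core silently skipped it.
func (oldService) OnStartup() error { return nil }

type newService struct{}

// Returns Result — satisfies Startable, so lifecycle hooks run.
func (newService) OnStartup() Result { return Result{} }

func main() {
	_, okOld := any(oldService{}).(Startable)
	_, okNew := any(newService{}).(Startable)
	fmt.Println(okOld, okNew) // false true
}
```

This is why the bug was silent: nothing fails to compile when a type merely stops matching an interface checked by runtime type assertion.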
d02853cb6c Merge pull request 'feat: mcp.Register + module path dappco.re/go/mcp' (#15) from feat/core-service-pattern into dev 2026-03-24 22:10:00 +00:00
Snider
3d62d2f531 feat: migrate module path forge.lthn.ai/core/mcp → dappco.re/go/mcp
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-24 21:39:52 +00:00
Snider
4ae003e6d8 feat: MCP registers mcp/serve commands in OnStartup
Service stores Core ref, registers transport commands during lifecycle.
Commands use Service methods directly — no external wiring.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-24 21:24:15 +00:00
Snider
7957013500 feat: mcp.Register for core.WithService — auto-discovers subsystems from Core
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-24 21:16:10 +00:00
18ccaec720 Merge pull request 'Update MCP SDK migration plan' (#13) from agent/update-the-migration-plan-at-docs-plans into dev
Reviewed-on: #13
2026-03-24 11:43:22 +00:00
Snider
f15d2cb6ce Merge origin/dev into agent/update-the-migration-plan-at-docs-plans
Resolve conflicts:
- docs/plans/2026-03-21-mcp-sdk-migration.md: keep PR version (AX conventions + notifications plan)
- pkg/mcp/tools_process_ci_test.go: keep dappco.re/go/core import + new Core API

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-24 11:42:10 +00:00
35f6a12639 Merge pull request 'Fix Codex review findings' (#12) from agent/fix-the-following-codex-review-findings into dev
Reviewed-on: #12
2026-03-24 11:37:44 +00:00
2e4608e6b5 Merge branch 'dev' into agent/fix-the-following-codex-review-findings 2026-03-24 11:37:34 +00:00
87b98b0c44 Merge pull request 'Create MCP SDK migration plan' (#14) from agent/create-a-migration-plan-to-replace-the-m into dev
Reviewed-on: #14
2026-03-24 11:37:19 +00:00
ef8ab58d2e Merge pull request '[agent/codex:gpt-5.3-codex-spark] Fix ALL findings from issue #6. Read CLAUDE.md. MCP transpor...' (#10) from agent/full-audit-per-issue--6--read-claude-md into dev 2026-03-23 14:33:54 +00:00
Virgil
d3c7210433 fix(mcp): harden transport auth and workspace prep path validation
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-23 14:33:35 +00:00
Snider
2992f872f0 revert(agentic): remove pipeline chaining from dispatch
MCP SDK doesn't support nested struct slices in schema generation.
Pipeline orchestration will be handled at a higher level.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-22 07:56:28 +00:00
Snider
1d46159340 feat(agentic): pipeline chaining — review→fix→verify in one dispatch
DispatchInput now accepts Pipeline []PipelineStep for follow-up steps.
On agent completion, the next step auto-dispatches with {{.Findings}}
replaced by the previous agent's output. Enables:

  dispatch(review) → auto(fix with findings) → auto(verify)

WorkspaceStatus stores NextSteps for the completion handler.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-22 07:46:44 +00:00
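The `{{.Findings}}` substitution can be sketched as a plain string replacement; `PipelineStep` and `nextPrompt` are illustrative names, not the actual DispatchInput types:

```go
package main

import (
	"fmt"
	"strings"
)

// PipelineStep is a hypothetical follow-up dispatch, shaped after the
// commit's DispatchInput.Pipeline description.
type PipelineStep struct {
	Prompt string
}

// nextPrompt substitutes the previous agent's output for {{.Findings}}
// before the follow-up step is dispatched.
func nextPrompt(step PipelineStep, findings string) string {
	return strings.ReplaceAll(step.Prompt, "{{.Findings}}", findings)
}

func main() {
	fix := PipelineStep{Prompt: "Fix these review findings:\n{{.Findings}}"}
	fmt.Println(nextPrompt(fix, "- unchecked error in bridge.go"))
}
```

A real implementation might use text/template instead; literal replacement is enough to show the review→fix→verify chaining the commit describes.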
Snider
517afe627f revert(agentic): remove hardcoded copyReference — use embedded templates
Reference files are now embedded in core-agent's workspace template
(pkg/lib/workspace/default/.core/reference/). No hardcoded paths needed.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-22 07:00:03 +00:00
Snider
5d749af517 fix(mcp): resolve codex review findings — spelling, imports, tests, assertions
- UK English in transport_e2e_test.go comments and error strings
- Replace fmt.Printf with coreerr.Error/Warn in brain-seed for errors/skips
- Alias stdlib io as goio in transport_tcp, brain/direct, agentic/prep, bridge, brain-seed
- Add var _ Notifier = (*Service)(nil) compile-time assertion
- Add TestRegisterProcessTools_Bad_NilService for nil-service error path
- Add webview handler tests beyond nil-guard (disconnect success, validation paths)
- Guard tools_process_ci_test.go with //go:build ci (pre-existing build failure)
- Document circular-import exception in EXCEPTIONS.md

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-22 02:14:33 +00:00
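The compile-time assertion added here costs nothing at runtime and turns interface drift into a build error; a minimal sketch with a stand-in `Notifier`:

```go
package main

import "fmt"

// Notifier is a stand-in for the pkg/mcp interface named in the commit.
type Notifier interface {
	Notify(channel string, data any)
}

type Service struct{}

func (s *Service) Notify(channel string, data any) {
	fmt.Printf("notify %s: %v\n", channel, data)
}

// The zero-cost compile-time assertion: if *Service ever stops satisfying
// Notifier, this line fails to compile instead of failing at runtime.
var _ Notifier = (*Service)(nil)

func main() {
	var n Notifier = &Service{}
	n.Notify("claude/channel", "ok")
}
```

The assertion must live in a package that can import both sides, which is exactly the circular-import constraint recorded in EXCEPTIONS.md for the subsystem packages.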
Snider
ea81084058 docs(mcp): add SDK migration plan for AX conventions + notifications
5-phase plan covering:
- Options{} struct replacing functional options (breaking)
- SendNotificationToAllClients + claude/channel capability
- Usage-example comments on all public types
- Notifier interface for subsystem event broadcasting
- Consumer migration guide for agent/ide modules

Evaluated mark3labs/mcp-go vs official SDK; recommends staying on
official SDK with Server.Sessions() wrapper for notifications.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-21 12:34:14 +00:00
Snider
4c6c9d7709 refactor: migrate core import to dappco.re/go/core
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-21 12:13:52 +00:00
Snider
6fa6c79b86 docs(mcp): add migration plan for official SDK to mcp-go
Maps all 27 source files and 55 tool registrations from
github.com/modelcontextprotocol/go-sdk to github.com/mark3labs/mcp-go.
Covers handler signature adapter, transport strategy, subsystem interface
changes, and consumer impact. Key motivation: unlock SendNotificationToClient
for claude/channel event push.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-21 11:59:04 +00:00
Snider
bac5b83cbf chore: sync dependencies for v0.3.4
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-17 17:53:10 +00:00
Snider
45fbe0bc73 chore: sync dependencies for v0.3.3
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-17 17:49:05 +00:00
110 changed files with 15309 additions and 1567 deletions

EXCEPTIONS.md (new file, 17 lines)

@@ -0,0 +1,17 @@
# Exceptions
Items from the Codex review that cannot be fixed, with reasons.
## 6. Compile-time interface assertions in subsystem packages
**Files:** `brain/brain.go`, `brain/direct.go`, `agentic/prep.go`, `ide/ide.go`
**Finding:** Add `var _ Subsystem = (*T)(nil)` compile-time assertions.
**Reason:** The `Subsystem` interface is defined in the parent `mcp` package. Subsystem packages (`brain`, `agentic`, `ide`) cannot import `mcp` because `mcp` already imports them via `Options.Subsystems` — this would create a circular import. The interface conformance is enforced at runtime when `RegisterTools` is called during `mcp.New()`.
## 7. Compile-time Notifier assertion on Service
**Finding:** Add `var _ Notifier = (*Service)(nil)`.
**Resolution:** Fixed — assertion added to `pkg/mcp/subsystem.go` (where the `Notifier` interface is defined). The TODO originally claimed this was already done in commit `907d62a`, but it was not present in the codebase.


@@ -1,40 +1,38 @@
 // SPDX-License-Identifier: EUPL-1.2
 // brain-seed imports Claude Code MEMORY.md files into the OpenBrain knowledge
-// store via the MCP HTTP API (brain_remember tool). The Laravel app handles
+// store via the shared OpenBrain HTTP client. The Laravel app handles
 // embedding, Qdrant storage, and MariaDB dual-write internally.
 //
 // Usage:
 //
 //	go run ./cmd/brain-seed -api-key YOUR_KEY
-//	go run ./cmd/brain-seed -api-key YOUR_KEY -api https://lthn.sh/api/v1/mcp
+//	go run ./cmd/brain-seed -api-key YOUR_KEY -api https://api.lthn.sh
 //	go run ./cmd/brain-seed -api-key YOUR_KEY -dry-run
 //	go run ./cmd/brain-seed -api-key YOUR_KEY -plans
 //	go run ./cmd/brain-seed -api-key YOUR_KEY -claude-md   # Also import CLAUDE.md files
 package main

 import (
-    "bytes"
-    "crypto/tls"
-    "encoding/json"
+    "context"
     "flag"
-    "fmt"
-    "io"
-    "net/http"
     "os"
     "path/filepath"
     "regexp"
-    "strings"
-    "time"

-    coreio "forge.lthn.ai/core/go-io"
-    coreerr "forge.lthn.ai/core/go-log"
+    core "dappco.re/go/core"
+    coreio "dappco.re/go/io"
+    coreerr "dappco.re/go/log"
+    brainclient "dappco.re/go/mcp/pkg/mcp/brain/client"
 )

+const seedDivider = "======================================================="

 var (
-    apiURL = flag.String("api", "https://lthn.sh/api/v1/mcp", "MCP API base URL")
-    apiKey = flag.String("api-key", "", "MCP API key (Bearer token)")
-    server = flag.String("server", "hosthub-agent", "MCP server ID")
+    apiURL = flag.String("api", brainclient.DefaultURL, "OpenBrain API base URL")
+    apiKey = flag.String("api-key", core.Env("CORE_BRAIN_KEY"), "OpenBrain API key (Bearer token)")
+    server = flag.String("server", "hosthub-agent", "Legacy MCP server ID flag; accepted for compatibility")
+    org    = flag.String("org", core.Env("CORE_BRAIN_ORG"), "OpenBrain org for seeded memories")
     agent  = flag.String("agent", "charon", "Agent ID for attribution")
     dryRun = flag.Bool("dry-run", false, "Preview without storing")
     plans  = flag.Bool("plans", false, "Also import plan documents")
@@ -45,33 +43,33 @@ var (
     maxChars = flag.Int("max-chars", 3800, "Max chars per section (embeddinggemma limit ~4000)")
 )

-// httpClient with TLS skip for non-public TLDs (.lthn.sh has real certs, but
-// allow .lan/.local if someone has legacy config).
-var httpClient = &http.Client{
-    Timeout: 30 * time.Second,
-    Transport: &http.Transport{
-        TLSClientConfig: &tls.Config{InsecureSkipVerify: false},
-    },
-}
+var openbrain *brainclient.Client

 func main() {
     flag.Parse()

-    fmt.Println("OpenBrain Seed — MCP API Client")
-    fmt.Println(strings.Repeat("=", 55))
+    core.Println("OpenBrain Seed — API Client")
+    core.Println(seedDivider)

     if *apiKey == "" && !*dryRun {
-        fmt.Println("ERROR: -api-key is required (or use -dry-run)")
-        fmt.Println("       Generate one at: https://lthn.sh/admin/mcp/api-keys")
+        core.Println("ERROR: -api-key is required (or use -dry-run)")
+        core.Println("       Generate one at: https://lthn.sh/admin/mcp/api-keys")
         os.Exit(1)
     }
     if *dryRun {
-        fmt.Println("[DRY RUN] — no data will be stored")
+        core.Println("[DRY RUN] — no data will be stored")
     }
-    fmt.Printf("API: %s\n", *apiURL)
-    fmt.Printf("Server: %s | Agent: %s\n", *server, *agent)
+    core.Print(nil, "API: %s", *apiURL)
+    core.Print(nil, "Org: %s | Agent: %s", *org, *agent)
+
+    openbrain = brainclient.New(brainclient.Options{
+        URL:     *apiURL,
+        Key:     *apiKey,
+        Org:     *org,
+        AgentID: *agent,
+    })

     // Discover memory files
     memPath := *memoryPath
@@ -80,7 +78,7 @@ func main() {
         memPath = filepath.Join(home, ".claude", "projects", "*", "memory")
     }
     memFiles, _ := filepath.Glob(filepath.Join(memPath, "*.md"))
-    fmt.Printf("\nFound %d memory files\n", len(memFiles))
+    core.Print(nil, "\nFound %d memory files", len(memFiles))

     // Discover plan files
     var planFiles []string
@@ -103,7 +101,7 @@ func main() {
         hostUkNested, _ := filepath.Glob(filepath.Join(hostUkPath, "*", "*.md"))
         planFiles = append(planFiles, hostUkNested...)
-        fmt.Printf("Found %d plan files\n", len(planFiles))
+        core.Print(nil, "Found %d plan files", len(planFiles))
     }

     // Discover CLAUDE.md files
@@ -115,7 +113,7 @@ func main() {
             cPath = filepath.Join(home, "Code")
         }
         claudeFiles = discoverClaudeMdFiles(cPath)
-        fmt.Printf("Found %d CLAUDE.md files\n", len(claudeFiles))
+        core.Print(nil, "Found %d CLAUDE.md files", len(claudeFiles))
     }

     imported := 0
@@ -123,21 +121,21 @@ func main() {
     errors := 0

     // Process memory files
-    fmt.Println("\n--- Memory Files ---")
+    core.Println("\n--- Memory Files ---")
     for _, f := range memFiles {
         project := extractProject(f)
         sections := parseMarkdownSections(f)
-        filename := strings.TrimSuffix(filepath.Base(f), ".md")
+        filename := core.TrimSuffix(filepath.Base(f), ".md")

         if len(sections) == 0 {
-            fmt.Printf("  skip  %s/%s (no sections)\n", project, filename)
+            coreerr.Warn("brain-seed: skip file (no sections)", "project", project, "file", filename)
             skipped++
             continue
         }

         for _, sec := range sections {
             content := sec.heading + "\n\n" + sec.content
-            if strings.TrimSpace(sec.content) == "" {
+            if core.Trim(sec.content) == "" {
                 skipped++
                 continue
             }
@@ -150,29 +148,29 @@ func main() {
             content = truncate(content, *maxChars)

             if *dryRun {
-                fmt.Printf("  [DRY] %s/%s :: %s (%s) — %d chars\n",
+                core.Print(nil, "  [DRY] %s/%s :: %s (%s) — %d chars",
                     project, filename, sec.heading, memType, len(content))
                 imported++
                 continue
             }

             if err := callBrainRemember(content, memType, tags, project, confidence); err != nil {
-                fmt.Printf("  FAIL  %s/%s :: %s — %v\n", project, filename, sec.heading, err)
+                coreerr.Error("brain-seed: import failed", "project", project, "file", filename, "heading", sec.heading, "err", err)
                 errors++
                 continue
             }
-            fmt.Printf("  ok    %s/%s :: %s (%s)\n", project, filename, sec.heading, memType)
+            core.Print(nil, "  ok    %s/%s :: %s (%s)", project, filename, sec.heading, memType)
             imported++
         }
     }

     // Process plan files
     if *plans && len(planFiles) > 0 {
-        fmt.Println("\n--- Plan Documents ---")
+        core.Println("\n--- Plan Documents ---")
         for _, f := range planFiles {
             project := extractProjectFromPlan(f)
             sections := parseMarkdownSections(f)
-            filename := strings.TrimSuffix(filepath.Base(f), ".md")
+            filename := core.TrimSuffix(filepath.Base(f), ".md")

             if len(sections) == 0 {
                 skipped++
@@ -181,7 +179,7 @@ func main() {
             for _, sec := range sections {
                 content := sec.heading + "\n\n" + sec.content
-                if strings.TrimSpace(sec.content) == "" {
+                if core.Trim(sec.content) == "" {
                     skipped++
                     continue
                 }
@@ -190,18 +188,18 @@ func main() {
                 content = truncate(content, *maxChars)

                 if *dryRun {
-                    fmt.Printf("  [DRY] %s :: %s / %s (plan) — %d chars\n",
+                    core.Print(nil, "  [DRY] %s :: %s / %s (plan) — %d chars",
                         project, filename, sec.heading, len(content))
                     imported++
                     continue
                 }

                 if err := callBrainRemember(content, "plan", tags, project, 0.6); err != nil {
-                    fmt.Printf("  FAIL  %s :: %s / %s — %v\n", project, filename, sec.heading, err)
+                    coreerr.Error("brain-seed: plan import failed", "project", project, "file", filename, "heading", sec.heading, "err", err)
                     errors++
                     continue
                 }
-                fmt.Printf("  ok    %s :: %s / %s (plan)\n", project, filename, sec.heading)
+                core.Print(nil, "  ok    %s :: %s / %s (plan)", project, filename, sec.heading)
                 imported++
             }
         }
@@ -209,7 +207,7 @@ func main() {
     // Process CLAUDE.md files
     if *claudeMd && len(claudeFiles) > 0 {
-        fmt.Println("\n--- CLAUDE.md Files ---")
+        core.Println("\n--- CLAUDE.md Files ---")
         for _, f := range claudeFiles {
             project := extractProjectFromClaudeMd(f)
             sections := parseMarkdownSections(f)
@@ -221,7 +219,7 @@ func main() {
             for _, sec := range sections {
                 content := sec.heading + "\n\n" + sec.content
-                if strings.TrimSpace(sec.content) == "" {
+                if core.Trim(sec.content) == "" {
                     skipped++
                     continue
                 }
@ -230,85 +228,55 @@ func main() {
content = truncate(content, *maxChars) content = truncate(content, *maxChars)
if *dryRun { if *dryRun {
fmt.Printf(" [DRY] %s :: CLAUDE.md / %s (convention) — %d chars\n", core.Print(nil, " [DRY] %s :: CLAUDE.md / %s (convention) — %d chars",
project, sec.heading, len(content)) project, sec.heading, len(content))
imported++ imported++
continue continue
} }
if err := callBrainRemember(content, "convention", tags, project, 0.9); err != nil { if err := callBrainRemember(content, "convention", tags, project, 0.9); err != nil {
fmt.Printf(" FAIL %s :: CLAUDE.md / %s — %v\n", project, sec.heading, err) coreerr.Error("brain-seed: claude-md import failed", "project", project, "heading", sec.heading, "err", err)
errors++ errors++
continue continue
} }
fmt.Printf(" ok %s :: CLAUDE.md / %s (convention)\n", project, sec.heading) core.Print(nil, " ok %s :: CLAUDE.md / %s (convention)", project, sec.heading)
imported++ imported++
} }
} }
} }
fmt.Printf("\n%s\n", strings.Repeat("=", 55)) core.Print(nil, "\n%s", seedDivider)
prefix := "" prefix := ""
if *dryRun { if *dryRun {
prefix = "[DRY RUN] " prefix = "[DRY RUN] "
} }
fmt.Printf("%sImported: %d | Skipped: %d | Errors: %d\n", prefix, imported, skipped, errors) core.Print(nil, "%sImported: %d | Skipped: %d | Errors: %d", prefix, imported, skipped, errors)
} }
// callBrainRemember sends a memory to the MCP API via brain_remember tool. // callBrainRemember sends a memory to OpenBrain via /v1/brain/remember.
func callBrainRemember(content, memType string, tags []string, project string, confidence float64) error { func callBrainRemember(content, memType string, tags []string, project string, confidence float64) error {
args := map[string]any{ if openbrain == nil {
"content": content, openbrain = brainclient.New(brainclient.Options{
"type": memType, URL: *apiURL,
"tags": tags, Key: *apiKey,
"confidence": confidence, Org: *org,
AgentID: *agent,
})
}
input := brainclient.RememberInput{
Content: content,
Type: memType,
Tags: tags,
Org: *org,
AgentID: *agent,
Confidence: confidence,
} }
if project != "" && project != "unknown" { if project != "" && project != "unknown" {
args["project"] = project input.Project = project
} }
_, err := openbrain.Remember(context.Background(), input)
payload := map[string]any{ return coreerr.Wrap(err, "callBrainRemember", "remember")
"server": *server,
"tool": "brain_remember",
"arguments": args,
}
body, err := json.Marshal(payload)
if err != nil {
return coreerr.E("callBrainRemember", "marshal", err)
}
req, err := http.NewRequest("POST", *apiURL+"/tools/call", bytes.NewReader(body))
if err != nil {
return coreerr.E("callBrainRemember", "request", err)
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Authorization", "Bearer "+*apiKey)
resp, err := httpClient.Do(req)
if err != nil {
return coreerr.E("callBrainRemember", "http", err)
}
defer resp.Body.Close()
respBody, _ := io.ReadAll(resp.Body)
if resp.StatusCode != 200 {
return coreerr.E("callBrainRemember", "HTTP "+string(respBody), nil)
}
var result struct {
Success bool `json:"success"`
Error string `json:"error"`
}
if err := json.Unmarshal(respBody, &result); err != nil {
return coreerr.E("callBrainRemember", "decode", err)
}
if !result.Success {
return coreerr.E("callBrainRemember", "API: "+result.Error, nil)
}
return nil
} }
// truncate caps content to maxLen chars, appending an ellipsis if truncated. // truncate caps content to maxLen chars, appending an ellipsis if truncated.
@ -318,12 +286,21 @@ func truncate(s string, maxLen int) string {
} }
// Find last space before limit to avoid splitting mid-word // Find last space before limit to avoid splitting mid-word
cut := maxLen cut := maxLen
if idx := strings.LastIndex(s[:maxLen], " "); idx > maxLen-200 { if idx := lastByteIndex(s[:maxLen], ' '); idx > maxLen-200 {
cut = idx cut = idx
} }
return s[:cut] + "…" return s[:cut] + "…"
} }
func lastByteIndex(s string, target byte) int {
for i := len(s) - 1; i >= 0; i-- {
if s[i] == target {
return i
}
}
return -1
}
// discoverClaudeMdFiles finds CLAUDE.md files across a code directory. // discoverClaudeMdFiles finds CLAUDE.md files across a code directory.
func discoverClaudeMdFiles(codePath string) []string { func discoverClaudeMdFiles(codePath string) []string {
var files []string var files []string
@ -340,7 +317,7 @@ func discoverClaudeMdFiles(codePath string) []string {
} }
// Limit depth // Limit depth
rel, _ := filepath.Rel(codePath, path) rel, _ := filepath.Rel(codePath, path)
if strings.Count(rel, string(os.PathSeparator)) > 3 { if len(core.Split(rel, string(os.PathSeparator))) > 4 {
return filepath.SkipDir return filepath.SkipDir
} }
return nil return nil
@ -370,19 +347,19 @@ func parseMarkdownSections(path string) []section {
} }
var sections []section var sections []section
lines := strings.Split(data, "\n") lines := core.Split(data, "\n")
var curHeading string var curHeading string
var curContent []string var curContent []string
for _, line := range lines { for _, line := range lines {
if m := headingRe.FindStringSubmatch(line); m != nil { if m := headingRe.FindStringSubmatch(line); m != nil {
if curHeading != "" && len(curContent) > 0 { if curHeading != "" && len(curContent) > 0 {
text := strings.TrimSpace(strings.Join(curContent, "\n")) text := core.Trim(core.Join("\n", curContent...))
if text != "" { if text != "" {
sections = append(sections, section{curHeading, text}) sections = append(sections, section{curHeading, text})
} }
} }
curHeading = strings.TrimSpace(m[1]) curHeading = core.Trim(m[1])
curContent = nil curContent = nil
} else { } else {
curContent = append(curContent, line) curContent = append(curContent, line)
@ -391,17 +368,17 @@ func parseMarkdownSections(path string) []section {
// Flush last section // Flush last section
if curHeading != "" && len(curContent) > 0 { if curHeading != "" && len(curContent) > 0 {
text := strings.TrimSpace(strings.Join(curContent, "\n")) text := core.Trim(core.Join("\n", curContent...))
if text != "" { if text != "" {
sections = append(sections, section{curHeading, text}) sections = append(sections, section{curHeading, text})
} }
} }
// If no headings found, treat entire file as one section // If no headings found, treat entire file as one section
if len(sections) == 0 && strings.TrimSpace(data) != "" { if len(sections) == 0 && core.Trim(data) != "" {
sections = append(sections, section{ sections = append(sections, section{
heading: strings.TrimSuffix(filepath.Base(path), ".md"), heading: core.TrimSuffix(filepath.Base(path), ".md"),
content: strings.TrimSpace(data), content: core.Trim(data),
}) })
} }
@ -459,7 +436,7 @@ func inferType(heading, content, source string) string {
return "convention" return "convention"
} }
lower := strings.ToLower(heading + " " + content) lower := core.Lower(heading + " " + content)
patterns := map[string][]string{ patterns := map[string][]string{
"architecture": {"architecture", "stack", "infrastructure", "layer", "service mesh"}, "architecture": {"architecture", "stack", "infrastructure", "layer", "service mesh"},
"convention": {"convention", "standard", "naming", "pattern", "rule", "coding"}, "convention": {"convention", "standard", "naming", "pattern", "rule", "coding"},
@ -470,7 +447,7 @@ func inferType(heading, content, source string) string {
} }
for t, keywords := range patterns { for t, keywords := range patterns {
for _, kw := range keywords { for _, kw := range keywords {
if strings.Contains(lower, kw) { if core.Contains(lower, kw) {
return t return t
} }
} }
@ -485,7 +462,7 @@ func buildTags(filename, source, project string) []string {
tags = append(tags, "project:"+project) tags = append(tags, "project:"+project)
} }
if filename != "MEMORY" && filename != "CLAUDE" { if filename != "MEMORY" && filename != "CLAUDE" {
tags = append(tags, strings.ReplaceAll(strings.ReplaceAll(filename, "-", " "), "_", " ")) tags = append(tags, core.Replace(core.Replace(filename, "-", " "), "_", " "))
} }
return tags return tags
} }


@@ -1,8 +1,8 @@
 package main

 import (
-	"forge.lthn.ai/core/cli/pkg/cli"
-	mcpcmd "forge.lthn.ai/core/mcp/cmd/mcpcmd"
+	"dappco.re/go/cli/pkg/cli"
+	mcpcmd "dappco.re/go/mcp/cmd/mcpcmd"
 )

 func main() {


@@ -1,7 +1,14 @@
-// Package mcpcmd provides the MCP server command.
+// SPDX-License-Identifier: EUPL-1.2
+
+// Package mcpcmd registers the `mcp` and `mcp serve` CLI commands.
+//
+// Wiring example:
+//
+//	cli.Main(cli.WithCommands("mcp", mcpcmd.AddMCPCommands))
 //
 // Commands:
-//   - mcp serve: Start the MCP server for AI tool integration
+//   - mcp        Start the MCP server on stdio (default transport).
+//   - mcp serve  Start the MCP server with auto-selected transport.
 package mcpcmd

 import (
@@ -10,80 +17,112 @@ import (
 	"os/signal"
 	"syscall"

-	"forge.lthn.ai/core/cli/pkg/cli"
-	"forge.lthn.ai/core/mcp/pkg/mcp"
-	"forge.lthn.ai/core/mcp/pkg/mcp/agentic"
-	"forge.lthn.ai/core/mcp/pkg/mcp/brain"
+	core "dappco.re/go/core"
+	"dappco.re/go/mcp/pkg/mcp"
+	"dappco.re/go/mcp/pkg/mcp/agentic"
+	"dappco.re/go/mcp/pkg/mcp/brain"
 )

+// newMCPService is the service constructor, indirected for tests.
+var newMCPService = mcp.New
+
+// runMCPService starts the MCP server, indirected for tests.
+var runMCPService = func(svc *mcp.Service, ctx context.Context) error {
+	return svc.Run(ctx)
+}
+
+// shutdownMCPService performs graceful shutdown, indirected for tests.
+var shutdownMCPService = func(svc *mcp.Service, ctx context.Context) error {
+	return svc.Shutdown(ctx)
+}
+
+// workspaceFlag mirrors the --workspace CLI flag value.
 var workspaceFlag string

-var mcpCmd = &cli.Command{
-	Use:   "mcp",
-	Short: "MCP server for AI tool integration",
-	Long:  "Model Context Protocol (MCP) server providing file operations, RAG, and metrics tools.",
-}
+// unrestrictedFlag mirrors the --unrestricted CLI flag value.
+var unrestrictedFlag bool

-var serveCmd = &cli.Command{
-	Use:   "serve",
-	Short: "Start the MCP server",
-	Long: `Start the MCP server on stdio (default) or TCP.
-
-The server provides file operations, RAG tools, and metrics tools for AI assistants.
-
-Environment variables:
-  MCP_ADDR  TCP address to listen on (e.g., "localhost:9999")
-            If not set, uses stdio transport.
-
-Examples:
-  # Start with stdio transport (for Claude Code integration)
-  core mcp serve
-
-  # Start with workspace restriction
-  core mcp serve --workspace /path/to/project
-
-  # Start TCP server
-  MCP_ADDR=localhost:9999 core mcp serve`,
-	RunE: func(cmd *cli.Command, args []string) error {
-		return runServe()
-	},
-}
-
-func initFlags() {
-	cli.StringFlag(serveCmd, &workspaceFlag, "workspace", "w", "", "Restrict file operations to this directory (empty = unrestricted)")
-}
-
-// AddMCPCommands registers the 'mcp' command and all subcommands.
-func AddMCPCommands(root *cli.Command) {
-	initFlags()
-	mcpCmd.AddCommand(serveCmd)
-	root.AddCommand(mcpCmd)
-}
+// AddMCPCommands registers the `mcp` command tree on the Core instance.
+//
+//	cli.Main(cli.WithCommands("mcp", mcpcmd.AddMCPCommands))
+func AddMCPCommands(c *core.Core) {
+	c.Command("mcp", core.Command{
+		Description: "Model Context Protocol server (stdio, TCP, Unix socket, HTTP).",
+		Action:      runServeAction,
+		Flags: core.NewOptions(
+			core.Option{Key: "workspace", Value: ""},
+			core.Option{Key: "w", Value: ""},
+			core.Option{Key: "unrestricted", Value: false},
+		),
+	})
+	c.Command("mcp/serve", core.Command{
+		Description: "Start the MCP server with auto-selected transport (stdio, TCP, Unix, or HTTP).",
+		Action:      runServeAction,
+		Flags: core.NewOptions(
+			core.Option{Key: "workspace", Value: ""},
+			core.Option{Key: "w", Value: ""},
+			core.Option{Key: "unrestricted", Value: false},
+		),
+	})
+}
+
+// runServeAction is the CLI entrypoint for `mcp` and `mcp serve`.
+//
+//	opts := core.NewOptions(core.Option{Key: "workspace", Value: "."})
+//	result := runServeAction(opts)
+func runServeAction(opts core.Options) core.Result {
+	workspaceFlag = core.Trim(firstNonEmpty(opts.String("workspace"), opts.String("w")))
+	unrestrictedFlag = opts.Bool("unrestricted")
+
+	if err := runServe(); err != nil {
+		return core.Result{Value: err, OK: false}
+	}
+	return core.Result{OK: true}
+}
+
+// firstNonEmpty returns the first non-empty string argument.
+//
+//	firstNonEmpty("", "foo") == "foo"
+//	firstNonEmpty("bar", "baz") == "bar"
+func firstNonEmpty(values ...string) string {
+	for _, v := range values {
+		if v != "" {
+			return v
+		}
+	}
+	return ""
+}

+// runServe wires the MCP service together and blocks until the context is
+// cancelled by SIGINT/SIGTERM or a transport error.
+//
+//	if err := runServe(); err != nil {
+//		core.Error("mcp serve failed", "err", err)
+//	}
 func runServe() error {
-	// Build MCP service options
-	var opts []mcp.Option
-	if workspaceFlag != "" {
-		opts = append(opts, mcp.WithWorkspaceRoot(workspaceFlag))
-	} else {
-		// Explicitly unrestricted when no workspace specified
-		opts = append(opts, mcp.WithWorkspaceRoot(""))
+	opts := mcp.Options{}
+	if unrestrictedFlag {
+		opts.Unrestricted = true
+	} else if workspaceFlag != "" {
+		opts.WorkspaceRoot = workspaceFlag
 	}

-	// Register OpenBrain subsystem (direct HTTP to api.lthn.sh)
-	opts = append(opts, mcp.WithSubsystem(brain.NewDirect()))
-
-	// Register agentic subsystem (workspace prep, agent orchestration)
-	opts = append(opts, mcp.WithSubsystem(agentic.NewPrep()))
-
-	// Create the MCP service
-	svc, err := mcp.New(opts...)
+	// Register OpenBrain and agentic subsystems.
+	opts.Subsystems = []mcp.Subsystem{
+		brain.NewDirect(),
+		agentic.NewPrep(),
+	}
+
+	svc, err := newMCPService(opts)
 	if err != nil {
-		return cli.Wrap(err, "create MCP service")
+		return core.E("mcpcmd.runServe", "create MCP service", err)
 	}
+	defer func() {
+		_ = shutdownMCPService(svc, context.Background())
+	}()

-	// Set up signal handling for clean shutdown
 	ctx, cancel := context.WithCancel(context.Background())
 	defer cancel()
@@ -91,10 +130,12 @@ func runServe() error {
 	signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
 	go func() {
-		<-sigCh
-		cancel()
+		select {
+		case <-sigCh:
+			cancel()
+		case <-ctx.Done():
+		}
 	}()

-	// Run the server (blocks until context cancelled or error)
-	return svc.Run(ctx)
+	return runMCPService(svc, ctx)
 }

cmd/mcpcmd/cmd_mcp_test.go (new file)

@@ -0,0 +1,227 @@
// SPDX-License-Identifier: EUPL-1.2
package mcpcmd
import (
"context"
"testing"
core "dappco.re/go/core"
"dappco.re/go/mcp/pkg/mcp"
)
func TestCmdMCP_RunServe_Good_ShutsDownService(t *testing.T) {
restore := stubMCPService(t)
defer restore()
workspaceFlag = ""
unrestrictedFlag = false
var runCalled bool
var shutdownCalled bool
newMCPService = func(opts mcp.Options) (*mcp.Service, error) {
return mcp.New(mcp.Options{})
}
runMCPService = func(svc *mcp.Service, ctx context.Context) error {
runCalled = true
return nil
}
shutdownMCPService = func(svc *mcp.Service, ctx context.Context) error {
shutdownCalled = true
return nil
}
if err := runServe(); err != nil {
t.Fatalf("runServe() returned error: %v", err)
}
if !runCalled {
t.Fatal("expected runMCPService to be called")
}
if !shutdownCalled {
t.Fatal("expected shutdownMCPService to be called")
}
}
func TestCmdMCP_RunServeAction_Good_PropagatesFlags(t *testing.T) {
restore := stubMCPService(t)
defer restore()
workspaceFlag = ""
unrestrictedFlag = false
var gotOpts mcp.Options
newMCPService = func(opts mcp.Options) (*mcp.Service, error) {
gotOpts = opts
return mcp.New(mcp.Options{WorkspaceRoot: t.TempDir()})
}
runMCPService = func(svc *mcp.Service, ctx context.Context) error {
return nil
}
shutdownMCPService = func(svc *mcp.Service, ctx context.Context) error {
return nil
}
tmp := t.TempDir()
opts := core.NewOptions(core.Option{Key: "workspace", Value: tmp})
result := runServeAction(opts)
if !result.OK {
t.Fatalf("expected OK, got %+v", result)
}
if gotOpts.WorkspaceRoot != tmp {
t.Fatalf("expected workspace root %q, got %q", tmp, gotOpts.WorkspaceRoot)
}
if gotOpts.Unrestricted {
t.Fatal("expected Unrestricted=false when --workspace is set")
}
}
func TestCmdMCP_RunServeAction_Good_UnrestrictedFlag(t *testing.T) {
restore := stubMCPService(t)
defer restore()
workspaceFlag = ""
unrestrictedFlag = false
var gotOpts mcp.Options
newMCPService = func(opts mcp.Options) (*mcp.Service, error) {
gotOpts = opts
return mcp.New(mcp.Options{Unrestricted: true})
}
runMCPService = func(svc *mcp.Service, ctx context.Context) error {
return nil
}
shutdownMCPService = func(svc *mcp.Service, ctx context.Context) error {
return nil
}
opts := core.NewOptions(core.Option{Key: "unrestricted", Value: true})
result := runServeAction(opts)
if !result.OK {
t.Fatalf("expected OK, got %+v", result)
}
if !gotOpts.Unrestricted {
t.Fatal("expected Unrestricted=true when --unrestricted is set")
}
}
func TestCmdMCP_RunServe_Bad_CreateServiceFails(t *testing.T) {
restore := stubMCPService(t)
defer restore()
workspaceFlag = ""
unrestrictedFlag = false
sentinel := core.E("mcpcmd.test", "boom", nil)
newMCPService = func(opts mcp.Options) (*mcp.Service, error) {
return nil, sentinel
}
runMCPService = func(svc *mcp.Service, ctx context.Context) error {
t.Fatal("runMCPService should not be called when New fails")
return nil
}
shutdownMCPService = func(svc *mcp.Service, ctx context.Context) error {
t.Fatal("shutdownMCPService should not be called when New fails")
return nil
}
err := runServe()
if err == nil {
t.Fatal("expected error when newMCPService fails")
}
}
func TestCmdMCP_RunServeAction_Bad_PropagatesFailure(t *testing.T) {
restore := stubMCPService(t)
defer restore()
workspaceFlag = ""
unrestrictedFlag = false
newMCPService = func(opts mcp.Options) (*mcp.Service, error) {
return nil, core.E("mcpcmd.test", "construction failed", nil)
}
runMCPService = func(svc *mcp.Service, ctx context.Context) error {
return nil
}
shutdownMCPService = func(svc *mcp.Service, ctx context.Context) error {
return nil
}
result := runServeAction(core.NewOptions())
if result.OK {
t.Fatal("expected runServeAction to fail when service creation fails")
}
if result.Value == nil {
t.Fatal("expected error value on failure")
}
}
func TestCmdMCP_FirstNonEmpty_Ugly_HandlesAllVariants(t *testing.T) {
tests := []struct {
name string
values []string
want string
}{
{"no args", nil, ""},
{"empty string", []string{""}, ""},
{"all empty", []string{"", "", ""}, ""},
{"first non-empty", []string{"foo", "bar"}, "foo"},
{"skip empty", []string{"", "baz"}, "baz"},
{"mixed", []string{"", "", "last"}, "last"},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got := firstNonEmpty(tc.values...)
if got != tc.want {
t.Fatalf("firstNonEmpty(%v) = %q, want %q", tc.values, got, tc.want)
}
})
}
}
func TestCmdMCP_AddMCPCommands_Good_RegistersMcpTree(t *testing.T) {
c := core.New()
AddMCPCommands(c)
commands := c.Commands()
if len(commands) == 0 {
t.Fatal("expected at least one registered command")
}
mustHave := map[string]bool{
"mcp": false,
"mcp/serve": false,
}
for _, path := range commands {
if _, ok := mustHave[path]; ok {
mustHave[path] = true
}
}
for path, present := range mustHave {
if !present {
t.Fatalf("expected command %q to be registered", path)
}
}
}
// stubMCPService captures the package-level function pointers and returns a
// restore hook so each test can mutate them without leaking into siblings.
func stubMCPService(t *testing.T) func() {
t.Helper()
oldNew := newMCPService
oldRun := runMCPService
oldShutdown := shutdownMCPService
oldWorkspace := workspaceFlag
oldUnrestricted := unrestrictedFlag
return func() {
newMCPService = oldNew
runMCPService = oldRun
shutdownMCPService = oldShutdown
workspaceFlag = oldWorkspace
unrestrictedFlag = oldUnrestricted
}
}


@@ -0,0 +1,29 @@
# openbrain-mcp
`openbrain-mcp` is a thin stdio MCP wrapper for the OpenBrain tools registered in `pkg/mcp/brain`.
Install:
```sh
go install dappco.re/go/mcp/cmd/openbrain-mcp@latest
```
Add it to Claude Code:
```sh
claude mcp add openbrain -- openbrain-mcp --brain-url=http://127.0.0.1:8000/v1/brain --api-key=$OPENBRAIN_API_KEY
```
The wrapper exposes:
- `brain_remember`
- `brain_recall`
- `brain_forget`
- `brain_list`
Flags:
- `--brain-url`: OpenBrain BrainService URL. Defaults to `http://127.0.0.1:8000/v1/brain`.
- `--api-key`: OpenBrain API key. Defaults to `OPENBRAIN_API_KEY`.
The process logs to stderr only. Stdout is reserved for MCP framing.
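For reference, the `claude mcp add` invocation above produces a server entry equivalent to a config stanza like this (a sketch only — the exact file location and layout depend on the Claude Code version, and the key value here is a placeholder):

```json
{
  "mcpServers": {
    "openbrain": {
      "command": "openbrain-mcp",
      "args": [
        "--brain-url=http://127.0.0.1:8000/v1/brain",
        "--api-key=YOUR_OPENBRAIN_API_KEY"
      ]
    }
  }
}
```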

cmd/openbrain-mcp/main.go (new file)

@@ -0,0 +1,95 @@
// SPDX-License-Identifier: EUPL-1.2
// openbrain-mcp exposes the OpenBrain MCP tools over stdio for Claude Code.
package main
import (
"context"
"flag"
"os"
"os/signal"
"syscall"
"time"
core "dappco.re/go/core"
coreerr "dappco.re/go/log"
"dappco.re/go/mcp/pkg/mcp"
"dappco.re/go/mcp/pkg/mcp/brain"
)
const defaultBrainURL = "http://127.0.0.1:8000/v1/brain"
var (
brainURLFlag = flag.String("brain-url", defaultBrainURL, "OpenBrain BrainService URL")
apiKeyFlag = flag.String("api-key", "", "OpenBrain API key (defaults to OPENBRAIN_API_KEY)")
)
func main() {
if err := run(); err != nil {
coreerr.Error("openbrain-mcp failed", "err", err)
os.Exit(1)
}
}
func run() error {
flag.Parse()
if err := configureBrainEnv(*brainURLFlag, *apiKeyFlag); err != nil {
return err
}
ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
defer stop()
svc, err := mcp.New(mcp.Options{
Subsystems: []mcp.Subsystem{
brain.NewDirect(),
},
})
if err != nil {
return core.E("openbrain-mcp.run", "create MCP service", err)
}
defer shutdownService(svc)
if err := svc.ServeStdio(ctx); err != nil && !core.Is(err, context.Canceled) {
return core.E("openbrain-mcp.run", "serve stdio", err)
}
return nil
}
func configureBrainEnv(brainURL, apiKey string) error {
baseURL := directBrainBaseURL(brainURL)
if baseURL == "" {
baseURL = directBrainBaseURL(defaultBrainURL)
}
if err := os.Setenv("CORE_BRAIN_URL", baseURL); err != nil {
return core.E("openbrain-mcp.configure", "set CORE_BRAIN_URL", err)
}
key := core.Trim(apiKey)
if key == "" {
key = core.Trim(core.Env("OPENBRAIN_API_KEY"))
}
if key == "" {
return nil
}
if err := os.Setenv("CORE_BRAIN_KEY", key); err != nil {
return core.E("openbrain-mcp.configure", "set CORE_BRAIN_KEY", err)
}
return nil
}
func directBrainBaseURL(brainURL string) string {
baseURL := core.Trim(brainURL)
baseURL = core.TrimSuffix(baseURL, "/")
baseURL = core.TrimSuffix(baseURL, "/v1/brain")
return core.TrimSuffix(baseURL, "/")
}
func shutdownService(svc *mcp.Service) {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
if err := svc.Shutdown(ctx); err != nil {
coreerr.Error("openbrain-mcp shutdown failed", "err", err)
}
}


@@ -19,14 +19,18 @@ slice of `ToolRecord` metadata that powers the REST bridge.
 ```go
 svc, err := mcp.New(
-	mcp.WithWorkspaceRoot("/home/user/project"),
-	mcp.WithSubsystem(mcp.NewMLSubsystem(mlService)),
-	mcp.WithProcessService(processService),
-	mcp.WithWSHub(wsHub),
+	mcp.Options{
+		WorkspaceRoot:  "/home/user/project",
+		ProcessService: processService,
+		WSHub:          wsHub,
+		Subsystems: []mcp.Subsystem{
+			&MySubsystem{},
+		},
+	},
 )
 ```

-Options follow the functional-options pattern. `WithWorkspaceRoot` creates a
+Options are provided via the `mcp.Options` DTO. `WorkspaceRoot` creates a
 sandboxed `io.Medium` that confines all file operations to a single directory
 tree. Passing an empty string disables sandboxing (not recommended for
 untrusted clients).
@@ -71,7 +75,7 @@ The service registers the following tools at startup:
 | **ws** | `ws_start`, `ws_info` | `tools_ws.go` |

 Process and WebSocket tools are conditionally registered -- they require
-`WithProcessService` and `WithWSHub` respectively.
+`ProcessService` and `WSHub` in `Options` respectively.

 ### Subsystem interface
@@ -93,17 +97,22 @@ type SubsystemWithShutdown interface {
 }
 ```

-Two subsystems ship with this repo:
+Three subsystems ship with this repo:

-#### ML subsystem (`tools_ml.go`)
+#### Agentic subsystem (`pkg/mcp/agentic/`)

-`MLSubsystem` wraps a `go-ml.Service` and exposes five tools:
+`agentic` tools prepare workspaces, dispatch agents, and track execution
+status for issue-driven task workflows.

-- `ml_generate` -- text generation via any registered inference backend.
-- `ml_score` -- heuristic and semantic scoring of prompt/response pairs.
-- `ml_probe` -- capability probes (predefined prompts run through the model).
-- `ml_status` -- training and generation progress from InfluxDB.
-- `ml_backends` -- lists registered inference backends and their availability.
+#### Brain subsystem (`pkg/mcp/brain/`)
+
+Proxies OpenBrain knowledge-store operations to the Laravel backend via the IDE
+bridge. Four tools:
+
+- `brain_remember` -- store a memory (decision, observation, bug, etc.).
+- `brain_recall` -- semantic search across stored memories.
+- `brain_forget` -- permanently delete a memory.
+- `brain_list` -- list memories with filtering (no vector search).

 #### IDE subsystem (`pkg/mcp/ide/`)
@@ -120,16 +129,6 @@ The IDE bridge (`Bridge`) maintains a persistent WebSocket connection to the
 Laravel backend with exponential-backoff reconnection. Messages are forwarded
 from Laravel to a local `ws.Hub` for real-time streaming to the IDE frontend.

-#### Brain subsystem (`pkg/mcp/brain/`)
-
-Proxies OpenBrain knowledge-store operations to the Laravel backend via the
-IDE bridge. Four tools:
-
-- `brain_remember` -- store a memory (decision, observation, bug, etc.).
-- `brain_recall` -- semantic search across stored memories.
-- `brain_forget` -- permanently delete a memory.
-- `brain_list` -- list memories with filtering (no vector search).

 ### Transports

 The Go server supports three transports, all using line-delimited JSON-RPC:
@@ -141,8 +140,8 @@ The Go server supports three transports, all using line-delimited JSON-RPC:
 | **Unix** | `ServeUnix(ctx, path)` | caller-specified socket path |

 TCP binds to `127.0.0.1` by default when the host component is empty. Binding
-to `0.0.0.0` emits a security warning. Each accepted connection spawns a
-fresh `mcp.Server` instance with its own tool set.
+to `0.0.0.0` emits a security warning. Each accepted connection becomes a
+session on the shared `mcp.Server`.

 ### REST bridge
@@ -227,7 +226,7 @@ The `McpApiController` exposes five endpoints behind `mcp.auth` middleware:
 | `GET` | `/servers/{id}.json` | Server details with tool definitions |
 | `GET` | `/servers/{id}/tools` | List tools for a server |
 | `POST` | `/tools/call` | Execute a tool |
-| `GET` | `/resources/{uri}` | Read a resource (not yet implemented -- returns 501) |
+| `GET` | `/resources/{uri}` | Read a resource |

 `POST /tools/call` accepts:


@@ -93,7 +93,7 @@ Key test files:
 | `transport_tcp_test.go` | TCP transport, loopback default, `0.0.0.0` warning |
 | `transport_e2e_test.go` | End-to-end TCP client/server round-trip |
 | `tools_metrics_test.go` | Duration parsing, metrics record/query |
-| `tools_ml_test.go` | ML subsystem tool registration |
+| `brain/brain_test.go` | Brain subsystem registration and bridge-nil handling |
 | `tools_process_test.go` | Process start/stop/kill/list/output/input |
 | `tools_process_ci_test.go` | CI-safe process tests (no external binaries) |
 | `tools_rag_test.go` | RAG query/ingest/collections |
@@ -170,17 +170,17 @@ core/mcp/
 |   +-- mcp/
 |       +-- mcp.go              # Service, file tools, Run()
 |       +-- registry.go         # ToolRecord, addToolRecorded, schema extraction
-|       +-- subsystem.go        # Subsystem interface, WithSubsystem option
+|       +-- subsystem.go        # Subsystem interface, Options-based registration
 |       +-- bridge.go           # BridgeToAPI (MCP-to-REST adapter)
 |       +-- transport_stdio.go
 |       +-- transport_tcp.go
 |       +-- transport_unix.go
 |       +-- tools_metrics.go    # Metrics record/query
-|       +-- tools_ml.go         # MLSubsystem (generate, score, probe, status, backends)
 |       +-- tools_process.go    # Process management tools
 |       +-- tools_rag.go        # RAG query/ingest/collections
 |       +-- tools_webview.go    # Chrome DevTools automation
 |       +-- tools_ws.go         # WebSocket server tools
+|       +-- agentic/
 |       +-- brain/
 |       |   +-- brain.go        # Brain subsystem
 |       |   +-- tools.go        # remember/recall/forget/list tools
@@ -321,7 +321,11 @@ func (s *Subsystem) RegisterTools(server *mcp.Server) {
 2. Register when creating the service:

 ```go
-mcp.New(mcp.WithSubsystem(&mysubsystem.Subsystem{}))
+mcp.New(mcp.Options{
+	Subsystems: []mcp.Subsystem{
+		&mysubsystem.Subsystem{},
+	},
+})
 ```

 ## Adding a new PHP tool


@@ -0,0 +1,50 @@
# Migrating to `Options{}` for MCP Service Construction
## Before (functional options)
```go
svc, err := mcp.New(
mcp.WithWorkspaceRoot("/path/to/project"),
mcp.WithProcessService(processSvc),
mcp.WithWSHub(hub),
mcp.WithSubsystem(brainSub),
mcp.WithSubsystem(ideSub),
)
```
## After (`Options{}` DTO)
```go
svc, err := mcp.New(mcp.Options{
WorkspaceRoot: "/path/to/project",
ProcessService: processSvc,
WSHub: hub,
Subsystems: []mcp.Subsystem{
brainSub,
ideSub,
},
})
```
## Notification helpers
```go
// Broadcast to all MCP sessions.
svc.SendNotificationToAllClients(ctx, "info", "build", map[string]any{
"event": "build.complete",
"repo": "go-io",
})
// Broadcast a named channel event.
svc.ChannelSend(ctx, "build.complete", map[string]any{
"repo": "go-io",
"status": "passed",
})
// Send to one session.
for session := range svc.Sessions() {
svc.ChannelSendToSession(ctx, session, "agent.status", map[string]any{
"state": "running",
})
}
```


@@ -0,0 +1,772 @@
# MCP SDK & AX Convention Migration Plan
> **For agentic workers:** REQUIRED: Use superpowers:subagent-driven-development (if subagents available) or superpowers:executing-plans to implement this plan. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Align the MCP service layer with CoreGO AX conventions (Options/Result/Service DTOs), add server→client notification broadcasting via `SendNotificationToAllClients()`, and register the `claude/channel` experimental capability for pushing events into Claude Code sessions.
**Architecture:** Refactor `mcp.Service` from functional options (`Option func(*Service) error`) to an `Options{}` struct. Add notification broadcasting by iterating the official SDK's `Server.Sessions()` (refactoring TCP/Unix transports to use `Server.Connect()` so all sessions are visible). Register `claude/channel` in `ServerCapabilities.Experimental` so clients can discover push-event support.
**Tech Stack:** Go 1.26, `github.com/modelcontextprotocol/go-sdk` v1.4.1, `dappco.re/go/core` v0.4.7
---
## SDK Evaluation
**Current SDK:** `github.com/modelcontextprotocol/go-sdk v1.4.1` (official MCP Go SDK)
**Alternative evaluated:** `github.com/mark3labs/mcp-go` — community SDK with built-in `SendNotificationToAllClients()` and `SendNotificationToClient()`.
| Criteria | Official SDK | mark3labs/mcp-go |
|----------|-------------|------------------|
| Multi-session support | `Server.Sessions()` iterator, `Server.Connect()` | `SendNotificationToAllClients()` built-in |
| Tool registration | Generic `AddTool[In, Out]()` — matches existing pattern | `AddTool(NewTool(), handler)` — would require rewrite |
| Experimental capabilities | `ServerCapabilities.Experimental map[string]any` | Same |
| Transport support | Stdio, SSE, StreamableHTTP, InMemory | Stdio, SSE, StreamableHTTP |
| Options pattern | `*ServerOptions` struct — aligns with AX DTOs | Functional options — conflicts with AX migration |
| Handler signatures | `func(ctx, *CallToolRequest, In) (*CallToolResult, Out, error)` | `func(ctx, CallToolRequest) (*CallToolResult, error)` |
**Decision:** Stay on the official SDK. It already uses struct-based options (closer to AX), preserves our generic `addToolRecorded[In, Out]()` pattern, and supports multi-session via `Server.Sessions()`. Implement `SendNotificationToAllClients()` as a thin wrapper.
## Existing Infrastructure
Already built:
- `Service` struct wrapping `*mcp.Server` with functional options (`mcp.go`)
- Generic `addToolRecorded[In, Out]()` for tool registration + REST bridge (`registry.go`)
- `Subsystem` / `SubsystemWithShutdown` interfaces (`subsystem.go`)
- 4 transports: stdio, TCP, Unix, HTTP (`transport_*.go`)
- `BridgeToAPI` REST bridge (`bridge.go`)
- 7 tool groups: files, language, metrics, process, rag, webview, ws
- 3 subsystems: brain, ide, agentic
- Import migration to `dappco.re/go/core` already committed (4c6c9d7)
## Consumer Impact
2 consumers import `forge.lthn.ai/core/mcp`: **agent**, **ide**.
Both call `mcp.New(...)` with functional options and `mcp.WithSubsystem(...)`. Both must be updated after Phase 1.
## File Structure
| File | Action | Purpose |
|------|--------|---------|
| `pkg/mcp/mcp.go` | Modify | Replace `Option` func type with `Options{}` struct; update `New()` |
| `pkg/mcp/subsystem.go` | Modify | Remove `WithSubsystem` func; subsystems move into `Options.Subsystems` |
| `pkg/mcp/notify.go` | Create | `SendNotificationToAllClients()`, `ChannelSend()`, channel helpers |
| `pkg/mcp/registry.go` | Modify | Add usage-example comments |
| `pkg/mcp/bridge.go` | Modify | Minor: usage-example comments |
| `pkg/mcp/transport_stdio.go` | Modify | Usage-example comments |
| `pkg/mcp/transport_tcp.go` | Modify | Usage-example comments |
| `pkg/mcp/transport_unix.go` | Modify | Usage-example comments |
| `pkg/mcp/transport_http.go` | Modify | Usage-example comments |
| `pkg/mcp/tools_metrics.go` | Modify | Usage-example comments on Input/Output types |
| `pkg/mcp/tools_process.go` | Modify | Usage-example comments on Input/Output types |
| `pkg/mcp/tools_rag.go` | Modify | Usage-example comments on Input/Output types |
| `pkg/mcp/tools_webview.go` | Modify | Usage-example comments on Input/Output types |
| `pkg/mcp/tools_ws.go` | Modify | Usage-example comments on Input/Output types |
| `pkg/mcp/mcp_test.go` | Modify | Update tests for `Options{}` constructor |
| `pkg/mcp/subsystem_test.go` | Modify | Update tests for `Options.Subsystems` |
| `pkg/mcp/notify_test.go` | Create | Tests for notification broadcasting |
---
## Phase 1: Service Options{} Refactoring
Replace the functional options pattern with an `Options{}` struct. This is the breaking change — consumers must update their `mcp.New()` calls.
**Files:**
- Modify: `pkg/mcp/mcp.go`
- Modify: `pkg/mcp/subsystem.go`
- Modify: `pkg/mcp/mcp_test.go`
- Modify: `pkg/mcp/subsystem_test.go`
- [ ] **Step 1: Define Options struct and update New()**
Replace the current functional option pattern:
```go
// BEFORE:
type Option func(*Service) error
func WithWorkspaceRoot(root string) Option { ... }
func WithProcessService(ps *process.Service) Option { ... }
func WithWSHub(hub *ws.Hub) Option { ... }
func WithSubsystem(sub Subsystem) Option { ... }
func New(opts ...Option) (*Service, error) { ... }
```
With an `Options{}` struct:
```go
// Options configures a Service.
//
// svc, err := mcp.New(mcp.Options{
// WorkspaceRoot: "/path/to/project",
// ProcessService: ps,
// WSHub: hub,
// Subsystems: []Subsystem{brain, ide},
// })
type Options struct {
WorkspaceRoot string // Restrict file ops to this directory (empty = cwd)
Unrestricted bool // Disable sandboxing entirely (not recommended)
ProcessService *process.Service // Optional process management
WSHub *ws.Hub // Optional WebSocket hub for real-time streaming
Subsystems []Subsystem // Additional tool groups registered at startup
}
// New creates a new MCP service with file operations.
//
// svc, err := mcp.New(mcp.Options{WorkspaceRoot: "."})
func New(opts Options) (*Service, error) {
impl := &mcp.Implementation{
Name: "core-cli",
Version: "0.1.0",
}
server := mcp.NewServer(impl, &mcp.ServerOptions{
Capabilities: &mcp.ServerCapabilities{
Tools: &mcp.ToolCapabilities{ListChanged: true},
},
})
s := &Service{
server: server,
processService: opts.ProcessService,
wsHub: opts.WSHub,
subsystems: opts.Subsystems,
logger: log.Default(),
}
// Workspace root: unrestricted, explicit root, or default to cwd
if opts.Unrestricted {
s.workspaceRoot = ""
s.medium = io.Local
} else {
root := opts.WorkspaceRoot
if root == "" {
cwd, err := os.Getwd()
if err != nil {
return nil, log.E("mcp.New", "failed to get working directory", err)
}
root = cwd
}
abs, err := filepath.Abs(root)
if err != nil {
return nil, log.E("mcp.New", "invalid workspace root", err)
}
m, merr := io.NewSandboxed(abs)
if merr != nil {
return nil, log.E("mcp.New", "failed to create workspace medium", merr)
}
s.workspaceRoot = abs
s.medium = m
}
s.registerTools(s.server)
for _, sub := range s.subsystems {
sub.RegisterTools(s.server)
}
return s, nil
}
```
- [ ] **Step 2: Remove functional option functions**
Delete from `mcp.go`:
- `type Option func(*Service) error`
- `func WithWorkspaceRoot(root string) Option`
- `func WithProcessService(ps *process.Service) Option`
- `func WithWSHub(hub *ws.Hub) Option`
Delete from `subsystem.go`:
- `func WithSubsystem(sub Subsystem) Option`
- [ ] **Step 3: Update tests**
Find all test calls to `New(...)` in `mcp_test.go`, `subsystem_test.go`, `integration_test.go`, `transport_e2e_test.go`, and other `_test.go` files. All tests use `package mcp` (internal). Replace:
```go
// BEFORE:
svc, err := New(WithWorkspaceRoot(dir))
svc, err := New(WithSubsystem(&fakeSub{}))
// AFTER:
svc, err := New(Options{WorkspaceRoot: dir})
svc, err := New(Options{Subsystems: []Subsystem{&fakeSub{}}})
```
- [ ] **Step 4: Verify compilation**
```bash
go vet ./pkg/mcp/...
go build ./pkg/mcp/...
go test ./pkg/mcp/...
```
- [ ] **Step 5: Commit**
```bash
git add pkg/mcp/mcp.go pkg/mcp/subsystem.go pkg/mcp/*_test.go
git commit -m "refactor(mcp): replace functional options with Options{} struct
Aligns with CoreGO AX convention: Options{} DTOs instead of
functional option closures. Breaking change for consumers
(agent, ide) — they must update their mcp.New() calls.
Co-Authored-By: Virgil <virgil@lethean.io>"
```
---
## Phase 2: Notification Support + claude/channel Capability
Add server→client notification broadcasting and register the `claude/channel` experimental capability.
**Files:**
- Create: `pkg/mcp/notify.go`
- Create: `pkg/mcp/notify_test.go`
- Modify: `pkg/mcp/mcp.go` (register experimental capability in `New()`)
- Modify: `pkg/mcp/transport_tcp.go` (use `s.server.Connect()` instead of per-connection servers)
- Modify: `pkg/mcp/transport_unix.go` (same as TCP)
**Important: Transport-level limitation.** The TCP and Unix transports currently create a **new `mcp.Server` per connection** in `handleConnection()`. Sessions on those per-connection servers are invisible to `s.server.Sessions()`. Notifications therefore only reach stdio and HTTP (StreamableHTTP) clients out of the box. To support TCP/Unix notifications, Phase 2 also refactors TCP/Unix to use `s.server.Connect()` instead of creating independent servers — this registers each connection's session on the shared server instance.
- [ ] **Step 1: Refactor TCP/Unix to use shared server sessions**
In `transport_tcp.go` and `transport_unix.go`, replace the per-connection `mcp.NewServer()` call with `s.server.Connect()`:
```go
// BEFORE (transport_tcp.go handleConnection):
server := mcp.NewServer(impl, nil)
s.registerTools(server)
for _, sub := range s.subsystems { sub.RegisterTools(server) }
_ = server.Run(ctx, transport)
// AFTER:
session, err := s.server.Connect(ctx, transport, nil)
if err != nil {
s.logger.Debug("tcp: connect failed", "error", err)
return
}
_ = session.Wait() // blocks until the session closes (Wait returns error, not a channel)
```
This ensures every TCP/Unix connection registers its session on the shared `s.server`, making it visible to `Sessions()` and `SendNotificationToAllClients`.
- [ ] **Step 2: Create notify.go with notification methods**
```go
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"context"
"iter"
"forge.lthn.ai/core/go-log"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
// SendNotificationToAllClients broadcasts a log-level notification to every
// connected MCP session (stdio, HTTP, TCP, and Unix).
// Errors on individual sessions are logged but do not stop the broadcast.
//
// s.SendNotificationToAllClients(ctx, "info", "build complete", map[string]any{"duration": "3.2s"})
func (s *Service) SendNotificationToAllClients(ctx context.Context, level mcp.LoggingLevel, logger string, data any) {
for session := range s.server.Sessions() {
if err := session.Log(ctx, &mcp.LoggingMessageParams{
Level: level,
Logger: logger,
Data: data,
}); err != nil {
s.logger.Debug("notify: failed to send to session", "session", session.ID(), "error", err)
}
}
}
// ChannelSend pushes a channel event to all connected clients.
// This uses the claude/channel experimental capability.
// Channel names follow the convention "subsystem.event" (e.g. "build.complete", "agent.status").
//
// s.ChannelSend(ctx, "build.complete", map[string]any{"repo": "go-io", "status": "passed"})
func (s *Service) ChannelSend(ctx context.Context, channel string, data any) {
payload := map[string]any{
"channel": channel,
"data": data,
}
s.SendNotificationToAllClients(ctx, "info", "channel", payload)
}
// ChannelSendToSession pushes a channel event to a specific session.
//
// s.ChannelSendToSession(ctx, session, "agent.progress", progressData)
func (s *Service) ChannelSendToSession(ctx context.Context, session *mcp.ServerSession, channel string, data any) {
payload := map[string]any{
"channel": channel,
"data": data,
}
if err := session.Log(ctx, &mcp.LoggingMessageParams{
Level: "info",
Logger: "channel",
Data: payload,
}); err != nil {
s.logger.Debug("channel: failed to send to session", "session", session.ID(), "channel", channel, "error", err)
}
}
// Sessions returns an iterator over all connected MCP sessions.
// Useful for subsystems that need to send targeted notifications.
//
// for session := range s.Sessions() {
// s.ChannelSendToSession(ctx, session, "status", data)
// }
func (s *Service) Sessions() iter.Seq[*mcp.ServerSession] {
return s.server.Sessions()
}
// channelCapability returns the experimental capability descriptor
// for claude/channel, registered during New().
func channelCapability() map[string]any {
return map[string]any{
"claude/channel": map[string]any{
"version": "1",
"description": "Push events into client sessions via named channels",
"channels": []string{
"build.complete",
"build.failed",
"agent.status",
"agent.blocked",
"agent.complete",
"brain.recall.complete",
"process.exit",
"test.result",
},
},
}
}
```
- [ ] **Step 3: Register experimental capability in New()**
Update `New()` in `mcp.go` to pass capabilities to `mcp.NewServer`:
```go
server := mcp.NewServer(impl, &mcp.ServerOptions{
Capabilities: &mcp.ServerCapabilities{
Tools: &mcp.ToolCapabilities{ListChanged: true},
Logging: &mcp.LoggingCapabilities{},
Experimental: channelCapability(),
},
})
```
- [ ] **Step 4: Create notify_test.go**
Uses `package mcp` (internal tests) consistent with all existing test files in this package.
```go
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"context"
"testing"
"github.com/stretchr/testify/assert"
)
func TestSendNotificationToAllClients_Good(t *testing.T) {
svc, err := New(Options{})
assert.NoError(t, err)
// With no connected sessions, should not panic
ctx := context.Background()
svc.SendNotificationToAllClients(ctx, "info", "test", map[string]any{"key": "value"})
}
func TestChannelSend_Good(t *testing.T) {
svc, err := New(Options{})
assert.NoError(t, err)
ctx := context.Background()
svc.ChannelSend(ctx, "build.complete", map[string]any{"repo": "go-io"})
}
func TestChannelCapability_Good(t *testing.T) {
// Verify the capability struct is well-formed
svc, err := New(Options{})
assert.NoError(t, err)
assert.NotNil(t, svc.Server())
}
```
- [ ] **Step 5: Verify compilation and tests**
```bash
go vet ./pkg/mcp/...
go test ./pkg/mcp/...
```
- [ ] **Step 6: Commit**
```bash
git add pkg/mcp/notify.go pkg/mcp/notify_test.go pkg/mcp/mcp.go pkg/mcp/transport_tcp.go pkg/mcp/transport_unix.go
git commit -m "feat(mcp): add notification broadcasting + claude/channel capability
New methods:
- SendNotificationToAllClients: broadcasts to all connected MCP sessions
- ChannelSend: push named channel events (build.complete, agent.status, etc.)
- ChannelSendToSession: push to a specific session
- Sessions: iterator over connected sessions for subsystem use
Refactors TCP/Unix transports to use Server.Connect() instead of
creating per-connection servers, so all sessions are visible to
the notification broadcaster.
Registers claude/channel as an experimental MCP capability so clients
(Claude Code, IDEs) can discover and subscribe to push events.
Co-Authored-By: Virgil <virgil@lethean.io>"
```
---
## Phase 3: Usage-Example Comments + Naming
Add usage-example comments to all public types and functions. This is the CoreGO convention: comments show how to call the thing, not just what it does.
**Files:**
- Modify: `pkg/mcp/mcp.go` (Input/Output types)
- Modify: `pkg/mcp/registry.go`
- Modify: `pkg/mcp/bridge.go`
- Modify: `pkg/mcp/tools_metrics.go`
- Modify: `pkg/mcp/tools_process.go`
- Modify: `pkg/mcp/tools_rag.go`
- Modify: `pkg/mcp/tools_webview.go`
- Modify: `pkg/mcp/tools_ws.go`
- Modify: `pkg/mcp/transport_stdio.go`
- Modify: `pkg/mcp/transport_tcp.go`
- Modify: `pkg/mcp/transport_unix.go`
- Modify: `pkg/mcp/transport_http.go`
- [ ] **Step 1: Update Input/Output type comments in mcp.go**
Add inline usage examples to field comments:
```go
// ReadFileInput contains parameters for reading a file.
//
// input := ReadFileInput{Path: "src/main.go"}
type ReadFileInput struct {
Path string `json:"path"` // e.g. "src/main.go"
}
// ReadFileOutput contains the result of reading a file.
type ReadFileOutput struct {
Content string `json:"content"` // File contents as string
Language string `json:"language"` // e.g. "go", "typescript"
Path string `json:"path"` // Echoed input path
}
```
Apply the same pattern to all Input/Output types in `mcp.go`:
- `WriteFileInput/Output`
- `ListDirectoryInput/Output`, `DirectoryEntry`
- `CreateDirectoryInput/Output`
- `DeleteFileInput/Output`
- `RenameFileInput/Output`
- `FileExistsInput/Output`
- `DetectLanguageInput/Output`
- `GetSupportedLanguagesInput/Output`
- `EditDiffInput/Output`
- [ ] **Step 2: Update tool file comments**
For each tool file (`tools_metrics.go`, `tools_process.go`, `tools_rag.go`, `tools_webview.go`, `tools_ws.go`), add usage-example comments to:
- Input/Output struct definitions
- Handler function doc comments
- Registration function doc comments
Example pattern:
```go
// ProcessStartInput contains parameters for starting a new process.
//
// input := ProcessStartInput{Command: "go", Args: []string{"test", "./..."}}
type ProcessStartInput struct {
Command string `json:"command"` // e.g. "go", "npm"
Args []string `json:"args,omitempty"` // e.g. ["test", "./..."]
Dir string `json:"dir,omitempty"` // Working directory, e.g. "/path/to/project"
Env []string `json:"env,omitempty"` // e.g. ["DEBUG=true", "PORT=8080"]
}
```
- [ ] **Step 3: Update registry.go comments**
```go
// addToolRecorded registers a tool with the MCP server AND records its metadata
// for the REST bridge. The generic type parameters capture In/Out for schema extraction.
//
// addToolRecorded(s, server, "files", &mcp.Tool{
// Name: "file_read",
// Description: "Read the contents of a file",
// }, s.readFile)
func addToolRecorded[In, Out any](...) { ... }
```
- [ ] **Step 4: Update transport comments**
```go
// ServeStdio starts the MCP server on stdin/stdout.
// This is the default transport for IDE integration.
//
// err := svc.ServeStdio(ctx)
func (s *Service) ServeStdio(ctx context.Context) error { ... }
// ServeTCP starts the MCP server on a TCP address.
// Each connection gets its own MCP session.
//
// err := svc.ServeTCP(ctx, "127.0.0.1:9100")
func (s *Service) ServeTCP(ctx context.Context, addr string) error { ... }
// ServeUnix starts the MCP server on a Unix domain socket.
//
// err := svc.ServeUnix(ctx, "/tmp/core-mcp.sock")
func (s *Service) ServeUnix(ctx context.Context, socketPath string) error { ... }
// ServeHTTP starts the MCP server with Streamable HTTP transport.
// Supports optional Bearer token auth via MCP_AUTH_TOKEN env var.
//
// err := svc.ServeHTTP(ctx, "127.0.0.1:9101")
func (s *Service) ServeHTTP(ctx context.Context, addr string) error { ... }
```
- [ ] **Step 5: Verify compilation**
```bash
go vet ./pkg/mcp/...
go build ./pkg/mcp/...
```
- [ ] **Step 6: Commit**
```bash
git add pkg/mcp/*.go
git commit -m "docs(mcp): add usage-example comments to all public types
CoreGO convention: comments show how to call the thing, not just
what it does. Adds inline examples to Input/Output structs, handler
functions, transport methods, and registry functions.
Co-Authored-By: Virgil <virgil@lethean.io>"
```
---
## Phase 4: Wire Notifications into Subsystems
Connect the notification system to existing subsystems so they emit channel events.
**Files:**
- Modify: `pkg/mcp/subsystem.go` (add Notifier interface, SubsystemWithNotifier)
- Modify: `pkg/mcp/mcp.go` (call SetNotifier in New())
- Modify: `pkg/mcp/tools_process.go` (emit process lifecycle events)
- Modify: `pkg/mcp/brain/brain.go` (accept Notifier, emit brain events from bridge callback)
- [ ] **Step 1: Define Notifier interface to avoid circular imports**
Sub-packages (`brain/`, `ide/`) cannot import `pkg/mcp` without creating a cycle. Define a small `Notifier` interface that sub-packages can accept without importing the parent package:
```go
// Notifier pushes events to connected MCP sessions.
// Implemented by *Service. Sub-packages accept this interface
// to avoid circular imports.
//
// notifier.ChannelSend(ctx, "build.complete", data)
type Notifier interface {
ChannelSend(ctx context.Context, channel string, data any)
}
```
Add an optional `SubsystemWithNotifier` interface:
```go
// SubsystemWithNotifier extends Subsystem for those that emit channel events.
// SetNotifier is called after New() before any tool calls.
type SubsystemWithNotifier interface {
Subsystem
SetNotifier(n Notifier)
}
```
In `New()`, after creating the service:
```go
for _, sub := range s.subsystems {
sub.RegisterTools(s.server)
if sn, ok := sub.(SubsystemWithNotifier); ok {
sn.SetNotifier(s)
}
}
```
- [ ] **Step 2: Emit process lifecycle events**
Process tools live in `pkg/mcp/` (same package as Service), so they can call `s.ChannelSend` directly:
```go
// After successful process start:
s.ChannelSend(ctx, "process.start", map[string]any{
"id": output.ID,
"command": input.Command,
})
// In the process exit callback (if wired via ProcessEventCallback):
s.ChannelSend(ctx, "process.exit", map[string]any{
"id": id,
"exitCode": code,
})
```
- [ ] **Step 3: Emit brain events in brain subsystem**
The brain subsystem's recall handler sends requests to the Laravel bridge asynchronously — the returned `output` does not contain real results (they arrive via WebSocket later). Instead, emit the notification from the bridge callback where results actually arrive.
In `pkg/mcp/brain/brain.go`, add a `Notifier` field and `SetNotifier` method:
```go
type Subsystem struct {
bridge *ide.Bridge
notifier Notifier // set by SubsystemWithNotifier
}
// Notifier pushes events to MCP sessions (matches pkg/mcp.Notifier).
type Notifier interface {
ChannelSend(ctx context.Context, channel string, data any)
}
func (s *Subsystem) SetNotifier(n Notifier) {
s.notifier = n
}
```
Then in the bridge message handler (where recall results are received from Laravel), emit the notification with the actual result count:
```go
// In the bridge callback that processes recall results:
if s.notifier != nil {
s.notifier.ChannelSend(ctx, "brain.recall.complete", map[string]any{
"query": query,
"count": len(memories),
})
}
```
- [ ] **Step 4: Verify compilation and tests**
```bash
go vet ./pkg/mcp/...
go test ./pkg/mcp/...
```
- [ ] **Step 5: Commit**
```bash
git add pkg/mcp/subsystem.go pkg/mcp/mcp.go pkg/mcp/tools_process.go pkg/mcp/brain/brain.go
git commit -m "feat(mcp): wire channel notifications into process and brain subsystems
Adds Notifier interface to avoid circular imports between pkg/mcp
and sub-packages. Subsystems that implement SubsystemWithNotifier
receive a Notifier reference. Process tools emit process.start and
process.exit channel events. Brain subsystem emits
brain.recall.complete from the bridge callback (not the handler
return, which is async).
Co-Authored-By: Virgil <virgil@lethean.io>"
```
---
## Phase 5: Consumer Migration Guide
Document the breaking changes for the 2 consumers (agent, ide modules).
**Files:**
- Create: `docs/migration-guide-options.md`
- [ ] **Step 1: Write migration guide**
```markdown
# Migrating to Options{} Constructor
## Before (functional options)
svc, err := mcp.New(
mcp.WithWorkspaceRoot("/path"),
mcp.WithProcessService(ps),
mcp.WithWSHub(hub),
mcp.WithSubsystem(brainSub),
mcp.WithSubsystem(ideSub),
)
## After (Options struct)
svc, err := mcp.New(mcp.Options{
WorkspaceRoot: "/path",
ProcessService: ps,
WSHub: hub,
Subsystems: []mcp.Subsystem{brainSub, ideSub},
})
## New notification API
// Broadcast to all sessions (LoggingLevel is a string type)
svc.SendNotificationToAllClients(ctx, "info", "build", data)
// Push a named channel event
svc.ChannelSend(ctx, "build.complete", data)
// Push to a specific session
for session := range svc.Sessions() {
svc.ChannelSendToSession(ctx, session, "agent.status", data)
}
```
- [ ] **Step 2: Commit**
```bash
git add docs/migration-guide-options.md
git commit -m "docs(mcp): add migration guide for Options{} constructor
Documents breaking changes from functional options to Options{}
struct for consumers (agent, ide modules). Includes notification
API examples.
Co-Authored-By: Virgil <virgil@lethean.io>"
```
---
## Summary
**Total: 5 phases, 24 steps**
| Phase | Scope | Breaking? |
|-------|-------|-----------|
| 1 | `Options{}` struct replaces functional options | Yes — 2 consumers |
| 2 | Notification broadcasting + claude/channel | No — new API |
| 3 | Usage-example comments | No — docs only |
| 4 | Wire notifications into subsystems | No — additive |
| 5 | Consumer migration guide | No — docs only |
After completion:
- `mcp.New(mcp.Options{...})` replaces `mcp.New(mcp.WithXxx(...))`
- `svc.SendNotificationToAllClients(ctx, level, logger, data)` broadcasts to all sessions
- `svc.ChannelSend(ctx, "build.complete", data)` pushes named events
- `claude/channel` experimental capability advertised during MCP initialisation
- Clients (Claude Code, IDEs) can discover push-event support and receive real-time updates
- All public types and functions have usage-example comments


@@ -0,0 +1,141 @@
# Security Vulnerabilities — Accepted Findings + Operator Mitigations
This document records security findings (govulncheck, etc.) that have been
manually reviewed and **accepted with documented rationale** rather than
patched. Each entry names the CVE, what makes it not-applicable to our use
case, and any operator-side mitigations required to keep that not-applicable
status valid.
Audit history:
- Mantis #323 — 9 ollama CVEs reviewed and documented (2026-04-25)
---
## github.com/ollama/ollama (indirect via go-rag)
**Status as of 2026-04-25:** all 9 CVEs filed in Mantis #323 are **UNFIXED
upstream** per [pkg.go.dev/vuln](https://pkg.go.dev/vuln/). Pin-bumping does
not resolve any of them. We are on `v0.18.1` indirect; ollama upstream is at
`v0.21.2` (2026-04-23).
**Our usage scope:** the entire workspace imports `github.com/ollama/ollama/api`
from exactly ONE file (`go-rag/ollama.go`). The surface in use is **3 symbols
only**:
- `api.NewClient(baseURL, *http.Client)` — constructor
- `api.Client` — struct value (held as a field by `OllamaClient`)
- `api.EmbedRequest` — embedding-request DTO
**We are a CLIENT** of someone else's Ollama server. We do NOT host an Ollama
server. Most CVEs in the list live in server-side code paths that govulncheck
flags merely because the package is imported; our actual call sites never
traverse those paths.
### CVE-by-CVE reachability assessment
| CVE | Description | Reachable from our call graph? | Action |
|---|---|---|---|
| GO-2025-3548 (CVE-2024-12886) | DoS via crafted GZIP | NO — server-side parser | Accept |
| GO-2025-3557 (CVE-2025-0315) | Resource alloc without limits | NO — server-side dispatcher | Accept |
| GO-2025-3558 | Out-of-bounds read | NO — server-side inference | Accept |
| GO-2025-3559 | Divide by zero | NO — server-side inference | Accept |
| GO-2025-3582 | Null pointer deref DoS | NO — server-side handler | Accept |
| GO-2025-3689 | Divide by zero | NO — server-side inference | Accept |
| GO-2025-3695 | Server DoS | NO — server-side handler | Accept |
| GO-2025-3824 (CVE-2025-51471) | Cross-domain token exposure | **CONDITIONAL** — see below | Watch |
| GO-2025-4251 (CVE-2025-63389) | Missing auth on model-mgmt | **OPERATOR-SIDE** — see below | Runbook |
### GO-2025-3824 — token-exposure watch flag
This CVE concerns auth tokens leaking across domain boundaries when Ollama
clients pass authentication. Currently `NewOllamaClient(cfg)` constructs over
plain HTTP/HTTPS without auth headers — the embedding client connects to a
trusted local Ollama instance per the deployment runbook below.
**If we ever add auth-token plumbing to the Ollama client** (e.g. for hosted
Ollama services), re-evaluate this CVE. The reachability flips from NO to YES
the moment we set an Authorization header on `api.NewClient`.
### GO-2025-4251 — operator-side mitigation required
This CVE is a missing authentication / authorization gap on Ollama's
model-management endpoints. The vulnerability is in the **Ollama server**,
not our client code. Our client doesn't expose model-management calls;
operators do via running an Ollama server.
**Operator mitigation (REQUIRED):** see "Ollama deployment" section below.
Operators MUST front their Ollama instance with network-level access controls
or an authentication proxy. This is also Ollama upstream's own recommendation
in the advisory.
### Fixed-version watch
If any of the 9 CVEs gets a fixed version released, re-evaluate:
- Bump `go-rag/go.mod` require for `github.com/ollama/ollama` to the fixed version
- Re-run govulncheck and prune entries from this document accordingly
---
## Ollama deployment — operator runbook
The Ollama instance the agent connects to runs OUTSIDE of our application
boundary. Operators are responsible for these mitigations:
### 1. Network-level isolation (mandatory)
Bind the Ollama server to a private interface or front it with a reverse proxy:
```bash
# OPTION A — localhost-only binding (single-host deployments)
OLLAMA_HOST=127.0.0.1:11434 ollama serve
# OPTION B — private network only (multi-host fleet)
# Bind to the wireguard / tailscale / private-VLAN interface, not 0.0.0.0
OLLAMA_HOST=10.42.0.5:11434 ollama serve
```
**Never** expose Ollama directly to the public internet. GO-2025-4251 makes
model-management operations possible without auth.
### 2. Reverse proxy with auth (recommended for shared deployments)
If multiple agents share an Ollama server, front it with nginx/caddy/traefik
adding HTTP Basic Auth or an authentication proxy (oauth2-proxy, authentik):
```nginx
location /ollama/ {
auth_basic "Ollama API";
auth_basic_user_file /etc/nginx/ollama.htpasswd;
proxy_pass http://10.42.0.5:11434/;
}
```
Configure the agent's `OllamaConfig.Endpoint` to point at the reverse proxy
URL, and add an `Authorization` header to the http.Client passed to
`api.NewClient`. (When that change lands, re-evaluate GO-2025-3824 per
the watch-flag note above.)
### 3. CI-side govulncheck filter
Until upstream Ollama ships fixes for any of the 9 CVEs, CI should suppress
just these specific findings (not blanket-suppress all govulncheck output):
```bash
govulncheck ./... 2>&1 | grep -vE 'GO-2025-(3548|3557|3558|3559|3582|3689|3695|3824|4251)\b'
```
When a CVE gets a fix and we bump past it, drop that CVE ID from the grep
filter so future regressions surface cleanly.
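Note that the pipeline's exit status comes from `grep`, not `govulncheck`, so a brand-new finding that survives the filter would not by itself fail CI. One way to keep the gate strict is a small filter function that drops only the accepted IDs and fails on anything else; this is a sketch under that assumption (the `filter_accepted` name and regexes are illustrative, not existing tooling):

```shell
#!/bin/sh
# Accepted CVE IDs from this document; keep in sync with the table above.
ACCEPTED='GO-2025-(3548|3557|3558|3559|3582|3689|3695|3824|4251)\b'

# filter_accepted: read scanner output on stdin, drop accepted findings,
# print and fail (exit 1) if any other GO- finding remains.
filter_accepted() {
	rest=$(grep -vE "$ACCEPTED" | grep -E 'GO-[0-9]{4}-[0-9]+' || true)
	if [ -n "$rest" ]; then
		printf '%s\n' "$rest"
		return 1
	fi
	return 0
}

# usage: govulncheck ./... 2>&1 | filter_accepted
```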
---
## How to add to this document
When a new accepted finding lands:
1. Open a new H2 section named for the dependency
2. Document the reachability + rationale per CVE in a table
3. Add operator-side mitigations if any
4. Update the audit-history bullet at the top with a Mantis ticket reference
**Do NOT add findings here without a Mantis ticket.** Every accepted finding
must have a tracker entry so the rationale is auditable + reviewable.

go.mod

@@ -1,29 +1,28 @@
-module forge.lthn.ai/core/mcp
+module dappco.re/go/mcp
 go 1.26.0
 require (
-	dappco.re/go/core v0.4.7
+	dappco.re/go/core v0.8.0-alpha.1
-	forge.lthn.ai/core/api v0.1.5
+	dappco.re/go/ai v0.8.0-alpha.1
-	forge.lthn.ai/core/cli v0.3.7
+	dappco.re/go/api v0.8.0-alpha.1
-	forge.lthn.ai/core/go-ai v0.1.12
+	dappco.re/go/cli v0.8.0-alpha.1
-	forge.lthn.ai/core/go-io v0.1.7
+	dappco.re/go/io v0.8.0-alpha.1
-	forge.lthn.ai/core/go-log v0.0.4
+	dappco.re/go/log v0.8.0-alpha.1
-	forge.lthn.ai/core/go-process v0.2.9
+	dappco.re/go/process v0.8.0-alpha.1
-	forge.lthn.ai/core/go-rag v0.1.11
+	dappco.re/go/rag v0.8.0-alpha.1
-	forge.lthn.ai/core/go-webview v0.1.6
+	dappco.re/go/webview v0.8.0-alpha.1
-	forge.lthn.ai/core/go-ws v0.2.5
+	dappco.re/go/ws v0.8.0-alpha.1
 	github.com/gin-gonic/gin v1.12.0
 	github.com/gorilla/websocket v1.5.3
-	github.com/modelcontextprotocol/go-sdk v1.4.1
+	github.com/modelcontextprotocol/go-sdk v1.5.0
 	github.com/stretchr/testify v1.11.1
 	gopkg.in/yaml.v3 v3.0.1
 )
 require (
-	forge.lthn.ai/core/go v0.3.3 // indirect
+	dappco.re/go/i18n v0.8.0-alpha.1 // indirect
-	forge.lthn.ai/core/go-i18n v0.1.7 // indirect
+	dappco.re/go/inference v0.8.0-alpha.1 // indirect
-	forge.lthn.ai/core/go-inference v0.1.6 // indirect
 	github.com/99designs/gqlgen v0.17.88 // indirect
 	github.com/KyleBanks/depth v1.2.1 // indirect
 	github.com/agnivade/levenshtein v1.2.1 // indirect
@@ -149,3 +148,5 @@ require (
 	google.golang.org/grpc v1.79.2 // indirect
 	google.golang.org/protobuf v1.36.11 // indirect
 )
+replace dappco.re/go/core/process => ../go-process

go.sum (52 lines changed)

@@ -1,29 +1,25 @@
-dappco.re/go/core v0.4.7 h1:KmIA/2lo6rl1NMtLrKqCWfMlUqpDZYH3q0/d10dTtGA=
-dappco.re/go/core v0.4.7/go.mod h1:f2/tBZ3+3IqDrg2F5F598llv0nmb/4gJVCFzM5geE4A=
-forge.lthn.ai/core/api v0.1.5 h1:NwZrcOyBjaiz5/cn0n0tnlMUodi8Or6FHMx59C7Kv2o=
-forge.lthn.ai/core/api v0.1.5/go.mod h1:PBnaWyOVXSOGy+0x2XAPUFMYJxQ2CNhppia/D06ZPII=
-forge.lthn.ai/core/cli v0.3.7 h1:1GrbaGg0wDGHr6+klSbbGyN/9sSbHvFbdySJznymhwg=
-forge.lthn.ai/core/cli v0.3.7/go.mod h1:DBUppJkA9P45ZFGgI2B8VXw1rAZxamHoI/KG7fRvTNs=
-forge.lthn.ai/core/go v0.3.3 h1:kYYZ2nRYy0/Be3cyuLJspRjLqTMxpckVyhb/7Sw2gd0=
-forge.lthn.ai/core/go v0.3.3/go.mod h1:Cp4ac25pghvO2iqOu59t1GyngTKVOzKB5/VPdhRi9CQ=
-forge.lthn.ai/core/go-ai v0.1.12 h1:OHt0bUABlyhvgxZxyMwueRoh8rS3YKWGFY6++zCAwC8=
-forge.lthn.ai/core/go-ai v0.1.12/go.mod h1:5Pc9lszxgkO7Aj2Z3dtq4L9Xk9l/VNN+Baj1t///OCM=
-forge.lthn.ai/core/go-i18n v0.1.7 h1:aHkAoc3W8fw3RPNvw/UszQbjyFWXHszzbZgty3SwyAA=
-forge.lthn.ai/core/go-i18n v0.1.7/go.mod h1:0VDjwtY99NSj2iqwrI09h5GUsJeM9s48MLkr+/Dn4G8=
-forge.lthn.ai/core/go-inference v0.1.6 h1:ce42zC0zO8PuISUyAukAN1NACEdWp5wF1mRgnh5+58E=
-forge.lthn.ai/core/go-inference v0.1.6/go.mod h1:jfWz+IJX55wAH98+ic6FEqqGB6/P31CHlg7VY7pxREw=
-forge.lthn.ai/core/go-io v0.1.7 h1:Tdb6sqh+zz1lsGJaNX9RFWM6MJ/RhSAyxfulLXrJsbk=
-forge.lthn.ai/core/go-io v0.1.7/go.mod h1:8lRLFk4Dnp5cR/Cyzh9WclD5566TbpdRgwcH7UZLWn4=
-forge.lthn.ai/core/go-log v0.0.4 h1:KTuCEPgFmuM8KJfnyQ8vPOU1Jg654W74h8IJvfQMfv0=
-forge.lthn.ai/core/go-log v0.0.4/go.mod h1:r14MXKOD3LF/sI8XUJQhRk/SZHBE7jAFVuCfgkXoZPw=
-forge.lthn.ai/core/go-process v0.2.9 h1:Wql+5TUF+lfU2oJ9I+S764MkTqJhBsuyMM0v1zsfZC4=
-forge.lthn.ai/core/go-process v0.2.9/go.mod h1:NIzZOF5IVYYCjHkcNIGcg1mZH+bzGoie4SlZUDYOKIM=
-forge.lthn.ai/core/go-rag v0.1.11 h1:KXTOtnOdrx8YKmvnj0EOi2EI/+cKjE8w2PpJCQIrSd8=
-forge.lthn.ai/core/go-rag v0.1.11/go.mod h1:vIlOKVD1SdqqjkJ2XQyXPuKPtiajz/STPLCaDpqOzk8=
-forge.lthn.ai/core/go-webview v0.1.6 h1:szXQxRJf2bOZJKh3v1P01B1Vf9mgXaBCXzh0EZu9aoc=
-forge.lthn.ai/core/go-webview v0.1.6/go.mod h1:5n1tECD1wBV/uFZRY9ZjfPFO5TYZrlaR3mQFwvO2nek=
-forge.lthn.ai/core/go-ws v0.2.5 h1:ZIV7Yrv01R/xpJUogA5vrfP9yB9li1w7EV3eZFMt8h0=
-forge.lthn.ai/core/go-ws v0.2.5/go.mod h1:C3riJyLLcV6QhLvYlq3P/XkGTsN598qQeGBoLdoHBU4=
+dappco.re/go/core v0.8.0-alpha.1 h1:gj7+Scv+L63Z7wMxbJYHhaRFkHJo2u4MMPuUSv/Dhtk=
+dappco.re/go/core v0.8.0-alpha.1/go.mod h1:f2/tBZ3+3IqDrg2F5F598llv0nmb/4gJVCFzM5geE4A=
+dappco.re/go/core/ai v0.2.2 h1:fkSKm3ezAljYbghlax5qHDm11uq7LUyIedIQO1PtdcY=
+dappco.re/go/core/ai v0.2.2/go.mod h1:+MZN/EArn/W2ag91McL034WxdMSO4IPqFcQER5/POGU=
+dappco.re/go/core/api v0.3.0 h1:uWYgDQ+B4e5pXPX3S5lMsqSJamfpui3LWD5hcdwvWew=
+dappco.re/go/core/api v0.3.0/go.mod h1:1ZDNwPHV6YjkUsjtC3nfLk6U4eqWlQ6qj6yT/MB8r6k=
+dappco.re/go/core/cli v0.5.2 h1:mo+PERo3lUytE+r3ArHr8o2nTftXjgPPsU/rn3ETXDM=
+dappco.re/go/core/cli v0.5.2/go.mod h1:D4zfn3ec/hb72AWX/JWDvkW+h2WDKQcxGUrzoss7q2s=
+dappco.re/go/core/i18n v0.2.3 h1:GqFaTR1I0SfSEc4WtsAkgao+jp8X5qcMPqrX0eMAOrY=
+dappco.re/go/core/i18n v0.2.3/go.mod h1:LoyX/4fIEJO/wiHY3Q682+4P0Ob7zPemcATfwp0JBUg=
+dappco.re/go/core/inference v0.3.0 h1:ANFnlVO1LEYDipeDeBgqmb8CHvOTUFhMPyfyHGqO0IY=
+dappco.re/go/core/inference v0.3.0/go.mod h1:wbRY0v6iwOoJCpTvcBFarAM08bMgpPcrF6yv3vccYoA=
+dappco.re/go/core/io v0.4.1 h1:15dm7ldhFIAuZOrBiQG6XVZDpSvCxtZsUXApwTAB3wQ=
+dappco.re/go/core/io v0.4.1/go.mod h1:w71dukyunczLb8frT9JOd5B78PjwWQD3YAXiCt3AcPA=
+dappco.re/go/core/log v0.1.2 h1:pQSZxKD8VycdvjNJmatXbPSq2OxcP2xHbF20zgFIiZI=
+dappco.re/go/core/log v0.1.2/go.mod h1:Nkqb8gsXhZAO8VLpx7B8i1iAmohhzqA20b9Zr8VUcJs=
+dappco.re/go/core/rag v0.1.13 h1:R2Q+Xw5YenT4uFemXLBu+xQYtyUIYGSmMln5/Z+nol4=
+dappco.re/go/core/rag v0.1.13/go.mod h1:wthXtCqYEChjlGIHcJXetlgk49lPDmzG6jFWd1PEIZc=
+dappco.re/go/core/webview v0.2.1 h1:rdy2sV+MS6RZsav8BiARJxtWhfx7eOAJp3b1Ynp1sYs=
+dappco.re/go/core/webview v0.2.1/go.mod h1:Qdo1V/sJJwOnL0hYd3+vzVUJxWYC8eGyILZROya6KoM=
+dappco.re/go/core/ws v0.4.0 h1:yEDV9whXyo+GWzBSjuB3NiLiH2bmBPBWD6rydwHyBn8=
+dappco.re/go/core/ws v0.4.0/go.mod h1:L1rrgW6zU+DztcVBJW2yO5Lm3rGXpyUMOA8OL9zsAok=
 github.com/99designs/gqlgen v0.17.88 h1:neMQDgehMwT1vYIOx/w5ZYPUU/iMNAJzRO44I5Intoc=
 github.com/99designs/gqlgen v0.17.88/go.mod h1:qeqYFEgOeSKqWedOjogPizimp2iu4E23bdPvl4jTYic=
 github.com/KyleBanks/depth v1.2.1 h1:5h8fQADFrWtarTdtDudMmGsC7GPbOAu6RVB3ffsVFHc=
@@ -222,8 +218,8 @@ github.com/mattn/go-localereader v0.0.1 h1:ygSAOl7ZXTx4RdPYinUpg6W99U8jWvWi9Ye2J
 github.com/mattn/go-localereader v0.0.1/go.mod h1:8fBrzywKY7BI3czFoHkuzRoWE9C+EiG4R1k4Cjx5p88=
 github.com/mattn/go-runewidth v0.0.21 h1:jJKAZiQH+2mIinzCJIaIG9Be1+0NR+5sz/lYEEjdM8w=
 github.com/mattn/go-runewidth v0.0.21/go.mod h1:XBkDxAl56ILZc9knddidhrOlY5R/pDhgLpndooCuJAs=
-github.com/modelcontextprotocol/go-sdk v1.4.1 h1:M4x9GyIPj+HoIlHNGpK2hq5o3BFhC+78PkEaldQRphc=
-github.com/modelcontextprotocol/go-sdk v1.4.1/go.mod h1:Bo/mS87hPQqHSRkMv4dQq1XCu6zv4INdXnFZabkNU6s=
+github.com/modelcontextprotocol/go-sdk v1.5.0 h1:CHU0FIX9kpueNkxuYtfYQn1Z0slhFzBZuq+x6IiblIU=
+github.com/modelcontextprotocol/go-sdk v1.5.0/go.mod h1:gggDIhoemhWs3BGkGwd1umzEXCEMMvAnhTrnbXJKKKA=
 github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
 github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
 github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=


@@ -4,16 +4,15 @@ package agentic

 import (
 	"context"
-	"fmt"
 	"os"
 	"os/exec"
-	"path/filepath"
-	"strings"
 	"syscall"
 	"time"

-	coreio "forge.lthn.ai/core/go-io"
-	coreerr "forge.lthn.ai/core/go-log"
+	core "dappco.re/go/core"
+	coreio "dappco.re/go/io"
+	coreerr "dappco.re/go/log"
+	coremcp "dappco.re/go/mcp/pkg/mcp"

 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )
@@ -42,8 +41,9 @@ type DispatchOutput struct {
 	OutputFile string `json:"output_file,omitempty"`
 }

-func (s *PrepSubsystem) registerDispatchTool(server *mcp.Server) {
-	mcp.AddTool(server, &mcp.Tool{
+func (s *PrepSubsystem) registerDispatchTool(svc *coremcp.Service) {
+	server := svc.Server()
+	coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
 		Name:        "agentic_dispatch",
 		Description: "Dispatch a subagent (Gemini, Codex, or Claude) to work on a task. Preps a sandboxed workspace first, then spawns the agent inside it. Templates: conventions, security, coding.",
 	}, s.dispatch)
@@ -52,7 +52,7 @@ func (s *PrepSubsystem) registerDispatchTool
 // agentCommand returns the command and args for a given agent type.
 // Supports model variants: "gemini", "gemini:flash", "gemini:pro", "claude", "claude:haiku".
 func agentCommand(agent, prompt string) (string, []string, error) {
-	parts := strings.SplitN(agent, ":", 2)
+	parts := core.SplitN(agent, ":", 2)
 	base := parts[0]
 	model := ""
 	if len(parts) > 1 {
@@ -76,7 +76,7 @@ func agentCommand(agent, prompt string) (string, []string, error) {
 		return "claude", args, nil
 	case "local":
 		home, _ := os.UserHomeDir()
-		script := filepath.Join(home, "Code", "core", "agent", "scripts", "local-agent.sh")
+		script := core.Path(home, "Code", "core", "agent", "scripts", "local-agent.sh")
 		return "bash", []string{script, prompt}, nil
 	default:
 		return "", nil, coreerr.E("agentCommand", "unknown agent: "+agent, nil)
@@ -84,6 +84,9 @@ func agentCommand(agent, prompt string) (string, []string, error) {
 }

 func (s *PrepSubsystem) dispatch(ctx context.Context, req *mcp.CallToolRequest, input DispatchInput) (*mcp.CallToolResult, DispatchOutput, error) {
+	progress := coremcp.NewProgressNotifier(ctx, req)
+	const dispatchProgressTotal = 4
+
 	if input.Repo == "" {
 		return nil, DispatchOutput{}, coreerr.E("dispatch", "repo is required", nil)
 	}
@@ -100,7 +103,10 @@ func (s *PrepSubsystem) dispatch(ctx context.Context, req *mcp.CallToolRequest,
 		input.Template = "coding"
 	}

+	_ = progress.Send(1, dispatchProgressTotal, "validated dispatch request")
+
 	// Step 1: Prep the sandboxed workspace
+	_ = progress.Send(2, dispatchProgressTotal, "preparing workspace")
 	prepInput := PrepInput{
 		Repo: input.Repo,
 		Org:  input.Org,
@@ -115,16 +121,18 @@ func (s *PrepSubsystem) dispatch(ctx context.Context, req *mcp.CallToolRequest,
 	if err != nil {
 		return nil, DispatchOutput{}, coreerr.E("dispatch", "prep workspace failed", err)
 	}
+	_ = progress.Send(3, dispatchProgressTotal, core.Sprintf("workspace prepared for %s", prepOut.Branch))

 	wsDir := prepOut.WorkspaceDir
-	srcDir := filepath.Join(wsDir, "src")
+	srcDir := core.Path(wsDir, "src")

 	// The prompt is just: read PROMPT.md and do the work
 	prompt := "Read PROMPT.md for instructions. All context files (CLAUDE.md, TODO.md, CONTEXT.md, CONSUMERS.md, RECENT.md) are in the parent directory. Work in this directory."

 	if input.DryRun {
 		// Read PROMPT.md for the dry run output
-		promptRaw, _ := coreio.Local.Read(filepath.Join(wsDir, "PROMPT.md"))
+		promptRaw, _ := coreio.Local.Read(core.Path(wsDir, "PROMPT.md"))
+		_ = progress.Send(dispatchProgressTotal, dispatchProgressTotal, "dry run complete")
 		return nil, DispatchOutput{
 			Success: true,
 			Agent:   input.Agent,
@@ -137,15 +145,18 @@ func (s *PrepSubsystem) dispatch(ctx context.Context, req *mcp.CallToolRequest,
 	// Step 2: Check per-agent concurrency limit
 	if !s.canDispatchAgent(input.Agent) {
 		// Queue the workspace — write status as "queued" and return
-		writeStatus(wsDir, &WorkspaceStatus{
+		s.saveStatus(wsDir, &WorkspaceStatus{
 			Status:    "queued",
 			Agent:     input.Agent,
 			Repo:      input.Repo,
 			Org:       input.Org,
 			Task:      input.Task,
+			Issue:     input.Issue,
+			Branch:    prepOut.Branch,
 			StartedAt: time.Now(),
 			Runs:      0,
 		})
+		_ = progress.Send(dispatchProgressTotal, dispatchProgressTotal, "queued until an agent slot is available")
 		return nil, DispatchOutput{
 			Success: true,
 			Agent:   input.Agent,
@@ -157,17 +168,21 @@ func (s *PrepSubsystem) dispatch(ctx context.Context, req *mcp.CallToolRequest,
 	// Step 3: Write status BEFORE spawning so concurrent dispatches
 	// see this workspace as "running" during the concurrency check.
-	writeStatus(wsDir, &WorkspaceStatus{
+	s.saveStatus(wsDir, &WorkspaceStatus{
 		Status:    "running",
 		Agent:     input.Agent,
 		Repo:      input.Repo,
 		Org:       input.Org,
 		Task:      input.Task,
+		Issue:     input.Issue,
+		Branch:    prepOut.Branch,
 		StartedAt: time.Now(),
 		Runs:      1,
 	})
+	_ = progress.Send(3.5, dispatchProgressTotal, "dispatch slot acquired")

 	// Step 4: Spawn agent as a detached process
+	_ = progress.Send(4, dispatchProgressTotal, core.Sprintf("spawning agent %s", input.Agent))
 	// Uses Setpgid so the agent survives parent (MCP server) death.
 	// Output goes directly to log file (not buffered in memory).
 	command, args, err := agentCommand(input.Agent, prompt)
@@ -175,7 +190,7 @@ func (s *PrepSubsystem) dispatch(ctx context.Context, req *mcp.CallToolRequest,
 		return nil, DispatchOutput{}, err
 	}

-	outputFile := filepath.Join(wsDir, fmt.Sprintf("agent-%s.log", input.Agent))
+	outputFile := core.Path(wsDir, core.Sprintf("agent-%s.log", input.Agent))
 	outFile, err := os.Create(outputFile)
 	if err != nil {
 		return nil, DispatchOutput{}, coreerr.E("dispatch", "failed to create log file", err)
@@ -204,24 +219,29 @@ func (s *PrepSubsystem) dispatch(ctx context.Context, req *mcp.CallToolRequest,
 	if err := cmd.Start(); err != nil {
 		outFile.Close()
 		// Revert status so the slot is freed
-		writeStatus(wsDir, &WorkspaceStatus{
+		s.saveStatus(wsDir, &WorkspaceStatus{
 			Status: "failed",
 			Agent:  input.Agent,
 			Repo:   input.Repo,
 			Task:   input.Task,
+			Issue:  input.Issue,
+			Branch: prepOut.Branch,
 		})
 		return nil, DispatchOutput{}, coreerr.E("dispatch", "failed to spawn "+input.Agent, err)
 	}
 	pid := cmd.Process.Pid
+	_ = progress.Send(dispatchProgressTotal, dispatchProgressTotal, "agent process started")

 	// Update status with PID now that agent is running
-	writeStatus(wsDir, &WorkspaceStatus{
+	s.saveStatus(wsDir, &WorkspaceStatus{
 		Status:    "running",
 		Agent:     input.Agent,
 		Repo:      input.Repo,
 		Org:       input.Org,
 		Task:      input.Task,
+		Issue:     input.Issue,
+		Branch:    prepOut.Branch,
 		PID:       pid,
 		StartedAt: time.Now(),
 		Runs:      1,
@@ -233,13 +253,38 @@ func (s *PrepSubsystem) dispatch(ctx context.Context, req *mcp.CallToolRequest,
 	cmd.Wait()
 	outFile.Close()

-	// Update status to completed
-	if st, err := readStatus(wsDir); err == nil {
-		st.Status = "completed"
-		st.PID = 0
-		writeStatus(wsDir, st)
+	postCtx := context.WithoutCancel(ctx)
+	status := "completed"
+	channel := coremcp.ChannelAgentComplete
+	payload := map[string]any{
+		"workspace": core.PathBase(wsDir),
+		"repo":      input.Repo,
+		"org":       input.Org,
+		"agent":     input.Agent,
+		"branch":    prepOut.Branch,
 	}
+
+	// Update status to completed or blocked.
+	if st, err := readStatus(wsDir); err == nil {
+		st.PID = 0
+		if data, err := coreio.Local.Read(core.Path(wsDir, "src", "BLOCKED.md")); err == nil {
+			status = "blocked"
+			channel = coremcp.ChannelAgentBlocked
+			st.Status = status
+			st.Question = core.Trim(data)
+			if st.Question != "" {
+				payload["question"] = st.Question
+			}
+		} else {
+			st.Status = status
+		}
+		s.saveStatus(wsDir, st)
+	}
+	payload["status"] = status
+	s.emitChannel(postCtx, channel, payload)
+	s.emitChannel(postCtx, coremcp.ChannelAgentStatus, payload)

 	// Ingest scan findings as issues
 	s.ingestFindings(wsDir)
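Step 4's detached spawn can be sketched standalone. This is a minimal illustration of the Setpgid technique the comments describe, not the repo's actual helper; `spawnDetached` and its use of `Process.Release` are assumptions of this sketch (the real code keeps the handle and calls `cmd.Wait` in the background). Unix-only.

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
)

// spawnDetached starts a command in its own process group so it is not
// killed when the parent (here: the MCP server) dies.
func spawnDetached(name string, args ...string) (int, error) {
	cmd := exec.Command(name, args...)
	// Setpgid gives the child a fresh process group, detaching it from
	// the parent's signal delivery (e.g. SIGTERM to the server's group).
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
	if err := cmd.Start(); err != nil {
		return 0, err
	}
	pid := cmd.Process.Pid
	// Release the handle; this sketch does not Wait on the child.
	_ = cmd.Process.Release()
	return pid, nil
}

func main() {
	pid, err := spawnDetached("true")
	fmt.Println(pid > 0, err)
}
```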


@@ -6,11 +6,11 @@ import (
 	"bytes"
 	"context"
 	"encoding/json"
-	"fmt"
 	"net/http"
-	"strings"

-	coreerr "forge.lthn.ai/core/go-log"
+	core "dappco.re/go/core"
+	coreerr "dappco.re/go/log"
+	coremcp "dappco.re/go/mcp/pkg/mcp"

 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )
@@ -45,8 +45,9 @@ type ChildRef struct {
 	URL string `json:"url"`
 }

-func (s *PrepSubsystem) registerEpicTool(server *mcp.Server) {
-	mcp.AddTool(server, &mcp.Tool{
+func (s *PrepSubsystem) registerEpicTool(svc *coremcp.Service) {
+	server := svc.Server()
+	coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
 		Name:        "agentic_create_epic",
 		Description: "Create an epic issue with child issues on Forge. Each task becomes a child issue linked via checklist. Optionally auto-dispatch agents to work each child.",
 	}, s.createEpic)
@@ -99,14 +100,14 @@ func (s *PrepSubsystem) createEpic(ctx context.Context, req *mcp.CallToolRequest
 	}

 	// Step 2: Build epic body with checklist
-	var body strings.Builder
+	body := core.NewBuilder()
 	if input.Body != "" {
 		body.WriteString(input.Body)
 		body.WriteString("\n\n")
 	}
 	body.WriteString("## Tasks\n\n")
 	for _, child := range children {
-		body.WriteString(fmt.Sprintf("- [ ] #%d %s\n", child.Number, child.Title))
+		body.WriteString(core.Sprintf("- [ ] #%d %s\n", child.Number, child.Title))
 	}

 	// Step 3: Create epic issue
@@ -155,8 +156,12 @@ func (s *PrepSubsystem) createIssue(ctx context.Context, org, repo, title, body
 		payload["labels"] = labelIDs
 	}

-	data, _ := json.Marshal(payload)
-	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/issues", s.forgeURL, org, repo)
+	r := core.JSONMarshal(payload)
+	if !r.OK {
+		return ChildRef{}, coreerr.E("createIssue", "failed to encode issue payload", nil)
+	}
+	data := r.Value.([]byte)
+	url := core.Sprintf("%s/api/v1/repos/%s/%s/issues", s.forgeURL, org, repo)
 	req, _ := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(data))
 	req.Header.Set("Content-Type", "application/json")
 	req.Header.Set("Authorization", "token "+s.forgeToken)
@@ -168,7 +173,7 @@ func (s *PrepSubsystem) createIssue(ctx context.Context, org, repo, title, body
 	defer resp.Body.Close()

 	if resp.StatusCode != 201 {
-		return ChildRef{}, coreerr.E("createIssue", fmt.Sprintf("returned %d", resp.StatusCode), nil)
+		return ChildRef{}, coreerr.E("createIssue", core.Sprintf("returned %d", resp.StatusCode), nil)
 	}

 	var result struct {
@@ -191,7 +196,7 @@ func (s *PrepSubsystem) resolveLabelIDs(ctx context.Context, org, repo string, n
 	}

 	// Fetch existing labels
-	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/labels?limit=50", s.forgeURL, org, repo)
+	url := core.Sprintf("%s/api/v1/repos/%s/%s/labels?limit=50", s.forgeURL, org, repo)
 	req, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
 	req.Header.Set("Authorization", "token "+s.forgeToken)
@@ -244,12 +249,16 @@ func (s *PrepSubsystem) createLabel(ctx context.Context, org, repo, name string)
 		colour = "#6b7280"
 	}

-	payload, _ := json.Marshal(map[string]string{
+	r := core.JSONMarshal(map[string]string{
 		"name":  name,
 		"color": colour,
 	})
+	if !r.OK {
+		return 0
+	}
+	payload := r.Value.([]byte)

-	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/labels", s.forgeURL, org, repo)
+	url := core.Sprintf("%s/api/v1/repos/%s/%s/labels", s.forgeURL, org, repo)
 	req, _ := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(payload))
 	req.Header.Set("Content-Type", "application/json")
 	req.Header.Set("Authorization", "token "+s.forgeToken)


@@ -3,15 +3,12 @@
 package agentic

 import (
-	"bytes"
-	"encoding/json"
-	"fmt"
+	"context"
 	"net/http"
-	"os"
-	"path/filepath"
-	"strings"

-	coreio "forge.lthn.ai/core/go-io"
+	core "dappco.re/go/core"
+	coreio "dappco.re/go/io"
+	coremcp "dappco.re/go/mcp/pkg/mcp"
 )
// ingestFindings reads the agent output log and creates issues via the API // ingestFindings reads the agent output log and creates issues via the API
@@ -23,10 +20,7 @@ func (s *PrepSubsystem) ingestFindings(wsDir string) {
 	}

 	// Read the log file
-	logFiles, err := filepath.Glob(filepath.Join(wsDir, "agent-*.log"))
-	if err != nil {
-		return
-	}
+	logFiles := core.PathGlob(core.Path(wsDir, "agent-*.log"))
 	if len(logFiles) == 0 {
 		return
 	}
@@ -39,26 +33,28 @@ func (s *PrepSubsystem) ingestFindings(wsDir string) {
 	body := contentStr

 	// Skip quota errors
-	if strings.Contains(body, "QUOTA_EXHAUSTED") || strings.Contains(body, "QuotaError") {
+	if core.Contains(body, "QUOTA_EXHAUSTED") || core.Contains(body, "QuotaError") {
 		return
 	}

 	// Only ingest if there are actual findings (file:line references)
 	findings := countFileRefs(body)
+	issueCreated := false
 	if findings < 2 {
+		s.emitHarvestComplete(context.Background(), wsDir, st.Repo, findings, issueCreated)
 		return // No meaningful findings
 	}

 	// Determine issue type from the template used
 	issueType := "task"
 	priority := "normal"
-	if strings.Contains(body, "security") || strings.Contains(body, "Security") {
+	if core.Contains(body, "security") || core.Contains(body, "Security") {
 		issueType = "bug"
 		priority = "high"
 	}

 	// Create a single issue per repo with all findings in the body
-	title := fmt.Sprintf("Scan findings for %s (%d items)", st.Repo, findings)
+	title := core.Sprintf("Scan findings for %s (%d items)", st.Repo, findings)

 	// Truncate body to reasonable size for issue description
 	description := body
@@ -66,7 +62,8 @@ func (s *PrepSubsystem) ingestFindings(wsDir string) {
 		description = description[:10000] + "\n\n... (truncated, see full log in workspace)"
 	}

-	s.createIssueViaAPI(st.Repo, title, description, issueType, priority, "scan")
+	issueCreated = s.createIssueViaAPI(st.Repo, title, description, issueType, priority, "scan")
+	s.emitHarvestComplete(context.Background(), wsDir, st.Repo, findings, issueCreated)
 }
// countFileRefs counts file:line references in the output (indicates real findings) // countFileRefs counts file:line references in the output (indicates real findings)
@@ -81,7 +78,7 @@ func countFileRefs(body string) int {
 		}
 		if j < len(body) && body[j] == '`' {
 			ref := body[i+1 : j]
-			if strings.Contains(ref, ".go:") || strings.Contains(ref, ".php:") {
+			if core.Contains(ref, ".go:") || core.Contains(ref, ".php:") {
 				count++
 			}
 		}
@@ -91,20 +88,20 @@ func countFileRefs(body string) int {
 }

 // createIssueViaAPI posts an issue to the lthn.sh API
-func (s *PrepSubsystem) createIssueViaAPI(repo, title, description, issueType, priority, source string) {
+func (s *PrepSubsystem) createIssueViaAPI(repo, title, description, issueType, priority, source string) bool {
 	if s.brainKey == "" {
-		return
+		return false
 	}

 	// Read the agent API key from file
-	home, _ := os.UserHomeDir()
-	apiKeyData, err := coreio.Local.Read(filepath.Join(home, ".claude", "agent-api.key"))
+	home := core.Env("HOME")
+	apiKeyData, err := coreio.Local.Read(core.Path(home, ".claude", "agent-api.key"))
 	if err != nil {
-		return
+		return false
 	}
-	apiKey := strings.TrimSpace(apiKeyData)
+	apiKey := core.Trim(apiKeyData)

-	payload, _ := json.Marshal(map[string]string{
+	payloadStr := core.JSONMarshalString(map[string]string{
 		"title":       title,
 		"description": description,
 		"type":        issueType,
@@ -112,14 +109,31 @@ func (s *PrepSubsystem) createIssueViaAPI(repo, title, description, issueType, p
 		"reporter": "cladius",
 	})

-	req, _ := http.NewRequest("POST", s.brainURL+"/v1/issues", bytes.NewReader(payload))
+	req, err := http.NewRequest("POST", s.brainURL+"/v1/issues", core.NewReader(payloadStr))
+	if err != nil {
+		return false
+	}
 	req.Header.Set("Content-Type", "application/json")
 	req.Header.Set("Accept", "application/json")
 	req.Header.Set("Authorization", "Bearer "+apiKey)

 	resp, err := s.client.Do(req)
 	if err != nil {
-		return
+		return false
 	}
 	resp.Body.Close()
+	return resp.StatusCode < 400
+}
+
+// emitHarvestComplete announces that finding ingestion finished for a workspace.
+//
+//	ctx := context.Background()
+//	s.emitHarvestComplete(ctx, "go-io-123", "go-io", 4, true)
+func (s *PrepSubsystem) emitHarvestComplete(ctx context.Context, workspace, repo string, findings int, issueCreated bool) {
+	s.emitChannel(ctx, coremcp.ChannelHarvestComplete, map[string]any{
+		"workspace":     workspace,
+		"repo":          repo,
+		"findings":      findings,
+		"issue_created": issueCreated,
+	})
 }
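The `countFileRefs` heuristic touched above can be sketched standalone: scan for backtick-quoted spans and count those that look like `file.go:line` references. Here `strings.Contains` stands in for `core.Contains`, which this sketch assumes behaves the same; `countRefs` is an illustrative name, not the repo's function.

```go
package main

import (
	"fmt"
	"strings"
)

// countRefs counts backtick-quoted spans containing a ".go:" or ".php:"
// file:line marker — a rough signal that agent output holds real findings.
func countRefs(body string) int {
	count := 0
	for i := 0; i < len(body); i++ {
		if body[i] != '`' {
			continue
		}
		// Find the closing backtick for this span.
		j := i + 1
		for j < len(body) && body[j] != '`' {
			j++
		}
		if j < len(body) {
			ref := body[i+1 : j]
			if strings.Contains(ref, ".go:") || strings.Contains(ref, ".php:") {
				count++
			}
			i = j // skip past the closing backtick
		}
	}
	return count
}

func main() {
	fmt.Println(countRefs("see `a.go:10` and `b.php:2` but not `note`")) // 2
}
```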

pkg/mcp/agentic/issue.go (new file, 224 lines)

@@ -0,0 +1,224 @@
// SPDX-License-Identifier: EUPL-1.2
package agentic
import (
"bytes"
"context"
"encoding/json"
"net/http"
core "dappco.re/go/core"
coreerr "dappco.re/go/log"
coremcp "dappco.re/go/mcp/pkg/mcp"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
// IssueDispatchInput is the input for agentic_dispatch_issue.
//
// input := IssueDispatchInput{
// Repo: "go-io",
// Issue: 123,
// Agent: "claude",
// }
type IssueDispatchInput struct {
Repo string `json:"repo"` // Target repo (e.g. "go-io")
Org string `json:"org,omitempty"` // Forge org (default "core")
Issue int `json:"issue"` // Forge issue number
Agent string `json:"agent,omitempty"` // "claude" (default), "codex", "gemini"
Template string `json:"template,omitempty"` // "conventions", "security", "coding" (default)
DryRun bool `json:"dry_run,omitempty"` // Preview without executing
}
type forgeIssue struct {
Title string `json:"title"`
Body string `json:"body"`
State string `json:"state"`
Labels []struct {
Name string `json:"name"`
} `json:"labels"`
Assignee *struct {
Login string `json:"login"`
} `json:"assignee"`
}
func (s *PrepSubsystem) registerIssueTools(svc *coremcp.Service) {
server := svc.Server()
coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
Name: "agentic_dispatch_issue",
Description: "Dispatch an agent to work on a Forge issue. Assigns the issue as a lock, prepends the issue body to TODO.md, creates an issue-specific branch, and spawns the agent.",
}, s.dispatchIssue)
// agentic_issue_dispatch is the spec-aligned name for the same action.
coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
Name: "agentic_issue_dispatch",
Description: "Dispatch an agent to work on a Forge issue. Spec-aligned alias for agentic_dispatch_issue.",
}, s.dispatchIssue)
coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
Name: "agentic_pr",
Description: "Create a pull request from an agent workspace. Pushes the branch and creates a Forge PR linked to the tracked issue, if any.",
}, s.createPR)
}
func (s *PrepSubsystem) dispatchIssue(ctx context.Context, req *mcp.CallToolRequest, input IssueDispatchInput) (*mcp.CallToolResult, DispatchOutput, error) {
if input.Repo == "" {
return nil, DispatchOutput{}, coreerr.E("dispatchIssue", "repo is required", nil)
}
if input.Issue == 0 {
return nil, DispatchOutput{}, coreerr.E("dispatchIssue", "issue is required", nil)
}
if input.Org == "" {
input.Org = "core"
}
if input.Agent == "" {
input.Agent = "claude"
}
if input.Template == "" {
input.Template = "coding"
}
issue, err := s.fetchIssue(ctx, input.Org, input.Repo, input.Issue)
if err != nil {
return nil, DispatchOutput{}, err
}
if issue.State != "open" {
return nil, DispatchOutput{}, coreerr.E("dispatchIssue", core.Sprintf("issue %d is %s, not open", input.Issue, issue.State), nil)
}
if issue.Assignee != nil && issue.Assignee.Login != "" {
return nil, DispatchOutput{}, coreerr.E("dispatchIssue", core.Sprintf("issue %d is already assigned to %s", input.Issue, issue.Assignee.Login), nil)
}
if !input.DryRun {
if err := s.lockIssue(ctx, input.Org, input.Repo, input.Issue, input.Agent); err != nil {
return nil, DispatchOutput{}, err
}
var dispatchErr error
defer func() {
if dispatchErr != nil {
_ = s.unlockIssue(ctx, input.Org, input.Repo, input.Issue, issue.Labels)
}
}()
result, out, dispatchErr := s.dispatch(ctx, req, DispatchInput{
Repo: input.Repo,
Org: input.Org,
Issue: input.Issue,
Task: issue.Title,
Agent: input.Agent,
Template: input.Template,
DryRun: input.DryRun,
})
if dispatchErr != nil {
return nil, DispatchOutput{}, dispatchErr
}
return result, out, nil
}
return s.dispatch(ctx, req, DispatchInput{
Repo: input.Repo,
Org: input.Org,
Issue: input.Issue,
Task: issue.Title,
Agent: input.Agent,
Template: input.Template,
DryRun: input.DryRun,
})
}
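The lock/compensate shape in dispatchIssue above (assign the issue as a lock, release it only if the dispatch fails) can be sketched generically. `runLocked` and its closures are hypothetical stand-ins for `lockIssue`, `unlockIssue`, and `s.dispatch`, not code from this repo.

```go
package main

import (
	"errors"
	"fmt"
)

// runLocked acquires a lock, runs the work, and — only on failure —
// runs the compensating unlock so another agent can claim the resource.
func runLocked(lock, unlock, work func() error) error {
	if err := lock(); err != nil {
		return err
	}
	var workErr error
	defer func() {
		if workErr != nil {
			_ = unlock() // compensate: free the issue for another agent
		}
	}()
	workErr = work()
	return workErr
}

func main() {
	unlocked := false
	err := runLocked(
		func() error { return nil },                          // lockIssue
		func() error { unlocked = true; return nil },         // unlockIssue
		func() error { return errors.New("dispatch failed") }, // s.dispatch
	)
	fmt.Println(err, unlocked)
}
```

On success the deferred closure sees a nil `workErr` and leaves the lock in place, which matches the flow above: a successfully dispatched agent keeps the issue assigned.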
func (s *PrepSubsystem) unlockIssue(ctx context.Context, org, repo string, issue int, labels []struct {
Name string `json:"name"`
}) error {
updateURL := core.Sprintf("%s/api/v1/repos/%s/%s/issues/%d", s.forgeURL, org, repo, issue)
issueLabels := make([]string, 0, len(labels))
for _, label := range labels {
if label.Name == "in-progress" {
continue
}
issueLabels = append(issueLabels, label.Name)
}
if issueLabels == nil {
issueLabels = []string{}
}
r := core.JSONMarshal(map[string]any{
"assignees": []string{},
"labels": issueLabels,
})
if !r.OK {
return coreerr.E("unlockIssue", "failed to encode issue unlock", nil)
}
payload := r.Value.([]byte)
req, err := http.NewRequestWithContext(ctx, http.MethodPatch, updateURL, bytes.NewReader(payload))
if err != nil {
return coreerr.E("unlockIssue", "failed to build unlock request", err)
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Authorization", "token "+s.forgeToken)
resp, err := s.client.Do(req)
if err != nil {
return coreerr.E("unlockIssue", "failed to update issue", err)
}
defer resp.Body.Close()
if resp.StatusCode >= http.StatusBadRequest {
return coreerr.E("unlockIssue", core.Sprintf("issue unlock returned %d", resp.StatusCode), nil)
}
return nil
}
func (s *PrepSubsystem) fetchIssue(ctx context.Context, org, repo string, issue int) (*forgeIssue, error) {
url := core.Sprintf("%s/api/v1/repos/%s/%s/issues/%d", s.forgeURL, org, repo, issue)
req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil {
return nil, coreerr.E("fetchIssue", "failed to build request", err)
}
req.Header.Set("Authorization", "token "+s.forgeToken)
resp, err := s.client.Do(req)
if err != nil {
return nil, coreerr.E("fetchIssue", "failed to fetch issue", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil, coreerr.E("fetchIssue", core.Sprintf("issue %d not found in %s/%s", issue, org, repo), nil)
}
var out forgeIssue
if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
return nil, coreerr.E("fetchIssue", "failed to decode issue", err)
}
return &out, nil
}
func (s *PrepSubsystem) lockIssue(ctx context.Context, org, repo string, issue int, assignee string) error {
updateURL := core.Sprintf("%s/api/v1/repos/%s/%s/issues/%d", s.forgeURL, org, repo, issue)
r := core.JSONMarshal(map[string]any{
"assignees": []string{assignee},
"labels": []string{"in-progress"},
})
if !r.OK {
return coreerr.E("lockIssue", "failed to encode issue update", nil)
}
payload := r.Value.([]byte)
req, err := http.NewRequestWithContext(ctx, http.MethodPatch, updateURL, bytes.NewReader(payload))
if err != nil {
return coreerr.E("lockIssue", "failed to build update request", err)
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Authorization", "token "+s.forgeToken)
resp, err := s.client.Do(req)
if err != nil {
return coreerr.E("lockIssue", "failed to update issue", err)
}
defer resp.Body.Close()
if resp.StatusCode >= http.StatusBadRequest {
return coreerr.E("lockIssue", core.Sprintf("issue update returned %d", resp.StatusCode), nil)
}
return nil
}
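For reference, the lock call above PATCHes a minimal JSON body to the Forge issue endpoint: assign the agent and tag the issue `in-progress` so other dispatchers skip it (unlock sends the inverse: empty assignees plus the original labels). A standalone sketch of the lock payload — field names are taken from `lockIssue` above, while the `lockPayload` helper itself is illustrative only:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// lockPayload mirrors the body lockIssue sends to
// PATCH /api/v1/repos/{org}/{repo}/issues/{n}: assign the
// dispatching agent and mark the issue in-progress.
func lockPayload(agent string) ([]byte, error) {
	return json.Marshal(map[string]any{
		"assignees": []string{agent},
		"labels":    []string{"in-progress"},
	})
}

func main() {
	b, err := lockPayload("claude")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // {"assignees":["claude"],"labels":["in-progress"]}
}
```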


@@ -0,0 +1,227 @@
// SPDX-License-Identifier: EUPL-1.2
package agentic
import (
"bytes"
"context"
"encoding/json"
"io"
"net/http"
"net/http/httptest"
"os"
"os/exec"
"path/filepath"
"strings"
"testing"
)
func TestBranchSlug_Good(t *testing.T) {
got := branchSlug("Fix login crash in API v2")
want := "fix-login-crash-in-api-v2"
if got != want {
t.Fatalf("expected %q, got %q", want, got)
}
}
func TestPrepWorkspace_Good_IssueBranchName(t *testing.T) {
codePath := t.TempDir()
repoDir := initTestRepo(t, codePath, "demo")
_ = repoDir
s := &PrepSubsystem{codePath: codePath}
_, out, err := s.prepWorkspace(context.Background(), nil, PrepInput{
Repo: "demo",
Issue: 42,
Task: "Fix login crash",
})
if err != nil {
t.Fatalf("prepWorkspace failed: %v", err)
}
want := "agent/issue-42-fix-login-crash"
if out.Branch != want {
t.Fatalf("expected branch %q, got %q", want, out.Branch)
}
srcDir := filepath.Join(out.WorkspaceDir, "src")
cmd := exec.Command("git", "rev-parse", "--abbrev-ref", "HEAD")
cmd.Dir = srcDir
data, err := cmd.Output()
if err != nil {
t.Fatalf("failed to read branch: %v", err)
}
if got := strings.TrimSpace(string(data)); got != want {
t.Fatalf("expected git branch %q, got %q", want, got)
}
}
func TestDispatchIssue_Bad_AssignedIssue(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.Method {
case http.MethodGet:
_ = json.NewEncoder(w).Encode(map[string]any{
"title": "Fix login crash",
"body": "details",
"state": "open",
"assignee": map[string]any{
"login": "someone-else",
},
})
default:
w.WriteHeader(http.StatusOK)
}
}))
defer srv.Close()
s := &PrepSubsystem{
forgeURL: srv.URL,
client: srv.Client(),
}
_, _, err := s.dispatchIssue(context.Background(), nil, IssueDispatchInput{
Repo: "demo",
Org: "core",
Issue: 42,
DryRun: true,
})
if err == nil {
t.Fatal("expected assigned issue to fail")
}
}
func TestDispatchIssue_Good_UnlocksOnPrepFailure(t *testing.T) {
var methods []string
var bodies []string
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
body, _ := io.ReadAll(r.Body)
methods = append(methods, r.Method)
bodies = append(bodies, string(body))
switch r.Method {
case http.MethodGet:
_ = json.NewEncoder(w).Encode(map[string]any{
"title": "Fix login crash",
"body": "details",
"state": "open",
"labels": []map[string]any{
{"name": "bug"},
},
})
case http.MethodPatch:
w.WriteHeader(http.StatusOK)
default:
w.WriteHeader(http.StatusMethodNotAllowed)
}
}))
defer srv.Close()
s := &PrepSubsystem{
forgeURL: srv.URL,
forgeToken: "token",
client: srv.Client(),
codePath: t.TempDir(),
}
_, _, err := s.dispatchIssue(context.Background(), nil, IssueDispatchInput{
Repo: "demo",
Org: "core",
Issue: 42,
})
if err == nil {
t.Fatal("expected dispatch to fail when the repo clone is missing")
}
if got, want := len(methods), 3; got != want {
t.Fatalf("expected %d requests, got %d (%v)", want, got, methods)
}
if methods[0] != http.MethodGet {
t.Fatalf("expected first request to fetch issue, got %s", methods[0])
}
if methods[1] != http.MethodPatch {
t.Fatalf("expected second request to lock issue, got %s", methods[1])
}
if methods[2] != http.MethodPatch {
t.Fatalf("expected third request to unlock issue, got %s", methods[2])
}
if !strings.Contains(bodies[1], `"assignees":["claude"]`) {
t.Fatalf("expected lock request to assign claude, got %s", bodies[1])
}
if !strings.Contains(bodies[2], `"assignees":[]`) {
t.Fatalf("expected unlock request to clear assignees, got %s", bodies[2])
}
if !strings.Contains(bodies[2], `"labels":["bug"]`) {
t.Fatalf("expected unlock request to preserve original labels, got %s", bodies[2])
}
}
func TestLockIssue_Good_RequestBody(t *testing.T) {
var gotMethod string
var gotPath string
var gotBody []byte
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
gotMethod = r.Method
gotPath = r.URL.Path
body, _ := io.ReadAll(r.Body)
gotBody = append([]byte(nil), body...)
w.WriteHeader(http.StatusOK)
}))
defer srv.Close()
s := &PrepSubsystem{
forgeURL: srv.URL,
client: srv.Client(),
}
if err := s.lockIssue(context.Background(), "core", "demo", 42, "claude"); err != nil {
t.Fatalf("lockIssue failed: %v", err)
}
if gotMethod != http.MethodPatch {
t.Fatalf("expected PATCH, got %s", gotMethod)
}
if gotPath != "/api/v1/repos/core/demo/issues/42" {
t.Fatalf("unexpected path %q", gotPath)
}
if !bytes.Contains(gotBody, []byte(`"assignees":["claude"]`)) {
t.Fatalf("expected assignee in body, got %s", string(gotBody))
}
if !bytes.Contains(gotBody, []byte(`"in-progress"`)) {
t.Fatalf("expected in-progress label in body, got %s", string(gotBody))
}
}
func initTestRepo(t *testing.T, codePath, repo string) string {
t.Helper()
repoDir := filepath.Join(codePath, "core", repo)
if err := os.MkdirAll(repoDir, 0o755); err != nil {
t.Fatalf("mkdir repo dir: %v", err)
}
run := func(args ...string) {
t.Helper()
cmd := exec.Command("git", args...)
cmd.Dir = repoDir
cmd.Env = append(os.Environ(),
"GIT_AUTHOR_NAME=Test User",
"GIT_AUTHOR_EMAIL=test@example.com",
"GIT_COMMITTER_NAME=Test User",
"GIT_COMMITTER_EMAIL=test@example.com",
)
if out, err := cmd.CombinedOutput(); err != nil {
t.Fatalf("git %v failed: %v\n%s", args, err, string(out))
}
}
run("init", "-b", "main")
if err := os.WriteFile(filepath.Join(repoDir, "README.md"), []byte("# demo\n"), 0o644); err != nil {
t.Fatalf("write file: %v", err)
}
run("add", "README.md")
run("commit", "-m", "initial commit")
return repoDir
}

pkg/mcp/agentic/mirror.go Normal file

@@ -0,0 +1,124 @@
// SPDX-License-Identifier: EUPL-1.2
package agentic
import (
"context"
"os/exec"
core "dappco.re/go/core"
coreerr "dappco.re/go/log"
coremcp "dappco.re/go/mcp/pkg/mcp"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
// MirrorInput controls Forge to GitHub mirror sync.
type MirrorInput struct {
Repo string `json:"repo,omitempty"`
DryRun bool `json:"dry_run,omitempty"`
MaxFiles int `json:"max_files,omitempty"`
}
// MirrorOutput reports mirror sync results.
type MirrorOutput struct {
Success bool `json:"success"`
Synced []MirrorSync `json:"synced"`
Skipped []string `json:"skipped,omitempty"`
Count int `json:"count"`
}
// MirrorSync records one repo sync attempt.
type MirrorSync struct {
Repo string `json:"repo"`
CommitsAhead int `json:"commits_ahead"`
FilesChanged int `json:"files_changed"`
PRURL string `json:"pr_url,omitempty"`
Pushed bool `json:"pushed"`
Skipped string `json:"skipped,omitempty"`
}
func (s *PrepSubsystem) registerMirrorTool(svc *coremcp.Service) {
server := svc.Server()
coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
Name: "agentic_mirror",
Description: "Mirror Forge repositories to GitHub and open a GitHub PR when there are commits ahead of the remote mirror.",
}, s.mirror)
}
func (s *PrepSubsystem) mirror(ctx context.Context, _ *mcp.CallToolRequest, input MirrorInput) (*mcp.CallToolResult, MirrorOutput, error) {
maxFiles := input.MaxFiles
if maxFiles <= 0 {
maxFiles = 50
}
basePath := repoRootFromCodePath(s.codePath)
repos := []string{}
if input.Repo != "" {
repos = []string{input.Repo}
} else {
repos = listLocalRepos(basePath)
}
synced := make([]MirrorSync, 0, len(repos))
skipped := make([]string, 0)
for _, repo := range repos {
repoDir := core.Path(basePath, repo)
if !hasRemote(repoDir, "github") {
skipped = append(skipped, repo+": no github remote")
continue
}
if _, err := exec.LookPath("git"); err != nil {
return nil, MirrorOutput{}, coreerr.E("mirror", "git CLI is not available", err)
}
_, _ = gitOutput(repoDir, "fetch", "github")
ahead := commitsAhead(repoDir, "github/main", "HEAD")
if ahead <= 0 {
continue
}
files := filesChanged(repoDir, "github/main", "HEAD")
sync := MirrorSync{
Repo: repo,
CommitsAhead: ahead,
FilesChanged: files,
}
if files > maxFiles {
sync.Skipped = core.Sprintf("%d files exceeds limit of %d", files, maxFiles)
synced = append(synced, sync)
continue
}
if input.DryRun {
sync.Skipped = "dry run"
synced = append(synced, sync)
continue
}
if err := ensureDevBranch(repoDir); err != nil {
sync.Skipped = err.Error()
synced = append(synced, sync)
continue
}
sync.Pushed = true
prURL, err := createGitHubPR(ctx, repoDir, repo, ahead, files)
if err != nil {
sync.Skipped = err.Error()
} else {
sync.PRURL = prURL
}
synced = append(synced, sync)
}
return nil, MirrorOutput{
Success: true,
Synced: synced,
Skipped: skipped,
Count: len(synced),
}, nil
}
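The mirror loop above leans on small git helpers (`hasRemote`, `commitsAhead`, `filesChanged`, `gitOutput`) that live outside this hunk. A minimal sketch of how `commitsAhead` can be implemented with git plumbing — the helper name and signature come from the calls above, but this exec wiring is an assumption, not the repo's actual implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// commitsAhead counts commits reachable from head but not from base,
// i.e. how far the local branch has moved past the mirror remote.
// It returns 0 on any git error, so callers can treat failures as
// "nothing to push" — matching how the mirror loop skips repos.
func commitsAhead(repoDir, base, head string) int {
	cmd := exec.Command("git", "rev-list", "--count", base+".."+head)
	cmd.Dir = repoDir
	out, err := cmd.Output()
	if err != nil {
		return 0
	}
	n, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		return 0
	}
	return n
}

func main() {
	// Comparing a ref against itself is always 0 commits ahead.
	fmt.Println(commitsAhead(".", "HEAD", "HEAD"))
}
```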


@@ -7,16 +7,21 @@ import (
 "crypto/rand"
 "encoding/hex"
 "encoding/json"
-"path/filepath"
-"strings"
 "time"
-coreio "forge.lthn.ai/core/go-io"
-coreerr "forge.lthn.ai/core/go-log"
+core "dappco.re/go/core"
+coreio "dappco.re/go/io"
+coreerr "dappco.re/go/log"
+coremcp "dappco.re/go/mcp/pkg/mcp"
 "github.com/modelcontextprotocol/go-sdk/mcp"
 )
 // Plan represents an implementation plan for agent work.
+//
+// plan := Plan{
+// Title: "Add notifications",
+// Status: "draft",
+// }
 type Plan struct {
 ID string `json:"id"`
 Title string `json:"title"`
@@ -32,6 +37,8 @@ type Plan struct {
 }
 // Phase represents a phase within an implementation plan.
+//
+// phase := Phase{Name: "Implementation", Status: "pending"}
 type Phase struct {
 Number int `json:"number"`
 Name string `json:"name"`
@@ -39,11 +46,23 @@ type Phase struct {
 Criteria []string `json:"criteria,omitempty"`
 Tests int `json:"tests,omitempty"`
 Notes string `json:"notes,omitempty"`
+Checkpoints []Checkpoint `json:"checkpoints,omitempty"`
+}
+// Checkpoint records phase progress or completion details.
+//
+// cp := Checkpoint{Notes: "Implemented transport hooks", Done: true}
+type Checkpoint struct {
+Notes string `json:"notes,omitempty"`
+Done bool `json:"done,omitempty"`
+CreatedAt time.Time `json:"created_at"`
 }
 // --- Input/Output types ---
 // PlanCreateInput is the input for agentic_plan_create.
+//
+// input := PlanCreateInput{Title: "Add notifications", Objective: "Broadcast MCP events"}
 type PlanCreateInput struct {
 Title string `json:"title"`
 Objective string `json:"objective"`
@@ -54,6 +73,8 @@ type PlanCreateInput struct {
 }
 // PlanCreateOutput is the output for agentic_plan_create.
+//
+// // out.Success == true, out.ID != ""
 type PlanCreateOutput struct {
 Success bool `json:"success"`
 ID string `json:"id"`
@@ -61,17 +82,23 @@ type PlanCreateOutput struct {
 }
 // PlanReadInput is the input for agentic_plan_read.
+//
+// input := PlanReadInput{ID: "add-notifications"}
 type PlanReadInput struct {
 ID string `json:"id"`
 }
 // PlanReadOutput is the output for agentic_plan_read.
+//
+// // out.Plan.Title == "Add notifications"
 type PlanReadOutput struct {
 Success bool `json:"success"`
 Plan Plan `json:"plan"`
 }
 // PlanUpdateInput is the input for agentic_plan_update.
+//
+// input := PlanUpdateInput{ID: "add-notifications", Status: "ready"}
 type PlanUpdateInput struct {
 ID string `json:"id"`
 Status string `json:"status,omitempty"`
@@ -83,62 +110,102 @@ type PlanUpdateInput struct {
 }
 // PlanUpdateOutput is the output for agentic_plan_update.
+//
+// // out.Plan.Status == "ready"
 type PlanUpdateOutput struct {
 Success bool `json:"success"`
 Plan Plan `json:"plan"`
 }
 // PlanDeleteInput is the input for agentic_plan_delete.
+//
+// input := PlanDeleteInput{ID: "add-notifications"}
 type PlanDeleteInput struct {
 ID string `json:"id"`
 }
 // PlanDeleteOutput is the output for agentic_plan_delete.
+//
+// // out.Deleted == "add-notifications"
 type PlanDeleteOutput struct {
 Success bool `json:"success"`
 Deleted string `json:"deleted"`
 }
 // PlanListInput is the input for agentic_plan_list.
+//
+// input := PlanListInput{Status: "draft"}
 type PlanListInput struct {
 Status string `json:"status,omitempty"`
 Repo string `json:"repo,omitempty"`
 }
 // PlanListOutput is the output for agentic_plan_list.
+//
+// // len(out.Plans) >= 1
 type PlanListOutput struct {
 Success bool `json:"success"`
 Count int `json:"count"`
 Plans []Plan `json:"plans"`
 }
+// PlanCheckpointInput is the input for agentic_plan_checkpoint.
+//
+// input := PlanCheckpointInput{ID: "add-notifications", Phase: 1, Done: true}
+type PlanCheckpointInput struct {
+ID string `json:"id"`
+Phase int `json:"phase"`
+Notes string `json:"notes,omitempty"`
+Done bool `json:"done,omitempty"`
+}
+// PlanCheckpointOutput is the output for agentic_plan_checkpoint.
+//
+// // out.Plan.Phases[0].Status == "done"
+type PlanCheckpointOutput struct {
+Success bool `json:"success"`
+Plan Plan `json:"plan"`
+}
 // --- Registration ---
-func (s *PrepSubsystem) registerPlanTools(server *mcp.Server) {
-mcp.AddTool(server, &mcp.Tool{
+func (s *PrepSubsystem) registerPlanTools(svc *coremcp.Service) {
+server := svc.Server()
+coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
 Name: "agentic_plan_create",
 Description: "Create an implementation plan. Plans track phased work with acceptance criteria, status lifecycle (draft → ready → in_progress → needs_verification → verified → approved), and per-phase progress.",
 }, s.planCreate)
-mcp.AddTool(server, &mcp.Tool{
+coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
 Name: "agentic_plan_read",
 Description: "Read an implementation plan by ID. Returns the full plan with all phases, criteria, and status.",
 }, s.planRead)
-mcp.AddTool(server, &mcp.Tool{
+// agentic_plan_status is kept as a user-facing alias for the read tool.
+coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
+Name: "agentic_plan_status",
+Description: "Get the current status of an implementation plan by ID. Returns the full plan with all phases, criteria, and status.",
+}, s.planRead)
+coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
 Name: "agentic_plan_update",
 Description: "Update an implementation plan. Supports partial updates — only provided fields are changed. Use this to advance status, update phases, or add notes.",
 }, s.planUpdate)
-mcp.AddTool(server, &mcp.Tool{
+coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
 Name: "agentic_plan_delete",
 Description: "Delete an implementation plan by ID. Permanently removes the plan file.",
 }, s.planDelete)
-mcp.AddTool(server, &mcp.Tool{
+coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
 Name: "agentic_plan_list",
 Description: "List implementation plans. Supports filtering by status (draft, ready, in_progress, etc.) and repo.",
 }, s.planList)
+coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
+Name: "agentic_plan_checkpoint",
+Description: "Record a checkpoint for a plan phase and optionally mark the phase done.",
+}, s.planCheckpoint)
 }
 // --- Handlers ---
@@ -281,11 +348,11 @@ func (s *PrepSubsystem) planList(_ context.Context, _ *mcp.CallToolRequest, inpu
 var plans []Plan
 for _, entry := range entries {
-if entry.IsDir() || !strings.HasSuffix(entry.Name(), ".json") {
+if entry.IsDir() || !core.HasSuffix(entry.Name(), ".json") {
 continue
 }
-id := strings.TrimSuffix(entry.Name(), ".json")
+id := core.TrimSuffix(entry.Name(), ".json")
 plan, err := readPlan(dir, id)
 if err != nil {
 continue
@@ -309,44 +376,86 @@
 }, nil
 }
+func (s *PrepSubsystem) planCheckpoint(_ context.Context, _ *mcp.CallToolRequest, input PlanCheckpointInput) (*mcp.CallToolResult, PlanCheckpointOutput, error) {
+if input.ID == "" {
+return nil, PlanCheckpointOutput{}, coreerr.E("planCheckpoint", "id is required", nil)
+}
+if input.Phase <= 0 {
+return nil, PlanCheckpointOutput{}, coreerr.E("planCheckpoint", "phase must be greater than zero", nil)
+}
+if input.Notes == "" && !input.Done {
+return nil, PlanCheckpointOutput{}, coreerr.E("planCheckpoint", "notes or done is required", nil)
+}
+plan, err := readPlan(s.plansDir(), input.ID)
+if err != nil {
+return nil, PlanCheckpointOutput{}, err
+}
+phaseIndex := input.Phase - 1
+if phaseIndex >= len(plan.Phases) {
+return nil, PlanCheckpointOutput{}, coreerr.E("planCheckpoint", "phase not found", nil)
+}
+phase := &plan.Phases[phaseIndex]
+phase.Checkpoints = append(phase.Checkpoints, Checkpoint{
+Notes: input.Notes,
+Done: input.Done,
+CreatedAt: time.Now(),
+})
+if input.Done {
+phase.Status = "done"
+}
+plan.UpdatedAt = time.Now()
+if _, err := writePlan(s.plansDir(), plan); err != nil {
+return nil, PlanCheckpointOutput{}, coreerr.E("planCheckpoint", "failed to write plan", err)
+}
+return nil, PlanCheckpointOutput{
+Success: true,
+Plan: *plan,
+}, nil
+}
 // --- Helpers ---
 func (s *PrepSubsystem) plansDir() string {
-return filepath.Join(s.codePath, ".core", "plans")
+return core.Path(s.codePath, ".core", "plans")
 }
 func planPath(dir, id string) string {
-return filepath.Join(dir, id+".json")
+return core.Path(dir, id+".json")
 }
 func generatePlanID(title string) string {
-slug := strings.Map(func(r rune) rune {
-if r >= 'a' && r <= 'z' || r >= '0' && r <= '9' || r == '-' {
-return r
-}
-if r >= 'A' && r <= 'Z' {
-return r + 32
-}
-if r == ' ' {
-return '-'
-}
-return -1
-}, title)
+b := core.NewBuilder()
+b.Grow(len(title))
+for _, r := range title {
+switch {
+case r >= 'a' && r <= 'z', r >= '0' && r <= '9', r == '-':
+b.WriteRune(r)
+case r >= 'A' && r <= 'Z':
+b.WriteRune(r + 32)
+case r == ' ':
+b.WriteByte('-')
+}
+}
+slug := b.String()
-// Trim consecutive dashes and cap length
+// Collapse consecutive dashes and cap length
-for strings.Contains(slug, "--") {
-slug = strings.ReplaceAll(slug, "--", "-")
+for core.Contains(slug, "--") {
+slug = core.Replace(slug, "--", "-")
 }
-slug = strings.Trim(slug, "-")
+slug = trimDashes(slug)
 if len(slug) > 30 {
-slug = slug[:30]
+slug = trimDashes(slug[:30])
 }
-slug = strings.TrimRight(slug, "-")
 // Append short random suffix for uniqueness
-b := make([]byte, 3)
-rand.Read(b)
-return slug + "-" + hex.EncodeToString(b)
+rnd := make([]byte, 3)
+rand.Read(rnd)
+return slug + "-" + hex.EncodeToString(rnd)
 }
 func readPlan(dir, id string) (*Plan, error) {
@@ -356,8 +465,8 @@ func readPlan(dir, id string) (*Plan, error) {
 }
 var plan Plan
-if err := json.Unmarshal([]byte(data), &plan); err != nil {
-return nil, coreerr.E("readPlan", "failed to parse plan "+id, err)
+if r := core.JSONUnmarshal([]byte(data), &plan); !r.OK {
+return nil, coreerr.E("readPlan", "failed to parse plan "+id, nil)
 }
 return &plan, nil
 }
@@ -373,7 +482,7 @@ func writePlan(dir string, plan *Plan) (string, error) {
 return "", err
 }
-return path, coreio.Local.Write(path, string(data))
+return path, writeAtomic(path, string(data))
 }
 func validPlanStatus(status string) bool {


@@ -0,0 +1,62 @@
// SPDX-License-Identifier: EUPL-1.2
package agentic
import (
"context"
"testing"
"time"
)
func TestPlanCheckpoint_Good_AppendsCheckpointAndMarksPhaseDone(t *testing.T) {
root := t.TempDir()
sub := &PrepSubsystem{codePath: root}
plan := &Plan{
ID: "plan-1",
Title: "Test plan",
Status: "in_progress",
Objective: "Verify checkpoints",
Phases: []Phase{
{
Number: 1,
Name: "Phase 1",
Status: "in_progress",
},
},
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
}
if _, err := writePlan(sub.plansDir(), plan); err != nil {
t.Fatalf("writePlan failed: %v", err)
}
_, out, err := sub.planCheckpoint(context.Background(), nil, PlanCheckpointInput{
ID: plan.ID,
Phase: 1,
Notes: "Implementation verified",
Done: true,
})
if err != nil {
t.Fatalf("planCheckpoint failed: %v", err)
}
if !out.Success {
t.Fatal("expected checkpoint output success")
}
if out.Plan.Phases[0].Status != "done" {
t.Fatalf("expected phase status done, got %q", out.Plan.Phases[0].Status)
}
if len(out.Plan.Phases[0].Checkpoints) != 1 {
t.Fatalf("expected 1 checkpoint, got %d", len(out.Plan.Phases[0].Checkpoints))
}
if out.Plan.Phases[0].Checkpoints[0].Notes != "Implementation verified" {
t.Fatalf("unexpected checkpoint notes: %q", out.Plan.Phases[0].Checkpoints[0].Notes)
}
if !out.Plan.Phases[0].Checkpoints[0].Done {
t.Fatal("expected checkpoint to be marked done")
}
if out.Plan.Phases[0].Checkpoints[0].CreatedAt.IsZero() {
t.Fatal("expected checkpoint timestamp")
}
}


@ -6,21 +6,25 @@ import (
"bytes" "bytes"
"context" "context"
"encoding/json" "encoding/json"
"fmt"
"net/http" "net/http"
"os/exec" "os/exec"
"path/filepath"
"strings"
coreio "forge.lthn.ai/core/go-io" core "dappco.re/go/core"
coreerr "forge.lthn.ai/core/go-log" coreio "dappco.re/go/io"
coreerr "dappco.re/go/log"
coremcp "dappco.re/go/mcp/pkg/mcp"
"github.com/modelcontextprotocol/go-sdk/mcp" "github.com/modelcontextprotocol/go-sdk/mcp"
) )
// --- agentic_create_pr --- // --- agentic_create_pr ---
// CreatePRInput is the input for agentic_create_pr. // PRInput is the input for agentic_create_pr and agentic_pr.
type CreatePRInput struct { //
// input := PRInput{
// Workspace: "mcp-1773581873",
// Base: "main",
// }
type PRInput struct {
Workspace string `json:"workspace"` // workspace name (e.g. "mcp-1773581873") Workspace string `json:"workspace"` // workspace name (e.g. "mcp-1773581873")
Title string `json:"title,omitempty"` // PR title (default: task description) Title string `json:"title,omitempty"` // PR title (default: task description)
Body string `json:"body,omitempty"` // PR body (default: auto-generated) Body string `json:"body,omitempty"` // PR body (default: auto-generated)
@ -28,7 +32,12 @@ type CreatePRInput struct {
DryRun bool `json:"dry_run,omitempty"` // preview without creating DryRun bool `json:"dry_run,omitempty"` // preview without creating
} }
// CreatePRInput is kept as a compatibility alias for older callers.
type CreatePRInput = PRInput
// CreatePROutput is the output for agentic_create_pr. // CreatePROutput is the output for agentic_create_pr.
//
// // out.Success == true, out.Branch == "agent/issue-123-fix", out.Pushed == true
type CreatePROutput struct { type CreatePROutput struct {
Success bool `json:"success"` Success bool `json:"success"`
PRURL string `json:"pr_url,omitempty"` PRURL string `json:"pr_url,omitempty"`
@ -39,14 +48,15 @@ type CreatePROutput struct {
Pushed bool `json:"pushed"` Pushed bool `json:"pushed"`
} }
func (s *PrepSubsystem) registerCreatePRTool(server *mcp.Server) { func (s *PrepSubsystem) registerCreatePRTool(svc *coremcp.Service) {
mcp.AddTool(server, &mcp.Tool{ server := svc.Server()
coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
Name: "agentic_create_pr", Name: "agentic_create_pr",
Description: "Create a pull request from an agent workspace. Pushes the branch to Forge and opens a PR. Links to the source issue if one was tracked.", Description: "Create a pull request from an agent workspace. Pushes the branch to Forge and opens a PR. Links to the source issue if one was tracked.",
}, s.createPR) }, s.createPR)
} }
func (s *PrepSubsystem) createPR(ctx context.Context, _ *mcp.CallToolRequest, input CreatePRInput) (*mcp.CallToolResult, CreatePROutput, error) { func (s *PrepSubsystem) createPR(ctx context.Context, _ *mcp.CallToolRequest, input PRInput) (*mcp.CallToolResult, CreatePROutput, error) {
if input.Workspace == "" { if input.Workspace == "" {
return nil, CreatePROutput{}, coreerr.E("createPR", "workspace is required", nil) return nil, CreatePROutput{}, coreerr.E("createPR", "workspace is required", nil)
} }
@ -54,8 +64,8 @@ func (s *PrepSubsystem) createPR(ctx context.Context, _ *mcp.CallToolRequest, in
return nil, CreatePROutput{}, coreerr.E("createPR", "no Forge token configured", nil) return nil, CreatePROutput{}, coreerr.E("createPR", "no Forge token configured", nil)
} }
wsDir := filepath.Join(s.workspaceRoot(), input.Workspace) wsDir := core.Path(s.workspaceRoot(), input.Workspace)
srcDir := filepath.Join(wsDir, "src") srcDir := core.Path(wsDir, "src")
if _, err := coreio.Local.List(srcDir); err != nil { if _, err := coreio.Local.List(srcDir); err != nil {
return nil, CreatePROutput{}, coreerr.E("createPR", "workspace not found: "+input.Workspace, nil) return nil, CreatePROutput{}, coreerr.E("createPR", "workspace not found: "+input.Workspace, nil)
@ -75,7 +85,7 @@ func (s *PrepSubsystem) createPR(ctx context.Context, _ *mcp.CallToolRequest, in
if err != nil { if err != nil {
return nil, CreatePROutput{}, coreerr.E("createPR", "failed to detect branch", err) return nil, CreatePROutput{}, coreerr.E("createPR", "failed to detect branch", err)
} }
st.Branch = strings.TrimSpace(string(out)) st.Branch = core.Trim(string(out))
} }
org := st.Org org := st.Org
@ -93,7 +103,7 @@ func (s *PrepSubsystem) createPR(ctx context.Context, _ *mcp.CallToolRequest, in
title = st.Task title = st.Task
} }
if title == "" { if title == "" {
title = fmt.Sprintf("Agent work on %s", st.Branch) title = core.Sprintf("Agent work on %s", st.Branch)
} }
// Build PR body // Build PR body
@ -127,11 +137,11 @@ func (s *PrepSubsystem) createPR(ctx context.Context, _ *mcp.CallToolRequest, in
// Update status with PR URL // Update status with PR URL
st.PRURL = prURL st.PRURL = prURL
writeStatus(wsDir, st) s.saveStatus(wsDir, st)
// Comment on issue if tracked // Comment on issue if tracked
if st.Issue > 0 { if st.Issue > 0 {
comment := fmt.Sprintf("Pull request created: %s", prURL) comment := core.Sprintf("Pull request created: %s", prURL)
s.commentOnIssue(ctx, org, st.Repo, st.Issue, comment) s.commentOnIssue(ctx, org, st.Repo, st.Issue, comment)
} }
@@ -147,31 +157,37 @@ func (s *PrepSubsystem) createPR(ctx context.Context, _ *mcp.CallToolRequest, in
 }
 func (s *PrepSubsystem) buildPRBody(st *WorkspaceStatus) string {
-	var b strings.Builder
+	b := core.NewBuilder()
 	b.WriteString("## Summary\n\n")
 	if st.Task != "" {
 		b.WriteString(st.Task)
 		b.WriteString("\n\n")
 	}
 	if st.Issue > 0 {
-		b.WriteString(fmt.Sprintf("Closes #%d\n\n", st.Issue))
+		b.WriteString(core.Sprintf("Closes #%d\n\n", st.Issue))
 	}
-	b.WriteString(fmt.Sprintf("**Agent:** %s\n", st.Agent))
-	b.WriteString(fmt.Sprintf("**Runs:** %d\n", st.Runs))
+	b.WriteString(core.Sprintf("**Agent:** %s\n", st.Agent))
+	b.WriteString(core.Sprintf("**Runs:** %d\n", st.Runs))
 	b.WriteString("\n---\n*Created by agentic dispatch*\n")
 	return b.String()
 }
 func (s *PrepSubsystem) forgeCreatePR(ctx context.Context, org, repo, head, base, title, body string) (string, int, error) {
-	payload, _ := json.Marshal(map[string]any{
+	payload, err := json.Marshal(map[string]any{
 		"title": title,
 		"body": body,
 		"head": head,
 		"base": base,
 	})
+	if err != nil {
+		return "", 0, coreerr.E("forgeCreatePR", "failed to marshal PR payload", err)
+	}
-	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/pulls", s.forgeURL, org, repo)
-	req, _ := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(payload))
+	url := core.Sprintf("%s/api/v1/repos/%s/%s/pulls", s.forgeURL, org, repo)
+	req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(payload))
+	if err != nil {
+		return "", 0, coreerr.E("forgeCreatePR", "failed to build PR request", err)
+	}
 	req.Header.Set("Content-Type", "application/json")
 	req.Header.Set("Authorization", "token "+s.forgeToken)
@@ -183,25 +199,35 @@ func (s *PrepSubsystem) forgeCreatePR(ctx context.Context, org, repo, head, base
 	if resp.StatusCode != 201 {
 		var errBody map[string]any
-		json.NewDecoder(resp.Body).Decode(&errBody)
+		if err := json.NewDecoder(resp.Body).Decode(&errBody); err != nil {
+			return "", 0, coreerr.E("forgeCreatePR", core.Sprintf("HTTP %d with unreadable error body", resp.StatusCode), err)
+		}
 		msg, _ := errBody["message"].(string)
-		return "", 0, coreerr.E("forgeCreatePR", fmt.Sprintf("HTTP %d: %s", resp.StatusCode, msg), nil)
+		return "", 0, coreerr.E("forgeCreatePR", core.Sprintf("HTTP %d: %s", resp.StatusCode, msg), nil)
 	}
 	var pr struct {
 		Number int `json:"number"`
 		HTMLURL string `json:"html_url"`
 	}
-	json.NewDecoder(resp.Body).Decode(&pr)
+	if err := json.NewDecoder(resp.Body).Decode(&pr); err != nil {
+		return "", 0, coreerr.E("forgeCreatePR", "failed to decode PR response", err)
+	}
 	return pr.HTMLURL, pr.Number, nil
 }
 func (s *PrepSubsystem) commentOnIssue(ctx context.Context, org, repo string, issue int, comment string) {
-	payload, _ := json.Marshal(map[string]string{"body": comment})
+	payload, err := json.Marshal(map[string]string{"body": comment})
+	if err != nil {
+		return
+	}
-	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/issues/%d/comments", s.forgeURL, org, repo, issue)
-	req, _ := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(payload))
+	url := core.Sprintf("%s/api/v1/repos/%s/%s/issues/%d/comments", s.forgeURL, org, repo, issue)
+	req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(payload))
+	if err != nil {
+		return
+	}
 	req.Header.Set("Content-Type", "application/json")
 	req.Header.Set("Authorization", "token "+s.forgeToken)
@@ -215,6 +241,8 @@ func (s *PrepSubsystem) commentOnIssue(ctx context.Context, org, repo string, is
 // --- agentic_list_prs ---
 // ListPRsInput is the input for agentic_list_prs.
+//
+// input := ListPRsInput{Org: "core", Repo: "go-io", State: "open", Limit: 20}
 type ListPRsInput struct {
 	Org string `json:"org,omitempty"` // forge org (default "core")
 	Repo string `json:"repo,omitempty"` // specific repo, or empty for all
@@ -223,6 +251,8 @@ type ListPRsInput struct {
 }
 // ListPRsOutput is the output for agentic_list_prs.
+//
+// // out.Success == true, len(out.PRs) <= 20
 type ListPRsOutput struct {
 	Success bool `json:"success"`
 	Count int `json:"count"`
@@ -230,6 +260,8 @@ type ListPRsOutput struct {
 }
 // PRInfo represents a pull request.
+//
+// // pr.Number == 42, pr.Branch == "agent/issue-42-fix"
 type PRInfo struct {
 	Repo string `json:"repo"`
 	Number int `json:"number"`
@@ -243,8 +275,9 @@ type PRInfo struct {
 	URL string `json:"url"`
 }
-func (s *PrepSubsystem) registerListPRsTool(server *mcp.Server) {
-	mcp.AddTool(server, &mcp.Tool{
+func (s *PrepSubsystem) registerListPRsTool(svc *coremcp.Service) {
+	server := svc.Server()
+	coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
 		Name: "agentic_list_prs",
 		Description: "List pull requests across Forge repos. Filter by org, repo, and state (open/closed/all).",
 	}, s.listPRs)
@@ -302,7 +335,7 @@ func (s *PrepSubsystem) listPRs(ctx context.Context, _ *mcp.CallToolRequest, inp
 }
 func (s *PrepSubsystem) listRepoPRs(ctx context.Context, org, repo, state string) ([]PRInfo, error) {
-	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/pulls?state=%s&limit=10",
+	url := core.Sprintf("%s/api/v1/repos/%s/%s/pulls?state=%s&limit=10",
 		s.forgeURL, org, repo, state)
 	req, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
 	req.Header.Set("Authorization", "token "+s.forgeToken)
@@ -313,7 +346,7 @@ func (s *PrepSubsystem) listRepoPRs(ctx context.Context, org, repo, state string
 	}
 	defer resp.Body.Close()
 	if resp.StatusCode != 200 {
-		return nil, coreerr.E("listRepoPRs", fmt.Sprintf("HTTP %d for "+repo, resp.StatusCode), nil)
+		return nil, coreerr.E("listRepoPRs", core.Sprintf("HTTP %d for "+repo, resp.StatusCode), nil)
 	}
 	var prs []struct {


@@ -0,0 +1,28 @@
+// SPDX-License-Identifier: EUPL-1.2
+package agentic
+import (
+	"context"
+	"net/http"
+	"net/http/httptest"
+	"testing"
+)
+func TestForgeCreatePR_Bad_InvalidJSONResponse(t *testing.T) {
+	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		w.WriteHeader(http.StatusCreated)
+		_, _ = w.Write([]byte("{not-json"))
+	}))
+	defer srv.Close()
+	s := &PrepSubsystem{
+		forgeURL: srv.URL,
+		client:   srv.Client(),
+	}
+	_, _, err := s.forgeCreatePR(context.Background(), "core", "demo", "agent/test", "main", "Fix bug", "body")
+	if err == nil {
+		t.Fatal("expected malformed PR response to fail")
+	}
+}


@@ -8,17 +8,14 @@ import (
 	"context"
 	"encoding/base64"
 	"encoding/json"
-	"fmt"
-	"io"
 	"net/http"
-	"os"
 	"os/exec"
-	"path/filepath"
-	"strings"
 	"time"
-	coreio "forge.lthn.ai/core/go-io"
-	coreerr "forge.lthn.ai/core/go-log"
+	core "dappco.re/go/core"
+	coreio "dappco.re/go/io"
+	coreerr "dappco.re/go/log"
+	coremcp "dappco.re/go/mcp/pkg/mcp"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 	"gopkg.in/yaml.v3"
 )
@@ -32,21 +29,30 @@ type PrepSubsystem struct {
 	specsPath string
 	codePath string
 	client *http.Client
+	notifier coremcp.Notifier
 }
+var (
+	_ coremcp.Subsystem             = (*PrepSubsystem)(nil)
+	_ coremcp.SubsystemWithShutdown = (*PrepSubsystem)(nil)
+	_ coremcp.SubsystemWithNotifier = (*PrepSubsystem)(nil)
+)
 // NewPrep creates an agentic subsystem.
+//
+// prep := NewPrep()
 func NewPrep() *PrepSubsystem {
-	home, _ := os.UserHomeDir()
-	forgeToken := os.Getenv("FORGE_TOKEN")
+	home := core.Env("HOME")
+	forgeToken := core.Env("FORGE_TOKEN")
 	if forgeToken == "" {
-		forgeToken = os.Getenv("GITEA_TOKEN")
+		forgeToken = core.Env("GITEA_TOKEN")
 	}
-	brainKey := os.Getenv("CORE_BRAIN_KEY")
+	brainKey := core.Env("CORE_BRAIN_KEY")
 	if brainKey == "" {
-		if data, err := coreio.Local.Read(filepath.Join(home, ".claude", "brain.key")); err == nil {
-			brainKey = strings.TrimSpace(data)
+		if data, err := coreio.Local.Read(core.Path(home, ".claude", "brain.key")); err == nil {
+			brainKey = core.Trim(data)
 		}
 	}
@@ -55,42 +61,95 @@ func NewPrep() *PrepSubsystem {
 		forgeToken: forgeToken,
 		brainURL: envOr("CORE_BRAIN_URL", "https://api.lthn.sh"),
 		brainKey: brainKey,
-		specsPath: envOr("SPECS_PATH", filepath.Join(home, "Code", "host-uk", "specs")),
-		codePath: envOr("CODE_PATH", filepath.Join(home, "Code")),
+		specsPath: envOr("SPECS_PATH", core.Path(home, "Code", "host-uk", "specs")),
+		codePath: envOr("CODE_PATH", core.Path(home, "Code")),
 		client: &http.Client{Timeout: 30 * time.Second},
 	}
 }
+// SetNotifier wires the shared MCP notifier into the agentic subsystem.
+func (s *PrepSubsystem) SetNotifier(n coremcp.Notifier) {
+	s.notifier = n
+}
+// emitChannel pushes an agentic event through the shared notifier.
+func (s *PrepSubsystem) emitChannel(ctx context.Context, channel string, data any) {
+	if s.notifier != nil {
+		s.notifier.ChannelSend(ctx, channel, data)
+	}
+}
 func envOr(key, fallback string) string {
-	if v := os.Getenv(key); v != "" {
+	if v := core.Env(key); v != "" {
 		return v
 	}
 	return fallback
 }
+func sanitizeRepoPathSegment(value, field string, allowSubdirs bool) (string, error) {
+	if core.Trim(value) != value {
+		return "", coreerr.E("prepWorkspace", field+" contains whitespace", nil)
+	}
+	if value == "" {
+		return "", nil
+	}
+	if core.Contains(value, "\\") {
+		return "", coreerr.E("prepWorkspace", field+" contains invalid path separator", nil)
+	}
+	parts := core.Split(value, "/")
+	if !allowSubdirs && len(parts) != 1 {
+		return "", coreerr.E("prepWorkspace", field+" may not contain subdirectories", nil)
+	}
+	for _, part := range parts {
+		if part == "" || part == "." || part == ".." {
+			return "", coreerr.E("prepWorkspace", field+" contains invalid path segment", nil)
+		}
+		for _, r := range part {
+			switch {
+			case r >= 'a' && r <= 'z',
+				r >= 'A' && r <= 'Z',
+				r >= '0' && r <= '9',
+				r == '-' || r == '_' || r == '.':
+				continue
+			default:
+				return "", coreerr.E("prepWorkspace", field+" contains invalid characters", nil)
+			}
+		}
+	}
+	return value, nil
+}
 // Name implements mcp.Subsystem.
 func (s *PrepSubsystem) Name() string { return "agentic" }
 // RegisterTools implements mcp.Subsystem.
-func (s *PrepSubsystem) RegisterTools(server *mcp.Server) {
-	mcp.AddTool(server, &mcp.Tool{
+func (s *PrepSubsystem) RegisterTools(svc *coremcp.Service) {
+	server := svc.Server()
+	coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
 		Name: "agentic_prep_workspace",
 		Description: "Prepare a sandboxed agent workspace with TODO.md, CLAUDE.md, CONTEXT.md, CONSUMERS.md, RECENT.md, and a git clone of the target repo in src/.",
 	}, s.prepWorkspace)
-	s.registerDispatchTool(server)
-	s.registerStatusTool(server)
-	s.registerResumeTool(server)
-	s.registerCreatePRTool(server)
-	s.registerListPRsTool(server)
-	s.registerEpicTool(server)
+	s.registerDispatchTool(svc)
+	s.registerIssueTools(svc)
+	s.registerStatusTool(svc)
+	s.registerResumeTool(svc)
+	s.registerCreatePRTool(svc)
+	s.registerListPRsTool(svc)
+	s.registerEpicTool(svc)
+	s.registerWatchTool(svc)
+	s.registerReviewQueueTool(svc)
+	s.registerMirrorTool(svc)
-	mcp.AddTool(server, &mcp.Tool{
+	coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
 		Name: "agentic_scan",
 		Description: "Scan Forge repos for open issues with actionable labels (agentic, help-wanted, bug).",
 	}, s.scan)
-	s.registerPlanTools(server)
+	s.registerPlanTools(svc)
 }
 // Shutdown implements mcp.SubsystemWithShutdown.
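The `sanitizeRepoPathSegment` helper added in this hunk leans on the internal `core.*` string helpers. Assuming `core.Trim`, `core.Contains`, and `core.Split` behave like their `strings`-package counterparts, the same validation rules can be sketched with only the standard library; `validateSegment` is a hypothetical stand-in, not the subsystem's function:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// validateSegment mirrors the sanitizeRepoPathSegment rules: no whitespace
// padding, no backslashes, no empty / "." / ".." segments, and only
// [a-zA-Z0-9._-] characters; "/" is permitted only when allowSubdirs is set.
func validateSegment(value string, allowSubdirs bool) error {
	if strings.TrimSpace(value) != value {
		return errors.New("contains whitespace")
	}
	if value == "" {
		return nil
	}
	if strings.Contains(value, "\\") {
		return errors.New("invalid path separator")
	}
	parts := strings.Split(value, "/")
	if !allowSubdirs && len(parts) != 1 {
		return errors.New("subdirectories not allowed")
	}
	for _, part := range parts {
		if part == "" || part == "." || part == ".." {
			return errors.New("invalid path segment")
		}
		for _, r := range part {
			ok := r >= 'a' && r <= 'z' || r >= 'A' && r <= 'Z' ||
				r >= '0' && r <= '9' || r == '-' || r == '_' || r == '.'
			if !ok {
				return errors.New("invalid characters")
			}
		}
	}
	return nil
}

func main() {
	fmt.Println(validateSegment("go-io", false))          // <nil>
	fmt.Println(validateSegment("..", false))             // invalid path segment
	fmt.Println(validateSegment("a/b", false))            // subdirectories not allowed
	fmt.Println(validateSegment("prompts/persona", true)) // <nil>
}
```

Rejecting "." and ".." per segment is what blocks path traversal before the value is joined into a filesystem path.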
@@ -98,7 +157,7 @@ func (s *PrepSubsystem) Shutdown(_ context.Context) error { return nil }
 // workspaceRoot returns the base directory for agent workspaces.
 func (s *PrepSubsystem) workspaceRoot() string {
-	return filepath.Join(s.codePath, ".core", "workspace")
+	return core.Path(s.codePath, ".core", "workspace")
 }
 // --- Input/Output types ---
@@ -109,6 +168,7 @@ type PrepInput struct {
 	Org string `json:"org,omitempty"` // default "core"
 	Issue int `json:"issue,omitempty"` // Forge issue number
 	Task string `json:"task,omitempty"` // Task description (if no issue)
+	Branch string `json:"branch,omitempty"` // Override branch name
 	Template string `json:"template,omitempty"` // Prompt template: conventions, security, coding (default: coding)
 	PlanTemplate string `json:"plan_template,omitempty"` // Plan template slug: bug-fix, code-review, new-feature, refactor, feature-port
 	Variables map[string]string `json:"variables,omitempty"` // Template variable substitution
@@ -119,6 +179,7 @@ type PrepInput struct {
 type PrepOutput struct {
 	Success bool `json:"success"`
 	WorkspaceDir string `json:"workspace_dir"`
+	Branch string `json:"branch,omitempty"`
 	WikiPages int `json:"wiki_pages"`
 	SpecFiles int `json:"spec_files"`
 	Memories int `json:"memories"`
@@ -131,6 +192,27 @@ func (s *PrepSubsystem) prepWorkspace(ctx context.Context, _ *mcp.CallToolReques
 	if input.Repo == "" {
 		return nil, PrepOutput{}, coreerr.E("prepWorkspace", "repo is required", nil)
 	}
+	repo, err := sanitizeRepoPathSegment(input.Repo, "repo", false)
+	if err != nil {
+		return nil, PrepOutput{}, err
+	}
+	input.Repo = repo
+	planTemplate, err := sanitizeRepoPathSegment(input.PlanTemplate, "plan_template", false)
+	if err != nil {
+		return nil, PrepOutput{}, err
+	}
+	input.PlanTemplate = planTemplate
+	persona := input.Persona
+	if persona != "" {
+		persona, err = sanitizeRepoPathSegment(persona, "persona", true)
+		if err != nil {
+			return nil, PrepOutput{}, err
+		}
+	}
 	if input.Org == "" {
 		input.Org = "core"
 	}
@@ -140,8 +222,9 @@ func (s *PrepSubsystem) prepWorkspace(ctx context.Context, _ *mcp.CallToolReques
 	// Workspace root: .core/workspace/{repo}-{timestamp}/
 	wsRoot := s.workspaceRoot()
-	wsName := fmt.Sprintf("%s-%d", input.Repo, time.Now().Unix())
-	wsDir := filepath.Join(wsRoot, wsName)
+	coreio.Local.EnsureDir(wsRoot)
+	wsName := core.Sprintf("%s-%d", input.Repo, time.Now().Unix())
+	wsDir := core.Path(wsRoot, wsName)
 	// Create workspace structure
 	// kb/ and specs/ will be created inside src/ after clone
@@ -149,57 +232,62 @@ func (s *PrepSubsystem) prepWorkspace(ctx context.Context, _ *mcp.CallToolReques
 	out := PrepOutput{WorkspaceDir: wsDir}
 	// Source repo path
-	repoPath := filepath.Join(s.codePath, "core", input.Repo)
+	repoPath := core.Path(s.codePath, "core", input.Repo)
 	// 1. Clone repo into src/ and create feature branch
-	srcDir := filepath.Join(wsDir, "src")
+	srcDir := core.Path(wsDir, "src")
 	cloneCmd := exec.CommandContext(ctx, "git", "clone", repoPath, srcDir)
-	cloneCmd.Run()
-	// Create feature branch
-	taskSlug := strings.Map(func(r rune) rune {
-		if r >= 'a' && r <= 'z' || r >= '0' && r <= '9' || r == '-' {
-			return r
-		}
-		if r >= 'A' && r <= 'Z' {
-			return r + 32 // lowercase
-		}
-		return '-'
-	}, input.Task)
-	if len(taskSlug) > 40 {
-		taskSlug = taskSlug[:40]
-	}
-	taskSlug = strings.Trim(taskSlug, "-")
-	branchName := fmt.Sprintf("agent/%s", taskSlug)
-	branchCmd := exec.CommandContext(ctx, "git", "checkout", "-b", branchName)
-	branchCmd.Dir = srcDir
-	branchCmd.Run()
+	if err := cloneCmd.Run(); err != nil {
+		return nil, PrepOutput{}, coreerr.E("prepWorkspace", "failed to clone repository", err)
+	}
+	// Create feature branch.
+	branchName := input.Branch
+	if branchName == "" {
+		taskSlug := branchSlug(input.Task)
+		if input.Issue > 0 {
+			issueSlug := branchSlug(input.Task)
+			branchName = core.Sprintf("agent/issue-%d", input.Issue)
+			if issueSlug != "" {
+				branchName += "-" + issueSlug
+			}
+		} else if taskSlug != "" {
+			branchName = core.Sprintf("agent/%s", taskSlug)
+		}
+	}
+	if branchName != "" {
+		branchCmd := exec.CommandContext(ctx, "git", "checkout", "-b", branchName)
+		branchCmd.Dir = srcDir
+		if err := branchCmd.Run(); err != nil {
+			return nil, PrepOutput{}, coreerr.E("prepWorkspace", "failed to create branch", err)
+		}
+		out.Branch = branchName
+	}
 	// Create context dirs inside src/
-	coreio.Local.EnsureDir(filepath.Join(srcDir, "kb"))
-	coreio.Local.EnsureDir(filepath.Join(srcDir, "specs"))
+	coreio.Local.EnsureDir(core.Path(srcDir, "kb"))
+	coreio.Local.EnsureDir(core.Path(srcDir, "specs"))
 	// Remote stays as local clone origin — agent cannot push to forge.
 	// Reviewer pulls changes from workspace and pushes after verification.
 	// 2. Copy CLAUDE.md and GEMINI.md to workspace
-	claudeMdPath := filepath.Join(repoPath, "CLAUDE.md")
+	claudeMdPath := core.Path(repoPath, "CLAUDE.md")
 	if data, err := coreio.Local.Read(claudeMdPath); err == nil {
-		coreio.Local.Write(filepath.Join(wsDir, "src", "CLAUDE.md"), data)
+		_ = writeAtomic(core.Path(wsDir, "src", "CLAUDE.md"), data)
 		out.ClaudeMd = true
 	}
 	// Copy GEMINI.md from core/agent (ethics framework for all agents)
-	agentGeminiMd := filepath.Join(s.codePath, "core", "agent", "GEMINI.md")
+	agentGeminiMd := core.Path(s.codePath, "core", "agent", "GEMINI.md")
 	if data, err := coreio.Local.Read(agentGeminiMd); err == nil {
-		coreio.Local.Write(filepath.Join(wsDir, "src", "GEMINI.md"), data)
+		_ = writeAtomic(core.Path(wsDir, "src", "GEMINI.md"), data)
 	}
 	// Copy persona if specified
-	if input.Persona != "" {
-		personaPath := filepath.Join(s.codePath, "core", "agent", "prompts", "personas", input.Persona+".md")
+	if persona != "" {
+		personaPath := core.Path(s.codePath, "core", "agent", "prompts", "personas", persona+".md")
 		if data, err := coreio.Local.Read(personaPath); err == nil {
-			coreio.Local.Write(filepath.Join(wsDir, "src", "PERSONA.md"), data)
+			_ = writeAtomic(core.Path(wsDir, "src", "PERSONA.md"), data)
 		}
 	}
@@ -207,9 +295,9 @@ func (s *PrepSubsystem) prepWorkspace(ctx context.Context, _ *mcp.CallToolReques
 	if input.Issue > 0 {
 		s.generateTodo(ctx, input.Org, input.Repo, input.Issue, wsDir)
 	} else if input.Task != "" {
-		todo := fmt.Sprintf("# TASK: %s\n\n**Repo:** %s/%s\n**Status:** ready\n\n## Objective\n\n%s\n",
+		todo := core.Sprintf("# TASK: %s\n\n**Repo:** %s/%s\n**Status:** ready\n\n## Objective\n\n%s\n",
 			input.Task, input.Org, input.Repo, input.Task)
-		coreio.Local.Write(filepath.Join(wsDir, "src", "TODO.md"), todo)
+		_ = writeAtomic(core.Path(wsDir, "src", "TODO.md"), todo)
 	}
 	// 4. Generate CONTEXT.md from OpenBrain
@@ -227,21 +315,82 @@ func (s *PrepSubsystem) prepWorkspace(ctx context.Context, _ *mcp.CallToolReques
 	// 8. Copy spec files into specs/
 	out.SpecFiles = s.copySpecs(wsDir)
-	// 9. Copy AX reference files into .core/reference/
-	s.copyReference(wsDir)
-	// 11. Write PLAN.md from template (if specified)
+	// 9. Write PLAN.md from template (if specified)
 	if input.PlanTemplate != "" {
 		s.writePlanFromTemplate(input.PlanTemplate, input.Variables, input.Task, wsDir)
 	}
-	// 11. Write prompt template
+	// 10. Write prompt template
 	s.writePromptTemplate(input.Template, wsDir)
 	out.Success = true
 	return nil, out, nil
 }
+// branchSlug converts a free-form string into a git-friendly branch suffix.
+func branchSlug(value string) string {
+	value = core.Lower(core.Trim(value))
+	if value == "" {
+		return ""
+	}
+	b := core.NewBuilder()
+	b.Grow(len(value))
+	lastDash := false
+	for _, r := range value {
+		switch {
+		case r >= 'a' && r <= 'z', r >= '0' && r <= '9':
+			b.WriteRune(r)
+			lastDash = false
+		case r == '-' || r == '_' || r == '.' || r == ' ':
+			if !lastDash {
+				b.WriteByte('-')
+				lastDash = true
+			}
+		default:
+			if !lastDash {
+				b.WriteByte('-')
+				lastDash = true
+			}
+		}
+	}
+	slug := trimDashes(b.String())
+	if len(slug) > 40 {
+		slug = trimDashes(slug[:40])
+	}
+	return slug
+}
+// sanitizeFilename replaces non-alphanumeric characters (except - _ .) with dashes.
+func sanitizeFilename(title string) string {
+	b := core.NewBuilder()
+	b.Grow(len(title))
+	for _, r := range title {
+		switch {
+		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z', r >= '0' && r <= '9',
+			r == '-', r == '_', r == '.':
+			b.WriteRune(r)
+		default:
+			b.WriteByte('-')
+		}
+	}
+	return b.String()
+}
+// trimDashes strips leading and trailing dash characters from a string.
+func trimDashes(s string) string {
+	start := 0
+	for start < len(s) && s[start] == '-' {
+		start++
+	}
+	end := len(s)
+	for end > start && s[end-1] == '-' {
+		end--
+	}
+	return s[start:end]
+}
 // --- Prompt templates ---
 func (s *PrepSubsystem) writePromptTemplate(template, wsDir string) {
@@ -309,7 +458,7 @@ Do NOT push. Commit only — a reviewer will verify and push.
 		prompt = "Read TODO.md and complete the task. Work in src/.\n"
 	}
-	coreio.Local.Write(filepath.Join(wsDir, "src", "PROMPT.md"), prompt)
+	_ = writeAtomic(core.Path(wsDir, "src", "PROMPT.md"), prompt)
 }
 // --- Plan template rendering ---
@@ -318,11 +467,11 @@ Do NOT push. Commit only — a reviewer will verify and push.
 // and writes PLAN.md into the workspace src/ directory.
 func (s *PrepSubsystem) writePlanFromTemplate(templateSlug string, variables map[string]string, task string, wsDir string) {
 	// Look for template in core/agent/prompts/templates/
-	templatePath := filepath.Join(s.codePath, "core", "agent", "prompts", "templates", templateSlug+".yaml")
+	templatePath := core.Path(s.codePath, "core", "agent", "prompts", "templates", templateSlug+".yaml")
 	content, err := coreio.Local.Read(templatePath)
 	if err != nil {
 		// Try .yml extension
-		templatePath = filepath.Join(s.codePath, "core", "agent", "prompts", "templates", templateSlug+".yml")
+		templatePath = core.Path(s.codePath, "core", "agent", "prompts", "templates", templateSlug+".yml")
 		content, err = coreio.Local.Read(templatePath)
 		if err != nil {
 			return // Template not found, skip silently
@@ -331,8 +480,8 @@ func (s *PrepSubsystem) writePlanFromTemplate(templateSlug string, variables map
 	// Substitute variables ({{variable_name}} → value)
 	for key, value := range variables {
-		content = strings.ReplaceAll(content, "{{"+key+"}}", value)
-		content = strings.ReplaceAll(content, "{{ "+key+" }}", value)
+		content = core.Replace(content, "{{"+key+"}}", value)
+		content = core.Replace(content, "{{ "+key+" }}", value)
 	}
 	// Parse the YAML to render as markdown
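The substitution loop above handles both the tight `{{key}}` and the spaced `{{ key }}` placeholder forms. Assuming `core.Replace` is equivalent to `strings.ReplaceAll`, it can be sketched standalone (`render` is a hypothetical stand-in):

```go
package main

import (
	"fmt"
	"strings"
)

// render substitutes {{key}} and {{ key }} placeholders with their values,
// matching the writePlanFromTemplate variable loop.
func render(content string, variables map[string]string) string {
	for key, value := range variables {
		content = strings.ReplaceAll(content, "{{"+key+"}}", value)
		content = strings.ReplaceAll(content, "{{ "+key+" }}", value)
	}
	return content
}

func main() {
	out := render("Fix {{bug}} in {{ repo }}", map[string]string{
		"bug":  "#42",
		"repo": "go-io",
	})
	fmt.Println(out) // Fix #42 in go-io
}
```

Unmatched placeholders are left in place, which is consistent with the skip-silently behaviour of the surrounding template code.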
@@ -352,7 +501,7 @@ func (s *PrepSubsystem) writePlanFromTemplate(templateSlug string, variables map
 	}
 	// Render as PLAN.md
-	var plan strings.Builder
+	plan := core.NewBuilder()
 	plan.WriteString("# Plan: " + tmpl.Name + "\n\n")
 	if task != "" {
 		plan.WriteString("**Task:** " + task + "\n\n")
@@ -370,7 +519,7 @@ func (s *PrepSubsystem) writePlanFromTemplate(templateSlug string, variables map
 	}
 	for i, phase := range tmpl.Phases {
-		plan.WriteString(fmt.Sprintf("## Phase %d: %s\n\n", i+1, phase.Name))
+		plan.WriteString(core.Sprintf("## Phase %d: %s\n\n", i+1, phase.Name))
 		if phase.Description != "" {
 			plan.WriteString(phase.Description + "\n\n")
 		}
@@ -387,7 +536,7 @@ func (s *PrepSubsystem) writePlanFromTemplate(templateSlug string, variables map
 		plan.WriteString("\n**Commit after completing this phase.**\n\n---\n\n")
 	}
-	coreio.Local.Write(filepath.Join(wsDir, "src", "PLAN.md"), plan.String())
+	_ = writeAtomic(core.Path(wsDir, "src", "PLAN.md"), plan.String())
 }
 // --- Helpers (unchanged) ---
@@ -397,8 +546,11 @@ func (s *PrepSubsystem) pullWiki(ctx context.Context, org, repo, wsDir string) i
 		return 0
 	}
-	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/wiki/pages", s.forgeURL, org, repo)
-	req, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
+	url := core.Sprintf("%s/api/v1/repos/%s/%s/wiki/pages", s.forgeURL, org, repo)
+	req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
+	if err != nil {
+		return 0
+	}
 	req.Header.Set("Authorization", "token "+s.forgeToken)
 	resp, err := s.client.Do(req)
@@ -414,7 +566,9 @@ func (s *PrepSubsystem) pullWiki(ctx context.Context, org, repo, wsDir string) i
 		Title string `json:"title"`
 		SubURL string `json:"sub_url"`
 	}
-	json.NewDecoder(resp.Body).Decode(&pages)
+	if err := json.NewDecoder(resp.Body).Decode(&pages); err != nil {
+		return 0
+	}
 	count := 0
 	for _, page := range pages {
@@ -423,8 +577,11 @@ func (s *PrepSubsystem) pullWiki(ctx context.Context, org, repo, wsDir string) i
 			subURL = page.Title
 		}
-		pageURL := fmt.Sprintf("%s/api/v1/repos/%s/%s/wiki/page/%s", s.forgeURL, org, repo, subURL)
-		pageReq, _ := http.NewRequestWithContext(ctx, "GET", pageURL, nil)
+		pageURL := core.Sprintf("%s/api/v1/repos/%s/%s/wiki/page/%s", s.forgeURL, org, repo, subURL)
+		pageReq, err := http.NewRequestWithContext(ctx, "GET", pageURL, nil)
+		if err != nil {
+			continue
+		}
 		pageReq.Header.Set("Authorization", "token "+s.forgeToken)
 		pageResp, err := s.client.Do(pageReq)
@@ -439,22 +596,22 @@ func (s *PrepSubsystem) pullWiki(ctx context.Context, org, repo, wsDir string) i
 		var pageData struct {
 			ContentBase64 string `json:"content_base64"`
 		}
-		json.NewDecoder(pageResp.Body).Decode(&pageData)
+		if err := json.NewDecoder(pageResp.Body).Decode(&pageData); err != nil {
+			continue
+		}
 		pageResp.Body.Close()
 		if pageData.ContentBase64 == "" {
 			continue
 		}
-		content, _ := base64.StdEncoding.DecodeString(pageData.ContentBase64)
-		filename := strings.Map(func(r rune) rune {
-			if r >= 'a' && r <= 'z' || r >= 'A' && r <= 'Z' || r >= '0' && r <= '9' || r == '-' || r == '_' || r == '.' {
-				return r
-			}
-			return '-'
-		}, page.Title) + ".md"
+		content, err := base64.StdEncoding.DecodeString(pageData.ContentBase64)
+		if err != nil {
+			continue
+		}
+		filename := sanitizeFilename(page.Title) + ".md"
-		coreio.Local.Write(filepath.Join(wsDir, "src", "kb", filename), string(content))
+		_ = writeAtomic(core.Path(wsDir, "src", "kb", filename), string(content))
 		count++
 	}
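The inline `strings.Map` removed above is what the new `sanitizeFilename` helper replaces for wiki page filenames. Its character filter can be reproduced with the standard library alone (`safeName` is a hypothetical stand-in):

```go
package main

import (
	"fmt"
	"strings"
)

// safeName mirrors sanitizeFilename: keep [a-zA-Z0-9._-] and replace every
// other character with a dash, so any wiki page title becomes a safe filename.
func safeName(title string) string {
	return strings.Map(func(r rune) rune {
		switch {
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z', r >= '0' && r <= '9',
			r == '-', r == '_', r == '.':
			return r
		}
		return '-'
	}, title)
}

func main() {
	fmt.Println(safeName("API: Getting Started?")) // API--Getting-Started-
	fmt.Println(safeName("plain-name_v1.0"))       // plain-name_v1.0
}
```

Unlike branchSlug, this keeps case and does not collapse dash runs, so distinct titles stay distinct filenames.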
@@ -466,9 +623,9 @@ func (s *PrepSubsystem) copySpecs(wsDir string) int {
 	count := 0
 	for _, file := range specFiles {
-		src := filepath.Join(s.specsPath, file)
+		src := core.Path(s.specsPath, file)
 		if data, err := coreio.Local.Read(src); err == nil {
-			coreio.Local.Write(filepath.Join(wsDir, "src", "specs", file), data)
+			_ = writeAtomic(core.Path(wsDir, "src", "specs", file), data)
 			count++
 		}
 	}
@@ -476,47 +633,25 @@ func (s *PrepSubsystem) copySpecs(wsDir string) int {
 	return count
 }
-// copyReference copies the AX spec and Core source files into .core/reference/
-// so dispatched agents can read the conventions and API without network access.
-func (s *PrepSubsystem) copyReference(wsDir string) {
-	refDir := filepath.Join(wsDir, "src", ".core", "reference")
-	coreio.Local.EnsureDir(refDir)
-	// Copy AX spec from docs repo
-	axSpec := filepath.Join(s.codePath, "core", "docs", "docs", "specs", "RFC-025-AGENT-EXPERIENCE.md")
-	if data, err := coreio.Local.Read(axSpec); err == nil {
-		coreio.Local.Write(filepath.Join(refDir, "RFC-025-AGENT-EXPERIENCE.md"), data)
-	}
-	// Copy Core Go source files for API reference
-	coreGoDir := filepath.Join(s.codePath, "core", "go")
-	entries, err := coreio.Local.List(coreGoDir)
-	if err != nil {
-		return
-	}
-	for _, entry := range entries {
-		if entry.IsDir() || filepath.Ext(entry.Name()) != ".go" {
-			continue
-		}
-		if data, err := coreio.Local.Read(filepath.Join(coreGoDir, entry.Name())); err == nil {
-			coreio.Local.Write(filepath.Join(refDir, entry.Name()), data)
-		}
-	}
-}
 func (s *PrepSubsystem) generateContext(ctx context.Context, repo, wsDir string) int {
 	if s.brainKey == "" {
 		return 0
 	}
-	body, _ := json.Marshal(map[string]any{
+	body, err := json.Marshal(map[string]any{
 		"query": "architecture conventions key interfaces for " + repo,
 		"top_k": 10,
 		"project": repo,
 		"agent_id": "cladius",
 	})
+	if err != nil {
+		return 0
+	}
-	req, _ := http.NewRequestWithContext(ctx, "POST", s.brainURL+"/v1/brain/recall", strings.NewReader(string(body)))
+	req, err := http.NewRequestWithContext(ctx, "POST", s.brainURL+"/v1/brain/recall", core.NewReader(string(body)))
+	if err != nil {
+		return 0
+	}
 	req.Header.Set("Content-Type", "application/json")
 	req.Header.Set("Accept", "application/json")
 	req.Header.Set("Authorization", "Bearer "+s.brainKey)
@@ -530,13 +665,18 @@ func (s *PrepSubsystem) generateContext(ctx context.Context, repo, wsDir string)
		return 0
	}
-	respData, _ := io.ReadAll(resp.Body)
+	readResult := core.ReadAll(resp.Body)
+	if !readResult.OK {
+		return 0
+	}
	var result struct {
		Memories []map[string]any `json:"memories"`
	}
-	json.Unmarshal(respData, &result)
+	if ur := core.JSONUnmarshal([]byte(readResult.Value.(string)), &result); !ur.OK {
+		return 0
+	}
-	var content strings.Builder
+	content := core.NewBuilder()
	content.WriteString("# Context — " + repo + "\n\n")
	content.WriteString("> Relevant knowledge from OpenBrain.\n\n")
@@ -545,15 +685,15 @@ func (s *PrepSubsystem) generateContext(ctx context.Context, repo, wsDir string)
		memContent, _ := mem["content"].(string)
		memProject, _ := mem["project"].(string)
		score, _ := mem["score"].(float64)
-		content.WriteString(fmt.Sprintf("### %d. %s [%s] (score: %.3f)\n\n%s\n\n", i+1, memProject, memType, score, memContent))
+		content.WriteString(core.Sprintf("### %d. %s [%s] (score: %.3f)\n\n%s\n\n", i+1, memProject, memType, score, memContent))
	}
-	coreio.Local.Write(filepath.Join(wsDir, "src", "CONTEXT.md"), content.String())
+	_ = writeAtomic(core.Path(wsDir, "src", "CONTEXT.md"), content.String())
	return len(result.Memories)
}
func (s *PrepSubsystem) findConsumers(repo, wsDir string) int {
-	goWorkPath := filepath.Join(s.codePath, "go.work")
+	goWorkPath := core.Path(s.codePath, "go.work")
	modulePath := "forge.lthn.ai/core/" + repo
	workData, err := coreio.Local.Read(goWorkPath)
@@ -562,19 +702,19 @@ func (s *PrepSubsystem) findConsumers(repo, wsDir string) int {
	}
	var consumers []string
-	for _, line := range strings.Split(workData, "\n") {
+	for _, line := range core.Split(workData, "\n") {
-		line = strings.TrimSpace(line)
+		line = core.Trim(line)
-		if !strings.HasPrefix(line, "./") {
+		if !core.HasPrefix(line, "./") {
			continue
		}
-		dir := filepath.Join(s.codePath, strings.TrimPrefix(line, "./"))
+		dir := core.Path(s.codePath, core.TrimPrefix(line, "./"))
-		goMod := filepath.Join(dir, "go.mod")
+		goMod := core.Path(dir, "go.mod")
		modData, err := coreio.Local.Read(goMod)
		if err != nil {
			continue
		}
-		if strings.Contains(modData, modulePath) && !strings.HasPrefix(modData, "module "+modulePath) {
+		if core.Contains(modData, modulePath) && !core.HasPrefix(modData, "module "+modulePath) {
-			consumers = append(consumers, filepath.Base(dir))
+			consumers = append(consumers, core.PathBase(dir))
		}
	}
@@ -584,8 +724,8 @@ func (s *PrepSubsystem) findConsumers(repo, wsDir string) int {
	for _, c := range consumers {
		content += "- " + c + "\n"
	}
-	content += fmt.Sprintf("\n**Breaking change risk: %d consumers.**\n", len(consumers))
+	content += core.Sprintf("\n**Breaking change risk: %d consumers.**\n", len(consumers))
-	coreio.Local.Write(filepath.Join(wsDir, "src", "CONSUMERS.md"), content)
+	_ = writeAtomic(core.Path(wsDir, "src", "CONSUMERS.md"), content)
	}
	return len(consumers)
@@ -599,10 +739,10 @@ func (s *PrepSubsystem) gitLog(repoPath, wsDir string) int {
		return 0
	}
-	lines := strings.Split(strings.TrimSpace(string(output)), "\n")
+	lines := core.Split(core.Trim(string(output)), "\n")
	if len(lines) > 0 && lines[0] != "" {
		content := "# Recent Changes\n\n```\n" + string(output) + "```\n"
-		coreio.Local.Write(filepath.Join(wsDir, "src", "RECENT.md"), content)
+		_ = writeAtomic(core.Path(wsDir, "src", "RECENT.md"), content)
	}
	return len(lines)
@@ -613,7 +753,7 @@ func (s *PrepSubsystem) generateTodo(ctx context.Context, org, repo string, issu
		return
	}
-	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/issues/%d", s.forgeURL, org, repo, issue)
+	url := core.Sprintf("%s/api/v1/repos/%s/%s/issues/%d", s.forgeURL, org, repo, issue)
	req, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
	req.Header.Set("Authorization", "token "+s.forgeToken)
@@ -632,11 +772,11 @@ func (s *PrepSubsystem) generateTodo(ctx context.Context, org, repo string, issu
	}
	json.NewDecoder(resp.Body).Decode(&issueData)
-	content := fmt.Sprintf("# TASK: %s\n\n", issueData.Title)
+	content := core.Sprintf("# TASK: %s\n\n", issueData.Title)
-	content += fmt.Sprintf("**Status:** ready\n")
+	content += core.Sprintf("**Status:** ready\n")
-	content += fmt.Sprintf("**Source:** %s/%s/%s/issues/%d\n", s.forgeURL, org, repo, issue)
+	content += core.Sprintf("**Source:** %s/%s/%s/issues/%d\n", s.forgeURL, org, repo, issue)
-	content += fmt.Sprintf("**Repo:** %s/%s\n\n---\n\n", org, repo)
+	content += core.Sprintf("**Repo:** %s/%s\n\n---\n\n", org, repo)
	content += "## Objective\n\n" + issueData.Body + "\n"
-	coreio.Local.Write(filepath.Join(wsDir, "src", "TODO.md"), content)
+	_ = writeAtomic(core.Path(wsDir, "src", "TODO.md"), content)
}


@@ -0,0 +1,151 @@
// SPDX-License-Identifier: EUPL-1.2
package agentic
import (
"context"
"strings"
"testing"
coremcp "dappco.re/go/mcp/pkg/mcp"
)
type recordingNotifier struct {
channel string
data any
}
func (r *recordingNotifier) ChannelSend(_ context.Context, channel string, data any) {
r.channel = channel
r.data = data
}
func TestSanitizeRepoPathSegment_Good(t *testing.T) {
t.Run("repo", func(t *testing.T) {
value, err := sanitizeRepoPathSegment("go-io", "repo", false)
if err != nil {
t.Fatalf("expected valid repo name, got error: %v", err)
}
if value != "go-io" {
t.Fatalf("expected normalized value, got: %q", value)
}
})
t.Run("persona", func(t *testing.T) {
value, err := sanitizeRepoPathSegment("engineering/backend-architect", "persona", true)
if err != nil {
t.Fatalf("expected valid persona path, got error: %v", err)
}
if value != "engineering/backend-architect" {
t.Fatalf("expected persona path, got: %q", value)
}
})
}
func TestSanitizeRepoPathSegment_Bad(t *testing.T) {
cases := []struct {
name string
value string
allowPath bool
}{
{"repo segment traversal", "../repo", false},
{"repo nested path", "team/repo", false},
{"plan template traversal", "../secret", false},
{"persona traversal", "engineering/../../admin", true},
{"backslash", "org\\repo", false},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
_, err := sanitizeRepoPathSegment(tc.value, tc.name, tc.allowPath)
if err == nil {
t.Fatal("expected error")
}
})
}
}
func TestPrepWorkspace_Bad_BadRepoTraversal(t *testing.T) {
s := &PrepSubsystem{codePath: t.TempDir()}
_, _, err := s.prepWorkspace(context.Background(), nil, PrepInput{Repo: "../repo"})
if err == nil {
t.Fatal("expected error")
}
if !strings.Contains(strings.ToLower(err.Error()), "repo") {
t.Fatalf("expected repo error, got %q", err)
}
}
func TestPrepWorkspace_Bad_BadPersonaTraversal(t *testing.T) {
s := &PrepSubsystem{codePath: t.TempDir()}
_, _, err := s.prepWorkspace(context.Background(), nil, PrepInput{
Repo: "repo",
Persona: "engineering/../../admin",
})
if err == nil {
t.Fatal("expected error")
}
if !strings.Contains(strings.ToLower(err.Error()), "persona") {
t.Fatalf("expected persona error, got %q", err)
}
}
func TestPrepWorkspace_Bad_BadPlanTemplateTraversal(t *testing.T) {
s := &PrepSubsystem{codePath: t.TempDir()}
_, _, err := s.prepWorkspace(context.Background(), nil, PrepInput{
Repo: "repo",
PlanTemplate: "../secret",
})
if err == nil {
t.Fatal("expected error")
}
if !strings.Contains(strings.ToLower(err.Error()), "plan_template") {
t.Fatalf("expected plan template error, got %q", err)
}
}
func TestSetNotifier_Good_EmitsChannelEvents(t *testing.T) {
s := NewPrep()
notifier := &recordingNotifier{}
s.SetNotifier(notifier)
s.emitChannel(context.Background(), coremcp.ChannelAgentStatus, map[string]any{"status": "running"})
if notifier.channel != coremcp.ChannelAgentStatus {
t.Fatalf("expected %s channel, got %q", coremcp.ChannelAgentStatus, notifier.channel)
}
if payload, ok := notifier.data.(map[string]any); !ok || payload["status"] != "running" {
t.Fatalf("expected payload to include running status, got %#v", notifier.data)
}
}
func TestEmitHarvestComplete_Good_EmitsChannelEvents(t *testing.T) {
s := NewPrep()
notifier := &recordingNotifier{}
s.SetNotifier(notifier)
s.emitHarvestComplete(context.Background(), "go-io-123", "go-io", 4, true)
if notifier.channel != coremcp.ChannelHarvestComplete {
t.Fatalf("expected %s channel, got %q", coremcp.ChannelHarvestComplete, notifier.channel)
}
payload, ok := notifier.data.(map[string]any)
if !ok {
t.Fatalf("expected payload object, got %#v", notifier.data)
}
if payload["workspace"] != "go-io-123" {
t.Fatalf("expected workspace go-io-123, got %#v", payload["workspace"])
}
if payload["repo"] != "go-io" {
t.Fatalf("expected repo go-io, got %#v", payload["repo"])
}
if payload["findings"] != 4 {
t.Fatalf("expected findings 4, got %#v", payload["findings"])
}
if payload["issue_created"] != true {
t.Fatalf("expected issue_created true, got %#v", payload["issue_created"])
}
}


@@ -3,18 +3,19 @@
package agentic
import (
-	"fmt"
	"os"
	"os/exec"
-	"path/filepath"
-	"strings"
	"syscall"
	"time"
-	coreio "forge.lthn.ai/core/go-io"
+	core "dappco.re/go/core"
+	coreio "dappco.re/go/io"
	"gopkg.in/yaml.v3"
)
+// os.Create, os.Open, os.DevNull, os.Environ, os.FindProcess are used for
+// process spawning and management — no core equivalents for these OS primitives.
// DispatchConfig controls agent dispatch behaviour.
type DispatchConfig struct {
	DefaultAgent string `yaml:"default_agent"`
@@ -43,7 +44,7 @@ type AgentsConfig struct {
// loadAgentsConfig reads config/agents.yaml from the code path.
func (s *PrepSubsystem) loadAgentsConfig() *AgentsConfig {
	paths := []string{
-		filepath.Join(s.codePath, ".core", "agents.yaml"),
+		core.Path(s.codePath, ".core", "agents.yaml"),
	}
	for _, path := range paths {
@@ -79,9 +80,16 @@ func (s *PrepSubsystem) delayForAgent(agent string) time.Duration {
		return 0
	}
-	// Parse reset time
+	// Parse reset time (format: "HH:MM")
	resetHour, resetMin := 6, 0
-	fmt.Sscanf(rate.ResetUTC, "%d:%d", &resetHour, &resetMin)
+	if parts := core.Split(rate.ResetUTC, ":"); len(parts) == 2 {
+		if h, ok := parseSimpleInt(parts[0]); ok {
+			resetHour = h
+		}
+		if m, ok := parseSimpleInt(parts[1]); ok {
+			resetMin = m
+		}
+	}
	now := time.Now().UTC()
	resetToday := time.Date(now.Year(), now.Month(), now.Day(), resetHour, resetMin, 0, 0, time.UTC)
@@ -115,9 +123,9 @@ func (s *PrepSubsystem) listWorkspaceDirs() []string {
		if !entry.IsDir() {
			continue
		}
-		path := filepath.Join(wsRoot, entry.Name())
+		path := core.Path(wsRoot, entry.Name())
		// Check if this dir has a status.json (it's a workspace)
-		if coreio.Local.IsFile(filepath.Join(path, "status.json")) {
+		if coreio.Local.IsFile(core.Path(path, "status.json")) {
			dirs = append(dirs, path)
			continue
		}
@@ -128,8 +136,8 @@ func (s *PrepSubsystem) listWorkspaceDirs() []string {
	}
	for _, sub := range subEntries {
		if sub.IsDir() {
-			subPath := filepath.Join(path, sub.Name())
+			subPath := core.Path(path, sub.Name())
-			if coreio.Local.IsFile(filepath.Join(subPath, "status.json")) {
+			if coreio.Local.IsFile(core.Path(subPath, "status.json")) {
				dirs = append(dirs, subPath)
			}
		}
@@ -146,7 +154,7 @@ func (s *PrepSubsystem) countRunningByAgent(agent string) int {
		if err != nil || st.Status != "running" {
			continue
		}
-		stBase := strings.SplitN(st.Agent, ":", 2)[0]
+		stBase := core.SplitN(st.Agent, ":", 2)[0]
		if stBase != agent {
			continue
		}
@@ -162,7 +170,7 @@ func (s *PrepSubsystem) countRunningByAgent(agent string) int {
// baseAgent strips the model variant (gemini:flash → gemini).
func baseAgent(agent string) string {
-	return strings.SplitN(agent, ":", 2)[0]
+	return core.SplitN(agent, ":", 2)[0]
}
// canDispatchAgent checks if we're under the concurrency limit for a specific agent type.
@@ -176,6 +184,23 @@ func (s *PrepSubsystem) canDispatchAgent(agent string) bool {
	return s.countRunningByAgent(base) < limit
}
// parseSimpleInt parses a small non-negative integer from a string.
// Returns (value, true) on success, (0, false) on failure.
func parseSimpleInt(s string) (int, bool) {
s = core.Trim(s)
if s == "" {
return 0, false
}
n := 0
for _, r := range s {
if r < '0' || r > '9' {
return 0, false
}
n = n*10 + int(r-'0')
}
return n, true
}
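As a sanity check outside the diff, the digit-only parser added above behaves like this. This is a standalone reimplementation for illustration; `strings.TrimSpace` stands in for the package's `core.Trim` (an assumption about its behaviour), and note the helper accepts leading zeros and has no overflow guard, which is fine for HH:MM fields.

```go
package main

import (
	"fmt"
	"strings"
)

// parseSimpleInt mirrors the helper above: non-negative digits only,
// no sign handling, no overflow guard (inputs are tiny HH/MM fields).
func parseSimpleInt(s string) (int, bool) {
	s = strings.TrimSpace(s)
	if s == "" {
		return 0, false
	}
	n := 0
	for _, r := range s {
		if r < '0' || r > '9' {
			return 0, false
		}
		n = n*10 + int(r-'0')
	}
	return n, true
}

func main() {
	for _, in := range []string{"06", "23", "7x", ""} {
		v, ok := parseSimpleInt(in)
		fmt.Printf("%q -> %d %v\n", in, v, ok) // "06" -> 6 true, "7x"/"" -> 0 false
	}
}
```

Unlike the `fmt.Sscanf` it replaces, a bad field leaves the 06:00 default intact instead of partially scanning.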
// canDispatch is kept for backwards compat.
func (s *PrepSubsystem) canDispatch() bool {
	return true
@@ -205,7 +230,7 @@ func (s *PrepSubsystem) drainQueue() {
			continue
		}
-		srcDir := filepath.Join(wsDir, "src")
+		srcDir := core.Path(wsDir, "src")
		prompt := "Read PROMPT.md for instructions. All context files (CLAUDE.md, TODO.md, CONTEXT.md, CONSUMERS.md, RECENT.md) are in the parent directory. Work in this directory."
		command, args, err := agentCommand(st.Agent, prompt)
@@ -213,7 +238,7 @@ func (s *PrepSubsystem) drainQueue() {
			continue
		}
-		outputFile := filepath.Join(wsDir, fmt.Sprintf("agent-%s.log", st.Agent))
+		outputFile := core.Path(wsDir, core.Sprintf("agent-%s.log", st.Agent))
		outFile, err := os.Create(outputFile)
		if err != nil {
			continue
@@ -243,7 +268,7 @@ func (s *PrepSubsystem) drainQueue() {
		st.Status = "running"
		st.PID = cmd.Process.Pid
		st.Runs++
-		writeStatus(wsDir, st)
+		s.saveStatus(wsDir, st)
		go func() {
			cmd.Wait()
@@ -252,7 +277,7 @@ func (s *PrepSubsystem) drainQueue() {
			if st2, err := readStatus(wsDir); err == nil {
				st2.Status = "completed"
				st2.PID = 0
-				writeStatus(wsDir, st2)
+				s.saveStatus(wsDir, st2)
			}
			// Ingest scan findings as issues


@@ -0,0 +1,208 @@
// SPDX-License-Identifier: EUPL-1.2
package agentic
import (
"context"
"encoding/json"
"os/exec"
"regexp"
"strconv"
"time"
core "dappco.re/go/core"
coreio "dappco.re/go/io"
coreerr "dappco.re/go/log"
)
func listLocalRepos(basePath string) []string {
entries, err := coreio.Local.List(basePath)
if err != nil {
return nil
}
repos := make([]string, 0, len(entries))
for _, entry := range entries {
if entry.IsDir() {
repos = append(repos, entry.Name())
}
}
return repos
}
func hasRemote(repoDir, remote string) bool {
cmd := exec.Command("git", "remote", "get-url", remote)
cmd.Dir = repoDir
if out, err := cmd.Output(); err == nil {
return core.Trim(string(out)) != ""
}
return false
}
func commitsAhead(repoDir, baseRef, headRef string) int {
cmd := exec.Command("git", "rev-list", "--count", baseRef+".."+headRef)
cmd.Dir = repoDir
out, err := cmd.Output()
if err != nil {
return 0
}
count, err := parsePositiveInt(core.Trim(string(out)))
if err != nil {
return 0
}
return count
}
func filesChanged(repoDir, baseRef, headRef string) int {
cmd := exec.Command("git", "diff", "--name-only", baseRef+".."+headRef)
cmd.Dir = repoDir
out, err := cmd.Output()
if err != nil {
return 0
}
count := 0
for _, line := range core.Split(core.Trim(string(out)), "\n") {
if core.Trim(line) != "" {
count++
}
}
return count
}
func gitOutput(repoDir string, args ...string) (string, error) {
cmd := exec.Command("git", args...)
cmd.Dir = repoDir
out, err := cmd.CombinedOutput()
if err != nil {
return "", coreerr.E("gitOutput", string(out), err)
}
return core.Trim(string(out)), nil
}
func parsePositiveInt(value string) (int, error) {
value = core.Trim(value)
if value == "" {
return 0, coreerr.E("parsePositiveInt", "empty value", nil)
}
n := 0
for _, r := range value {
if r < '0' || r > '9' {
return 0, coreerr.E("parsePositiveInt", "value contains non-numeric characters", nil)
}
n = n*10 + int(r-'0')
}
return n, nil
}
func readGitHubPRURL(repoDir string) (string, error) {
cmd := exec.Command("gh", "pr", "list", "--head", "dev", "--state", "open", "--json", "url", "--limit", "1")
cmd.Dir = repoDir
out, err := cmd.Output()
if err != nil {
return "", err
}
var rows []struct {
URL string `json:"url"`
}
if err := json.Unmarshal(out, &rows); err != nil {
return "", err
}
if len(rows) == 0 {
return "", nil
}
return rows[0].URL, nil
}
func createGitHubPR(ctx context.Context, repoDir, repo string, commits, files int) (string, error) {
if _, err := exec.LookPath("gh"); err != nil {
return "", coreerr.E("createGitHubPR", "gh CLI is not available", err)
}
if url, err := readGitHubPRURL(repoDir); err == nil && url != "" {
return url, nil
}
body := "## Forge -> GitHub Sync\n\n"
body += "**Commits:** " + itoa(commits) + "\n"
body += "**Files changed:** " + itoa(files) + "\n\n"
body += "Automated sync from Forge (forge.lthn.ai) to GitHub mirror.\n"
body += "Review with CodeRabbit before merging.\n\n"
body += "---\n"
body += "Co-Authored-By: Virgil <virgil@lethean.io>"
title := "[sync] " + repo + ": " + itoa(commits) + " commits, " + itoa(files) + " files"
cmd := exec.CommandContext(ctx, "gh", "pr", "create",
"--head", "dev",
"--base", "main",
"--title", title,
"--body", body,
)
cmd.Dir = repoDir
out, err := cmd.CombinedOutput()
if err != nil {
return "", coreerr.E("createGitHubPR", string(out), err)
}
lines := core.Split(core.Trim(string(out)), "\n")
if len(lines) == 0 {
return "", nil
}
return core.Trim(lines[len(lines)-1]), nil
}
func ensureDevBranch(repoDir string) error {
cmd := exec.Command("git", "push", "github", "HEAD:refs/heads/dev", "--force")
cmd.Dir = repoDir
out, err := cmd.CombinedOutput()
if err != nil {
return coreerr.E("ensureDevBranch", string(out), err)
}
return nil
}
func reviewerCommand(ctx context.Context, repoDir, reviewer string) *exec.Cmd {
switch reviewer {
case "coderabbit":
return exec.CommandContext(ctx, "coderabbit", "review")
case "codex":
return exec.CommandContext(ctx, "codex", "review")
case "both":
return exec.CommandContext(ctx, "coderabbit", "review")
default:
return exec.CommandContext(ctx, reviewer)
}
}
func itoa(value int) string {
return strconv.Itoa(value)
}
func parseRetryAfter(detail string) time.Duration {
re := regexp.MustCompile(`(?i)(\d+)\s*(minute|minutes|hour|hours|second|seconds)`)
match := re.FindStringSubmatch(detail)
if len(match) != 3 {
return 5 * time.Minute
}
n, err := strconv.Atoi(match[1])
if err != nil || n <= 0 {
return 5 * time.Minute
}
switch core.Lower(match[2]) {
case "hour", "hours":
return time.Duration(n) * time.Hour
case "second", "seconds":
return time.Duration(n) * time.Second
default:
return time.Duration(n) * time.Minute
}
}
func repoRootFromCodePath(codePath string) string {
return core.Path(codePath, "core")
}


@@ -4,18 +4,20 @@ package agentic
import (
	"context"
-	"fmt"
	"os"
	"os/exec"
-	"path/filepath"
	"syscall"
-	coreio "forge.lthn.ai/core/go-io"
-	coreerr "forge.lthn.ai/core/go-log"
+	core "dappco.re/go/core"
+	coreio "dappco.re/go/io"
+	coreerr "dappco.re/go/log"
+	coremcp "dappco.re/go/mcp/pkg/mcp"
	"github.com/modelcontextprotocol/go-sdk/mcp"
)
// ResumeInput is the input for agentic_resume.
+//
+// input := ResumeInput{Workspace: "go-mcp-1700000000", Answer: "Use the shared notifier"}
type ResumeInput struct {
	Workspace string `json:"workspace"`        // workspace name (e.g. "go-scm-1773581173")
	Answer    string `json:"answer,omitempty"` // answer to the blocked question (written to ANSWER.md)
@@ -24,6 +26,8 @@ type ResumeInput struct {
}
// ResumeOutput is the output for agentic_resume.
+//
+// // out.Success == true, out.PID > 0
type ResumeOutput struct {
	Success   bool   `json:"success"`
	Workspace string `json:"workspace"`
@@ -33,8 +37,9 @@ type ResumeOutput struct {
	Prompt string `json:"prompt,omitempty"`
}
-func (s *PrepSubsystem) registerResumeTool(server *mcp.Server) {
+func (s *PrepSubsystem) registerResumeTool(svc *coremcp.Service) {
-	mcp.AddTool(server, &mcp.Tool{
+	server := svc.Server()
+	coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
		Name:        "agentic_resume",
		Description: "Resume a blocked agent workspace. Writes ANSWER.md if an answer is provided, then relaunches the agent with instructions to read it and continue.",
	}, s.resume)
@@ -45,8 +50,8 @@ func (s *PrepSubsystem) resume(ctx context.Context, _ *mcp.CallToolRequest, inpu
		return nil, ResumeOutput{}, coreerr.E("resume", "workspace is required", nil)
	}
-	wsDir := filepath.Join(s.workspaceRoot(), input.Workspace)
+	wsDir := core.Path(s.workspaceRoot(), input.Workspace)
-	srcDir := filepath.Join(wsDir, "src")
+	srcDir := core.Path(wsDir, "src")
	// Verify workspace exists
	if _, err := coreio.Local.List(srcDir); err != nil {
@@ -71,9 +76,9 @@ func (s *PrepSubsystem) resume(ctx context.Context, _ *mcp.CallToolRequest, inpu
	// Write ANSWER.md if answer provided
	if input.Answer != "" {
-		answerPath := filepath.Join(srcDir, "ANSWER.md")
+		answerPath := core.Path(srcDir, "ANSWER.md")
-		content := fmt.Sprintf("# Answer\n\n%s\n", input.Answer)
+		content := core.Sprintf("# Answer\n\n%s\n", input.Answer)
-		if err := coreio.Local.Write(answerPath, content); err != nil {
+		if err := writeAtomic(answerPath, content); err != nil {
			return nil, ResumeOutput{}, coreerr.E("resume", "failed to write ANSWER.md", err)
		}
	}
@@ -95,7 +100,7 @@ func (s *PrepSubsystem) resume(ctx context.Context, _ *mcp.CallToolRequest, inpu
	}
	// Spawn agent as detached process (survives parent death)
-	outputFile := filepath.Join(wsDir, fmt.Sprintf("agent-%s-run%d.log", agent, st.Runs+1))
+	outputFile := core.Path(wsDir, core.Sprintf("agent-%s-run%d.log", agent, st.Runs+1))
	command, args, err := agentCommand(agent, prompt)
	if err != nil {
@@ -131,11 +136,38 @@ func (s *PrepSubsystem) resume(ctx context.Context, _ *mcp.CallToolRequest, inpu
	st.PID = cmd.Process.Pid
	st.Runs++
	st.Question = ""
-	writeStatus(wsDir, st)
+	s.saveStatus(wsDir, st)
	go func() {
		cmd.Wait()
		outFile.Close()
postCtx := context.WithoutCancel(ctx)
status := "completed"
channel := coremcp.ChannelAgentComplete
payload := map[string]any{
"workspace": input.Workspace,
"agent": agent,
"repo": st.Repo,
"branch": st.Branch,
}
if data, err := coreio.Local.Read(core.Path(srcDir, "BLOCKED.md")); err == nil {
status = "blocked"
channel = coremcp.ChannelAgentBlocked
st.Question = core.Trim(data)
if st.Question != "" {
payload["question"] = st.Question
}
}
st.Status = status
st.PID = 0
s.saveStatus(wsDir, st)
payload["status"] = status
s.emitChannel(postCtx, channel, payload)
s.emitChannel(postCtx, coremcp.ChannelAgentStatus, payload)
	}()
	return nil, ResumeOutput{


@@ -0,0 +1,271 @@
// SPDX-License-Identifier: EUPL-1.2
package agentic
import (
"context"
"encoding/json"
"os"
"os/exec"
"regexp"
"time"
core "dappco.re/go/core"
coreio "dappco.re/go/io"
coremcp "dappco.re/go/mcp/pkg/mcp"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
// ReviewQueueInput controls the review queue runner.
type ReviewQueueInput struct {
Limit int `json:"limit,omitempty"`
Reviewer string `json:"reviewer,omitempty"`
DryRun bool `json:"dry_run,omitempty"`
LocalOnly bool `json:"local_only,omitempty"`
}
// ReviewQueueOutput reports what happened.
type ReviewQueueOutput struct {
Success bool `json:"success"`
Processed []ReviewResult `json:"processed"`
Skipped []string `json:"skipped,omitempty"`
RateLimit *RateLimitInfo `json:"rate_limit,omitempty"`
}
// ReviewResult is the outcome of reviewing one repo.
type ReviewResult struct {
Repo string `json:"repo"`
Verdict string `json:"verdict"`
Findings int `json:"findings"`
Action string `json:"action"`
Detail string `json:"detail,omitempty"`
}
// RateLimitInfo tracks review rate limit state.
type RateLimitInfo struct {
Limited bool `json:"limited"`
RetryAt time.Time `json:"retry_at,omitempty"`
Message string `json:"message,omitempty"`
}
func reviewQueueHomeDir() string {
if home := os.Getenv("DIR_HOME"); home != "" {
return home
}
home, _ := os.UserHomeDir()
return home
}
func (s *PrepSubsystem) registerReviewQueueTool(svc *coremcp.Service) {
server := svc.Server()
coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
Name: "agentic_review_queue",
Description: "Process repositories that are ahead of the GitHub mirror and summarise review findings.",
}, s.reviewQueue)
}
func (s *PrepSubsystem) reviewQueue(ctx context.Context, _ *mcp.CallToolRequest, input ReviewQueueInput) (*mcp.CallToolResult, ReviewQueueOutput, error) {
limit := input.Limit
if limit <= 0 {
limit = 4
}
basePath := repoRootFromCodePath(s.codePath)
candidates := s.findReviewCandidates(basePath)
if len(candidates) == 0 {
return nil, ReviewQueueOutput{Success: true, Processed: []ReviewResult{}}, nil
}
processed := make([]ReviewResult, 0, len(candidates))
skipped := make([]string, 0)
var rateInfo *RateLimitInfo
for _, repo := range candidates {
if len(processed) >= limit {
skipped = append(skipped, repo+" (limit reached)")
continue
}
if rateInfo != nil && rateInfo.Limited && time.Now().Before(rateInfo.RetryAt) {
skipped = append(skipped, repo+" (rate limited)")
continue
}
repoDir := core.Path(basePath, repo)
reviewer := input.Reviewer
if reviewer == "" {
reviewer = "coderabbit"
}
result := s.reviewRepo(ctx, repoDir, repo, reviewer, input.DryRun, input.LocalOnly)
if result.Verdict == "rate_limited" {
retryAfter := parseRetryAfter(result.Detail)
rateInfo = &RateLimitInfo{
Limited: true,
RetryAt: time.Now().Add(retryAfter),
Message: result.Detail,
}
skipped = append(skipped, repo+" (rate limited)")
continue
}
processed = append(processed, result)
}
if rateInfo != nil {
s.saveRateLimitState(rateInfo)
}
return nil, ReviewQueueOutput{
Success: true,
Processed: processed,
Skipped: skipped,
RateLimit: rateInfo,
}, nil
}
func (s *PrepSubsystem) findReviewCandidates(basePath string) []string {
entries, err := os.ReadDir(basePath)
if err != nil {
return nil
}
candidates := make([]string, 0, len(entries))
for _, entry := range entries {
if !entry.IsDir() {
continue
}
repoDir := core.Path(basePath, entry.Name())
if !hasRemote(repoDir, "github") {
continue
}
if commitsAhead(repoDir, "github/main", "HEAD") <= 0 {
continue
}
candidates = append(candidates, entry.Name())
}
return candidates
}
func (s *PrepSubsystem) reviewRepo(ctx context.Context, repoDir, repo, reviewer string, dryRun, localOnly bool) ReviewResult {
result := ReviewResult{Repo: repo}
if rl := s.loadRateLimitState(); rl != nil && rl.Limited && time.Now().Before(rl.RetryAt) {
result.Verdict = "rate_limited"
result.Detail = core.Sprintf("retry after %s", rl.RetryAt.Format(time.RFC3339))
return result
}
cmd := reviewerCommand(ctx, repoDir, reviewer)
cmd.Dir = repoDir
out, err := cmd.CombinedOutput()
output := core.Trim(string(out))
if core.Contains(core.Lower(output), "rate limit") {
result.Verdict = "rate_limited"
result.Detail = output
return result
}
if err != nil && !core.Contains(output, "No findings") && !core.Contains(output, "no issues") {
result.Verdict = "error"
if output != "" {
result.Detail = output
} else {
result.Detail = err.Error()
}
return result
}
s.storeReviewOutput(repoDir, repo, reviewer, output)
result.Findings = countFindingHints(output)
if core.Contains(output, "No findings") || core.Contains(output, "no issues") || core.Contains(output, "LGTM") {
result.Verdict = "clean"
if dryRun {
result.Action = "skipped (dry run)"
return result
}
if localOnly {
result.Action = "local only"
return result
}
if url, err := readGitHubPRURL(repoDir); err == nil && url != "" {
mergeCmd := exec.CommandContext(ctx, "gh", "pr", "merge", "--auto", "--squash", "--delete-branch")
mergeCmd.Dir = repoDir
if mergeOut, err := mergeCmd.CombinedOutput(); err == nil {
result.Action = "merged"
result.Detail = core.Trim(string(mergeOut))
return result
}
}
result.Action = "waiting"
return result
}
result.Verdict = "findings"
if dryRun {
result.Action = "skipped (dry run)"
return result
}
result.Action = "waiting"
return result
}
func (s *PrepSubsystem) storeReviewOutput(repoDir, repo, reviewer, output string) {
home := reviewQueueHomeDir()
dataDir := core.Path(home, ".core", "training", "reviews")
if err := coreio.Local.EnsureDir(dataDir); err != nil {
return
}
payload := map[string]string{
"repo": repo,
"reviewer": reviewer,
"output": output,
"source": repoDir,
}
data, err := json.MarshalIndent(payload, "", " ")
if err != nil {
return
}
name := core.Sprintf("%s-%s-%d.json", repo, reviewer, time.Now().Unix())
_ = writeAtomic(core.Path(dataDir, name), string(data))
}
func (s *PrepSubsystem) saveRateLimitState(info *RateLimitInfo) {
home := reviewQueueHomeDir()
path := core.Path(home, ".core", "coderabbit-ratelimit.json")
data, err := json.Marshal(info)
if err != nil {
return
}
_ = writeAtomic(path, string(data))
}
func (s *PrepSubsystem) loadRateLimitState() *RateLimitInfo {
home := reviewQueueHomeDir()
path := core.Path(home, ".core", "coderabbit-ratelimit.json")
data, err := coreio.Local.Read(path)
if err != nil {
return nil
}
var info RateLimitInfo
if err := json.Unmarshal([]byte(data), &info); err != nil {
return nil
}
if !info.Limited {
return nil
}
return &info
}
func countFindingHints(output string) int {
re := regexp.MustCompile(`(?m)[^ \t\n\r]+\.(?:go|php|ts|tsx|js|jsx|py|rb|java|cs|cpp|cxx|cc|md):\d+`)
return len(re.FindAllString(output, -1))
}

@@ -5,11 +5,10 @@ package agentic
 import (
 	"context"
 	"encoding/json"
-	"fmt"
 	"net/http"
-	"strings"
-	coreerr "forge.lthn.ai/core/go-log"
+	core "dappco.re/go/core"
+	coreerr "dappco.re/go/log"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )
@@ -81,7 +80,7 @@ func (s *PrepSubsystem) scan(ctx context.Context, _ *mcp.CallToolRequest, input
 	seen := make(map[string]bool)
 	var unique []ScanIssue
 	for _, issue := range allIssues {
-		key := fmt.Sprintf("%s#%d", issue.Repo, issue.Number)
+		key := core.Sprintf("%s#%d", issue.Repo, issue.Number)
 		if !seen[key] {
 			seen[key] = true
 			unique = append(unique, issue)
@@ -100,7 +99,7 @@ func (s *PrepSubsystem) scan(ctx context.Context, _ *mcp.CallToolRequest, input
 	}
 func (s *PrepSubsystem) listOrgRepos(ctx context.Context, org string) ([]string, error) {
-	url := fmt.Sprintf("%s/api/v1/orgs/%s/repos?limit=50", s.forgeURL, org)
+	url := core.Sprintf("%s/api/v1/orgs/%s/repos?limit=50", s.forgeURL, org)
 	req, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
 	req.Header.Set("Authorization", "token "+s.forgeToken)
@@ -110,7 +109,7 @@ func (s *PrepSubsystem) listOrgRepos(ctx context.Context, org string) ([]string,
 	}
 	defer resp.Body.Close()
 	if resp.StatusCode != 200 {
-		return nil, coreerr.E("listOrgRepos", fmt.Sprintf("HTTP %d listing repos", resp.StatusCode), nil)
+		return nil, coreerr.E("listOrgRepos", core.Sprintf("HTTP %d listing repos", resp.StatusCode), nil)
 	}
 	var repos []struct {
@@ -126,7 +125,7 @@ func (s *PrepSubsystem) listOrgRepos(ctx context.Context, org string) ([]string,
 	}
 func (s *PrepSubsystem) listRepoIssues(ctx context.Context, org, repo, label string) ([]ScanIssue, error) {
-	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/issues?state=open&labels=%s&limit=10&type=issues",
+	url := core.Sprintf("%s/api/v1/repos/%s/%s/issues?state=open&labels=%s&limit=10&type=issues",
 		s.forgeURL, org, repo, label)
 	req, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
 	req.Header.Set("Authorization", "token "+s.forgeToken)
@@ -137,7 +136,7 @@ func (s *PrepSubsystem) listRepoIssues(ctx context.Context, org, repo, label str
 	}
 	defer resp.Body.Close()
 	if resp.StatusCode != 200 {
-		return nil, coreerr.E("listRepoIssues", fmt.Sprintf("HTTP %d for "+repo, resp.StatusCode), nil)
+		return nil, coreerr.E("listRepoIssues", core.Sprintf("HTTP %d for "+repo, resp.StatusCode), nil)
 	}
 	var issues []struct {
@@ -170,7 +169,7 @@ func (s *PrepSubsystem) listRepoIssues(ctx context.Context, org, repo, label str
 			Title:    issue.Title,
 			Labels:   labels,
 			Assignee: assignee,
-			URL:      strings.Replace(issue.HTMLURL, "https://forge.lthn.ai", s.forgeURL, 1),
+			URL:      core.Replace(issue.HTMLURL, "https://forge.lthn.ai", s.forgeURL),
 		})
 	}


@@ -6,15 +6,18 @@ import (
 	"context"
 	"encoding/json"
 	"os"
-	"path/filepath"
-	"strings"
 	"time"
-	coreio "forge.lthn.ai/core/go-io"
-	coreerr "forge.lthn.ai/core/go-log"
+	core "dappco.re/go/core"
+	coreio "dappco.re/go/io"
+	coreerr "dappco.re/go/log"
+	coremcp "dappco.re/go/mcp/pkg/mcp"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )
+// os.Stat and os.FindProcess are used for workspace age detection and PID
+// liveness checks — these are OS-level queries with no core equivalent.
 // Workspace status file convention:
 //
 // {workspace}/status.json — current state of the workspace
@@ -28,6 +31,12 @@ import (
 // running → failed (agent crashed / non-zero exit)
 // WorkspaceStatus represents the current state of an agent workspace.
+//
+// status := WorkspaceStatus{
+//     Status: "blocked",
+//     Agent:  "claude",
+//     Repo:   "go-mcp",
+// }
 type WorkspaceStatus struct {
 	Status string `json:"status"` // running, completed, blocked, failed
 	Agent  string `json:"agent"`  // gemini, claude, codex
@@ -50,45 +59,64 @@
 func writeStatus(wsDir string, status *WorkspaceStatus) error {
 	if err != nil {
 		return err
 	}
-	return coreio.Local.Write(filepath.Join(wsDir, "status.json"), string(data))
+	return writeAtomic(core.JoinPath(wsDir, "status.json"), string(data))
+}
+func (s *PrepSubsystem) saveStatus(wsDir string, status *WorkspaceStatus) {
+	if err := writeStatus(wsDir, status); err != nil {
+		coreerr.Warn("failed to write workspace status", "workspace", core.PathBase(wsDir), "err", err)
+	}
 }
 func readStatus(wsDir string) (*WorkspaceStatus, error) {
-	data, err := coreio.Local.Read(filepath.Join(wsDir, "status.json"))
+	data, err := coreio.Local.Read(core.JoinPath(wsDir, "status.json"))
 	if err != nil {
 		return nil, err
 	}
 	var s WorkspaceStatus
-	if err := json.Unmarshal([]byte(data), &s); err != nil {
-		return nil, err
+	if r := core.JSONUnmarshal([]byte(data), &s); !r.OK {
+		return nil, coreerr.E("readStatus", "failed to parse status.json", nil)
 	}
 	return &s, nil
 }
 // --- agentic_status tool ---
+// StatusInput is the input for agentic_status.
+//
+// input := StatusInput{Workspace: "go-mcp-1700000000"}
 type StatusInput struct {
 	Workspace string `json:"workspace,omitempty"` // specific workspace name, or empty for all
 }
+// StatusOutput is the output for agentic_status.
+//
+// // out.Count == 2, len(out.Workspaces) == 2
 type StatusOutput struct {
 	Workspaces []WorkspaceInfo `json:"workspaces"`
 	Count      int             `json:"count"`
 }
+// WorkspaceInfo summarizes a tracked workspace.
+//
+// // ws.Name == "go-mcp-1700000000", ws.Status == "running"
 type WorkspaceInfo struct {
 	Name   string `json:"name"`
 	Status string `json:"status"`
 	Agent  string `json:"agent"`
 	Repo   string `json:"repo"`
+	Branch string `json:"branch,omitempty"`
+	Issue  int    `json:"issue,omitempty"`
+	PRURL  string `json:"pr_url,omitempty"`
 	Task     string `json:"task"`
 	Age      string `json:"age"`
 	Question string `json:"question,omitempty"`
 	Runs     int    `json:"runs"`
 }
-func (s *PrepSubsystem) registerStatusTool(server *mcp.Server) {
-	mcp.AddTool(server, &mcp.Tool{
+func (s *PrepSubsystem) registerStatusTool(svc *coremcp.Service) {
+	server := svc.Server()
+	coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
 		Name:        "agentic_status",
 		Description: "List agent workspaces and their status (running, completed, blocked, failed). Shows blocked agents with their questions.",
 	}, s.status)
@@ -96,14 +124,11 @@ func (s *PrepSubsystem) registerStatusTool(server *mcp.Server) {
 func (s *PrepSubsystem) status(ctx context.Context, _ *mcp.CallToolRequest, input StatusInput) (*mcp.CallToolResult, StatusOutput, error) {
 	wsDirs := s.listWorkspaceDirs()
-	if len(wsDirs) == 0 {
-		return nil, StatusOutput{}, coreerr.E("status", "no workspaces found", nil)
-	}
 	var workspaces []WorkspaceInfo
 	for _, wsDir := range wsDirs {
-		name := filepath.Base(wsDir)
+		name := core.PathBase(wsDir)
 		// Filter by specific workspace if requested
 		if input.Workspace != "" && name != input.Workspace {
@@ -116,7 +141,7 @@ func (s *PrepSubsystem) status(ctx context.Context, _ *mcp.CallToolRequest, inpu
 		st, err := readStatus(wsDir)
 		if err != nil {
 			// Legacy workspace (no status.json) — check for log file
-			logFiles, _ := filepath.Glob(filepath.Join(wsDir, "agent-*.log"))
+			logFiles := core.PathGlob(core.Path(wsDir, "agent-*.log"))
 			if len(logFiles) > 0 {
 				info.Status = "completed"
 			} else {
@@ -132,6 +157,9 @@ func (s *PrepSubsystem) status(ctx context.Context, _ *mcp.CallToolRequest, inpu
 		info.Status = st.Status
 		info.Agent = st.Agent
 		info.Repo = st.Repo
+		info.Branch = st.Branch
+		info.Issue = st.Issue
+		info.PRURL = st.PRURL
 		info.Task = st.Task
 		info.Runs = st.Runs
 		info.Age = time.Since(st.StartedAt).Truncate(time.Minute).String()
@@ -140,18 +168,39 @@ func (s *PrepSubsystem) status(ctx context.Context, _ *mcp.CallToolRequest, inpu
 		if st.Status == "running" && st.PID > 0 {
 			proc, err := os.FindProcess(st.PID)
 			if err != nil || proc.Signal(nil) != nil {
+				prevStatus := st.Status
+				status := "completed"
+				channel := coremcp.ChannelAgentComplete
+				payload := map[string]any{
+					"workspace": name,
+					"agent":     st.Agent,
+					"repo":      st.Repo,
+					"branch":    st.Branch,
+				}
 				// Process died — check for BLOCKED.md
-				blockedPath := filepath.Join(wsDir, "src", "BLOCKED.md")
+				blockedPath := core.Path(wsDir, "src", "BLOCKED.md")
 				if data, err := coreio.Local.Read(blockedPath); err == nil {
 					info.Status = "blocked"
-					info.Question = strings.TrimSpace(data)
+					info.Question = core.Trim(data)
 					st.Status = "blocked"
 					st.Question = info.Question
+					status = "blocked"
+					channel = coremcp.ChannelAgentBlocked
+					if st.Question != "" {
+						payload["question"] = st.Question
+					}
 				} else {
 					info.Status = "completed"
 					st.Status = "completed"
 				}
-				writeStatus(wsDir, st)
+				s.saveStatus(wsDir, st)
+				if prevStatus != status {
+					payload["status"] = status
+					s.emitChannel(ctx, channel, payload)
+					s.emitChannel(ctx, coremcp.ChannelAgentStatus, payload)
+				}
 			}
 		}


@@ -0,0 +1,94 @@
// SPDX-License-Identifier: EUPL-1.2
package agentic
import (
"context"
"path/filepath"
"testing"
"time"
)
func TestStatus_Good_EmptyWorkspaceSet(t *testing.T) {
sub := &PrepSubsystem{codePath: t.TempDir()}
_, out, err := sub.status(context.Background(), nil, StatusInput{})
if err != nil {
t.Fatalf("status failed: %v", err)
}
if out.Count != 0 {
t.Fatalf("expected count 0, got %d", out.Count)
}
if len(out.Workspaces) != 0 {
t.Fatalf("expected empty workspace list, got %d entries", len(out.Workspaces))
}
}
func TestPlanRead_Good_ReturnsWrittenPlan(t *testing.T) {
sub := &PrepSubsystem{codePath: t.TempDir()}
plan := &Plan{
ID: "plan-1",
Title: "Read me",
Status: "ready",
Objective: "Verify plan reads",
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
}
if _, err := writePlan(sub.plansDir(), plan); err != nil {
t.Fatalf("writePlan failed: %v", err)
}
_, out, err := sub.planRead(context.Background(), nil, PlanReadInput{ID: plan.ID})
if err != nil {
t.Fatalf("planRead failed: %v", err)
}
if !out.Success {
t.Fatal("expected success output")
}
if out.Plan.ID != plan.ID {
t.Fatalf("expected plan %q, got %q", plan.ID, out.Plan.ID)
}
if out.Plan.Title != plan.Title {
t.Fatalf("expected title %q, got %q", plan.Title, out.Plan.Title)
}
}
func TestStatus_Good_ExposesWorkspaceMetadata(t *testing.T) {
root := t.TempDir()
sub := &PrepSubsystem{codePath: root}
wsDir := filepath.Join(root, ".core", "workspace", "repo-123")
plan := &WorkspaceStatus{
Status: "completed",
Agent: "claude",
Repo: "go-mcp",
Branch: "agent/issue-42-fix-status",
Issue: 42,
PRURL: "https://forge.example/pr/42",
Task: "Fix status output",
Runs: 2,
}
if err := writeStatus(wsDir, plan); err != nil {
t.Fatalf("writeStatus failed: %v", err)
}
_, out, err := sub.status(context.Background(), nil, StatusInput{})
if err != nil {
t.Fatalf("status failed: %v", err)
}
if out.Count != 1 {
t.Fatalf("expected count 1, got %d", out.Count)
}
info := out.Workspaces[0]
if info.Branch != plan.Branch {
t.Fatalf("expected branch %q, got %q", plan.Branch, info.Branch)
}
if info.Issue != plan.Issue {
t.Fatalf("expected issue %d, got %d", plan.Issue, info.Issue)
}
if info.PRURL != plan.PRURL {
t.Fatalf("expected PR URL %q, got %q", plan.PRURL, info.PRURL)
}
}

pkg/mcp/agentic/watch.go Normal file

@@ -0,0 +1,205 @@
// SPDX-License-Identifier: EUPL-1.2
package agentic
import (
"context"
"time"
core "dappco.re/go/core"
coreerr "dappco.re/go/log"
coremcp "dappco.re/go/mcp/pkg/mcp"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
const (
defaultWatchPollInterval = 5 * time.Second
defaultWatchTimeout = 60 * time.Second
maxWatchTimeout = 30 * time.Minute
)
// WatchInput is the input for agentic_watch.
type WatchInput struct {
Workspaces []string `json:"workspaces,omitempty"`
PollInterval int `json:"poll_interval,omitempty"`
Timeout int `json:"timeout,omitempty"`
}
// WatchOutput is the result of watching one or more workspaces.
type WatchOutput struct {
Success bool `json:"success"`
Completed []WatchResult `json:"completed"`
Failed []WatchResult `json:"failed,omitempty"`
Duration string `json:"duration"`
}
// WatchResult describes one workspace result.
type WatchResult struct {
Workspace string `json:"workspace"`
Agent string `json:"agent"`
Repo string `json:"repo"`
Status string `json:"status"`
Branch string `json:"branch,omitempty"`
Issue int `json:"issue,omitempty"`
PRURL string `json:"pr_url,omitempty"`
}
func (s *PrepSubsystem) registerWatchTool(svc *coremcp.Service) {
server := svc.Server()
coremcp.AddToolRecorded(svc, server, "agentic", &mcp.Tool{
Name: "agentic_watch",
Description: "Watch running or queued agent workspaces until they finish and return a completion summary.",
}, s.watch)
}
func (s *PrepSubsystem) watch(ctx context.Context, req *mcp.CallToolRequest, input WatchInput) (*mcp.CallToolResult, WatchOutput, error) {
pollInterval := time.Duration(input.PollInterval) * time.Second
if pollInterval <= 0 {
pollInterval = defaultWatchPollInterval
}
timeout := resolveWatchTimeout(input)
start := time.Now()
deadline := start.Add(timeout)
targets := input.Workspaces
if len(targets) == 0 {
targets = s.findActiveWorkspaces()
}
if len(targets) == 0 {
return nil, WatchOutput{Success: true, Duration: "0s"}, nil
}
notifier := coremcp.NewProgressNotifier(ctx, req)
progress := float64(0)
total := float64(len(targets))
sendProgress := func(current float64, status WorkspaceStatus) {
_ = notifier.Send(current, total, core.Sprintf("%s %s (%s)", status.Repo, status.Status, status.Agent))
}
remaining := make(map[string]struct{}, len(targets))
for _, workspace := range targets {
remaining[workspace] = struct{}{}
}
completed := make([]WatchResult, 0, len(targets))
failed := make([]WatchResult, 0)
for len(remaining) > 0 {
if time.Now().After(deadline) {
for workspace := range remaining {
failed = append(failed, WatchResult{
Workspace: workspace,
Status: "timeout",
})
}
break
}
select {
case <-ctx.Done():
return nil, WatchOutput{}, coreerr.E("watch", "cancelled", ctx.Err())
case <-time.After(pollInterval):
}
_, statusOut, err := s.status(ctx, req, StatusInput{})
if err != nil {
return nil, WatchOutput{}, coreerr.E("watch", "failed to refresh status", err)
}
for _, info := range statusOut.Workspaces {
if _, ok := remaining[info.Name]; !ok {
continue
}
switch info.Status {
case "completed", "merged", "ready-for-review":
status := WorkspaceStatus{
Repo: info.Repo,
Agent: info.Agent,
Status: info.Status,
}
completed = append(completed, WatchResult{
Workspace: info.Name,
Agent: info.Agent,
Repo: info.Repo,
Status: info.Status,
Branch: info.Branch,
Issue: info.Issue,
PRURL: info.PRURL,
})
delete(remaining, info.Name)
progress++
sendProgress(progress, status)
case "failed", "blocked":
status := WorkspaceStatus{
Repo: info.Repo,
Agent: info.Agent,
Status: info.Status,
}
failed = append(failed, WatchResult{
Workspace: info.Name,
Agent: info.Agent,
Repo: info.Repo,
Status: info.Status,
Branch: info.Branch,
Issue: info.Issue,
PRURL: info.PRURL,
})
delete(remaining, info.Name)
progress++
sendProgress(progress, status)
}
}
}
return nil, WatchOutput{
Success: len(failed) == 0,
Completed: completed,
Failed: failed,
Duration: time.Since(start).Round(time.Second).String(),
}, nil
}
func resolveWatchTimeout(input WatchInput) time.Duration {
if input.Timeout <= 0 {
return defaultWatchTimeout
}
maxSeconds := int(maxWatchTimeout / time.Second)
if input.Timeout > maxSeconds {
return maxWatchTimeout
}
return time.Duration(input.Timeout) * time.Second
}
func (s *PrepSubsystem) findActiveWorkspaces() []string {
wsDirs := s.listWorkspaceDirs()
if len(wsDirs) == 0 {
return nil
}
active := make([]string, 0, len(wsDirs))
for _, wsDir := range wsDirs {
st, err := readStatus(wsDir)
if err != nil {
continue
}
switch st.Status {
case "running", "queued":
active = append(active, core.PathBase(wsDir))
}
}
return active
}
func (s *PrepSubsystem) resolveWorkspaceDir(name string) string {
if core.PathIsAbs(name) {
return name
}
return core.JoinPath(s.workspaceRoot(), name)
}


@@ -0,0 +1,41 @@
// SPDX-License-Identifier: EUPL-1.2
package agentic
import (
"testing"
"time"
)
func TestWatchDefaults_Good_RFCOneMinuteTimeout(t *testing.T) {
if defaultWatchTimeout != 60*time.Second {
t.Fatalf("expected default watch timeout to be 60s, got %s", defaultWatchTimeout)
}
if defaultWatchPollInterval != 5*time.Second {
t.Fatalf("expected default poll interval to be 5s, got %s", defaultWatchPollInterval)
}
if maxWatchTimeout != 30*time.Minute {
t.Fatalf("expected max watch timeout to be 30m, got %s", maxWatchTimeout)
}
}
func TestResolveWatchTimeout_Good_HonorsInputTimeout(t *testing.T) {
got := resolveWatchTimeout(WatchInput{Timeout: 10})
if got != 10*time.Second {
t.Fatalf("expected input timeout to be honored as 10s, got %s", got)
}
}
func TestResolveWatchTimeout_Good_ClampsInputTimeout(t *testing.T) {
got := resolveWatchTimeout(WatchInput{Timeout: int((10 * time.Hour) / time.Second)})
if got != 30*time.Minute {
t.Fatalf("expected input timeout to clamp to 30m, got %s", got)
}
}
func TestResolveWatchTimeout_Good_ZeroUsesDefault(t *testing.T) {
got := resolveWatchTimeout(WatchInput{Timeout: 0})
if got != defaultWatchTimeout {
t.Fatalf("expected zero timeout to use default %s, got %s", defaultWatchTimeout, got)
}
}


@@ -0,0 +1,54 @@
// SPDX-License-Identifier: EUPL-1.2
package agentic
import (
"os"
core "dappco.re/go/core"
coreio "dappco.re/go/io"
)
// os.CreateTemp, os.Remove, os.Rename are framework-boundary calls for
// atomic file writes — no core equivalent exists for temp file creation.
// writeAtomic writes content to path by staging it in a temporary file and
// renaming it into place.
//
// This avoids exposing partially written workspace files to agents that may
// read status, prompt, or plan documents while they are being updated.
func writeAtomic(path, content string) error {
dir := core.PathDir(path)
if err := coreio.Local.EnsureDir(dir); err != nil {
return err
}
tmp, err := os.CreateTemp(dir, "."+core.PathBase(path)+".*.tmp")
if err != nil {
return err
}
tmpPath := tmp.Name()
cleanup := func() {
_ = tmp.Close()
_ = os.Remove(tmpPath)
}
if _, err := tmp.WriteString(content); err != nil {
cleanup()
return err
}
if err := tmp.Sync(); err != nil {
cleanup()
return err
}
if err := tmp.Close(); err != nil {
_ = os.Remove(tmpPath)
return err
}
if err := os.Rename(tmpPath, path); err != nil {
_ = os.Remove(tmpPath)
return err
}
return nil
}

pkg/mcp/authz.go Normal file

@@ -0,0 +1,397 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"context"
"crypto/hmac"
"crypto/sha256"
"crypto/subtle"
"encoding/base64"
"encoding/json"
"reflect"
"strconv"
"time"
core "dappco.re/go/core"
coreerr "dappco.re/go/log"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
const (
// authTokenPrefix is the prefix used by HTTP Authorization headers.
authTokenPrefix = "Bearer "
// authDefaultJWTTTL is the default validity duration for minted JWTs.
authDefaultJWTTTL = time.Hour
// authJWTSecretEnv is the HMAC secret used for JWT signing and verification.
authJWTSecretEnv = "MCP_JWT_SECRET"
// authJWTTTLSecondsEnv allows overriding token lifetime.
authJWTTTLSecondsEnv = "MCP_JWT_TTL_SECONDS"
)
// authClaims is the compact claim payload stored inside our internal JWTs.
type authClaims struct {
Workspace string `json:"workspace,omitempty"`
Entitlements []string `json:"entitlements,omitempty"`
Subject string `json:"sub,omitempty"`
Issuer string `json:"iss,omitempty"`
IssuedAt int64 `json:"iat,omitempty"`
ExpiresAt int64 `json:"exp,omitempty"`
}
type authContextKey struct{}
func withAuthClaims(ctx context.Context, claims *authClaims) context.Context {
if ctx == nil {
return context.Background()
}
return context.WithValue(ctx, authContextKey{}, claims)
}
func claimsFromContext(ctx context.Context) *authClaims {
if ctx == nil {
return nil
}
if c := ctx.Value(authContextKey{}); c != nil {
if cl, ok := c.(*authClaims); ok {
return cl
}
}
return nil
}
// authConfig holds token verification options derived from environment.
type authConfig struct {
apiToken string
secret []byte
ttl time.Duration
}
func currentAuthConfig(apiToken string) authConfig {
cfg := authConfig{
apiToken: apiToken,
secret: []byte(core.Env(authJWTSecretEnv)),
ttl: authDefaultJWTTTL,
}
if len(cfg.secret) == 0 {
cfg.secret = []byte(apiToken)
}
if ttlRaw := core.Trim(core.Env(authJWTTTLSecondsEnv)); ttlRaw != "" {
if ttlVal, err := strconv.Atoi(ttlRaw); err == nil && ttlVal > 0 {
cfg.ttl = time.Duration(ttlVal) * time.Second
}
}
return cfg
}
func extractBearerToken(raw string) string {
raw = core.Trim(raw)
if core.HasPrefix(raw, authTokenPrefix) {
return core.Trim(core.TrimPrefix(raw, authTokenPrefix))
}
return ""
}
func parseAuthClaims(authToken, apiToken string) (*authClaims, error) {
cfg := currentAuthConfig(apiToken)
if cfg.apiToken == "" {
return nil, nil
}
tkn := extractBearerToken(authToken)
if tkn == "" {
return nil, coreerr.E("", "missing bearer token", nil)
}
if subtle.ConstantTimeCompare([]byte(tkn), []byte(cfg.apiToken)) == 1 {
return &authClaims{
Subject: "api-key",
IssuedAt: time.Now().Unix(),
}, nil
}
if len(cfg.secret) == 0 {
return nil, coreerr.E("", "jwt secret is not configured", nil)
}
parts := core.Split(tkn, ".")
if len(parts) != 3 {
return nil, coreerr.E("", "invalid token format", nil)
}
headerJSON, err := decodeJWTSection(parts[0])
if err != nil {
return nil, err
}
var header map[string]any
if err := json.Unmarshal(headerJSON, &header); err != nil {
return nil, err
}
if alg, _ := header["alg"].(string); alg != "" && alg != "HS256" {
return nil, coreerr.E("", core.Sprintf("unsupported jwt algorithm: %s", alg), nil)
}
signatureBase := parts[0] + "." + parts[1]
mac := hmac.New(sha256.New, cfg.secret)
mac.Write([]byte(signatureBase))
expectedSig := mac.Sum(nil)
actualSig, err := decodeJWTSection(parts[2])
if err != nil {
return nil, err
}
if !hmac.Equal(expectedSig, actualSig) {
return nil, coreerr.E("", "invalid token signature", nil)
}
payloadJSON, err := decodeJWTSection(parts[1])
if err != nil {
return nil, err
}
var claims authClaims
if err := json.Unmarshal(payloadJSON, &claims); err != nil {
return nil, err
}
now := time.Now().Unix()
if claims.ExpiresAt > 0 && claims.ExpiresAt < now {
return nil, coreerr.E("", "token has expired", nil)
}
return &claims, nil
}
func decodeJWTSection(value string) ([]byte, error) {
raw, err := base64.RawURLEncoding.DecodeString(value)
if err != nil {
return nil, err
}
return raw, nil
}
func encodeJWTSection(value []byte) string {
return base64.RawURLEncoding.EncodeToString(value)
}
func mintJWTToken(rawClaims authClaims, cfg authConfig) (string, error) {
now := time.Now().Unix()
if rawClaims.IssuedAt == 0 {
rawClaims.IssuedAt = now
}
if rawClaims.ExpiresAt == 0 {
rawClaims.ExpiresAt = now + int64(cfg.ttl.Seconds())
}
header := map[string]string{
"alg": "HS256",
"typ": "JWT",
}
headerJSON, err := json.Marshal(header)
if err != nil {
return "", err
}
payloadJSON, err := json.Marshal(rawClaims)
if err != nil {
return "", err
}
signingInput := encodeJWTSection(headerJSON) + "." + encodeJWTSection(payloadJSON)
mac := hmac.New(sha256.New, cfg.secret)
mac.Write([]byte(signingInput))
signature := mac.Sum(nil)
return signingInput + "." + encodeJWTSection(signature), nil
}
func authClaimsFromToolRequest(ctx context.Context, req *mcp.CallToolRequest, apiToken string) (claims *authClaims, inTransport bool, err error) {
cfg := currentAuthConfig(apiToken)
if cfg.apiToken == "" {
return nil, false, nil
}
if req != nil {
extra := req.GetExtra()
if extra == nil || extra.Header == nil {
return nil, true, coreerr.E("", "missing request auth metadata", nil)
}
raw := extra.Header.Get("Authorization")
parsed, err := parseAuthClaims(raw, apiToken)
if err != nil {
return nil, true, err
}
return parsed, true, nil
}
if claims = claimsFromContext(ctx); claims != nil {
return claims, true, nil
}
return nil, false, nil
}
func (s *Service) authorizeToolAccess(ctx context.Context, req *mcp.CallToolRequest, tool string, input any) error {
apiToken := core.Env("MCP_AUTH_TOKEN")
cfg := currentAuthConfig(apiToken)
if cfg.apiToken == "" {
return nil
}
claims, inTransport, err := authClaimsFromToolRequest(ctx, req, apiToken)
if err != nil {
return coreerr.E("auth", "unauthorized", err)
}
if !inTransport {
// Allow direct service method calls in-process, while still enforcing
// transport requests where auth metadata is present.
return nil
}
if claims == nil {
return coreerr.E("auth", "unauthorized", coreerr.E("", "missing auth claims", nil))
}
if !claims.canRunTool(tool) {
return coreerr.E("auth", "forbidden", coreerr.E("", "tool not allowed for token", nil))
}
if !claims.canAccessWorkspaceFromInput(input) {
return coreerr.E("auth", "forbidden", coreerr.E("", "workspace scope mismatch", nil))
}
return nil
}
func (c *authClaims) canRunTool(tool string) bool {
if c == nil {
return false
}
if len(c.Entitlements) == 0 {
return true
}
toolAllow := "tool:" + tool
for _, e := range c.Entitlements {
switch e {
case "*", "tool:*", "tools:*":
return true
default:
if e == tool {
return true
}
if e == toolAllow || e == "tools:"+tool {
return true
}
}
}
return false
}
func (c *authClaims) canAccessWorkspaceFromInput(input any) bool {
if c == nil || c.Workspace == "" || c.Workspace == "*" {
return true
}
target := inputWorkspaceFromValue(input)
if target == "" {
return true
}
return workspaceMatch(c.Workspace, target)
}
func workspaceMatch(claimed, target string) bool {
if core.Trim(claimed) == "" {
return true
}
if core.Trim(target) == "" {
return true
}
if claimed == target {
return true
}
if core.HasSuffix(claimed, "*") {
prefix := core.TrimSuffix(claimed, "*")
return core.HasPrefix(target, prefix)
}
return core.HasPrefix(target, claimed+"/")
}
func inputWorkspaceFromValue(input any) string {
if input == nil {
return ""
}
v := reflect.ValueOf(input)
for v.Kind() == reflect.Pointer && !v.IsNil() {
v = v.Elem()
}
if !v.IsValid() {
return ""
}
switch v.Kind() {
case reflect.Map:
return workspaceFromMap(v)
case reflect.Struct:
return workspaceFromStruct(v)
default:
return ""
}
}
func workspaceFromMap(v reflect.Value) string {
if v.IsNil() {
return ""
}
keyType := v.Type().Key()
if keyType.Kind() != reflect.String {
return ""
}
for _, key := range []string{
"workspace",
"repo",
"repository",
"project",
"workspace_id",
} {
mapKey := reflect.ValueOf(key)
if mapKey.Type() != keyType {
if mapKey.Type().ConvertibleTo(keyType) {
mapKey = mapKey.Convert(keyType)
} else {
continue
}
}
if mapKey.IsValid() {
raw := v.MapIndex(mapKey)
if raw.IsValid() && raw.Kind() == reflect.String {
return core.Trim(raw.String())
}
}
}
return ""
}
func workspaceFromStruct(v reflect.Value) string {
t := v.Type()
for i := 0; i < v.NumField(); i++ {
f := v.Field(i)
ft := t.Field(i)
if !f.CanInterface() {
continue
}
keys := []string{core.Lower(ft.Name)}
if tag := ft.Tag.Get("json"); tag != "" {
keys = append(keys, core.Lower(core.Split(tag, ",")[0]))
}
for _, candidate := range keys {
if candidate != "workspace" && candidate != "repo" && candidate != "repository" {
continue
}
switch f.Kind() {
case reflect.String:
if s := core.Trim(f.String()); s != "" {
return s
}
case reflect.Pointer:
if f.IsNil() {
continue
}
if f.Elem().Kind() == reflect.String {
if s := core.Trim(f.Elem().String()); s != "" {
return s
}
}
}
}
}
return ""
}


@@ -7,9 +7,9 @@ package brain
 import (
 	"context"
-	coreerr "forge.lthn.ai/core/go-log"
-	"forge.lthn.ai/core/mcp/pkg/mcp/ide"
-	"github.com/modelcontextprotocol/go-sdk/mcp"
+	coreerr "dappco.re/go/log"
+	coremcp "dappco.re/go/mcp/pkg/mcp"
+	"dappco.re/go/mcp/pkg/mcp/ide"
 )
 // errBridgeNotAvailable is returned when a tool requires the Laravel bridge
@@ -20,20 +20,56 @@ var errBridgeNotAvailable = coreerr.E("brain", "bridge not available", nil)
 // It proxies brain_* tool calls to the Laravel backend via the shared IDE bridge.
 type Subsystem struct {
 	bridge   *ide.Bridge
+	notifier coremcp.Notifier
 }
+var (
+	_ coremcp.Subsystem             = (*Subsystem)(nil)
+	_ coremcp.SubsystemWithShutdown = (*Subsystem)(nil)
+	_ coremcp.SubsystemWithNotifier = (*Subsystem)(nil)
+)
 // New creates a brain subsystem that uses the given IDE bridge for Laravel communication.
+//
+// brain := New(ideBridge)
+//
 // Pass nil if headless (tools will return errBridgeNotAvailable).
 func New(bridge *ide.Bridge) *Subsystem {
-	return &Subsystem{bridge: bridge}
+	s := &Subsystem{bridge: bridge}
+	if bridge != nil {
+		bridge.AddObserver(func(msg ide.BridgeMessage) {
+			s.handleBridgeMessage(msg)
+		})
+	}
+	return s
 }
 // Name implements mcp.Subsystem.
 func (s *Subsystem) Name() string { return "brain" }
+// SetNotifier stores the shared notifier so this subsystem can emit channel events.
+func (s *Subsystem) SetNotifier(n coremcp.Notifier) {
+	s.notifier = n
+}
 // RegisterTools implements mcp.Subsystem.
-func (s *Subsystem) RegisterTools(server *mcp.Server) {
-	s.registerBrainTools(server)
+func (s *Subsystem) RegisterTools(svc *coremcp.Service) {
+	s.registerBrainTools(svc)
+}
+func (s *Subsystem) handleBridgeMessage(msg ide.BridgeMessage) {
+	switch msg.Type {
+	case "brain_remember":
+		emitBridgeChannel(context.Background(), s.notifier, coremcp.ChannelBrainRememberDone, bridgePayload(msg.Data, "org", "type", "project"))
+	case "brain_recall":
+		payload := bridgePayload(msg.Data, "query", "org", "project", "type", "agent_id")
+		payload["count"] = bridgeCount(msg.Data)
+		emitBridgeChannel(context.Background(), s.notifier, coremcp.ChannelBrainRecallDone, payload)
+	case "brain_forget":
+		emitBridgeChannel(context.Background(), s.notifier, coremcp.ChannelBrainForgetDone, bridgePayload(msg.Data, "id", "reason"))
+	case "brain_list":
+		emitBridgeChannel(context.Background(), s.notifier, coremcp.ChannelBrainListDone, bridgePayload(msg.Data, "org", "project", "type", "agent_id", "limit"))
+	}
 }
 // Shutdown implements mcp.SubsystemWithShutdown.

View file

@ -7,8 +7,20 @@ import (
"encoding/json"
"testing"
"time"
"dappco.re/go/mcp/pkg/mcp/ide"
)
type recordingNotifier struct {
channel string
data any
}
func (r *recordingNotifier) ChannelSend(_ context.Context, channel string, data any) {
r.channel = channel
r.data = data
}
// --- Nil bridge tests (headless mode) ---
func TestBrainRemember_Bad_NilBridge(t *testing.T) {
@ -68,6 +80,43 @@ func TestSubsystem_Good_ShutdownNoop(t *testing.T) {
}
}
func TestSubsystem_Good_BridgeRecallNotification(t *testing.T) {
sub := New(nil)
notifier := &recordingNotifier{}
sub.notifier = notifier
sub.handleBridgeMessage(ide.BridgeMessage{
Type: "brain_recall",
Data: map[string]any{
"query": "how does scoring work?",
"org": "core",
"project": "eaas",
"memories": []any{
map[string]any{"id": "m1"},
map[string]any{"id": "m2"},
},
},
})
if notifier.channel != "brain.recall.complete" {
t.Fatalf("expected brain.recall.complete, got %q", notifier.channel)
}
payload, ok := notifier.data.(map[string]any)
if !ok {
t.Fatalf("expected payload map, got %T", notifier.data)
}
if payload["count"] != 2 {
t.Fatalf("expected count 2, got %v", payload["count"])
}
if payload["query"] != "how does scoring work?" {
t.Fatalf("expected query to be forwarded, got %v", payload["query"])
}
if payload["org"] != "core" {
t.Fatalf("expected org to be forwarded, got %v", payload["org"])
}
}
// --- Struct round-trip tests ---
func TestRememberInput_Good_RoundTrip(t *testing.T) {
@ -75,6 +124,7 @@ func TestRememberInput_Good_RoundTrip(t *testing.T) {
Content: "LEM scoring was blind to negative emotions",
Type: "bug",
Tags: []string{"scoring", "lem"},
Org: "core",
Project: "eaas",
Confidence: 0.95,
Supersedes: "550e8400-e29b-41d4-a716-446655440000",
@ -94,6 +144,9 @@ func TestRememberInput_Good_RoundTrip(t *testing.T) {
if len(out.Tags) != 2 || out.Tags[0] != "scoring" {
t.Errorf("round-trip mismatch: tags")
}
if out.Org != "core" {
t.Errorf("round-trip mismatch: org %q != core", out.Org)
}
if out.Confidence != 0.95 {
t.Errorf("round-trip mismatch: confidence %f != 0.95", out.Confidence)
}
@ -123,6 +176,7 @@ func TestRecallInput_Good_RoundTrip(t *testing.T) {
Query: "how does verdict classification work?",
TopK: 5,
Filter: RecallFilter{
Org: "core",
Project: "eaas",
MinConfidence: 0.5,
},
@ -138,7 +192,7 @@ func TestRecallInput_Good_RoundTrip(t *testing.T) {
if out.Query != in.Query || out.TopK != 5 {
t.Errorf("round-trip mismatch: query or topK")
}
- if out.Filter.Project != "eaas" || out.Filter.MinConfidence != 0.5 {
+ if out.Filter.Org != "core" || out.Filter.Project != "eaas" || out.Filter.MinConfidence != 0.5 {
t.Errorf("round-trip mismatch: filter")
}
}
@ -150,6 +204,7 @@ func TestMemory_Good_RoundTrip(t *testing.T) {
Type: "decision",
Content: "Use Qdrant for vector search",
Tags: []string{"architecture", "openbrain"},
Org: "core",
Project: "php-agentic",
Confidence: 0.9,
CreatedAt: "2026-03-03T12:00:00+00:00",
@ -163,7 +218,7 @@ func TestMemory_Good_RoundTrip(t *testing.T) {
if err := json.Unmarshal(data, &out); err != nil {
t.Fatalf("unmarshal failed: %v", err)
}
- if out.ID != in.ID || out.AgentID != "virgil" || out.Type != "decision" {
+ if out.ID != in.ID || out.AgentID != "virgil" || out.Type != "decision" || out.Org != "core" {
t.Errorf("round-trip mismatch: %+v", out)
}
}
@ -188,6 +243,7 @@ func TestForgetInput_Good_RoundTrip(t *testing.T) {
func TestListInput_Good_RoundTrip(t *testing.T) {
in := ListInput{
Org: "core",
Project: "eaas",
Type: "decision",
AgentID: "charon",
@ -201,7 +257,7 @@ func TestListInput_Good_RoundTrip(t *testing.T) {
if err := json.Unmarshal(data, &out); err != nil {
t.Fatalf("unmarshal failed: %v", err)
}
- if out.Project != "eaas" || out.Type != "decision" || out.AgentID != "charon" || out.Limit != 20 {
+ if out.Org != "core" || out.Project != "eaas" || out.Type != "decision" || out.AgentID != "charon" || out.Limit != 20 {
t.Errorf("round-trip mismatch: %+v", out)
}
}

View file

@ -0,0 +1,59 @@
// SPDX-License-Identifier: EUPL-1.2
package brain
import (
"context"
coremcp "dappco.re/go/mcp/pkg/mcp"
)
func bridgePayload(data any, keys ...string) map[string]any {
payload := make(map[string]any)
m, ok := data.(map[string]any)
if !ok {
return payload
}
for _, key := range keys {
if value, ok := m[key]; ok {
payload[key] = value
}
}
return payload
}
func bridgeCount(data any) int {
m, ok := data.(map[string]any)
if !ok {
return 0
}
if count, ok := m["count"]; ok {
switch v := count.(type) {
case int:
return v
case int32:
return int(v)
case int64:
return int(v)
case float64:
return int(v)
}
}
if memories, ok := m["memories"].([]any); ok {
return len(memories)
}
return 0
}
func emitBridgeChannel(ctx context.Context, notifier coremcp.Notifier, channel string, data any) {
if notifier == nil {
return
}
notifier.ChannelSend(ctx, channel, data)
}
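The `bridgeCount` fallback above prefers an explicit `count` field (which arrives as `float64` when the payload was decoded from JSON) and falls back to the length of the `memories` slice. A self-contained sketch of that behaviour, re-implementing the helper with only the standard library:

```go
package main

import "fmt"

// bridgeCount mirrors the helper above: prefer an explicit "count"
// value (integer types or JSON's float64), else fall back to the
// length of the "memories" slice, else zero.
func bridgeCount(data any) int {
	m, ok := data.(map[string]any)
	if !ok {
		return 0
	}
	if count, ok := m["count"]; ok {
		switch v := count.(type) {
		case int:
			return v
		case int64:
			return int(v)
		case float64:
			return int(v)
		}
	}
	if memories, ok := m["memories"].([]any); ok {
		return len(memories)
	}
	return 0
}

func main() {
	fmt.Println(bridgeCount(map[string]any{"count": float64(7)})) // explicit count wins
	fmt.Println(bridgeCount(map[string]any{"memories": []any{"a", "b"}})) // length fallback
	fmt.Println(bridgeCount("not a map")) // non-map payloads count as zero
}
```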

View file

@ -0,0 +1,673 @@
// SPDX-License-Identifier: EUPL-1.2
// Package client provides the shared OpenBrain HTTP client.
//
// c := client.New(client.Options{URL: core.Env("CORE_BRAIN_URL"), Key: core.Env("CORE_BRAIN_KEY")})
// _, err := c.Remember(ctx, client.RememberInput{
// Org: "core",
// Project: "mcp",
// Content: "Use one OpenBrain client for retry and circuit-breaker policy.",
// Type: "decision",
// })
package client
import (
"context"
cryptorand "crypto/rand"
"io"
"io/fs"
"math/big"
"net/http"
"net/url"
"os"
"strconv"
"sync"
"time"
core "dappco.re/go/core"
coreio "dappco.re/go/core/io"
)
const (
DefaultURL = "https://api.lthn.sh"
insecureBrainEnv = "CORE_BRAIN_INSECURE"
brainKeyFileMode = fs.FileMode(0o600)
defaultAgentID = "cladius"
defaultTimeout = 30 * time.Second
defaultMaxAttempts = 3
defaultBaseDelay = 100 * time.Millisecond
defaultFailureThreshold = 3
defaultSuccessThreshold = 1
defaultCircuitCooldown = 30 * time.Second
defaultMaxResponseBytes = int64(1 << 20)
maxBackoffDelay = 30 * time.Second
maxRetryAfterDelay = 60 * time.Second
defaultRecallTopK = 10
defaultListLimit = 50
)
// ErrCircuitOpen is returned when repeated upstream failures have opened the circuit.
var ErrCircuitOpen = core.NewError("brain client circuit open")
// Options configures the shared OpenBrain client.
type Options struct {
URL string
Key string
Org string
AgentID string
HTTPClient *http.Client
MaxAttempts int
BaseDelay time.Duration
MaxResponseBytes int64
CircuitBreaker *CircuitBreaker
}
// Client calls the Laravel /v1/brain/* API with shared retry and circuit policy.
type Client struct {
apiURL string
apiKey string
org string
agentID string
httpClient *http.Client
maxAttempts int
baseDelay time.Duration
maxResponseBytes int64
circuitBreaker *CircuitBreaker
configErr error
sleepFunc func(context.Context, time.Duration) error
}
// RememberInput is the request body for POST /v1/brain/remember.
type RememberInput struct {
Content string `json:"content"`
Type string `json:"type"`
Tags []string `json:"tags,omitempty"`
Org string `json:"org,omitempty"`
Project string `json:"project,omitempty"`
AgentID string `json:"agent_id,omitempty"`
Confidence float64 `json:"confidence,omitempty"`
Supersedes string `json:"supersedes,omitempty"`
ExpiresIn int `json:"expires_in,omitempty"`
}
// RecallInput is the request body for POST /v1/brain/recall.
type RecallInput struct {
Query string `json:"query"`
TopK int `json:"top_k,omitempty"`
Org string `json:"org,omitempty"`
Project string `json:"project,omitempty"`
Type any `json:"type,omitempty"`
AgentID string `json:"agent_id,omitempty"`
MinConfidence float64 `json:"min_confidence,omitempty"`
}
// ForgetInput selects the memory removed by DELETE /v1/brain/forget/{id}.
type ForgetInput struct {
ID string `json:"id"`
Reason string `json:"reason,omitempty"`
}
// ListInput provides URL parameters for GET /v1/brain/list.
type ListInput struct {
Org string `json:"org,omitempty"`
Project string `json:"project,omitempty"`
Type string `json:"type,omitempty"`
AgentID string `json:"agent_id,omitempty"`
Limit int `json:"limit,omitempty"`
}
// CircuitState is the current breaker state.
type CircuitState string
const (
CircuitClosed CircuitState = "closed"
CircuitOpen CircuitState = "open"
CircuitHalfOpen CircuitState = "half_open"
)
// CircuitBreakerOptions controls when the circuit opens and recovers.
type CircuitBreakerOptions struct {
FailureThreshold int
SuccessThreshold int
Cooldown time.Duration
}
// CircuitBreaker protects OpenBrain from repeated failed calls.
type CircuitBreaker struct {
lock sync.Mutex
state CircuitState
failureThreshold int
successThreshold int
cooldown time.Duration
consecutiveFails int
consecutiveWins int
openedAt time.Time
halfOpenInFlight bool
}
// New creates a shared OpenBrain client.
func New(options Options) *Client {
apiURL := core.Trim(options.URL)
if apiURL == "" {
apiURL = DefaultURL
}
configErr := validateAPIURL(apiURL)
agentID := core.Trim(options.AgentID)
if agentID == "" {
agentID = defaultAgentID
}
httpClient := options.HTTPClient
if httpClient == nil {
httpClient = &http.Client{Timeout: defaultTimeout}
}
maxAttempts := options.MaxAttempts
if maxAttempts <= 0 {
maxAttempts = defaultMaxAttempts
}
baseDelay := options.BaseDelay
if baseDelay <= 0 {
baseDelay = defaultBaseDelay
}
maxResponseBytes := options.MaxResponseBytes
if maxResponseBytes <= 0 {
maxResponseBytes = defaultMaxResponseBytes
}
breaker := options.CircuitBreaker
if breaker == nil {
breaker = NewCircuitBreaker(CircuitBreakerOptions{})
}
return &Client{
apiURL: core.TrimSuffix(apiURL, "/"),
apiKey: core.Trim(options.Key),
org: core.Trim(options.Org),
agentID: agentID,
httpClient: httpClient,
maxAttempts: maxAttempts,
baseDelay: baseDelay,
maxResponseBytes: maxResponseBytes,
circuitBreaker: breaker,
configErr: configErr,
sleepFunc: sleepDuration,
}
}
// NewFromEnvironment reads CORE_BRAIN_* settings and ~/.claude/brain.key.
func NewFromEnvironment() *Client {
apiKey, configErr := apiKeyFromEnvironment()
client := New(Options{
URL: envOr("CORE_BRAIN_URL", DefaultURL),
Key: apiKey,
Org: core.Env("CORE_BRAIN_ORG"),
AgentID: core.Env("CORE_BRAIN_AGENT_ID"),
})
if configErr != nil {
client.configErr = configErr
}
return client
}
func validateAPIURL(apiURL string) error {
parsed, err := url.Parse(apiURL)
if err != nil || parsed.Scheme == "" || parsed.Host == "" {
return core.E("brain.client", "invalid API URL", err)
}
if parsed.Scheme == "https" {
return nil
}
if parsed.Scheme == "http" && core.Trim(core.Env(insecureBrainEnv)) == "true" {
return nil
}
return core.E("brain.client", "API URL must use https unless CORE_BRAIN_INSECURE=true", nil)
}
// WriteBrainKey stores the OpenBrain API key at ~/.claude/brain.key with owner-only permissions.
func WriteBrainKey(apiKey string) error {
home := core.Env("HOME")
if home == "" {
return core.E("brain.client", "HOME not set", nil)
}
return writeBrainKeyFile(brainKeyPath(home), apiKey)
}
// NewCircuitBreaker creates a circuit breaker with OpenBrain defaults.
func NewCircuitBreaker(options CircuitBreakerOptions) *CircuitBreaker {
failureThreshold := options.FailureThreshold
if failureThreshold <= 0 {
failureThreshold = defaultFailureThreshold
}
successThreshold := options.SuccessThreshold
if successThreshold <= 0 {
successThreshold = defaultSuccessThreshold
}
cooldown := options.Cooldown
if cooldown <= 0 {
cooldown = defaultCircuitCooldown
}
return &CircuitBreaker{
state: CircuitClosed,
failureThreshold: failureThreshold,
successThreshold: successThreshold,
cooldown: cooldown,
}
}
// State returns the current breaker state.
func (breaker *CircuitBreaker) State() CircuitState {
if breaker == nil {
return CircuitClosed
}
breaker.lock.Lock()
defer breaker.lock.Unlock()
return breaker.stateNow(time.Now())
}
// Remember stores a memory in OpenBrain.
func (c *Client) Remember(ctx context.Context, input RememberInput) (map[string]any, error) {
input.Org = c.orgFor(input.Org)
input.AgentID = c.agentFor(input.AgentID)
return c.Call(ctx, http.MethodPost, "/v1/brain/remember", input)
}
// Recall searches memories in OpenBrain.
func (c *Client) Recall(ctx context.Context, input RecallInput) (map[string]any, error) {
input.Org = c.orgFor(input.Org)
input.AgentID = c.agentFor(input.AgentID)
if input.TopK == 0 {
input.TopK = defaultRecallTopK
}
return c.Call(ctx, http.MethodPost, "/v1/brain/recall", input)
}
// Forget removes one memory from OpenBrain.
func (c *Client) Forget(ctx context.Context, input ForgetInput) (map[string]any, error) {
return c.Call(ctx, http.MethodDelete, core.Concat("/v1/brain/forget/", url.PathEscape(input.ID)), nil)
}
// List returns memories from OpenBrain using URL query filters.
func (c *Client) List(ctx context.Context, input ListInput) (map[string]any, error) {
input.Org = c.orgFor(input.Org)
if input.Limit == 0 {
input.Limit = defaultListLimit
}
values := url.Values{}
if input.Org != "" {
values.Set("org", input.Org)
}
if input.Project != "" {
values.Set("project", input.Project)
}
if input.Type != "" {
values.Set("type", input.Type)
}
if input.AgentID != "" {
values.Set("agent_id", input.AgentID)
}
values.Set("limit", core.Sprintf("%d", input.Limit))
return c.Call(ctx, http.MethodGet, core.Concat("/v1/brain/list?", values.Encode()), nil)
}
// Call performs one OpenBrain API request through retry and circuit-breaker policy.
func (c *Client) Call(ctx context.Context, method, path string, body any) (map[string]any, error) {
if c.configErr != nil {
return nil, c.configErr
}
if c.apiKey == "" {
return nil, core.E("brain.client", "no API key (set CORE_BRAIN_KEY or create ~/.claude/brain.key)", nil)
}
if err := c.circuitBreaker.beforeRequest(); err != nil {
return nil, err
}
bodyString := ""
if body != nil {
bodyString = core.JSONMarshalString(body)
}
var lastErr error
for attempt := 1; attempt <= c.maxAttempts; attempt++ {
payload, retryable, retryAfter, hasRetryAfter, err := c.doOnce(ctx, method, path, bodyString, body != nil)
if err == nil {
c.circuitBreaker.recordSuccess()
return payload, nil
}
lastErr = err
if !retryable {
c.circuitBreaker.recordIgnored()
break
}
c.circuitBreaker.recordFailure()
if c.circuitBreaker.State() == CircuitOpen || attempt == c.maxAttempts {
break
}
var sleepErr error
if hasRetryAfter {
sleepErr = c.sleepFor(ctx, retryAfter)
} else {
sleepErr = c.sleep(ctx, attempt)
}
if sleepErr != nil {
lastErr = sleepErr
break
}
}
return nil, lastErr
}
func (c *Client) doOnce(ctx context.Context, method, path, bodyString string, hasBody bool) (map[string]any, bool, time.Duration, bool, error) {
var reader io.Reader
if hasBody {
reader = core.NewReader(bodyString)
}
requestURL, err := c.requestURL(path)
if err != nil {
return nil, false, 0, false, err
}
request, err := http.NewRequestWithContext(ctx, method, requestURL, reader)
if err != nil {
return nil, false, 0, false, core.E("brain.client", "create request", err)
}
request.Header.Set("Accept", "application/json")
request.Header.Set("Authorization", core.Concat("Bearer ", c.apiKey))
if hasBody {
request.Header.Set("Content-Type", "application/json")
}
response, err := c.httpClient.Do(request)
if err != nil {
if ctx.Err() != nil {
return nil, false, 0, false, core.E("brain.client", "request cancelled", ctx.Err())
}
return nil, true, 0, false, core.E("brain.client", "request failed", err)
}
defer response.Body.Close()
readResult := core.ReadAll(io.LimitReader(response.Body, c.maxResponseBytes+1))
if !readResult.OK {
if readErr, ok := readResult.Value.(error); ok {
return nil, false, 0, false, core.E("brain.client", "read response", readErr)
}
return nil, false, 0, false, core.E("brain.client", "read response", nil)
}
raw := readResult.Value.(string)
if int64(len(raw)) > c.maxResponseBytes {
return nil, false, 0, false, core.E("brain.client", "response too large", nil)
}
if response.StatusCode >= http.StatusBadRequest {
retryAfter, hasRetryAfter := parseRetryAfter(response.Header.Get("Retry-After"), time.Now())
return nil, retryableStatus(response.StatusCode), retryAfter, hasRetryAfter, core.E("brain.client", core.Concat("upstream returned ", response.Status, ": ", core.Trim(raw)), nil)
}
result := map[string]any{}
if parseResult := core.JSONUnmarshalString(raw, &result); !parseResult.OK {
if parseErr, ok := parseResult.Value.(error); ok {
return nil, false, 0, false, core.E("brain.client", "parse response", parseErr)
}
return nil, false, 0, false, core.E("brain.client", "parse response", nil)
}
return result, false, 0, false, nil
}
func (c *Client) requestURL(path string) (string, error) {
parsed, err := url.Parse(path)
if err == nil && (parsed.IsAbs() || parsed.Host != "") {
return "", core.E("brain.client", "absolute request URL rejected", nil)
}
if !core.HasPrefix(path, "/") {
path = core.Concat("/", path)
}
return core.Concat(c.apiURL, path), nil
}
func (c *Client) sleep(ctx context.Context, attempt int) error {
retryAttempt := attempt - 1
delay := jitteredBackoffDelay(c.baseDelay, retryAttempt)
return c.sleepFor(ctx, delay)
}
func (c *Client) sleepFor(ctx context.Context, delay time.Duration) error {
if c.sleepFunc != nil {
return c.sleepFunc(ctx, delay)
}
return sleepDuration(ctx, delay)
}
func sleepDuration(ctx context.Context, delay time.Duration) error {
if delay <= 0 {
return nil
}
timer := time.NewTimer(delay)
defer timer.Stop()
select {
case <-ctx.Done():
return core.E("brain.client", "request cancelled", ctx.Err())
case <-timer.C:
return nil
}
}
func jitteredBackoffDelay(baseDelay time.Duration, attempt int) time.Duration {
limit := backoffDelayLimit(baseDelay, attempt)
if limit <= 0 {
return 0
}
jitter, err := cryptorand.Int(cryptorand.Reader, big.NewInt(int64(limit)))
if err != nil {
return limit
}
return time.Duration(jitter.Int64())
}
func backoffDelayLimit(baseDelay time.Duration, attempt int) time.Duration {
if baseDelay <= 0 {
return 0
}
if baseDelay >= maxBackoffDelay {
return maxBackoffDelay
}
if attempt <= 0 {
return baseDelay
}
delay := baseDelay
for i := 0; i < attempt; i++ {
if delay >= maxBackoffDelay/2 {
return maxBackoffDelay
}
delay *= 2
}
if delay > maxBackoffDelay {
return maxBackoffDelay
}
return delay
}
func (c *Client) orgFor(org string) string {
org = core.Trim(org)
if org != "" {
return org
}
return c.org
}
func (c *Client) agentFor(agentID string) string {
agentID = core.Trim(agentID)
if agentID != "" {
return agentID
}
return c.agentID
}
func (breaker *CircuitBreaker) beforeRequest() error {
if breaker == nil {
return nil
}
breaker.lock.Lock()
defer breaker.lock.Unlock()
state := breaker.stateNow(time.Now())
if state == CircuitOpen {
return ErrCircuitOpen
}
if state == CircuitHalfOpen {
if breaker.halfOpenInFlight {
return ErrCircuitOpen
}
breaker.halfOpenInFlight = true
}
return nil
}
func (breaker *CircuitBreaker) recordSuccess() {
if breaker == nil {
return
}
breaker.lock.Lock()
defer breaker.lock.Unlock()
breaker.halfOpenInFlight = false
breaker.consecutiveFails = 0
breaker.consecutiveWins++
if breaker.state == CircuitHalfOpen && breaker.consecutiveWins >= breaker.successThreshold {
breaker.state = CircuitClosed
breaker.consecutiveWins = 0
}
if breaker.state == CircuitClosed {
breaker.consecutiveWins = 0
}
}
func (breaker *CircuitBreaker) recordFailure() {
if breaker == nil {
return
}
breaker.lock.Lock()
defer breaker.lock.Unlock()
breaker.halfOpenInFlight = false
breaker.consecutiveWins = 0
breaker.consecutiveFails++
if breaker.state == CircuitHalfOpen || breaker.consecutiveFails >= breaker.failureThreshold {
breaker.state = CircuitOpen
breaker.openedAt = time.Now()
}
}
func (breaker *CircuitBreaker) recordIgnored() {
if breaker == nil {
return
}
breaker.lock.Lock()
defer breaker.lock.Unlock()
breaker.halfOpenInFlight = false
}
func (breaker *CircuitBreaker) stateNow(now time.Time) CircuitState {
if breaker.state == "" {
breaker.state = CircuitClosed
}
if breaker.state == CircuitOpen && now.Sub(breaker.openedAt) >= breaker.cooldown {
breaker.state = CircuitHalfOpen
breaker.consecutiveFails = 0
breaker.consecutiveWins = 0
breaker.halfOpenInFlight = false
}
return breaker.state
}
func retryableStatus(statusCode int) bool {
return statusCode == http.StatusRequestTimeout || statusCode == http.StatusTooManyRequests || statusCode >= http.StatusInternalServerError
}
func parseRetryAfter(value string, now time.Time) (time.Duration, bool) {
value = core.Trim(value)
if value == "" {
return 0, false
}
if seconds, err := strconv.ParseInt(value, 10, 64); err == nil {
if seconds <= 0 {
return 0, true
}
maxSeconds := int64(maxRetryAfterDelay / time.Second)
if seconds > maxSeconds {
return maxRetryAfterDelay, true
}
return time.Duration(seconds) * time.Second, true
}
retryAt, err := http.ParseTime(value)
if err != nil {
return 0, false
}
delay := retryAt.Sub(now)
if delay <= 0 {
return 0, true
}
if delay > maxRetryAfterDelay {
return maxRetryAfterDelay, true
}
return delay, true
}
func envOr(key, fallback string) string {
value := core.Env(key)
if value != "" {
return value
}
return fallback
}
func apiKeyFromEnvironment() (string, error) {
if apiKey := core.Trim(core.Env("CORE_BRAIN_KEY")); apiKey != "" {
return apiKey, nil
}
home := core.Env("HOME")
if home == "" {
return "", nil
}
apiKey, err := readBrainKeyFile(brainKeyPath(home))
if err != nil {
if core.Is(err, fs.ErrNotExist) {
return "", nil
}
return "", err
}
return apiKey, nil
}
func brainKeyPath(home string) string {
return core.JoinPath(home, ".claude", "brain.key")
}
func readBrainKeyFile(path string) (string, error) {
info, err := coreio.Local.Stat(path)
if err != nil {
return "", err
}
if brainKeyModeInsecure(info.Mode().Perm()) {
return "", core.E("brain.client", "brain.key has insecure permissions, expected 0600", nil)
}
data, err := coreio.Local.Read(path)
if err != nil {
return "", err
}
return core.Trim(data), nil
}
func writeBrainKeyFile(path, apiKey string) error {
if err := coreio.Local.WriteMode(path, core.Trim(apiKey)+"\n", brainKeyFileMode); err != nil {
return err
}
if err := os.Chmod(path, brainKeyFileMode); err != nil {
return core.E("brain.client", "chmod brain.key", err)
}
return nil
}
func brainKeyModeInsecure(mode fs.FileMode) bool {
return mode.Perm()&^brainKeyFileMode != 0
}

View file

@ -0,0 +1,595 @@
// SPDX-License-Identifier: EUPL-1.2
package client
import (
"context"
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"strings"
"testing"
"time"
core "dappco.re/go/core"
)
func TestClientRemember_Good_SendsOrgAndAuth(t *testing.T) {
var gotBody map[string]any
server := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
t.Fatalf("expected POST, got %s", r.Method)
}
if r.URL.Path != "/v1/brain/remember" {
t.Fatalf("expected /v1/brain/remember, got %s", r.URL.Path)
}
if r.Header.Get("Authorization") != "Bearer test-key" {
t.Fatalf("expected bearer token, got %q", r.Header.Get("Authorization"))
}
gotBody = readRequestBody(t, r)
writeJSON(t, w, http.StatusOK, map[string]any{"id": "mem-1"})
}))
defer server.Close()
c := New(Options{
URL: server.URL,
Key: "test-key",
Org: "core",
AgentID: "codex",
HTTPClient: server.Client(),
MaxAttempts: 1,
})
result, err := c.Remember(context.Background(), RememberInput{
Content: "remember org",
Type: "decision",
Project: "mcp",
})
if err != nil {
t.Fatalf("Remember failed: %v", err)
}
if result["id"] != "mem-1" {
t.Fatalf("expected id mem-1, got %v", result["id"])
}
if gotBody["org"] != "core" {
t.Fatalf("expected org=core, got %v", gotBody["org"])
}
if gotBody["project"] != "mcp" {
t.Fatalf("expected project=mcp, got %v", gotBody["project"])
}
if gotBody["agent_id"] != "codex" {
t.Fatalf("expected agent_id=codex, got %v", gotBody["agent_id"])
}
}
func TestClientList_Good_SendsOrgURLParam(t *testing.T) {
server := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
t.Fatalf("expected GET, got %s", r.Method)
}
if r.URL.Path != "/v1/brain/list" {
t.Fatalf("expected /v1/brain/list, got %s", r.URL.Path)
}
if got := r.URL.Query().Get("org"); got != "core" {
t.Fatalf("expected org=core, got %q", got)
}
if got := r.URL.Query().Get("project"); got != "mcp" {
t.Fatalf("expected project=mcp, got %q", got)
}
if got := r.URL.Query().Get("limit"); got != "50" {
t.Fatalf("expected default limit=50, got %q", got)
}
writeJSON(t, w, http.StatusOK, map[string]any{"memories": []any{}})
}))
defer server.Close()
c := New(Options{URL: server.URL, Key: "test-key", Org: "core", HTTPClient: server.Client(), MaxAttempts: 1})
if _, err := c.List(context.Background(), ListInput{Project: "mcp"}); err != nil {
t.Fatalf("List failed: %v", err)
}
}
func TestClientCall_Good_BuildsRequestAgainstAPIURL(t *testing.T) {
gotHost := ""
server := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
t.Fatalf("expected POST, got %s", r.Method)
}
if r.URL.Path != "/v1/brain/remember" {
t.Fatalf("expected /v1/brain/remember, got %s", r.URL.Path)
}
gotHost = r.Host
writeJSON(t, w, http.StatusOK, map[string]any{"id": "mem-1"})
}))
defer server.Close()
c := New(Options{
URL: server.URL,
Key: "test-key",
HTTPClient: server.Client(),
MaxAttempts: 1,
})
result, err := c.Call(context.Background(), http.MethodPost, "/v1/brain/remember", map[string]any{"content": "safe"})
if err != nil {
t.Fatalf("Call failed: %v", err)
}
if result["id"] != "mem-1" {
t.Fatalf("expected id mem-1, got %v", result["id"])
}
if gotHost != strings.TrimPrefix(server.URL, "https://") {
t.Fatalf("expected host %s, got %s", strings.TrimPrefix(server.URL, "https://"), gotHost)
}
}
func TestClientCall_Bad_RejectsAbsoluteRequestURL(t *testing.T) {
for _, requestPath := range []string{"http://attacker.com/leak", "https://attacker.com/leak"} {
t.Run(requestPath, func(t *testing.T) {
calls := 0
c := New(Options{
URL: "https://brain.test",
Key: "test-key",
HTTPClient: &http.Client{Transport: roundTripFunc(func(*http.Request) (*http.Response, error) {
calls++
return nil, core.E("test", "unexpected HTTP request", nil)
})},
MaxAttempts: 1,
})
_, err := c.Call(context.Background(), http.MethodPost, requestPath, map[string]any{"content": "leak"})
if err == nil {
t.Fatal("expected absolute URL error")
}
if !strings.Contains(err.Error(), "absolute request URL rejected") {
t.Fatalf("expected absolute URL rejection, got %v", err)
}
if calls != 0 {
t.Fatalf("expected no HTTP requests, got %d", calls)
}
})
}
}
func TestClientNew_Bad_RejectsHTTPAPIURLWithoutInsecureEnv(t *testing.T) {
t.Setenv(insecureBrainEnv, "")
c := New(Options{URL: "http://internal/", Key: "test-key"})
if c.configErr == nil {
t.Fatal("expected insecure HTTP API URL to be rejected")
}
if !strings.Contains(c.configErr.Error(), "API URL must use https unless CORE_BRAIN_INSECURE=true") {
t.Fatalf("expected insecure API URL error, got %v", c.configErr)
}
}
func TestClientNew_Good_AllowsHTTPAPIURLWithInsecureEnv(t *testing.T) {
t.Setenv(insecureBrainEnv, "true")
c := New(Options{URL: "http://internal/", Key: "test-key"})
if c.configErr != nil {
t.Fatalf("expected insecure HTTP API URL to be allowed, got %v", c.configErr)
}
}
func TestClientCall_Good_Retries503ThenSucceeds(t *testing.T) {
attempts := 0
	server := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		attempts++
		if attempts == 1 {
			writeJSON(t, w, http.StatusServiceUnavailable, map[string]any{"error": "down"})
			return
		}
		writeJSON(t, w, http.StatusOK, map[string]any{"memories": []any{}})
	}))
	defer server.Close()
	c := New(Options{
		URL:         server.URL,
		Key:         "test-key",
		HTTPClient:  server.Client(),
		MaxAttempts: 3,
		BaseDelay:   time.Nanosecond,
	})
	if _, err := c.Recall(context.Background(), RecallInput{Query: "retry"}); err != nil {
		t.Fatalf("Recall failed after retry: %v", err)
	}
	if attempts != 2 {
		t.Fatalf("expected 2 attempts, got %d", attempts)
	}
}

func TestClientCall_Good_Retries408ThenSucceeds(t *testing.T) {
	attempts := 0
	server := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		attempts++
		if attempts == 1 {
			writeJSON(t, w, http.StatusRequestTimeout, map[string]any{"error": "timeout"})
			return
		}
		writeJSON(t, w, http.StatusOK, map[string]any{"memories": []any{}})
	}))
	defer server.Close()
	c := New(Options{
		URL:         server.URL,
		Key:         "test-key",
		HTTPClient:  server.Client(),
		MaxAttempts: 3,
		BaseDelay:   time.Nanosecond,
	})
	if _, err := c.Recall(context.Background(), RecallInput{Query: "retry"}); err != nil {
		t.Fatalf("Recall failed after retry: %v", err)
	}
	if attempts != 2 {
		t.Fatalf("expected 2 attempts, got %d", attempts)
	}
}

func TestClientCall_Good_Retries429ThenSucceeds(t *testing.T) {
	attempts := 0
	server := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		attempts++
		if attempts == 1 {
			writeJSON(t, w, http.StatusTooManyRequests, map[string]any{"error": "rate limited"})
			return
		}
		writeJSON(t, w, http.StatusOK, map[string]any{"memories": []any{}})
	}))
	defer server.Close()
	c := New(Options{
		URL:         server.URL,
		Key:         "test-key",
		HTTPClient:  server.Client(),
		MaxAttempts: 3,
		BaseDelay:   time.Nanosecond,
	})
	if _, err := c.Recall(context.Background(), RecallInput{Query: "retry"}); err != nil {
		t.Fatalf("Recall failed after retry: %v", err)
	}
	if attempts != 2 {
		t.Fatalf("expected 2 attempts, got %d", attempts)
	}
}

func TestClientCall_Good_Retries429UsingRetryAfterSeconds(t *testing.T) {
	attempts := 0
	server := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		attempts++
		if attempts == 1 {
			w.Header().Set("Retry-After", "2")
			writeJSON(t, w, http.StatusTooManyRequests, map[string]any{"error": "rate limited"})
			return
		}
		writeJSON(t, w, http.StatusOK, map[string]any{"memories": []any{}})
	}))
	defer server.Close()
	c := New(Options{
		URL:         server.URL,
		Key:         "test-key",
		HTTPClient:  server.Client(),
		MaxAttempts: 3,
		BaseDelay:   time.Nanosecond,
	})
	sleeps := []time.Duration{}
	c.sleepFunc = func(ctx context.Context, delay time.Duration) error {
		sleeps = append(sleeps, delay)
		return nil
	}
	if _, err := c.Recall(context.Background(), RecallInput{Query: "retry"}); err != nil {
		t.Fatalf("Recall failed after retry: %v", err)
	}
	if attempts != 2 {
		t.Fatalf("expected 2 attempts, got %d", attempts)
	}
	if len(sleeps) != 1 {
		t.Fatalf("expected one retry sleep, got %d", len(sleeps))
	}
	if sleeps[0] != 2*time.Second {
		t.Fatalf("expected Retry-After sleep of 2s, got %v", sleeps[0])
	}
}

func TestClientSleep_Good_AppliesJitterAcrossClients(t *testing.T) {
	ctx := context.Background()
	c1 := New(Options{URL: "https://brain.test", Key: "test-key", BaseDelay: 10 * time.Second})
	c2 := New(Options{URL: "https://brain.test", Key: "test-key", BaseDelay: 10 * time.Second})
	var delay1 time.Duration
	var delay2 time.Duration
	c1.sleepFunc = func(ctx context.Context, delay time.Duration) error {
		delay1 = delay
		return nil
	}
	c2.sleepFunc = func(ctx context.Context, delay time.Duration) error {
		delay2 = delay
		return nil
	}
	for i := 0; i < 10; i++ {
		if err := c1.sleep(ctx, 3); err != nil {
			t.Fatalf("first client sleep failed: %v", err)
		}
		if err := c2.sleep(ctx, 3); err != nil {
			t.Fatalf("second client sleep failed: %v", err)
		}
		if delay1 < 0 || delay1 > maxBackoffDelay {
			t.Fatalf("first client delay out of range: %v", delay1)
		}
		if delay2 < 0 || delay2 > maxBackoffDelay {
			t.Fatalf("second client delay out of range: %v", delay2)
		}
		if delay1 != delay2 {
			return
		}
	}
	t.Fatalf("expected jitter to produce different delays for two clients, both got %v", delay1)
}

func TestJitteredBackoffDelay_Good_CapsHighAttempt(t *testing.T) {
	if limit := backoffDelayLimit(defaultBaseDelay, 20); limit != maxBackoffDelay {
		t.Fatalf("expected high-attempt backoff limit %v, got %v", maxBackoffDelay, limit)
	}
	for i := 0; i < 10; i++ {
		if delay := jitteredBackoffDelay(defaultBaseDelay, 20); delay < 0 || delay > maxBackoffDelay {
			t.Fatalf("expected high-attempt jitter <= %v, got %v", maxBackoffDelay, delay)
		}
	}
}

func TestJitteredBackoffDelay_Good_UsesFullJitterRange(t *testing.T) {
	limit := 800 * time.Millisecond
	if got := backoffDelayLimit(100*time.Millisecond, 3); got != limit {
		t.Fatalf("expected attempt 3 backoff limit %v, got %v", limit, got)
	}
	for i := 0; i < 10; i++ {
		if delay := jitteredBackoffDelay(100*time.Millisecond, 3); delay < 0 || delay > limit {
			t.Fatalf("expected jitter in [0, %v], got %v", limit, delay)
		}
	}
}

func TestClientCall_Good_Retries429WithPastRetryAfterDateWithoutNegativeSleep(t *testing.T) {
	attempts := 0
	server := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		attempts++
		if attempts == 1 {
			w.Header().Set("Retry-After", "Wed, 21 Oct 2015 07:28:00 GMT")
			writeJSON(t, w, http.StatusTooManyRequests, map[string]any{"error": "rate limited"})
			return
		}
		writeJSON(t, w, http.StatusOK, map[string]any{"memories": []any{}})
	}))
	defer server.Close()
	c := New(Options{
		URL:         server.URL,
		Key:         "test-key",
		HTTPClient:  server.Client(),
		MaxAttempts: 3,
		BaseDelay:   time.Nanosecond,
	})
	sleeps := []time.Duration{}
	c.sleepFunc = func(ctx context.Context, delay time.Duration) error {
		sleeps = append(sleeps, delay)
		return nil
	}
	if _, err := c.Recall(context.Background(), RecallInput{Query: "retry"}); err != nil {
		t.Fatalf("Recall failed after retry: %v", err)
	}
	if attempts != 2 {
		t.Fatalf("expected 2 attempts, got %d", attempts)
	}
	if len(sleeps) != 1 {
		t.Fatalf("expected one retry sleep, got %d", len(sleeps))
	}
	if sleeps[0] != 0 {
		t.Fatalf("expected past Retry-After date to sleep zero, got %v", sleeps[0])
	}
}

func TestClientCall_Good_CapsRetryAfterDelay(t *testing.T) {
	attempts := 0
	server := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		attempts++
		if attempts == 1 {
			w.Header().Set("Retry-After", "9999")
			writeJSON(t, w, http.StatusServiceUnavailable, map[string]any{"error": "down"})
			return
		}
		writeJSON(t, w, http.StatusOK, map[string]any{"memories": []any{}})
	}))
	defer server.Close()
	c := New(Options{
		URL:         server.URL,
		Key:         "test-key",
		HTTPClient:  server.Client(),
		MaxAttempts: 3,
		BaseDelay:   time.Nanosecond,
	})
	sleeps := []time.Duration{}
	c.sleepFunc = func(ctx context.Context, delay time.Duration) error {
		sleeps = append(sleeps, delay)
		return nil
	}
	if _, err := c.Recall(context.Background(), RecallInput{Query: "retry"}); err != nil {
		t.Fatalf("Recall failed after retry: %v", err)
	}
	if attempts != 2 {
		t.Fatalf("expected 2 attempts, got %d", attempts)
	}
	if len(sleeps) != 1 {
		t.Fatalf("expected one retry sleep, got %d", len(sleeps))
	}
	if sleeps[0] != maxRetryAfterDelay {
		t.Fatalf("expected capped Retry-After sleep of %v, got %v", maxRetryAfterDelay, sleeps[0])
	}
}

func TestClientCall_Bad_DoesNotRetry400(t *testing.T) {
	attempts := 0
	server := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		attempts++
		writeJSON(t, w, http.StatusBadRequest, map[string]any{"error": "bad request"})
	}))
	defer server.Close()
	c := New(Options{
		URL:         server.URL,
		Key:         "test-key",
		HTTPClient:  server.Client(),
		MaxAttempts: 3,
		BaseDelay:   time.Nanosecond,
	})
	if _, err := c.Recall(context.Background(), RecallInput{Query: "bad"}); err == nil {
		t.Fatal("expected 400 error")
	}
	if attempts != 1 {
		t.Fatalf("expected one attempt for 400, got %d", attempts)
	}
}

func TestClientCall_Bad_Continuous503OpensCircuit(t *testing.T) {
	attempts := 0
	server := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		attempts++
		writeJSON(t, w, http.StatusServiceUnavailable, map[string]any{"error": "down"})
	}))
	defer server.Close()
	breaker := NewCircuitBreaker(CircuitBreakerOptions{
		FailureThreshold: 3,
		SuccessThreshold: 1,
		Cooldown:         time.Hour,
	})
	c := New(Options{
		URL:            server.URL,
		Key:            "test-key",
		HTTPClient:     server.Client(),
		MaxAttempts:    3,
		BaseDelay:      time.Nanosecond,
		CircuitBreaker: breaker,
	})
	if _, err := c.Recall(context.Background(), RecallInput{Query: "down"}); err == nil {
		t.Fatal("expected 503 error")
	}
	if breaker.State() != CircuitOpen {
		t.Fatalf("expected circuit open, got %s", breaker.State())
	}
	if _, err := c.Recall(context.Background(), RecallInput{Query: "down"}); !core.Is(err, ErrCircuitOpen) {
		t.Fatalf("expected ErrCircuitOpen, got %v", err)
	}
	if attempts != 3 {
		t.Fatalf("expected no network attempt after circuit open, got %d attempts", attempts)
	}
}

func TestClientCall_Bad_ContextCancellation(t *testing.T) {
	attempts := 0
	server := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		attempts++
		writeJSON(t, w, http.StatusOK, map[string]any{"ok": true})
	}))
	defer server.Close()
	c := New(Options{URL: server.URL, Key: "test-key", HTTPClient: server.Client(), MaxAttempts: 3})
	ctx, cancel := context.WithCancel(context.Background())
	cancel()
	if _, err := c.Recall(ctx, RecallInput{Query: "cancelled"}); !core.Is(err, context.Canceled) {
		t.Fatalf("expected context.Canceled, got %v", err)
	}
	if attempts != 0 {
		t.Fatalf("expected cancelled request to avoid network, got %d attempts", attempts)
	}
}

func TestWriteBrainKey_Good_Uses0600(t *testing.T) {
	home := t.TempDir()
	path := filepath.Join(home, ".claude", "brain.key")
	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		t.Fatalf("create fixture dir: %v", err)
	}
	if err := os.WriteFile(path, []byte("old-key\n"), 0o644); err != nil {
		t.Fatalf("write fixture: %v", err)
	}
	t.Setenv("HOME", home)
	if err := WriteBrainKey("test-key"); err != nil {
		t.Fatalf("WriteBrainKey failed: %v", err)
	}
	info, err := os.Stat(path)
	if err != nil {
		t.Fatalf("stat brain key: %v", err)
	}
	if got := info.Mode().Perm(); got != brainKeyFileMode {
		t.Fatalf("expected brain.key mode %v, got %v", brainKeyFileMode, got)
	}
	data, err := os.ReadFile(path)
	if err != nil {
		t.Fatalf("read brain key: %v", err)
	}
	if got := string(data); got != "test-key\n" {
		t.Fatalf("expected written key, got %q", got)
	}
}

func TestBrainKeyFile_Bad_RejectsInsecurePermissions(t *testing.T) {
	path := filepath.Join(t.TempDir(), "brain.key")
	if err := os.WriteFile(path, []byte("test-key\n"), brainKeyFileMode); err != nil {
		t.Fatalf("write fixture: %v", err)
	}
	if err := os.Chmod(path, 0o644); err != nil {
		t.Fatalf("chmod fixture: %v", err)
	}
	if _, err := readBrainKeyFile(path); err == nil {
		t.Fatal("expected insecure permissions error")
	} else if !strings.Contains(err.Error(), "brain.key has insecure permissions, expected 0600") {
		t.Fatalf("expected insecure permissions error, got %v", err)
	}
	info, err := os.Stat(path)
	if err != nil {
		t.Fatalf("stat brain key: %v", err)
	}
	if got := info.Mode().Perm(); got != 0o644 {
		t.Fatalf("read should not chmod brain.key, got mode %v", got)
	}
}

func readRequestBody(t *testing.T, r *http.Request) map[string]any {
	t.Helper()
	readResult := core.ReadAll(r.Body)
	if !readResult.OK {
		t.Fatalf("failed to read body: %v", readResult.Value)
	}
	body := map[string]any{}
	if decodeResult := core.JSONUnmarshalString(readResult.Value.(string), &body); !decodeResult.OK {
		t.Fatalf("failed to decode body: %v", decodeResult.Value)
	}
	return body
}

func writeJSON(t *testing.T, w http.ResponseWriter, status int, payload any) {
	t.Helper()
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	if _, err := w.Write([]byte(core.JSONMarshalString(payload))); err != nil {
		t.Fatalf("failed to write response: %v", err)
	}
}

type roundTripFunc func(*http.Request) (*http.Response, error)

func (fn roundTripFunc) RoundTrip(request *http.Request) (*http.Response, error) {
	return fn(request)
}


@@ -3,34 +3,34 @@
 package brain
 import (
-	"bytes"
 	"context"
-	"encoding/json"
-	"fmt"
-	"io"
-	"net/http"
-	"os"
-	"strings"
 	"time"
-	coreio "forge.lthn.ai/core/go-io"
-	coreerr "forge.lthn.ai/core/go-log"
+	core "dappco.re/go/core"
+	coremcp "dappco.re/go/mcp/pkg/mcp"
+	brainclient "dappco.re/go/mcp/pkg/mcp/brain/client"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )
 // channelSender is the callback for pushing channel events.
+//
+//	fn := func(ctx context.Context, channel string, data any) { ... }
 type channelSender func(ctx context.Context, channel string, data any)
 // DirectSubsystem implements mcp.Subsystem for OpenBrain via direct HTTP calls.
 // Unlike Subsystem (which uses the IDE WebSocket bridge), this calls the
 // Laravel API directly — suitable for standalone core-mcp usage.
 type DirectSubsystem struct {
-	apiURL    string
-	apiKey    string
-	client    *http.Client
+	apiClient *brainclient.Client
 	onChannel channelSender
 }
+var (
+	_ coremcp.Subsystem                    = (*DirectSubsystem)(nil)
+	_ coremcp.SubsystemWithShutdown        = (*DirectSubsystem)(nil)
+	_ coremcp.SubsystemWithChannelCallback = (*DirectSubsystem)(nil)
+)
 // OnChannel sets a callback for channel event broadcasting.
 // Called by the MCP service after creation to wire up notifications.
 //
@@ -42,104 +42,69 @@ func (s *DirectSubsystem) OnChannel(fn func(ctx context.Context, channel string,
 }
 // NewDirect creates a brain subsystem that calls the OpenBrain API directly.
+//
+//	brain := NewDirect()
+//
 // Reads CORE_BRAIN_URL and CORE_BRAIN_KEY from environment, or falls back
 // to ~/.claude/brain.key for the API key.
 func NewDirect() *DirectSubsystem {
-	apiURL := os.Getenv("CORE_BRAIN_URL")
-	if apiURL == "" {
-		apiURL = "https://api.lthn.sh"
-	}
-	apiKey := os.Getenv("CORE_BRAIN_KEY")
-	if apiKey == "" {
-		if data, err := coreio.Local.Read(os.ExpandEnv("$HOME/.claude/brain.key")); err == nil {
-			apiKey = strings.TrimSpace(data)
-		}
-	}
-	return &DirectSubsystem{
-		apiURL: apiURL,
-		apiKey: apiKey,
-		client: &http.Client{Timeout: 30 * time.Second},
-	}
+	return NewDirectWithClient(brainclient.NewFromEnvironment())
 }
+// NewDirectWithClient creates a direct brain subsystem using the shared client.
+//
+//	brain := NewDirectWithClient(client.New(client.Options{URL: "http://127.0.0.1:8080", Key: "test"}))
+func NewDirectWithClient(apiClient *brainclient.Client) *DirectSubsystem {
+	if apiClient == nil {
+		apiClient = brainclient.NewFromEnvironment()
+	}
+	return &DirectSubsystem{apiClient: apiClient}
+}
 // Name implements mcp.Subsystem.
 func (s *DirectSubsystem) Name() string { return "brain" }
 // RegisterTools implements mcp.Subsystem.
-func (s *DirectSubsystem) RegisterTools(server *mcp.Server) {
-	mcp.AddTool(server, &mcp.Tool{
+func (s *DirectSubsystem) RegisterTools(svc *coremcp.Service) {
+	server := svc.Server()
+	coremcp.AddToolRecorded(svc, server, "brain", &mcp.Tool{
 		Name:        "brain_remember",
 		Description: "Store a memory in OpenBrain. Types: fact, decision, observation, plan, convention, architecture, research, documentation, service, bug, pattern, context, procedure.",
 	}, s.remember)
-	mcp.AddTool(server, &mcp.Tool{
+	coremcp.AddToolRecorded(svc, server, "brain", &mcp.Tool{
 		Name:        "brain_recall",
 		Description: "Semantic search across OpenBrain memories. Returns memories ranked by similarity. Use agent_id 'cladius' for Cladius's memories.",
 	}, s.recall)
-	mcp.AddTool(server, &mcp.Tool{
+	coremcp.AddToolRecorded(svc, server, "brain", &mcp.Tool{
 		Name:        "brain_forget",
 		Description: "Remove a memory from OpenBrain by ID.",
 	}, s.forget)
+	coremcp.AddToolRecorded(svc, server, "brain", &mcp.Tool{
+		Name:        "brain_list",
+		Description: "List memories in OpenBrain with optional filtering by org, project, type, and agent.",
+	}, s.list)
 }
 // Shutdown implements mcp.SubsystemWithShutdown.
 func (s *DirectSubsystem) Shutdown(_ context.Context) error { return nil }
 func (s *DirectSubsystem) apiCall(ctx context.Context, method, path string, body any) (map[string]any, error) {
-	if s.apiKey == "" {
-		return nil, coreerr.E("brain.apiCall", "no API key (set CORE_BRAIN_KEY or create ~/.claude/brain.key)", nil)
-	}
-	var reqBody io.Reader
-	if body != nil {
-		data, err := json.Marshal(body)
-		if err != nil {
-			return nil, coreerr.E("brain.apiCall", "marshal request", err)
-		}
-		reqBody = bytes.NewReader(data)
-	}
-	req, err := http.NewRequestWithContext(ctx, method, s.apiURL+path, reqBody)
-	if err != nil {
-		return nil, coreerr.E("brain.apiCall", "create request", err)
-	}
-	req.Header.Set("Content-Type", "application/json")
-	req.Header.Set("Accept", "application/json")
-	req.Header.Set("Authorization", "Bearer "+s.apiKey)
-	resp, err := s.client.Do(req)
-	if err != nil {
-		return nil, coreerr.E("brain.apiCall", "API call failed", err)
-	}
-	defer resp.Body.Close()
-	respData, err := io.ReadAll(resp.Body)
-	if err != nil {
-		return nil, coreerr.E("brain.apiCall", "read response", err)
-	}
-	if resp.StatusCode >= 400 {
-		return nil, coreerr.E("brain.apiCall", "API returned "+string(respData), nil)
-	}
-	var result map[string]any
-	if err := json.Unmarshal(respData, &result); err != nil {
-		return nil, coreerr.E("brain.apiCall", "parse response", err)
-	}
-	return result, nil
+	return s.client().Call(ctx, method, path, body)
 }
 func (s *DirectSubsystem) remember(ctx context.Context, _ *mcp.CallToolRequest, input RememberInput) (*mcp.CallToolResult, RememberOutput, error) {
-	result, err := s.apiCall(ctx, "POST", "/v1/brain/remember", map[string]any{
-		"content":  input.Content,
-		"type":     input.Type,
-		"tags":     input.Tags,
-		"project":  input.Project,
-		"agent_id": "cladius",
+	result, err := s.client().Remember(ctx, brainclient.RememberInput{
+		Content:    input.Content,
+		Type:       input.Type,
+		Tags:       input.Tags,
+		Org:        input.Org,
+		Project:    input.Project,
+		Confidence: input.Confidence,
+		Supersedes: input.Supersedes,
+		ExpiresIn:  input.ExpiresIn,
 	})
 	if err != nil {
 		return nil, RememberOutput{}, err
@@ -147,8 +112,9 @@ func (s *DirectSubsystem) remember(ctx context.Context, _ *mcp.CallToolRequest,
 	id, _ := result["id"].(string)
 	if s.onChannel != nil {
-		s.onChannel(ctx, "brain.remember.complete", map[string]any{
+		s.onChannel(ctx, coremcp.ChannelBrainRememberDone, map[string]any{
 			"id":      id,
+			"org":     input.Org,
 			"type":    input.Type,
 			"project": input.Project,
 		})
@@ -161,36 +127,117 @@ func (s *DirectSubsystem) remember(ctx context.Context, _ *mcp.CallToolRequest,
 	}
 }
 func (s *DirectSubsystem) recall(ctx context.Context, _ *mcp.CallToolRequest, input RecallInput) (*mcp.CallToolResult, RecallOutput, error) {
-	body := map[string]any{
-		"query":    input.Query,
-		"top_k":    input.TopK,
-		"agent_id": "cladius",
-	}
-	if input.Filter.Project != "" {
-		body["project"] = input.Filter.Project
-	}
-	if input.Filter.Type != nil {
-		body["type"] = input.Filter.Type
-	}
-	if input.TopK == 0 {
-		body["top_k"] = 10
-	}
-	result, err := s.apiCall(ctx, "POST", "/v1/brain/recall", body)
+	result, err := s.client().Recall(ctx, brainclient.RecallInput{
+		Query:         input.Query,
+		TopK:          input.TopK,
+		Org:           input.Filter.Org,
+		Project:       input.Filter.Project,
+		Type:          input.Filter.Type,
+		AgentID:       input.Filter.AgentID,
+		MinConfidence: input.Filter.MinConfidence,
+	})
 	if err != nil {
 		return nil, RecallOutput{}, err
 	}
+	memories := memoriesFromResult(result)
+	if s.onChannel != nil {
+		s.onChannel(ctx, coremcp.ChannelBrainRecallDone, map[string]any{
+			"query":   input.Query,
+			"org":     input.Filter.Org,
+			"project": input.Filter.Project,
+			"count":   len(memories),
+		})
+	}
+	return nil, RecallOutput{
+		Success:  true,
+		Count:    len(memories),
+		Memories: memories,
+	}, nil
+}
+func (s *DirectSubsystem) forget(ctx context.Context, _ *mcp.CallToolRequest, input ForgetInput) (*mcp.CallToolResult, ForgetOutput, error) {
+	_, err := s.client().Forget(ctx, brainclient.ForgetInput{ID: input.ID, Reason: input.Reason})
+	if err != nil {
+		return nil, ForgetOutput{}, err
+	}
+	if s.onChannel != nil {
+		s.onChannel(ctx, coremcp.ChannelBrainForgetDone, map[string]any{
+			"id":     input.ID,
+			"reason": input.Reason,
+		})
+	}
+	return nil, ForgetOutput{
+		Success:   true,
+		Forgotten: input.ID,
+		Timestamp: time.Now(),
+	}, nil
+}
+func (s *DirectSubsystem) list(ctx context.Context, _ *mcp.CallToolRequest, input ListInput) (*mcp.CallToolResult, ListOutput, error) {
+	limit := input.Limit
+	if limit == 0 {
+		limit = 50
+	}
+	result, err := s.client().List(ctx, brainclient.ListInput{
+		Org:     input.Org,
+		Project: input.Project,
+		Type:    input.Type,
+		AgentID: input.AgentID,
+		Limit:   limit,
+	})
+	if err != nil {
+		return nil, ListOutput{}, err
+	}
+	memories := memoriesFromResult(result)
+	if s.onChannel != nil {
+		s.onChannel(ctx, coremcp.ChannelBrainListDone, map[string]any{
+			"org":      input.Org,
+			"project":  input.Project,
+			"type":     input.Type,
+			"agent_id": input.AgentID,
+			"limit":    limit,
+		})
+	}
+	return nil, ListOutput{
+		Success:  true,
+		Count:    len(memories),
+		Memories: memories,
+	}, nil
+}
+func (s *DirectSubsystem) client() *brainclient.Client {
+	if s.apiClient == nil {
+		s.apiClient = brainclient.NewFromEnvironment()
+	}
+	return s.apiClient
+}
+// memoriesFromResult extracts Memory entries from an API response map.
+func memoriesFromResult(result map[string]any) []Memory {
 	var memories []Memory
-	if mems, ok := result["memories"].([]any); ok {
+	mems, ok := result["memories"].([]any)
+	if !ok {
+		return memories
+	}
 	for _, m := range mems {
-		if mm, ok := m.(map[string]any); ok {
+		mm, ok := m.(map[string]any)
+		if !ok {
+			continue
+		}
 		mem := Memory{
-			Content:   fmt.Sprintf("%v", mm["content"]),
-			Type:      fmt.Sprintf("%v", mm["type"]),
-			Project:   fmt.Sprintf("%v", mm["project"]),
-			AgentID:   fmt.Sprintf("%v", mm["agent_id"]),
-			CreatedAt: fmt.Sprintf("%v", mm["created_at"]),
+			Content:   stringFromMap(mm, "content"),
+			Type:      stringFromMap(mm, "type"),
+			Org:       stringFromMap(mm, "org"),
+			Project:   stringFromMap(mm, "project"),
+			AgentID:   stringFromMap(mm, "agent_id"),
+			CreatedAt: stringFromMap(mm, "created_at"),
 		}
 		if id, ok := mm["id"].(string); ok {
 			mem.ID = id
@@ -203,31 +250,18 @@ func (s *DirectSubsystem) recall(ctx context.Context, _ *mcp.CallToolRequest, in
 		}
 		memories = append(memories, mem)
 	}
-	}
-	}
-	if s.onChannel != nil {
-		s.onChannel(ctx, "brain.recall.complete", map[string]any{
-			"query": input.Query,
-			"count": len(memories),
-		})
-	}
-	return nil, RecallOutput{
-		Success:  true,
-		Count:    len(memories),
-		Memories: memories,
-	}, nil
-}
-func (s *DirectSubsystem) forget(ctx context.Context, _ *mcp.CallToolRequest, input ForgetInput) (*mcp.CallToolResult, ForgetOutput, error) {
-	_, err := s.apiCall(ctx, "DELETE", "/v1/brain/forget/"+input.ID, nil)
-	if err != nil {
-		return nil, ForgetOutput{}, err
-	}
-	return nil, ForgetOutput{
-		Success:   true,
-		Forgotten: input.ID,
-		Timestamp: time.Now(),
-	}, nil
+	return memories
+}
+// stringFromMap extracts a string value from a map, returning "" if missing or wrong type.
+func stringFromMap(m map[string]any, key string) string {
+	v, ok := m[key]
+	if !ok || v == nil {
+		return ""
+	}
+	s, ok := v.(string)
+	if !ok {
+		return core.Sprintf("%v", v)
+	}
+	return s
 }


@@ -8,14 +8,21 @@ import (
 	"net/http"
 	"net/http/httptest"
 	"testing"
+	"time"
+	brainclient "dappco.re/go/mcp/pkg/mcp/brain/client"
 )
 // newTestDirect creates a DirectSubsystem pointing at a test server.
 func newTestDirect(url string) *DirectSubsystem {
 	return &DirectSubsystem{
-		apiURL: url,
-		apiKey: "test-key",
-		client: http.DefaultClient,
+		apiClient: brainclient.New(brainclient.Options{
+			URL:         url,
+			Key:         "test-key",
+			HTTPClient:  http.DefaultClient,
+			MaxAttempts: 1,
+			BaseDelay:   time.Nanosecond,
+		}),
 	}
 }
@@ -84,7 +91,12 @@ func TestApiCall_Good_GetNilBody(t *testing.T) {
 }
 func TestApiCall_Bad_NoApiKey(t *testing.T) {
-	s := &DirectSubsystem{apiKey: "", client: http.DefaultClient}
+	s := &DirectSubsystem{apiClient: brainclient.New(brainclient.Options{
+		URL:         "http://example.test",
+		Key:         "",
+		HTTPClient:  http.DefaultClient,
+		MaxAttempts: 1,
+	})}
 	_, err := s.apiCall(context.Background(), "GET", "/test", nil)
 	if err == nil {
 		t.Error("expected error when apiKey is empty")
@@ -121,9 +133,12 @@ func TestApiCall_Bad_InvalidJson(t *testing.T) {
 func TestApiCall_Bad_Unreachable(t *testing.T) {
 	s := &DirectSubsystem{
-		apiURL: "http://127.0.0.1:1", // nothing listening
-		apiKey: "key",
-		client: http.DefaultClient,
+		apiClient: brainclient.New(brainclient.Options{
+			URL:         "http://127.0.0.1:1", // nothing listening
+			Key:         "key",
+			HTTPClient:  http.DefaultClient,
+			MaxAttempts: 1,
+		}),
 	}
 	_, err := s.apiCall(context.Background(), "GET", "/test", nil)
 	if err == nil {
@@ -143,6 +158,9 @@ func TestDirectRemember_Good(t *testing.T) {
 		if body["agent_id"] != "cladius" {
 			t.Errorf("expected agent_id=cladius, got %v", body["agent_id"])
 		}
+		if body["org"] != "core" {
+			t.Errorf("expected org=core, got %v", body["org"])
+		}
 		w.WriteHeader(200)
 		json.NewEncoder(w).Encode(map[string]any{"id": "mem-456"})
 	}))
@@ -152,6 +170,7 @@ func TestDirectRemember_Good(t *testing.T) {
 	_, out, err := s.remember(context.Background(), nil, RememberInput{
 		Content: "test memory",
 		Type:    "observation",
+		Org:     "core",
 		Project: "test-project",
 	})
 	if err != nil {
@@ -188,6 +207,9 @@ func TestDirectRecall_Good(t *testing.T) {
 		if body["query"] != "scoring algorithm" {
 			t.Errorf("unexpected query: %v", body["query"])
 		}
+		if body["org"] != "core" {
+			t.Errorf("expected org=core, got %v", body["org"])
+		}
 		w.WriteHeader(200)
 		json.NewEncoder(w).Encode(map[string]any{
 			"memories": []any{
@@ -195,6 +217,7 @@ func TestDirectRecall_Good(t *testing.T) {
 					"id":      "mem-1",
 					"content": "scoring uses weighted average",
 					"type":    "architecture",
+					"org":     "core",
 					"project": "eaas",
 					"agent_id": "virgil",
 					"score":   0.92,
@@ -209,7 +232,7 @@ func TestDirectRecall_Good(t *testing.T) {
 	_, out, err := s.recall(context.Background(), nil, RecallInput{
 		Query:  "scoring algorithm",
 		TopK:   5,
-		Filter: RecallFilter{Project: "eaas"},
+		Filter: RecallFilter{Org: "core", Project: "eaas"},
 	})
 	if err != nil {
 		t.Fatalf("recall failed: %v", err)
@@ -220,6 +243,9 @@ func TestDirectRecall_Good(t *testing.T) {
 	if out.Memories[0].ID != "mem-1" {
 		t.Errorf("expected id=mem-1, got %q", out.Memories[0].ID)
 	}
+	if out.Memories[0].Org != "core" {
+		t.Errorf("expected org=core, got %q", out.Memories[0].Org)
+	}
 	if out.Memories[0].Confidence != 0.92 {
 		t.Errorf("expected score=0.92, got %f", out.Memories[0].Confidence)
 	}
@@ -290,6 +316,48 @@ func TestDirectForget_Good(t *testing.T) {
 	}
 }
+func TestDirectForget_Good_EmitsChannel(t *testing.T) {
+	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		w.WriteHeader(200)
+		json.NewEncoder(w).Encode(map[string]any{"success": true})
+	}))
+	defer srv.Close()
+	var gotChannel string
+	var gotPayload map[string]any
+	s := newTestDirect(srv.URL)
+	s.onChannel = func(_ context.Context, channel string, data any) {
+		gotChannel = channel
+		if payload, ok := data.(map[string]any); ok {
+			gotPayload = payload
+		}
+	}
+	_, out, err := s.forget(context.Background(), nil, ForgetInput{
+		ID:     "mem-789",
+		Reason: "outdated",
+	})
+	if err != nil {
+		t.Fatalf("forget failed: %v", err)
+	}
+	if !out.Success {
+		t.Fatal("expected success=true")
+	}
+	if gotChannel != "brain.forget.complete" {
+		t.Fatalf("expected brain.forget.complete, got %q", gotChannel)
+	}
+	if gotPayload == nil {
+		t.Fatal("expected channel payload")
+	}
+	if gotPayload["id"] != "mem-789" {
+		t.Fatalf("expected id=mem-789, got %v", gotPayload["id"])
+	}
+	if gotPayload["reason"] != "outdated" {
+		t.Fatalf("expected reason=outdated, got %v", gotPayload["reason"])
+	}
+}
 func TestDirectForget_Bad_ApiError(t *testing.T) {
 	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
 		w.WriteHeader(404)
@@ -303,3 +371,136 @@ func TestDirectForget_Bad_ApiError(t *testing.T) {
 		t.Error("expected error on 404")
 	}
 }
+// --- list tool tests ---
+func TestDirectList_Good(t *testing.T) {
+	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		if r.Method != http.MethodGet {
+			t.Errorf("expected GET, got %s", r.Method)
+		}
+		if got := r.URL.Query().Get("project"); got != "eaas" {
+			t.Errorf("expected project=eaas, got %q", got)
+		}
+		if got := r.URL.Query().Get("org"); got != "core" {
+			t.Errorf("expected org=core, got %q", got)
+		}
+		if got := r.URL.Query().Get("type"); got != "decision" {
+			t.Errorf("expected type=decision, got %q", got)
+		}
+		if got := r.URL.Query().Get("agent_id"); got != "virgil" {
+			t.Errorf("expected agent_id=virgil, got %q", got)
+		}
+		if got := r.URL.Query().Get("limit"); got != "20" {
+			t.Errorf("expected limit=20, got %q", got)
+		}
+		w.WriteHeader(http.StatusOK)
+		json.NewEncoder(w).Encode(map[string]any{
+			"memories": []any{
+				map[string]any{
+					"id":         "mem-1",
+					"content":    "use qdrant",
+					"type":       "decision",
+					"org":        "core",
+					"project":    "eaas",
+					"agent_id":   "virgil",
+					"score":      0.88,
+					"created_at": "2026-03-01T00:00:00Z",
+				},
+			},
+		})
+	}))
+	defer srv.Close()
+	s := newTestDirect(srv.URL)
+	_, out, err := s.list(context.Background(), nil, ListInput{
+		Org:     "core",
+		Project: "eaas",
+		Type:    "decision",
+		AgentID: "virgil",
+		Limit:   20,
+	})
+	if err != nil {
+		t.Fatalf("list failed: %v", err)
+	}
+	if !out.Success || out.Count != 1 {
+		t.Fatalf("expected 1 memory, got %+v", out)
+	}
+	if out.Memories[0].ID != "mem-1" {
+		t.Errorf("expected id=mem-1, got %q", out.Memories[0].ID)
+	}
+	if out.Memories[0].Confidence != 0.88 {
+		t.Errorf("expected score=0.88, got %f", out.Memories[0].Confidence)
+	}
+	if out.Memories[0].Org != "core" {
+		t.Errorf("expected org=core, got %q", out.Memories[0].Org)
+	}
+}
+func TestDirectList_Good_EmitsAgentIDChannelPayload(t *testing.T) {
+	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		w.WriteHeader(http.StatusOK)
+		json.NewEncoder(w).Encode(map[string]any{"memories": []any{}})
+	}))
+	defer srv.Close()
+	var gotChannel string
+	var gotPayload map[string]any
+	s := newTestDirect(srv.URL)
+	s.onChannel = func(_ context.Context, channel string, data any) {
+		gotChannel = channel
+		if payload, ok := data.(map[string]any); ok {
+			gotPayload = payload
+		}
+	}
+	_, out, err := s.list(context.Background(), nil, ListInput{
+		Org:     "core",
+		Project: "eaas",
+		Type:    "decision",
+		AgentID: "virgil",
+		Limit:   20,
+	})
+	if err != nil {
+		t.Fatalf("list failed: %v", err)
+	}
+	if !out.Success {
+		t.Fatal("expected list success")
+	}
+	if gotChannel != "brain.list.complete" {
+		t.Fatalf("expected brain.list.complete, got %q", gotChannel)
+	}
+	if gotPayload == nil {
+		t.Fatal("expected channel payload")
+	}
+	if gotPayload["agent_id"] != "virgil" {
+		t.Fatalf("expected agent_id=virgil, got %v", gotPayload["agent_id"])
+	}
+	if gotPayload["project"] != "eaas" {
+		t.Fatalf("expected project=eaas, got %v", gotPayload["project"])
+	}
+	if gotPayload["org"] != "core" {
+		t.Fatalf("expected org=core, got %v", gotPayload["org"])
+	}
+}
+func TestDirectList_Good_DefaultLimit(t *testing.T) {
+	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		if got := r.URL.Query().Get("limit"); got != "50" {
+			t.Errorf("expected limit=50, got %q", got)
+		}
+		w.WriteHeader(http.StatusOK)
+		json.NewEncoder(w).Encode(map[string]any{"memories": []any{}})
+	}))
+	defer srv.Close()
+	s := newTestDirect(srv.URL)
+	_, out, err := s.list(context.Background(), nil, ListInput{})
+	if err != nil {
+		t.Fatalf("list failed: %v", err)
+	}
+	if !out.Success || out.Count != 0 {
+		t.Fatalf("expected empty list, got %+v", out)
+	}
+}


@@ -5,10 +5,11 @@ package brain
 import (
 	"net/http"

-	"forge.lthn.ai/core/api"
-	"forge.lthn.ai/core/api/pkg/provider"
-	"forge.lthn.ai/core/go-ws"
-	"forge.lthn.ai/core/mcp/pkg/mcp/ide"
+	"dappco.re/go/api"
+	"dappco.re/go/core/api/pkg/provider"
+	"dappco.re/go/ws"
+	coremcp "dappco.re/go/mcp/pkg/mcp"
+	"dappco.re/go/mcp/pkg/mcp/ide"

 	"github.com/gin-gonic/gin"
 )
@@ -30,10 +31,16 @@ var (
 // NewProvider creates a brain provider that proxies to Laravel via the IDE bridge.
 // The WS hub is used to emit brain events. Pass nil for hub if not needed.
 func NewProvider(bridge *ide.Bridge, hub *ws.Hub) *BrainProvider {
-	return &BrainProvider{
+	p := &BrainProvider{
 		bridge: bridge,
 		hub:    hub,
 	}
+	if bridge != nil {
+		bridge.AddObserver(func(msg ide.BridgeMessage) {
+			p.handleBridgeMessage(msg)
+		})
+	}
+	return p
 }

 // Name implements api.RouteGroup.
@@ -45,9 +52,10 @@ func (p *BrainProvider) BasePath() string { return "/api/brain" }
 // Channels implements provider.Streamable.
 func (p *BrainProvider) Channels() []string {
 	return []string{
-		"brain.remember.complete",
-		"brain.recall.complete",
-		"brain.forget.complete",
+		coremcp.ChannelBrainRememberDone,
+		coremcp.ChannelBrainRecallDone,
+		coremcp.ChannelBrainForgetDone,
+		coremcp.ChannelBrainListDone,
 	}
 }
@@ -83,6 +91,7 @@ func (p *BrainProvider) Describe() []api.RouteDescription {
 				"content": map[string]any{"type": "string"},
 				"type":    map[string]any{"type": "string"},
 				"tags":    map[string]any{"type": "array", "items": map[string]any{"type": "string"}},
+				"org":     map[string]any{"type": "string"},
 				"project": map[string]any{"type": "string"},
 				"confidence": map[string]any{"type": "number"},
 			},
@@ -111,6 +120,7 @@ func (p *BrainProvider) Describe() []api.RouteDescription {
 				"filter": map[string]any{
 					"type": "object",
 					"properties": map[string]any{
+						"org":     map[string]any{"type": "string"},
 						"project": map[string]any{"type": "string"},
 						"type":    map[string]any{"type": "string"},
 					},
@@ -153,7 +163,7 @@ func (p *BrainProvider) Describe() []api.RouteDescription {
 			Method:      "GET",
 			Path:        "/list",
 			Summary:     "List memories",
-			Description: "List memories with optional filtering by project, type, and agent.",
+			Description: "List memories with optional filtering by org, project, type, and agent.",
 			Tags:        []string{"brain"},
 			Response: map[string]any{
 				"type": "object",
@@ -200,6 +210,7 @@ func (p *BrainProvider) remember(c *gin.Context) {
 			"content":    input.Content,
 			"type":       input.Type,
 			"tags":       input.Tags,
+			"org":        input.Org,
 			"project":    input.Project,
 			"confidence": input.Confidence,
 			"supersedes": input.Supersedes,
@@ -211,7 +222,8 @@ func (p *BrainProvider) remember(c *gin.Context) {
 		return
 	}

-	p.emitEvent("brain.remember.complete", map[string]any{
+	p.emitEvent(coremcp.ChannelBrainRememberDone, map[string]any{
+		"org":     input.Org,
 		"type":    input.Type,
 		"project": input.Project,
 	})
@@ -244,10 +256,6 @@ func (p *BrainProvider) recall(c *gin.Context) {
 		return
 	}

-	p.emitEvent("brain.recall.complete", map[string]any{
-		"query": input.Query,
-	})
-
 	c.JSON(http.StatusOK, api.OK(RecallOutput{
 		Success:  true,
 		Memories: []Memory{},
@@ -278,7 +286,7 @@ func (p *BrainProvider) forget(c *gin.Context) {
 		return
 	}

-	p.emitEvent("brain.forget.complete", map[string]any{
+	p.emitEvent(coremcp.ChannelBrainForgetDone, map[string]any{
 		"id": input.ID,
 	})
@@ -294,13 +302,20 @@ func (p *BrainProvider) list(c *gin.Context) {
 		return
 	}

+	project := c.Query("project")
+	org := c.Query("org")
+	typ := c.Query("type")
+	agentID := c.Query("agent_id")
+	limit := c.Query("limit")
+
 	err := p.bridge.Send(ide.BridgeMessage{
 		Type: "brain_list",
 		Data: map[string]any{
-			"project":  c.Query("project"),
-			"type":     c.Query("type"),
-			"agent_id": c.Query("agent_id"),
-			"limit":    c.Query("limit"),
+			"org":      org,
+			"project":  project,
+			"type":     typ,
+			"agent_id": agentID,
+			"limit":    limit,
 		},
 	})
 	if err != nil {
@@ -308,6 +323,14 @@ func (p *BrainProvider) list(c *gin.Context) {
 		return
 	}

+	p.emitEvent(coremcp.ChannelBrainListDone, map[string]any{
+		"org":      org,
+		"project":  project,
+		"type":     typ,
+		"agent_id": agentID,
+		"limit":    limit,
+	})
+
 	c.JSON(http.StatusOK, api.OK(ListOutput{
 		Success:  true,
 		Memories: []Memory{},
@@ -334,3 +357,18 @@ func (p *BrainProvider) emitEvent(channel string, data any) {
 		Data:    data,
 	})
 }
+
+func (p *BrainProvider) handleBridgeMessage(msg ide.BridgeMessage) {
+	switch msg.Type {
+	case "brain_remember":
+		p.emitEvent(coremcp.ChannelBrainRememberDone, bridgePayload(msg.Data, "org", "type", "project"))
+	case "brain_recall":
+		payload := bridgePayload(msg.Data, "query", "org", "project", "type", "agent_id")
+		payload["count"] = bridgeCount(msg.Data)
+		p.emitEvent(coremcp.ChannelBrainRecallDone, payload)
+	case "brain_forget":
+		p.emitEvent(coremcp.ChannelBrainForgetDone, bridgePayload(msg.Data, "id", "reason"))
+	case "brain_list":
+		p.emitEvent(coremcp.ChannelBrainListDone, bridgePayload(msg.Data, "org", "project", "type", "agent_id", "limit"))
+	}
+}


@@ -0,0 +1,38 @@
// SPDX-License-Identifier: EUPL-1.2

package brain

import (
	"testing"

	"dappco.re/go/mcp/pkg/mcp/ide"
)

func TestBrainProviderChannels_Good_IncludesListComplete(t *testing.T) {
	p := NewProvider(nil, nil)

	channels := p.Channels()
	found := false
	for _, channel := range channels {
		if channel == "brain.list.complete" {
			found = true
			break
		}
	}
	if !found {
		t.Fatalf("expected brain.list.complete in provider channels: %#v", channels)
	}
}

func TestBrainProviderHandleBridgeMessage_Good_SupportsBrainEvents(t *testing.T) {
	p := NewProvider(nil, nil)

	for _, msg := range []ide.BridgeMessage{
		{Type: "brain_remember", Data: map[string]any{"type": "bug", "project": "core/mcp"}},
		{Type: "brain_recall", Data: map[string]any{"query": "test", "memories": []any{map[string]any{"id": "m1"}}}},
		{Type: "brain_forget", Data: map[string]any{"id": "mem-123", "reason": "outdated"}},
		{Type: "brain_list", Data: map[string]any{"project": "core/mcp", "limit": 10}},
	} {
		p.handleBridgeMessage(msg)
	}
}
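The provider above registers itself with the bridge as an observer at construction time, so bridge messages fan out to the provider's channel events without polling. A minimal, standalone sketch of that observer fan-out pattern (the types and method names here are illustrative stand-ins, not the real `ide.Bridge` API):

```go
package main

import "fmt"

// BridgeMessage mirrors the shape of the bridge messages above
// (illustrative only; the real type lives in the ide package).
type BridgeMessage struct {
	Type string
	Data map[string]any
}

// Bridge fans each incoming message out to every registered observer.
type Bridge struct {
	observers []func(BridgeMessage)
}

func (b *Bridge) AddObserver(fn func(BridgeMessage)) {
	b.observers = append(b.observers, fn)
}

func (b *Bridge) Dispatch(msg BridgeMessage) {
	for _, fn := range b.observers {
		fn(msg)
	}
}

// CollectTypes runs msgs through a bridge with one observer and
// returns the message types the observer saw, in order.
func CollectTypes(msgs []BridgeMessage) []string {
	var seen []string
	b := &Bridge{}
	// A provider registers its handler once, as NewProvider does with AddObserver.
	b.AddObserver(func(msg BridgeMessage) { seen = append(seen, msg.Type) })
	for _, m := range msgs {
		b.Dispatch(m)
	}
	return seen
}

func main() {
	got := CollectTypes([]BridgeMessage{
		{Type: "brain_list"},
		{Type: "brain_forget"},
	})
	fmt.Println(got) // [brain_list brain_forget]
}
```

The test above exercises exactly this path: it feeds synthetic `BridgeMessage` values through `handleBridgeMessage` and relies on the nil-hub provider swallowing the resulting events.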


@@ -5,19 +5,33 @@ package brain
 import (
 	"context"
 	"time"
+	"unicode/utf8"

-	coreerr "forge.lthn.ai/core/go-log"
-	"forge.lthn.ai/core/mcp/pkg/mcp/ide"
+	coreerr "dappco.re/go/log"
+	coremcp "dappco.re/go/mcp/pkg/mcp"
+	"dappco.re/go/mcp/pkg/mcp/ide"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )

+const brainOrgMaxLength = 128
+
+// emitChannel pushes a brain event through the shared notifier.
+func (s *Subsystem) emitChannel(ctx context.Context, channel string, data any) {
+	if s.notifier != nil {
+		s.notifier.ChannelSend(ctx, channel, data)
+	}
+}
+
 // -- Input/Output types -------------------------------------------------------

 // RememberInput is the input for brain_remember.
+//
+//	input := RememberInput{Content: "Use Qdrant for vector search", Type: "decision", Org: "core"}
 type RememberInput struct {
 	Content    string   `json:"content"`
 	Type       string   `json:"type"`
 	Tags       []string `json:"tags,omitempty"`
+	Org        string   `json:"org,omitempty"`
 	Project    string   `json:"project,omitempty"`
 	Confidence float64  `json:"confidence,omitempty"`
 	Supersedes string   `json:"supersedes,omitempty"`
@@ -25,6 +39,8 @@ type RememberInput struct {
 }

 // RememberOutput is the output for brain_remember.
+//
+//	// out.Success == true
 type RememberOutput struct {
 	Success  bool   `json:"success"`
 	MemoryID string `json:"memoryId,omitempty"`
@@ -32,6 +48,8 @@ type RememberOutput struct {
 }

 // RecallInput is the input for brain_recall.
+//
+//	input := RecallInput{Query: "vector search", TopK: 5}
 type RecallInput struct {
 	Query string `json:"query"`
 	TopK  int    `json:"top_k,omitempty"`
@@ -39,7 +57,10 @@ type RecallInput struct {
 }

 // RecallFilter holds optional filter criteria for brain_recall.
+//
+//	filter := RecallFilter{Org: "core", Project: "core/mcp", MinConfidence: 0.5}
 type RecallFilter struct {
+	Org     string `json:"org,omitempty"`
 	Project string `json:"project,omitempty"`
 	Type    any    `json:"type,omitempty"`
 	AgentID string `json:"agent_id,omitempty"`
@@ -47,6 +68,8 @@ type RecallFilter struct {
 }

 // RecallOutput is the output for brain_recall.
+//
+//	// out.Memories contains ranked matches
 type RecallOutput struct {
 	Success  bool     `json:"success"`
 	Count    int      `json:"count"`
@@ -54,12 +77,15 @@ type RecallOutput struct {
 }

 // Memory is a single memory entry returned by recall or list.
+//
+//	mem := Memory{ID: "m1", Type: "bug", Content: "Fix timeout handling"}
 type Memory struct {
 	ID           string   `json:"id"`
 	AgentID      string   `json:"agent_id"`
 	Type         string   `json:"type"`
 	Content      string   `json:"content"`
 	Tags         []string `json:"tags,omitempty"`
+	Org          string   `json:"org,omitempty"`
 	Project      string   `json:"project,omitempty"`
 	Confidence   float64  `json:"confidence"`
 	SupersedesID string   `json:"supersedes_id,omitempty"`
@@ -69,12 +95,16 @@ type Memory struct {
 }

 // ForgetInput is the input for brain_forget.
+//
+//	input := ForgetInput{ID: "m1"}
 type ForgetInput struct {
 	ID     string `json:"id"`
 	Reason string `json:"reason,omitempty"`
 }

 // ForgetOutput is the output for brain_forget.
+//
+//	// out.Forgotten contains the deleted memory ID
 type ForgetOutput struct {
 	Success   bool   `json:"success"`
 	Forgotten string `json:"forgotten"`
@@ -82,7 +112,10 @@ type ForgetOutput struct {
 }

 // ListInput is the input for brain_list.
+//
+//	input := ListInput{Org: "core", Project: "core/mcp", Limit: 50}
 type ListInput struct {
+	Org     string `json:"org,omitempty"`
 	Project string `json:"project,omitempty"`
 	Type    string `json:"type,omitempty"`
 	AgentID string `json:"agent_id,omitempty"`
@@ -90,39 +123,64 @@ type ListInput struct {
 }

 // ListOutput is the output for brain_list.
+//
+//	// out.Count reports how many memories were returned
 type ListOutput struct {
 	Success  bool     `json:"success"`
 	Count    int      `json:"count"`
 	Memories []Memory `json:"memories"`
 }

+func validateBrainOrg(org string) error {
+	if utf8.RuneCountInString(org) > brainOrgMaxLength {
+		return coreerr.E("brain.validate", "org exceeds maximum length of 128 characters", nil)
+	}
+	return nil
+}
+
+func validateRememberInput(input RememberInput) error {
+	return validateBrainOrg(input.Org)
+}
+
+func validateRecallInput(input RecallInput) error {
+	return validateBrainOrg(input.Filter.Org)
+}
+
+func validateListInput(input ListInput) error {
+	return validateBrainOrg(input.Org)
+}
+
 // -- Tool registration --------------------------------------------------------

-func (s *Subsystem) registerBrainTools(server *mcp.Server) {
-	mcp.AddTool(server, &mcp.Tool{
+func (s *Subsystem) registerBrainTools(svc *coremcp.Service) {
+	server := svc.Server()
+
+	coremcp.AddToolRecorded(svc, server, "brain", &mcp.Tool{
 		Name:        "brain_remember",
 		Description: "Store a memory in the shared OpenBrain knowledge store. Persists decisions, observations, conventions, research, plans, bugs, or architecture knowledge for other agents.",
 	}, s.brainRemember)

-	mcp.AddTool(server, &mcp.Tool{
+	coremcp.AddToolRecorded(svc, server, "brain", &mcp.Tool{
 		Name:        "brain_recall",
 		Description: "Semantic search across the shared OpenBrain knowledge store. Returns memories ranked by similarity to your query, with optional filtering.",
 	}, s.brainRecall)

-	mcp.AddTool(server, &mcp.Tool{
+	coremcp.AddToolRecorded(svc, server, "brain", &mcp.Tool{
 		Name:        "brain_forget",
 		Description: "Remove a memory from the shared OpenBrain knowledge store. Permanently deletes from both database and vector index.",
 	}, s.brainForget)

-	mcp.AddTool(server, &mcp.Tool{
+	coremcp.AddToolRecorded(svc, server, "brain", &mcp.Tool{
 		Name:        "brain_list",
-		Description: "List memories in the shared OpenBrain knowledge store. Supports filtering by project, type, and agent. No vector search -- use brain_recall for semantic queries.",
+		Description: "List memories in the shared OpenBrain knowledge store. Supports filtering by org, project, type, and agent. No vector search -- use brain_recall for semantic queries.",
 	}, s.brainList)
 }

 // -- Tool handlers ------------------------------------------------------------

-func (s *Subsystem) brainRemember(_ context.Context, _ *mcp.CallToolRequest, input RememberInput) (*mcp.CallToolResult, RememberOutput, error) {
+func (s *Subsystem) brainRemember(ctx context.Context, _ *mcp.CallToolRequest, input RememberInput) (*mcp.CallToolResult, RememberOutput, error) {
+	if err := validateRememberInput(input); err != nil {
+		return nil, RememberOutput{}, err
+	}
+
 	if s.bridge == nil {
 		return nil, RememberOutput{}, errBridgeNotAvailable
 	}
@@ -133,6 +191,7 @@ func (s *Subsystem) brainRemember(_ context.Context, _ *mcp.CallToolRequest, inp
 			"content":    input.Content,
 			"type":       input.Type,
 			"tags":       input.Tags,
+			"org":        input.Org,
 			"project":    input.Project,
 			"confidence": input.Confidence,
 			"supersedes": input.Supersedes,
@@ -143,13 +202,22 @@ func (s *Subsystem) brainRemember(_ context.Context, _ *mcp.CallToolRequest, inp
 		return nil, RememberOutput{}, coreerr.E("brain.remember", "failed to send brain_remember", err)
 	}

+	s.emitChannel(ctx, coremcp.ChannelBrainRememberDone, map[string]any{
+		"org":     input.Org,
+		"type":    input.Type,
+		"project": input.Project,
+	})
+
 	return nil, RememberOutput{
 		Success:   true,
 		Timestamp: time.Now(),
 	}, nil
 }

-func (s *Subsystem) brainRecall(_ context.Context, _ *mcp.CallToolRequest, input RecallInput) (*mcp.CallToolResult, RecallOutput, error) {
+func (s *Subsystem) brainRecall(ctx context.Context, _ *mcp.CallToolRequest, input RecallInput) (*mcp.CallToolResult, RecallOutput, error) {
+	if err := validateRecallInput(input); err != nil {
+		return nil, RecallOutput{}, err
+	}
+
 	if s.bridge == nil {
 		return nil, RecallOutput{}, errBridgeNotAvailable
 	}
@@ -172,7 +240,7 @@ func (s *Subsystem) brainRecall(_ context.Context, _ *mcp.CallToolRequest, input
 	}, nil
 }

-func (s *Subsystem) brainForget(_ context.Context, _ *mcp.CallToolRequest, input ForgetInput) (*mcp.CallToolResult, ForgetOutput, error) {
+func (s *Subsystem) brainForget(ctx context.Context, _ *mcp.CallToolRequest, input ForgetInput) (*mcp.CallToolResult, ForgetOutput, error) {
 	if s.bridge == nil {
 		return nil, ForgetOutput{}, errBridgeNotAvailable
 	}
@@ -188,6 +256,10 @@ func (s *Subsystem) brainForget(_ context.Context, _ *mcp.CallToolRequest, input
 		return nil, ForgetOutput{}, coreerr.E("brain.forget", "failed to send brain_forget", err)
 	}

+	s.emitChannel(ctx, coremcp.ChannelBrainForgetDone, map[string]any{
+		"id": input.ID,
+	})
+
 	return nil, ForgetOutput{
 		Success:   true,
 		Forgotten: input.ID,
@@ -195,7 +267,10 @@ func (s *Subsystem) brainForget(_ context.Context, _ *mcp.CallToolRequest, input
 	}, nil
 }

-func (s *Subsystem) brainList(_ context.Context, _ *mcp.CallToolRequest, input ListInput) (*mcp.CallToolResult, ListOutput, error) {
+func (s *Subsystem) brainList(ctx context.Context, _ *mcp.CallToolRequest, input ListInput) (*mcp.CallToolResult, ListOutput, error) {
+	if err := validateListInput(input); err != nil {
+		return nil, ListOutput{}, err
+	}
+
 	if s.bridge == nil {
 		return nil, ListOutput{}, errBridgeNotAvailable
 	}
@@ -207,6 +282,7 @@ func (s *Subsystem) brainList(_ context.Context, _ *mcp.CallToolRequest, input L
 	err := s.bridge.Send(ide.BridgeMessage{
 		Type: "brain_list",
 		Data: map[string]any{
+			"org":      input.Org,
 			"project":  input.Project,
 			"type":     input.Type,
 			"agent_id": input.AgentID,
@@ -217,6 +293,14 @@ func (s *Subsystem) brainList(_ context.Context, _ *mcp.CallToolRequest, input L
 		return nil, ListOutput{}, coreerr.E("brain.list", "failed to send brain_list", err)
 	}

+	s.emitChannel(ctx, coremcp.ChannelBrainListDone, map[string]any{
+		"org":      input.Org,
+		"project":  input.Project,
+		"type":     input.Type,
+		"agent_id": input.AgentID,
+		"limit":    limit,
+	})
+
 	return nil, ListOutput{
 		Success:  true,
 		Memories: []Memory{},

pkg/mcp/brain/tools_test.go (new file, 166 lines)

@@ -0,0 +1,166 @@
// SPDX-License-Identifier: EUPL-1.2

package brain

import (
	"context"
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
	"time"

	"dappco.re/go/mcp/pkg/mcp/ide"
	"dappco.re/go/ws"

	"github.com/gorilla/websocket"
)

var brainToolTestUpgrader = websocket.Upgrader{
	CheckOrigin: func(_ *http.Request) bool { return true },
}

func newConnectedBrainToolSubsystem(t *testing.T) (*Subsystem, <-chan ide.BridgeMessage) {
	t.Helper()

	messages := make(chan ide.BridgeMessage, 8)
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		conn, err := brainToolTestUpgrader.Upgrade(w, r, nil)
		if err != nil {
			t.Logf("upgrade error: %v", err)
			return
		}
		defer conn.Close()

		for {
			var msg ide.BridgeMessage
			if err := conn.ReadJSON(&msg); err != nil {
				return
			}
			messages <- msg
		}
	}))

	ctx, cancel := context.WithCancel(context.Background())
	hub := ws.NewHub()
	go hub.Run(ctx)

	cfg := ide.DefaultConfig()
	cfg.LaravelWSURL = "ws" + strings.TrimPrefix(srv.URL, "http")
	cfg.ReconnectInterval = 10 * time.Millisecond
	cfg.MaxReconnectInterval = 10 * time.Millisecond

	bridge := ide.NewBridge(hub, cfg)
	bridge.Start(ctx)
	waitBrainToolBridgeConnected(t, bridge)

	t.Cleanup(func() {
		bridge.Shutdown()
		cancel()
		srv.Close()
	})

	return New(bridge), messages
}

func waitBrainToolBridgeConnected(t *testing.T, bridge *ide.Bridge) {
	t.Helper()

	deadline := time.Now().Add(2 * time.Second)
	for time.Now().Before(deadline) {
		if bridge.Connected() {
			return
		}
		time.Sleep(10 * time.Millisecond)
	}
	t.Fatal("bridge did not connect within timeout")
}

func readBrainToolBridgeMessage(t *testing.T, messages <-chan ide.BridgeMessage) ide.BridgeMessage {
	t.Helper()

	select {
	case msg := <-messages:
		return msg
	case <-time.After(2 * time.Second):
		t.Fatal("timed out waiting for bridge message")
		return ide.BridgeMessage{}
	}
}

func assertBrainOrgValidationError(t *testing.T, err error) {
	t.Helper()

	if err == nil {
		t.Fatal("expected org validation error")
	}
	if !strings.Contains(err.Error(), "org exceeds maximum length of 128 characters") {
		t.Fatalf("expected org length error, got %v", err)
	}
}

func TestBrainRemember_Good_OrgLengthBoundary(t *testing.T) {
	sub, messages := newConnectedBrainToolSubsystem(t)

	for _, tc := range []struct {
		name string
		org  string
	}{
		{name: "non_empty", org: "core"},
		{name: "empty", org: ""},
		{name: "boundary", org: strings.Repeat("a", brainOrgMaxLength)},
	} {
		t.Run(tc.name, func(t *testing.T) {
			_, out, err := sub.brainRemember(context.Background(), nil, RememberInput{
				Content: "test memory",
				Type:    "observation",
				Org:     tc.org,
			})
			if err != nil {
				t.Fatalf("brainRemember failed: %v", err)
			}
			if !out.Success {
				t.Fatal("expected success=true")
			}

			msg := readBrainToolBridgeMessage(t, messages)
			if msg.Type != "brain_remember" {
				t.Fatalf("expected brain_remember message, got %q", msg.Type)
			}
			data, ok := msg.Data.(map[string]any)
			if !ok {
				t.Fatalf("expected bridge data map, got %T", msg.Data)
			}
			if data["org"] != tc.org {
				t.Fatalf("expected org %q, got %v", tc.org, data["org"])
			}
		})
	}
}

func TestBrainRemember_Bad_OrgTooLong(t *testing.T) {
	sub := New(nil)

	_, _, err := sub.brainRemember(context.Background(), nil, RememberInput{
		Content: "test memory",
		Type:    "observation",
		Org:     strings.Repeat("a", brainOrgMaxLength+1),
	})
	assertBrainOrgValidationError(t, err)
}

func TestBrainOrgValidation_Bad_RecallAndListRejectBeforeBridge(t *testing.T) {
	sub := New(nil)
	tooLong := strings.Repeat("a", brainOrgMaxLength+1)

	_, _, err := sub.brainRecall(context.Background(), nil, RecallInput{
		Query:  "test",
		Filter: RecallFilter{Org: tooLong},
	})
	assertBrainOrgValidationError(t, err)

	_, _, err = sub.brainList(context.Background(), nil, ListInput{
		Org: tooLong,
	})
	assertBrainOrgValidationError(t, err)
}


@@ -3,14 +3,12 @@
 package mcp

 import (
-	"encoding/json"
-	"errors"
-	"io"
 	"net/http"

+	core "dappco.re/go/core"
 	"github.com/gin-gonic/gin"

-	api "forge.lthn.ai/core/api"
+	api "dappco.re/go/api"
 )

 // maxBodySize is the maximum request body size accepted by bridged tool endpoints.
@@ -25,6 +23,10 @@ const maxBodySize = 10 << 20 // 10 MB
 //	mcp.BridgeToAPI(svc, bridge)
 //	bridge.Mount(router, "/v1/tools")
 func BridgeToAPI(svc *Service, bridge *api.ToolBridge) {
+	if svc == nil || bridge == nil {
+		return
+	}
+
 	for rec := range svc.ToolsSeq() {
 		desc := api.ToolDescriptor{
 			Name: rec.Name,
@@ -40,21 +42,27 @@ func BridgeToAPI(svc *Service, bridge *api.ToolBridge) {
 		bridge.Add(desc, func(c *gin.Context) {
 			var body []byte
 			if c.Request.Body != nil {
-				var err error
-				body, err = io.ReadAll(io.LimitReader(c.Request.Body, maxBodySize))
-				if err != nil {
+				c.Request.Body = http.MaxBytesReader(c.Writer, c.Request.Body, maxBodySize)
+				r := core.ReadAll(c.Request.Body)
+				if !r.OK {
+					if err, ok := r.Value.(error); ok {
+						var maxBytesErr *http.MaxBytesError
+						if core.As(err, &maxBytesErr) || core.Contains(err.Error(), "request body too large") {
+							c.JSON(http.StatusRequestEntityTooLarge, api.Fail("request_too_large", "Request body exceeds 10 MB limit"))
+							return
+						}
+					}
 					c.JSON(http.StatusBadRequest, api.Fail("invalid_request", "Failed to read request body"))
 					return
 				}
+				body = []byte(r.Value.(string))
 			}

 			result, err := handler(c.Request.Context(), body)
 			if err != nil {
-				// Classify JSON parse errors as client errors (400),
-				// everything else as server errors (500).
-				var syntaxErr *json.SyntaxError
-				var typeErr *json.UnmarshalTypeError
-				if errors.As(err, &syntaxErr) || errors.As(err, &typeErr) {
+				// Body present + error = likely bad input (malformed JSON).
+				// No body + error = tool execution failure.
+				if core.Is(err, errInvalidRESTInput) {
 					c.JSON(http.StatusBadRequest, api.Fail("invalid_input", "Malformed JSON in request body"))
 					return
 				}


@ -1,6 +1,6 @@
// SPDX-License-Identifier: EUPL-1.2 // SPDX-License-Identifier: EUPL-1.2
package mcp package mcp_test
import ( import (
"encoding/json" "encoding/json"
@ -13,7 +13,11 @@ import (
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
api "forge.lthn.ai/core/api" mcp "dappco.re/go/mcp/pkg/mcp"
"dappco.re/go/mcp/pkg/mcp/agentic"
"dappco.re/go/mcp/pkg/mcp/brain"
"dappco.re/go/mcp/pkg/mcp/ide"
api "dappco.re/go/api"
) )
func init() { func init() {
@ -21,13 +25,20 @@ func init() {
} }
func TestBridgeToAPI_Good_AllTools(t *testing.T) { func TestBridgeToAPI_Good_AllTools(t *testing.T) {
svc, err := New(Options{WorkspaceRoot: t.TempDir()}) svc, err := mcp.New(mcp.Options{
WorkspaceRoot: t.TempDir(),
Subsystems: []mcp.Subsystem{
brain.New(nil),
agentic.NewPrep(),
ide.New(nil, ide.Config{}),
},
})
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
bridge := api.NewToolBridge("/tools") bridge := api.NewToolBridge("/tools")
BridgeToAPI(svc, bridge) mcp.BridgeToAPI(svc, bridge)
svcCount := len(svc.Tools()) svcCount := len(svc.Tools())
bridgeCount := len(bridge.Tools()) bridgeCount := len(bridge.Tools())
@ -49,28 +60,37 @@ func TestBridgeToAPI_Good_AllTools(t *testing.T) {
t.Errorf("bridge has tool %q not found in service", td.Name) t.Errorf("bridge has tool %q not found in service", td.Name)
} }
} }
for _, want := range []string{"brain_list", "agentic_plan_create", "ide_dashboard_overview"} {
if !svcNames[want] {
t.Fatalf("expected recorded tool %q to be present", want)
}
}
} }
func TestBridgeToAPI_Good_DescribableGroup(t *testing.T) { func TestBridgeToAPI_Good_DescribableGroup(t *testing.T) {
svc, err := New(Options{WorkspaceRoot: t.TempDir()}) svc, err := mcp.New(mcp.Options{WorkspaceRoot: t.TempDir()})
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
bridge := api.NewToolBridge("/tools") bridge := api.NewToolBridge("/tools")
BridgeToAPI(svc, bridge) mcp.BridgeToAPI(svc, bridge)
// ToolBridge implements DescribableGroup. // ToolBridge implements DescribableGroup.
var dg api.DescribableGroup = bridge var dg api.DescribableGroup = bridge
descs := dg.Describe() descs := dg.Describe()
if len(descs) != len(svc.Tools()) { // ToolBridge.Describe prepends a GET entry describing the tool listing
t.Fatalf("expected %d descriptions, got %d", len(svc.Tools()), len(descs)) // endpoint, so the expected count is svc.Tools() + 1.
wantDescs := len(svc.Tools()) + 1
if len(descs) != wantDescs {
t.Fatalf("expected %d descriptions, got %d", wantDescs, len(descs))
} }
for _, d := range descs { for _, d := range descs {
if d.Method != "POST" { if d.Method != "POST" && d.Method != "GET" {
t.Errorf("expected Method=POST for %s, got %q", d.Path, d.Method) t.Errorf("expected Method=POST or GET for %s, got %q", d.Path, d.Method)
} }
if d.Summary == "" { if d.Summary == "" {
t.Errorf("expected non-empty Summary for %s", d.Path) t.Errorf("expected non-empty Summary for %s", d.Path)
@ -90,13 +110,13 @@ func TestBridgeToAPI_Good_FileRead(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
svc, err := New(Options{WorkspaceRoot: tmpDir}) svc, err := mcp.New(mcp.Options{WorkspaceRoot: tmpDir})
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
bridge := api.NewToolBridge("/tools") bridge := api.NewToolBridge("/tools")
BridgeToAPI(svc, bridge) mcp.BridgeToAPI(svc, bridge)
// Register with a Gin engine and make a request. // Register with a Gin engine and make a request.
engine := gin.New() engine := gin.New()
@ -114,7 +134,7 @@ func TestBridgeToAPI_Good_FileRead(t *testing.T) {
} }
// Parse the response envelope. // Parse the response envelope.
var resp api.Response[ReadFileOutput] var resp api.Response[mcp.ReadFileOutput]
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil { if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("unmarshal error: %v", err) t.Fatalf("unmarshal error: %v", err)
} }
@@ -130,13 +150,13 @@ func TestBridgeToAPI_Good_FileRead(t *testing.T) {
}

func TestBridgeToAPI_Bad_InvalidJSON(t *testing.T) {
	svc, err := mcp.New(mcp.Options{WorkspaceRoot: t.TempDir()})
	if err != nil {
		t.Fatal(err)
	}
	bridge := api.NewToolBridge("/tools")
	mcp.BridgeToAPI(svc, bridge)

	engine := gin.New()
	rg := engine.Group(bridge.BasePath())
@@ -148,13 +168,8 @@ func TestBridgeToAPI_Bad_InvalidJSON(t *testing.T) {
	req.Header.Set("Content-Type", "application/json")
	engine.ServeHTTP(w, req)

	if w.Code != http.StatusBadRequest {
		t.Fatalf("expected 400 for invalid JSON, got %d: %s", w.Code, w.Body.String())
	}

	var resp api.Response[any]
@@ -169,14 +184,49 @@ func TestBridgeToAPI_Bad_InvalidJSON(t *testing.T) {
	}
}

func TestBridgeToAPI_Bad_OversizedBody(t *testing.T) {
	svc, err := mcp.New(mcp.Options{WorkspaceRoot: t.TempDir()})
	if err != nil {
		t.Fatal(err)
	}
	bridge := api.NewToolBridge("/tools")
	mcp.BridgeToAPI(svc, bridge)

	engine := gin.New()
	rg := engine.Group(bridge.BasePath())
	bridge.RegisterRoutes(rg)

	body := strings.Repeat("a", 10<<20+1)
	w := httptest.NewRecorder()
	req, _ := http.NewRequest(http.MethodPost, "/tools/file_read", strings.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	engine.ServeHTTP(w, req)

	if w.Code != http.StatusRequestEntityTooLarge {
		t.Fatalf("expected 413 for oversized body, got %d: %s", w.Code, w.Body.String())
	}
	var resp api.Response[any]
	if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
		t.Fatalf("unmarshal error: %v", err)
	}
	if resp.Success {
		t.Fatal("expected Success=false for oversized body")
	}
	if resp.Error == nil {
		t.Fatal("expected error in response")
	}
}

func TestBridgeToAPI_Good_EndToEnd(t *testing.T) {
	svc, err := mcp.New(mcp.Options{WorkspaceRoot: t.TempDir()})
	if err != nil {
		t.Fatal(err)
	}
	bridge := api.NewToolBridge("/tools")
	mcp.BridgeToAPI(svc, bridge)
	// Create an api.Engine with the bridge registered and Swagger enabled.
	e, err := api.New(
@@ -203,7 +253,7 @@ func TestBridgeToAPI_Good_EndToEnd(t *testing.T) {
	}

	// Verify a tool endpoint is reachable through the engine.
	resp2, err := http.Post(srv.URL+"/tools/lang_list", "application/json", strings.NewReader("{}"))
	if err != nil {
		t.Fatalf("lang_list request failed: %v", err)
	}
@@ -212,7 +262,7 @@ func TestBridgeToAPI_Good_EndToEnd(t *testing.T) {
		t.Fatalf("expected 200 for /tools/lang_list, got %d", resp2.StatusCode)
	}
	var langResp api.Response[mcp.GetSupportedLanguagesOutput]
	if err := json.NewDecoder(resp2.Body).Decode(&langResp); err != nil {
		t.Fatalf("unmarshal error: %v", err)
	}


@@ -1,18 +1,26 @@
// SPDX-License-Identifier: EUPL-1.2
package ide

import (
	"context"
	core "dappco.re/go/core"
	"net/http"
	"sync"
	"time"

	coreerr "dappco.re/go/log"
	"dappco.re/go/ws"
	"github.com/gorilla/websocket"
)

// BridgeMessage is the wire format between the IDE bridge and Laravel.
//
//	msg := BridgeMessage{
//		Type:      "chat_send",
//		SessionID: "sess-42",
//		Data:      "hello",
//	}
type BridgeMessage struct {
	Type    string `json:"type"`
	Channel string `json:"channel,omitempty"`
@@ -23,6 +31,8 @@ type BridgeMessage struct {
// Bridge maintains a WebSocket connection to the Laravel core-agentic
// backend and forwards responses to a local ws.Hub.
//
//	bridge := NewBridge(hub, cfg)
type Bridge struct {
	cfg Config
	hub *ws.Hub
@@ -31,22 +41,57 @@ type Bridge struct {
	mu        sync.Mutex
	connected bool
	cancel    context.CancelFunc
	observers []func(BridgeMessage)
}

// NewBridge creates a bridge that will connect to the Laravel backend and
// forward incoming messages to the provided ws.Hub channels.
//
//	bridge := NewBridge(hub, cfg)
func NewBridge(hub *ws.Hub, cfg Config) *Bridge {
	return &Bridge{cfg: cfg, hub: hub}
}
// SetObserver registers a callback for inbound bridge messages.
//
//	bridge.SetObserver(func(msg BridgeMessage) {
//		fmt.Println(msg.Type)
//	})
func (b *Bridge) SetObserver(fn func(BridgeMessage)) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if fn == nil {
		b.observers = nil
		return
	}
	b.observers = []func(BridgeMessage){fn}
}

// AddObserver registers an additional bridge observer.
// Observers are invoked in registration order after each inbound message.
//
//	bridge.AddObserver(func(msg BridgeMessage) { log.Println(msg.Type) })
func (b *Bridge) AddObserver(fn func(BridgeMessage)) {
	if fn == nil {
		return
	}
	b.mu.Lock()
	defer b.mu.Unlock()
	b.observers = append(b.observers, fn)
}
// Start begins the connection loop in a background goroutine.
// Call Shutdown to stop it.
//
//	bridge.Start(ctx)
func (b *Bridge) Start(ctx context.Context) {
	ctx, b.cancel = context.WithCancel(ctx)
	go b.connectLoop(ctx)
}

// Shutdown cleanly closes the bridge.
//
//	bridge.Shutdown()
func (b *Bridge) Shutdown() {
	if b.cancel != nil {
		b.cancel()
@@ -61,6 +106,10 @@ func (b *Bridge) Shutdown() {
}

// Connected reports whether the bridge has an active connection.
//
//	if bridge.Connected() {
//		fmt.Println("online")
//	}
func (b *Bridge) Connected() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
@@ -68,6 +117,8 @@ func (b *Bridge) Connected() bool {
}

// Send sends a message to the Laravel backend.
//
//	err := bridge.Send(BridgeMessage{Type: "dashboard_overview"})
func (b *Bridge) Send(msg BridgeMessage) error {
	b.mu.Lock()
	defer b.mu.Unlock()
@@ -75,10 +126,7 @@ func (b *Bridge) Send(msg BridgeMessage) error {
		return coreerr.E("bridge.Send", "not connected", nil)
	}
	msg.Timestamp = time.Now()
	data := []byte(core.JSONMarshalString(msg))
	return b.conn.WriteMessage(websocket.TextMessage, data)
}
@@ -158,14 +206,29 @@ func (b *Bridge) readLoop(ctx context.Context) {
		}
		var msg BridgeMessage
		if r := core.JSONUnmarshal(data, &msg); !r.OK {
			coreerr.Warn("ide bridge: unmarshal error")
			continue
		}
		b.dispatch(msg)
		for _, observer := range b.snapshotObservers() {
			observer(msg)
		}
	}
}

func (b *Bridge) snapshotObservers() []func(BridgeMessage) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if len(b.observers) == 0 {
		return nil
	}
	observers := make([]func(BridgeMessage), len(b.observers))
	copy(observers, b.observers)
	return observers
}
// dispatch routes an incoming message to the appropriate ws.Hub channel.
func (b *Bridge) dispatch(msg BridgeMessage) {


@@ -11,7 +11,7 @@ import (
	"testing"
	"time"

	"dappco.re/go/ws"
	"github.com/gorilla/websocket"
)
@@ -164,6 +164,71 @@ func TestBridge_Good_MessageDispatch(t *testing.T) {
	// This confirms the dispatch path ran without error.
}
func TestBridge_Good_MultipleObservers(t *testing.T) {
	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		conn, err := testUpgrader.Upgrade(w, r, nil)
		if err != nil {
			return
		}
		defer conn.Close()
		msg := BridgeMessage{
			Type: "brain_recall",
			Data: map[string]any{
				"query": "test query",
				"count": 3,
			},
		}
		data, _ := json.Marshal(msg)
		_ = conn.WriteMessage(websocket.TextMessage, data)
		for {
			if _, _, err := conn.ReadMessage(); err != nil {
				break
			}
		}
	}))
	defer ts.Close()

	hub := ws.NewHub()
	ctx := t.Context()
	go hub.Run(ctx)

	cfg := DefaultConfig()
	cfg.LaravelWSURL = wsURL(ts)
	cfg.ReconnectInterval = 100 * time.Millisecond
	bridge := NewBridge(hub, cfg)

	first := make(chan struct{}, 1)
	second := make(chan struct{}, 1)
	bridge.AddObserver(func(msg BridgeMessage) {
		if msg.Type == "brain_recall" {
			first <- struct{}{}
		}
	})
	bridge.AddObserver(func(msg BridgeMessage) {
		if msg.Type == "brain_recall" {
			second <- struct{}{}
		}
	})

	bridge.Start(ctx)
	waitConnected(t, bridge, 2*time.Second)

	select {
	case <-first:
	case <-time.After(2 * time.Second):
		t.Fatal("timed out waiting for first observer")
	}
	select {
	case <-second:
	case <-time.After(2 * time.Second):
		t.Fatal("timed out waiting for second observer")
	}
}
func TestBridge_Good_Reconnect(t *testing.T) {
	// Use atomic counter to avoid data race between HTTP handler goroutine
	// and the test goroutine.
@@ -412,11 +477,10 @@ func TestBridge_Good_NoAuthHeaderWhenTokenEmpty(t *testing.T) {
	}
}

func TestBridge_Good_ConfigToken(t *testing.T) {
	// Verify the Config DTO carries token settings through unchanged.
	cfg := DefaultConfig()
	cfg.Token = "my-token"

	if cfg.Token != "my-token" {
		t.Errorf("expected token 'my-token', got %q", cfg.Token)
@@ -424,14 +488,14 @@
}

func TestSubsystem_Good_Name(t *testing.T) {
	sub := New(nil, Config{})
	if sub.Name() != "ide" {
		t.Errorf("expected name 'ide', got %q", sub.Name())
	}
}

func TestSubsystem_Good_NilHub(t *testing.T) {
	sub := New(nil, Config{})
	if sub.Bridge() != nil {
		t.Error("expected nil bridge when hub is nil")
	}


@@ -1,10 +1,17 @@
// Package ide provides an MCP subsystem that bridges the desktop IDE to
// a Laravel core-agentic backend over WebSocket.
// SPDX-License-Identifier: EUPL-1.2
package ide

import "time"

// Config holds connection and workspace settings for the IDE subsystem.
//
//	cfg := Config{
//		LaravelWSURL:  "ws://localhost:9876/ws",
//		WorkspaceRoot: "/workspace",
//	}
type Config struct {
	// LaravelWSURL is the WebSocket endpoint for the Laravel core-agentic backend.
	LaravelWSURL string
@@ -24,34 +31,27 @@ type Config struct {
}

// DefaultConfig returns sensible defaults for local development.
//
//	cfg := DefaultConfig()
func DefaultConfig() Config {
	return Config{}.WithDefaults()
}

// WithDefaults fills unset fields with the default development values.
//
//	cfg := Config{WorkspaceRoot: "/workspace"}.WithDefaults()
func (c Config) WithDefaults() Config {
	if c.LaravelWSURL == "" {
		c.LaravelWSURL = "ws://localhost:9876/ws"
	}
	if c.WorkspaceRoot == "" {
		c.WorkspaceRoot = "."
	}
	if c.ReconnectInterval == 0 {
		c.ReconnectInterval = 2 * time.Second
	}
	if c.MaxReconnectInterval == 0 {
		c.MaxReconnectInterval = 30 * time.Second
	}
	return c
}


@@ -1,11 +1,16 @@
// SPDX-License-Identifier: EUPL-1.2
package ide

import (
	"context"
	"sync"
	"time"

	core "dappco.re/go/core"
	coremcp "dappco.re/go/mcp/pkg/mcp"
	coreerr "dappco.re/go/log"
	"dappco.re/go/ws"
)

// errBridgeNotAvailable is returned when a tool requires the Laravel bridge
@@ -17,30 +22,59 @@ type Subsystem struct {
	cfg    Config
	bridge *Bridge
	hub    *ws.Hub

	notifier coremcp.Notifier

	stateMu      sync.Mutex
	sessionOrder []string
	sessions     map[string]Session
	chats        map[string][]ChatMessage
	buildOrder   []string
	builds       map[string]BuildInfo
	buildLogMap  map[string][]string
	activity     []ActivityEvent
}

var (
	_ coremcp.Subsystem             = (*Subsystem)(nil)
	_ coremcp.SubsystemWithShutdown = (*Subsystem)(nil)
	_ coremcp.SubsystemWithNotifier = (*Subsystem)(nil)
)

// New creates an IDE subsystem from a Config DTO.
//
//	cfg := DefaultConfig()
//	ide := New(hub, cfg)
//
// The ws.Hub is used for real-time forwarding; pass nil if headless
// (tools still work but real-time streaming is disabled).
func New(hub *ws.Hub, cfg Config) *Subsystem {
	cfg = cfg.WithDefaults()
	s := &Subsystem{
		cfg:         cfg,
		bridge:      nil,
		hub:         hub,
		sessions:    make(map[string]Session),
		chats:       make(map[string][]ChatMessage),
		builds:      make(map[string]BuildInfo),
		buildLogMap: make(map[string][]string),
	}
	if hub != nil {
		s.bridge = NewBridge(hub, cfg)
		s.bridge.AddObserver(func(msg BridgeMessage) {
			s.handleBridgeMessage(msg)
		})
	}
	return s
}

// Name implements mcp.Subsystem.
func (s *Subsystem) Name() string { return "ide" }

// RegisterTools implements mcp.Subsystem.
func (s *Subsystem) RegisterTools(svc *coremcp.Service) {
	s.registerChatTools(svc)
	s.registerBuildTools(svc)
	s.registerDashboardTools(svc)
}

// Shutdown implements mcp.SubsystemWithShutdown.
@@ -51,6 +85,11 @@ func (s *Subsystem) Shutdown(_ context.Context) error {
	return nil
}

// SetNotifier wires the shared MCP notifier into the IDE subsystem.
func (s *Subsystem) SetNotifier(n coremcp.Notifier) {
	s.notifier = n
}

// Bridge returns the Laravel WebSocket bridge (may be nil in headless mode).
func (s *Subsystem) Bridge() *Bridge { return s.bridge }
@@ -60,3 +99,469 @@ func (s *Subsystem) StartBridge(ctx context.Context) {
		s.bridge.Start(ctx)
	}
}
func (s *Subsystem) addSession(session Session) {
	s.stateMu.Lock()
	defer s.stateMu.Unlock()
	if s.sessions == nil {
		s.sessions = make(map[string]Session)
	}
	if s.chats == nil {
		s.chats = make(map[string][]ChatMessage)
	}
	if _, exists := s.sessions[session.ID]; !exists {
		s.sessionOrder = append(s.sessionOrder, session.ID)
	}
	s.sessions[session.ID] = session
}

func (s *Subsystem) addBuild(build BuildInfo) {
	s.stateMu.Lock()
	defer s.stateMu.Unlock()
	if s.builds == nil {
		s.builds = make(map[string]BuildInfo)
	}
	if s.buildLogMap == nil {
		s.buildLogMap = make(map[string][]string)
	}
	if _, exists := s.builds[build.ID]; !exists {
		s.buildOrder = append(s.buildOrder, build.ID)
	}
	if build.StartedAt.IsZero() {
		build.StartedAt = time.Now()
	}
	s.builds[build.ID] = build
}

func (s *Subsystem) listBuilds(repo string, limit int) []BuildInfo {
	s.stateMu.Lock()
	defer s.stateMu.Unlock()
	if len(s.buildOrder) == 0 {
		return []BuildInfo{}
	}
	if limit <= 0 {
		limit = len(s.buildOrder)
	}
	builds := make([]BuildInfo, 0, limit)
	for i := len(s.buildOrder) - 1; i >= 0; i-- {
		id := s.buildOrder[i]
		build, ok := s.builds[id]
		if !ok {
			continue
		}
		if repo != "" && build.Repo != repo {
			continue
		}
		builds = append(builds, build)
		if len(builds) >= limit {
			break
		}
	}
	return builds
}

func (s *Subsystem) appendBuildLog(buildID, line string) {
	s.stateMu.Lock()
	defer s.stateMu.Unlock()
	if s.buildLogMap == nil {
		s.buildLogMap = make(map[string][]string)
	}
	s.buildLogMap[buildID] = append(s.buildLogMap[buildID], line)
}

func (s *Subsystem) setBuildLogs(buildID string, lines []string) {
	s.stateMu.Lock()
	defer s.stateMu.Unlock()
	if s.buildLogMap == nil {
		s.buildLogMap = make(map[string][]string)
	}
	if len(lines) == 0 {
		s.buildLogMap[buildID] = []string{}
		return
	}
	out := make([]string, len(lines))
	copy(out, lines)
	s.buildLogMap[buildID] = out
}

func (s *Subsystem) buildLogTail(buildID string, tail int) []string {
	s.stateMu.Lock()
	defer s.stateMu.Unlock()
	lines := s.buildLogMap[buildID]
	if len(lines) == 0 {
		return []string{}
	}
	if tail <= 0 || tail > len(lines) {
		tail = len(lines)
	}
	start := len(lines) - tail
	out := make([]string, tail)
	copy(out, lines[start:])
	return out
}

func (s *Subsystem) buildSnapshot(buildID string) (BuildInfo, bool) {
	s.stateMu.Lock()
	defer s.stateMu.Unlock()
	build, ok := s.builds[buildID]
	return build, ok
}

func (s *Subsystem) buildRepoCount() int {
	s.stateMu.Lock()
	defer s.stateMu.Unlock()
	repos := make(map[string]struct{})
	for _, build := range s.builds {
		if build.Repo != "" {
			repos[build.Repo] = struct{}{}
		}
	}
	return len(repos)
}

func (s *Subsystem) listSessions() []Session {
	s.stateMu.Lock()
	defer s.stateMu.Unlock()
	if len(s.sessionOrder) == 0 {
		return []Session{}
	}
	result := make([]Session, 0, len(s.sessionOrder))
	for _, id := range s.sessionOrder {
		if session, ok := s.sessions[id]; ok {
			result = append(result, session)
		}
	}
	return result
}

func (s *Subsystem) appendChatMessage(sessionID, role, content string) {
	s.stateMu.Lock()
	defer s.stateMu.Unlock()
	if s.chats == nil {
		s.chats = make(map[string][]ChatMessage)
	}
	s.chats[sessionID] = append(s.chats[sessionID], ChatMessage{
		Role:      role,
		Content:   content,
		Timestamp: time.Now(),
	})
}

func (s *Subsystem) chatMessages(sessionID string) []ChatMessage {
	s.stateMu.Lock()
	defer s.stateMu.Unlock()
	history := s.chats[sessionID]
	if len(history) == 0 {
		return []ChatMessage{}
	}
	out := make([]ChatMessage, len(history))
	copy(out, history)
	return out
}

func (s *Subsystem) recordActivity(typ, msg string) {
	s.stateMu.Lock()
	defer s.stateMu.Unlock()
	s.activity = append(s.activity, ActivityEvent{
		Type:      typ,
		Message:   msg,
		Timestamp: time.Now(),
	})
}

func (s *Subsystem) activityFeed(limit int) []ActivityEvent {
	s.stateMu.Lock()
	defer s.stateMu.Unlock()
	if limit <= 0 || limit > len(s.activity) {
		limit = len(s.activity)
	}
	if limit == 0 {
		return []ActivityEvent{}
	}
	start := len(s.activity) - limit
	out := make([]ActivityEvent, limit)
	copy(out, s.activity[start:])
	return out
}
func (s *Subsystem) handleBridgeMessage(msg BridgeMessage) {
	switch msg.Type {
	case "build_status":
		if build, ok := buildInfoFromData(msg.Data); ok {
			s.addBuild(build)
			s.emitBuildLifecycle(build)
			if lines := buildLinesFromData(msg.Data); len(lines) > 0 {
				s.setBuildLogs(build.ID, lines)
			}
		}
	case "build_list":
		for _, build := range buildInfosFromData(msg.Data) {
			s.addBuild(build)
		}
	case "build_logs":
		buildID, lines := buildLogsFromData(msg.Data)
		if buildID != "" {
			s.setBuildLogs(buildID, lines)
		}
	case "session_list":
		for _, session := range sessionsFromData(msg.Data) {
			s.addSession(session)
		}
	case "session_create":
		if session, ok := sessionFromData(msg.Data); ok {
			s.addSession(session)
		}
	case "chat_history":
		if sessionID, messages := chatHistoryFromData(msg.Data); sessionID != "" {
			for _, message := range messages {
				s.appendChatMessage(sessionID, message.Role, message.Content)
			}
		}
	}
}

func (s *Subsystem) emitBuildLifecycle(build BuildInfo) {
	if s.notifier == nil {
		return
	}
	channel := ""
	switch build.Status {
	case "running", "in_progress", "started":
		channel = coremcp.ChannelBuildStart
	case "success", "succeeded", "completed", "passed":
		channel = coremcp.ChannelBuildComplete
	case "failed", "error":
		channel = coremcp.ChannelBuildFailed
	default:
		return
	}
	payload := map[string]any{
		"id":        build.ID,
		"repo":      build.Repo,
		"branch":    build.Branch,
		"status":    build.Status,
		"startedAt": build.StartedAt,
	}
	if build.Duration != "" {
		payload["duration"] = build.Duration
	}
	s.notifier.ChannelSend(context.Background(), channel, payload)
}
func buildInfoFromData(data any) (BuildInfo, bool) {
	m, ok := data.(map[string]any)
	if !ok {
		return BuildInfo{}, false
	}
	id, _ := m["buildId"].(string)
	if id == "" {
		id, _ = m["id"].(string)
	}
	if id == "" {
		return BuildInfo{}, false
	}
	build := BuildInfo{
		ID:     id,
		Repo:   stringFromAny(m["repo"]),
		Branch: stringFromAny(m["branch"]),
		Status: stringFromAny(m["status"]),
	}
	if build.Status == "" {
		build.Status = "unknown"
	}
	if startedAt, ok := m["startedAt"].(time.Time); ok {
		build.StartedAt = startedAt
	}
	if duration := stringFromAny(m["duration"]); duration != "" {
		build.Duration = duration
	}
	return build, true
}

func buildInfosFromData(data any) []BuildInfo {
	m, ok := data.(map[string]any)
	if !ok {
		return []BuildInfo{}
	}
	raw, ok := m["builds"].([]any)
	if !ok {
		return []BuildInfo{}
	}
	builds := make([]BuildInfo, 0, len(raw))
	for _, item := range raw {
		build, ok := buildInfoFromData(item)
		if ok {
			builds = append(builds, build)
		}
	}
	return builds
}

func buildLinesFromData(data any) []string {
	_, lines := buildLogsFromData(data)
	return lines
}

func buildLogsFromData(data any) (string, []string) {
	m, ok := data.(map[string]any)
	if !ok {
		return "", []string{}
	}
	buildID, _ := m["buildId"].(string)
	if buildID == "" {
		buildID, _ = m["id"].(string)
	}
	switch raw := m["lines"].(type) {
	case []any:
		lines := make([]string, 0, len(raw))
		for _, item := range raw {
			lines = append(lines, stringFromAny(item))
		}
		return buildID, lines
	case []string:
		lines := make([]string, len(raw))
		copy(lines, raw)
		return buildID, lines
	}
	if output := stringFromAny(m["output"]); output != "" {
		return buildID, []string{output}
	}
	return buildID, []string{}
}

func sessionsFromData(data any) []Session {
	m, ok := data.(map[string]any)
	if !ok {
		return []Session{}
	}
	raw, ok := m["sessions"].([]any)
	if !ok {
		return []Session{}
	}
	sessions := make([]Session, 0, len(raw))
	for _, item := range raw {
		session, ok := sessionFromData(item)
		if ok {
			sessions = append(sessions, session)
		}
	}
	return sessions
}

func sessionFromData(data any) (Session, bool) {
	m, ok := data.(map[string]any)
	if !ok {
		return Session{}, false
	}
	id, _ := m["id"].(string)
	if id == "" {
		return Session{}, false
	}
	session := Session{
		ID:        id,
		Name:      stringFromAny(m["name"]),
		Status:    stringFromAny(m["status"]),
		CreatedAt: time.Now(),
	}
	if createdAt, ok := m["createdAt"].(time.Time); ok {
		session.CreatedAt = createdAt
	}
	if session.Status == "" {
		session.Status = "unknown"
	}
	return session, true
}

func chatHistoryFromData(data any) (string, []ChatMessage) {
	m, ok := data.(map[string]any)
	if !ok {
		return "", []ChatMessage{}
	}
	sessionID, _ := m["sessionId"].(string)
	if sessionID == "" {
		sessionID, _ = m["session_id"].(string)
	}
	raw, ok := m["messages"].([]any)
	if !ok {
		return sessionID, []ChatMessage{}
	}
	messages := make([]ChatMessage, 0, len(raw))
	for _, item := range raw {
		if msg, ok := chatMessageFromData(item); ok {
			messages = append(messages, msg)
		}
	}
	return sessionID, messages
}

func chatMessageFromData(data any) (ChatMessage, bool) {
	m, ok := data.(map[string]any)
	if !ok {
		return ChatMessage{}, false
	}
	role := stringFromAny(m["role"])
	content := stringFromAny(m["content"])
	if role == "" && content == "" {
		return ChatMessage{}, false
	}
	msg := ChatMessage{
		Role:      role,
		Content:   content,
		Timestamp: time.Now(),
	}
	if ts, ok := m["timestamp"].(time.Time); ok {
		msg.Timestamp = ts
	}
	return msg, true
}

func stringFromAny(v any) string {
	switch value := v.(type) {
	case string:
		return value
	case interface{ String() string }:
		return value.String()
	default:
		return ""
	}
}

func newSessionID() string {
	return core.ID()
}


@@ -1,20 +1,27 @@
// SPDX-License-Identifier: EUPL-1.2
package ide

import (
	"context"
	"time"

	coremcp "dappco.re/go/mcp/pkg/mcp"
	"github.com/modelcontextprotocol/go-sdk/mcp"
)

// Build tool input/output types.

// BuildStatusInput is the input for ide_build_status.
//
//	input := BuildStatusInput{BuildID: "build-123"}
type BuildStatusInput struct {
	BuildID string `json:"buildId"`
}

// BuildInfo represents a single build.
//
//	info := BuildInfo{ID: "build-123", Repo: "go-io", Status: "running"}
type BuildInfo struct {
	ID   string `json:"id"`
	Repo string `json:"repo"`
@@ -25,90 +32,102 @@ type BuildInfo struct {
}

// BuildStatusOutput is the output for ide_build_status.
//
//	// out.Build.Status == "running"
type BuildStatusOutput struct {
	Build BuildInfo `json:"build"`
}

// BuildListInput is the input for ide_build_list.
//
//	input := BuildListInput{Repo: "go-io", Limit: 20}
type BuildListInput struct {
	Repo  string `json:"repo,omitempty"`
	Limit int    `json:"limit,omitempty"`
}

// BuildListOutput is the output for ide_build_list.
//
//	// out.Builds holds the local build snapshot
type BuildListOutput struct {
	Builds []BuildInfo `json:"builds"`
}

// BuildLogsInput is the input for ide_build_logs.
//
//	input := BuildLogsInput{BuildID: "build-123", Tail: 200}
type BuildLogsInput struct {
	BuildID string `json:"buildId"`
	Tail    int    `json:"tail,omitempty"`
}

// BuildLogsOutput is the output for ide_build_logs.
//
//	// out.Lines contains the captured build log lines
type BuildLogsOutput struct {
	BuildID string   `json:"buildId"`
	Lines   []string `json:"lines"`
}
func (s *Subsystem) registerBuildTools(svc *coremcp.Service) {
	server := svc.Server()
	coremcp.AddToolRecorded(svc, server, "ide", &mcp.Tool{
		Name:        "ide_build_status",
		Description: "Get the status of a specific build",
	}, s.buildStatus)
	coremcp.AddToolRecorded(svc, server, "ide", &mcp.Tool{
		Name:        "ide_build_list",
		Description: "List recent builds, optionally filtered by repository",
	}, s.buildList)
	coremcp.AddToolRecorded(svc, server, "ide", &mcp.Tool{
		Name:        "ide_build_logs",
		Description: "Retrieve log output for a build",
	}, s.buildLogs)
}

// buildStatus returns a local best-effort build status and refreshes the
// Laravel backend when the bridge is available.
func (s *Subsystem) buildStatus(_ context.Context, _ *mcp.CallToolRequest, input BuildStatusInput) (*mcp.CallToolResult, BuildStatusOutput, error) {
	if s.bridge != nil {
		_ = s.bridge.Send(BridgeMessage{
			Type: "build_status",
			Data: map[string]any{"buildId": input.BuildID},
		})
	}
	build := BuildInfo{ID: input.BuildID, Status: "unknown"}
	if cached, ok := s.buildSnapshot(input.BuildID); ok {
		build = cached
	}
	return nil, BuildStatusOutput{Build: build}, nil
}

// buildList returns the local build list snapshot and refreshes the Laravel
// backend when the bridge is available.
func (s *Subsystem) buildList(_ context.Context, _ *mcp.CallToolRequest, input BuildListInput) (*mcp.CallToolResult, BuildListOutput, error) {
	if s.bridge != nil {
		_ = s.bridge.Send(BridgeMessage{
			Type: "build_list",
			Data: map[string]any{"repo": input.Repo, "limit": input.Limit},
		})
	}
	return nil, BuildListOutput{Builds: s.listBuilds(input.Repo, input.Limit)}, nil
}

// buildLogs returns the local build log snapshot and refreshes the Laravel
// backend when the bridge is available.
func (s *Subsystem) buildLogs(_ context.Context, _ *mcp.CallToolRequest, input BuildLogsInput) (*mcp.CallToolResult, BuildLogsOutput, error) {
	if s.bridge != nil {
		_ = s.bridge.Send(BridgeMessage{
			Type: "build_logs",
			Data: map[string]any{"buildId": input.BuildID, "tail": input.Tail},
		})
	}
	return nil, BuildLogsOutput{
		BuildID: input.BuildID,
		Lines:   s.buildLogTail(input.BuildID, input.Tail),
}, nil }, nil
} }

View file

@@ -1,22 +1,29 @@
 // SPDX-License-Identifier: EUPL-1.2
 
 package ide
 
 import (
 	"context"
 	"time"
 
-	coreerr "forge.lthn.ai/core/go-log"
+	coremcp "dappco.re/go/mcp/pkg/mcp"
+	coreerr "dappco.re/go/log"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )
 
 // Chat tool input/output types.
 
 // ChatSendInput is the input for ide_chat_send.
+//
+//	input := ChatSendInput{SessionID: "sess-42", Message: "hello"}
 type ChatSendInput struct {
 	SessionID string `json:"sessionId"`
 	Message   string `json:"message"`
 }
 
 // ChatSendOutput is the output for ide_chat_send.
+//
+//	// out.Sent == true, out.SessionID == "sess-42"
 type ChatSendOutput struct {
 	Sent      bool   `json:"sent"`
 	SessionID string `json:"sessionId"`
@@ -24,12 +31,16 @@ type ChatSendOutput struct {
 }
 
 // ChatHistoryInput is the input for ide_chat_history.
+//
+//	input := ChatHistoryInput{SessionID: "sess-42", Limit: 50}
 type ChatHistoryInput struct {
 	SessionID string `json:"sessionId"`
 	Limit     int    `json:"limit,omitempty"`
 }
 
 // ChatMessage represents a single message in history.
+//
+//	msg := ChatMessage{Role: "user", Content: "hello"}
 type ChatMessage struct {
 	Role    string `json:"role"`
 	Content string `json:"content"`
@@ -37,15 +48,21 @@ type ChatMessage struct {
 }
 
 // ChatHistoryOutput is the output for ide_chat_history.
+//
+//	// out.Messages contains the stored chat transcript
 type ChatHistoryOutput struct {
 	SessionID string        `json:"sessionId"`
 	Messages  []ChatMessage `json:"messages"`
 }
 
 // SessionListInput is the input for ide_session_list.
+//
+//	input := SessionListInput{}
 type SessionListInput struct{}
 
 // Session represents an agent session.
+//
+//	session := Session{ID: "sess-42", Name: "draft", Status: "running"}
 type Session struct {
 	ID   string `json:"id"`
 	Name string `json:"name"`
@@ -54,67 +71,81 @@ type Session struct {
 }
 
 // SessionListOutput is the output for ide_session_list.
+//
+//	// out.Sessions contains every locally tracked session
 type SessionListOutput struct {
 	Sessions []Session `json:"sessions"`
 }
 
 // SessionCreateInput is the input for ide_session_create.
+//
+//	input := SessionCreateInput{Name: "draft"}
 type SessionCreateInput struct {
 	Name string `json:"name"`
 }
 
 // SessionCreateOutput is the output for ide_session_create.
+//
+//	// out.Session.ID is assigned by the backend or local store
 type SessionCreateOutput struct {
 	Session Session `json:"session"`
 }
 
 // PlanStatusInput is the input for ide_plan_status.
+//
+//	input := PlanStatusInput{SessionID: "sess-42"}
 type PlanStatusInput struct {
 	SessionID string `json:"sessionId"`
 }
 
 // PlanStep is a single step in an agent plan.
+//
+//	step := PlanStep{Name: "prep", Status: "done"}
 type PlanStep struct {
 	Name   string `json:"name"`
 	Status string `json:"status"`
 }
 
 // PlanStatusOutput is the output for ide_plan_status.
+//
+//	// out.Steps contains the current plan breakdown
 type PlanStatusOutput struct {
 	SessionID string     `json:"sessionId"`
 	Status    string     `json:"status"`
 	Steps     []PlanStep `json:"steps"`
 }
 
-func (s *Subsystem) registerChatTools(server *mcp.Server) {
-	mcp.AddTool(server, &mcp.Tool{
+func (s *Subsystem) registerChatTools(svc *coremcp.Service) {
+	server := svc.Server()
+	coremcp.AddToolRecorded(svc, server, "ide", &mcp.Tool{
 		Name:        "ide_chat_send",
 		Description: "Send a message to an agent chat session",
 	}, s.chatSend)
-	mcp.AddTool(server, &mcp.Tool{
+	coremcp.AddToolRecorded(svc, server, "ide", &mcp.Tool{
 		Name:        "ide_chat_history",
 		Description: "Retrieve message history for a chat session",
 	}, s.chatHistory)
-	mcp.AddTool(server, &mcp.Tool{
+	coremcp.AddToolRecorded(svc, server, "ide", &mcp.Tool{
 		Name:        "ide_session_list",
 		Description: "List active agent sessions",
 	}, s.sessionList)
-	mcp.AddTool(server, &mcp.Tool{
+	coremcp.AddToolRecorded(svc, server, "ide", &mcp.Tool{
 		Name:        "ide_session_create",
 		Description: "Create a new agent session",
 	}, s.sessionCreate)
-	mcp.AddTool(server, &mcp.Tool{
+	coremcp.AddToolRecorded(svc, server, "ide", &mcp.Tool{
 		Name:        "ide_plan_status",
 		Description: "Get the current plan status for a session",
 	}, s.planStatus)
 }
 
 // chatSend forwards a chat message to the Laravel backend via bridge.
-// Stub implementation: delegates to bridge, real response arrives via WebSocket subscription.
+// The subsystem also stores the message locally so history lookups can
+// return something useful before the backend answers.
 func (s *Subsystem) chatSend(_ context.Context, _ *mcp.CallToolRequest, input ChatSendInput) (*mcp.CallToolResult, ChatSendOutput, error) {
 	if s.bridge == nil {
 		return nil, ChatSendOutput{}, errBridgeNotAvailable
@@ -128,6 +159,10 @@ func (s *Subsystem) chatSend(_ context.Context, _ *mcp.CallToolRequest, input Ch
 	if err != nil {
 		return nil, ChatSendOutput{}, coreerr.E("ide.chatSend", "failed to send message", err)
 	}
+
+	s.appendChatMessage(input.SessionID, "user", input.Message)
+	s.recordActivity("chat_send", "forwarded chat message for session "+input.SessionID)
+
 	return nil, ChatSendOutput{
 		Sent:      true,
 		SessionID: input.SessionID,
@@ -135,67 +170,77 @@ func (s *Subsystem) chatSend(_ context.Context, _ *mcp.CallToolRequest, input Ch
 	}, nil
 }
 
-// chatHistory requests message history from the Laravel backend.
-// Stub implementation: sends request via bridge, returns empty messages. Real data arrives via WebSocket.
+// chatHistory returns the local message history for a session and refreshes
+// the Laravel backend when the bridge is available.
 func (s *Subsystem) chatHistory(_ context.Context, _ *mcp.CallToolRequest, input ChatHistoryInput) (*mcp.CallToolResult, ChatHistoryOutput, error) {
-	if s.bridge == nil {
-		return nil, ChatHistoryOutput{}, errBridgeNotAvailable
-	}
-	// Request history via bridge; for now return placeholder indicating the
-	// request was forwarded. Real data arrives via WebSocket subscription.
-	_ = s.bridge.Send(BridgeMessage{
-		Type:      "chat_history",
-		SessionID: input.SessionID,
-		Data:      map[string]any{"limit": input.Limit},
-	})
+	if s.bridge != nil {
+		// Request history via bridge when available; the local cache still
+		// provides an immediate response in headless mode.
+		_ = s.bridge.Send(BridgeMessage{
+			Type:      "chat_history",
+			SessionID: input.SessionID,
+			Data:      map[string]any{"limit": input.Limit},
+		})
+	}
 	return nil, ChatHistoryOutput{
 		SessionID: input.SessionID,
-		Messages:  []ChatMessage{},
+		Messages:  s.chatMessages(input.SessionID),
 	}, nil
 }
 
-// sessionList requests the session list from the Laravel backend.
-// Stub implementation: sends request via bridge, returns empty sessions. Awaiting Laravel backend.
+// sessionList returns the local session cache and refreshes the Laravel
+// backend when the bridge is available.
 func (s *Subsystem) sessionList(_ context.Context, _ *mcp.CallToolRequest, _ SessionListInput) (*mcp.CallToolResult, SessionListOutput, error) {
-	if s.bridge == nil {
-		return nil, SessionListOutput{}, errBridgeNotAvailable
+	if s.bridge != nil {
+		_ = s.bridge.Send(BridgeMessage{Type: "session_list"})
 	}
-	_ = s.bridge.Send(BridgeMessage{Type: "session_list"})
-	return nil, SessionListOutput{Sessions: []Session{}}, nil
+	return nil, SessionListOutput{Sessions: s.listSessions()}, nil
 }
 
-// sessionCreate requests a new session from the Laravel backend.
-// Stub implementation: sends request via bridge, returns placeholder session. Awaiting Laravel backend.
+// sessionCreate creates a local session record immediately and forwards the
+// request to the Laravel backend when the bridge is available.
 func (s *Subsystem) sessionCreate(_ context.Context, _ *mcp.CallToolRequest, input SessionCreateInput) (*mcp.CallToolResult, SessionCreateOutput, error) {
-	if s.bridge == nil {
-		return nil, SessionCreateOutput{}, errBridgeNotAvailable
-	}
-	_ = s.bridge.Send(BridgeMessage{
-		Type: "session_create",
-		Data: map[string]any{"name": input.Name},
-	})
-	return nil, SessionCreateOutput{
-		Session: Session{
-			Name:      input.Name,
-			Status:    "creating",
-			CreatedAt: time.Now(),
-		},
+	if s.bridge != nil {
+		if err := s.bridge.Send(BridgeMessage{
+			Type: "session_create",
+			Data: map[string]any{"name": input.Name},
+		}); err != nil {
+			return nil, SessionCreateOutput{}, err
+		}
+	}
+	session := Session{
+		ID:        newSessionID(),
+		Name:      input.Name,
+		Status:    "creating",
+		CreatedAt: time.Now(),
+	}
+	s.addSession(session)
+	s.recordActivity("session_create", "created session "+session.ID)
+	return nil, SessionCreateOutput{
+		Session: session,
 	}, nil
 }
 
-// planStatus requests plan status from the Laravel backend.
-// Stub implementation: sends request via bridge, returns "unknown" status. Awaiting Laravel backend.
+// planStatus returns the local best-effort session status and refreshes the
+// Laravel backend when the bridge is available.
 func (s *Subsystem) planStatus(_ context.Context, _ *mcp.CallToolRequest, input PlanStatusInput) (*mcp.CallToolResult, PlanStatusOutput, error) {
-	if s.bridge == nil {
-		return nil, PlanStatusOutput{}, errBridgeNotAvailable
-	}
-	_ = s.bridge.Send(BridgeMessage{
-		Type:      "plan_status",
-		SessionID: input.SessionID,
-	})
+	if s.bridge != nil {
+		_ = s.bridge.Send(BridgeMessage{
+			Type:      "plan_status",
+			SessionID: input.SessionID,
+		})
+	}
+
+	s.stateMu.Lock()
+	session, ok := s.sessions[input.SessionID]
+	s.stateMu.Unlock()
+	status := "unknown"
+	if ok && session.Status != "" {
+		status = session.Status
+	}
 	return nil, PlanStatusOutput{
 		SessionID: input.SessionID,
-		Status:    "unknown",
+		Status:    status,
 		Steps:     []PlanStep{},
 	}, nil
 }

View file

@@ -1,18 +1,26 @@
 // SPDX-License-Identifier: EUPL-1.2
 
 package ide
 
 import (
 	"context"
+	"sync"
 	"time"
 
+	coremcp "dappco.re/go/mcp/pkg/mcp"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )
 
 // Dashboard tool input/output types.
 
 // DashboardOverviewInput is the input for ide_dashboard_overview.
+//
+//	input := DashboardOverviewInput{}
 type DashboardOverviewInput struct{}
 
 // DashboardOverview contains high-level platform stats.
+//
+//	overview := DashboardOverview{Repos: 12, ActiveSessions: 3}
 type DashboardOverview struct {
 	Repos    int `json:"repos"`
 	Services int `json:"services"`
@@ -22,16 +30,22 @@ type DashboardOverview struct {
 }
 
 // DashboardOverviewOutput is the output for ide_dashboard_overview.
+//
+//	// out.Overview.BridgeOnline reports bridge connectivity
 type DashboardOverviewOutput struct {
 	Overview DashboardOverview `json:"overview"`
 }
 
 // DashboardActivityInput is the input for ide_dashboard_activity.
+//
+//	input := DashboardActivityInput{Limit: 25}
 type DashboardActivityInput struct {
 	Limit int `json:"limit,omitempty"`
 }
 
 // ActivityEvent represents a single activity feed item.
+//
+//	event := ActivityEvent{Type: "build", Message: "build finished"}
 type ActivityEvent struct {
 	Type    string `json:"type"`
 	Message string `json:"message"`
@@ -39,16 +53,22 @@ type ActivityEvent struct {
 }
 
 // DashboardActivityOutput is the output for ide_dashboard_activity.
+//
+//	// out.Events contains the recent activity feed
 type DashboardActivityOutput struct {
 	Events []ActivityEvent `json:"events"`
 }
 
 // DashboardMetricsInput is the input for ide_dashboard_metrics.
+//
+//	input := DashboardMetricsInput{Period: "24h"}
 type DashboardMetricsInput struct {
 	Period string `json:"period,omitempty"` // "1h", "24h", "7d"
 }
 
 // DashboardMetrics contains aggregate metrics.
+//
+//	metrics := DashboardMetrics{BuildsTotal: 42, SuccessRate: 0.95}
 type DashboardMetrics struct {
 	BuildsTotal   int `json:"buildsTotal"`
 	BuildsSuccess int `json:"buildsSuccess"`
@@ -60,32 +80,88 @@ type DashboardMetrics struct {
 }
 
 // DashboardMetricsOutput is the output for ide_dashboard_metrics.
+//
+//	// out.Metrics summarises the selected time window
 type DashboardMetricsOutput struct {
 	Period  string           `json:"period"`
 	Metrics DashboardMetrics `json:"metrics"`
 }
 
+// DashboardStateInput is the input for ide_dashboard_state.
+//
+//	input := DashboardStateInput{}
+type DashboardStateInput struct{}
+
+// DashboardStateOutput is the output for ide_dashboard_state.
+//
+//	// out.State["theme"] == "dark"
+type DashboardStateOutput struct {
+	State     map[string]any `json:"state"`     // arbitrary key/value map
+	UpdatedAt time.Time      `json:"updatedAt"` // when the state last changed
+}
+
+// DashboardUpdateInput is the input for ide_dashboard_update.
+//
+//	input := DashboardUpdateInput{
+//		State:   map[string]any{"theme": "light", "sidebar": true},
+//		Replace: false,
+//	}
+type DashboardUpdateInput struct {
+	State   map[string]any `json:"state"`             // partial or full state
+	Replace bool           `json:"replace,omitempty"` // true to overwrite, false to merge (default)
+}
+
+// DashboardUpdateOutput is the output for ide_dashboard_update.
+//
+//	// out.State reflects the merged/replaced state
+type DashboardUpdateOutput struct {
+	State     map[string]any `json:"state"`     // merged state after the update
+	UpdatedAt time.Time      `json:"updatedAt"` // when the state was applied
+}
+
+// dashboardStateStore holds the mutable dashboard UI state shared between the
+// IDE frontend and MCP callers. Access is guarded by dashboardStateMu.
+var (
+	dashboardStateMu      sync.RWMutex
+	dashboardStateStore   = map[string]any{}
+	dashboardStateUpdated time.Time
+)
+
-func (s *Subsystem) registerDashboardTools(server *mcp.Server) {
-	mcp.AddTool(server, &mcp.Tool{
+func (s *Subsystem) registerDashboardTools(svc *coremcp.Service) {
+	server := svc.Server()
+	coremcp.AddToolRecorded(svc, server, "ide", &mcp.Tool{
 		Name:        "ide_dashboard_overview",
 		Description: "Get a high-level overview of the platform (repos, services, sessions, builds)",
 	}, s.dashboardOverview)
-	mcp.AddTool(server, &mcp.Tool{
+	coremcp.AddToolRecorded(svc, server, "ide", &mcp.Tool{
 		Name:        "ide_dashboard_activity",
 		Description: "Get the recent activity feed",
 	}, s.dashboardActivity)
-	mcp.AddTool(server, &mcp.Tool{
+	coremcp.AddToolRecorded(svc, server, "ide", &mcp.Tool{
 		Name:        "ide_dashboard_metrics",
 		Description: "Get aggregate build and agent metrics for a time period",
 	}, s.dashboardMetrics)
+	coremcp.AddToolRecorded(svc, server, "ide", &mcp.Tool{
+		Name:        "ide_dashboard_state",
+		Description: "Get the current dashboard UI state (arbitrary key/value map shared with the IDE).",
+	}, s.dashboardState)
+	coremcp.AddToolRecorded(svc, server, "ide", &mcp.Tool{
+		Name:        "ide_dashboard_update",
+		Description: "Update the dashboard UI state. Merges into existing state by default; set replace=true to overwrite.",
+	}, s.dashboardUpdate)
 }
 
-// dashboardOverview returns a platform overview with bridge status.
-// Stub implementation: only BridgeOnline is live; other fields return zero values. Awaiting Laravel backend.
+// dashboardOverview returns a platform overview with bridge status and
+// locally tracked session counts.
 func (s *Subsystem) dashboardOverview(_ context.Context, _ *mcp.CallToolRequest, _ DashboardOverviewInput) (*mcp.CallToolResult, DashboardOverviewOutput, error) {
 	connected := s.bridge != nil && s.bridge.Connected()
+	activeSessions := len(s.listSessions())
+	builds := s.listBuilds("", 0)
+	repos := s.buildRepoCount()
 	if s.bridge != nil {
 		_ = s.bridge.Send(BridgeMessage{Type: "dashboard_overview"})
@@ -93,40 +169,172 @@ func (s *Subsystem) dashboardOverview(_ context.Context, _ *mcp.CallToolRequest,
 	}
 	return nil, DashboardOverviewOutput{
 		Overview: DashboardOverview{
+			Repos:          repos,
+			Services:       len(builds),
+			ActiveSessions: activeSessions,
+			RecentBuilds:   len(builds),
 			BridgeOnline:   connected,
 		},
 	}, nil
 }
 
-// dashboardActivity requests the activity feed from the Laravel backend.
-// Stub implementation: sends request via bridge, returns empty events. Awaiting Laravel backend.
+// dashboardActivity returns the local activity feed and refreshes the Laravel
+// backend when the bridge is available.
 func (s *Subsystem) dashboardActivity(_ context.Context, _ *mcp.CallToolRequest, input DashboardActivityInput) (*mcp.CallToolResult, DashboardActivityOutput, error) {
-	if s.bridge == nil {
-		return nil, DashboardActivityOutput{}, errBridgeNotAvailable
+	if s.bridge != nil {
+		_ = s.bridge.Send(BridgeMessage{
+			Type: "dashboard_activity",
+			Data: map[string]any{"limit": input.Limit},
+		})
 	}
-	_ = s.bridge.Send(BridgeMessage{
-		Type: "dashboard_activity",
-		Data: map[string]any{"limit": input.Limit},
-	})
-	return nil, DashboardActivityOutput{Events: []ActivityEvent{}}, nil
+	return nil, DashboardActivityOutput{Events: s.activityFeed(input.Limit)}, nil
 }
 
-// dashboardMetrics requests aggregate metrics from the Laravel backend.
-// Stub implementation: sends request via bridge, returns zero metrics. Awaiting Laravel backend.
+// dashboardMetrics returns local session and message counts and refreshes the
+// Laravel backend when the bridge is available.
 func (s *Subsystem) dashboardMetrics(_ context.Context, _ *mcp.CallToolRequest, input DashboardMetricsInput) (*mcp.CallToolResult, DashboardMetricsOutput, error) {
-	if s.bridge == nil {
-		return nil, DashboardMetricsOutput{}, errBridgeNotAvailable
-	}
 	period := input.Period
 	if period == "" {
 		period = "24h"
 	}
-	_ = s.bridge.Send(BridgeMessage{
-		Type: "dashboard_metrics",
-		Data: map[string]any{"period": period},
-	})
+	if s.bridge != nil {
+		_ = s.bridge.Send(BridgeMessage{
+			Type: "dashboard_metrics",
+			Data: map[string]any{"period": period},
+		})
+	}
+
+	s.stateMu.Lock()
+	sessions := len(s.sessions)
+	messages := 0
+	builds := make([]BuildInfo, 0, len(s.buildOrder))
+	for _, id := range s.buildOrder {
+		if build, ok := s.builds[id]; ok {
+			builds = append(builds, build)
+		}
+	}
+	for _, history := range s.chats {
+		messages += len(history)
+	}
+	s.stateMu.Unlock()
+
+	total := len(builds)
+	success := 0
+	failed := 0
+	var durationTotal time.Duration
+	var durationCount int
+	for _, build := range builds {
+		switch build.Status {
+		case "success", "succeeded", "completed", "passed":
+			success++
+		case "failed", "error":
+			failed++
+		}
+		if build.Duration == "" {
+			continue
+		}
+		if d, err := time.ParseDuration(build.Duration); err == nil {
+			durationTotal += d
+			durationCount++
+		}
+	}
+	avgBuildTime := ""
+	if durationCount > 0 {
+		avgBuildTime = (durationTotal / time.Duration(durationCount)).String()
+	}
+	successRate := 0.0
+	if total > 0 {
+		successRate = float64(success) / float64(total)
+	}
 	return nil, DashboardMetricsOutput{
 		Period:  period,
-		Metrics: DashboardMetrics{},
+		Metrics: DashboardMetrics{
+			BuildsTotal:   total,
+			BuildsSuccess: success,
+			BuildsFailed:  failed,
+			AvgBuildTime:  avgBuildTime,
+			AgentSessions: sessions,
+			MessagesTotal: messages,
+			SuccessRate:   successRate,
+		},
 	}, nil
 }
+
+// dashboardState returns the current dashboard UI state as a snapshot.
+//
+//	out := s.dashboardState(ctx, nil, DashboardStateInput{})
+func (s *Subsystem) dashboardState(_ context.Context, _ *mcp.CallToolRequest, _ DashboardStateInput) (*mcp.CallToolResult, DashboardStateOutput, error) {
+	dashboardStateMu.RLock()
+	defer dashboardStateMu.RUnlock()
+	snapshot := make(map[string]any, len(dashboardStateStore))
+	for k, v := range dashboardStateStore {
+		snapshot[k] = v
+	}
+	return nil, DashboardStateOutput{
+		State:     snapshot,
+		UpdatedAt: dashboardStateUpdated,
+	}, nil
+}
+
+// dashboardUpdate merges or replaces the dashboard UI state and emits an
+// activity event so the IDE can react to the change.
+//
+//	out := s.dashboardUpdate(ctx, nil, DashboardUpdateInput{State: map[string]any{"theme": "dark"}})
+func (s *Subsystem) dashboardUpdate(ctx context.Context, _ *mcp.CallToolRequest, input DashboardUpdateInput) (*mcp.CallToolResult, DashboardUpdateOutput, error) {
+	now := time.Now()
+	dashboardStateMu.Lock()
+	if input.Replace || dashboardStateStore == nil {
+		dashboardStateStore = make(map[string]any, len(input.State))
+	}
+	for k, v := range input.State {
+		dashboardStateStore[k] = v
+	}
+	dashboardStateUpdated = now
+	snapshot := make(map[string]any, len(dashboardStateStore))
+	for k, v := range dashboardStateStore {
+		snapshot[k] = v
+	}
+	dashboardStateMu.Unlock()
+
+	// Record the change on the activity feed so ide_dashboard_activity
+	// reflects state transitions alongside build/session events.
+	s.recordActivity("dashboard_state", "dashboard state updated")
+
+	// Push the update over the Laravel bridge when available so web clients
+	// stay in sync with desktop tooling.
+	if s.bridge != nil {
+		_ = s.bridge.Send(BridgeMessage{
+			Type: "dashboard_update",
+			Data: snapshot,
+		})
+	}
+
+	// Surface the change on the shared MCP notifier so connected sessions
+	// receive a JSON-RPC notification alongside the tool response.
+	if s.notifier != nil {
+		s.notifier.ChannelSend(ctx, "dashboard.state.updated", map[string]any{
+			"state":     snapshot,
+			"updatedAt": now,
+		})
+	}
+
+	return nil, DashboardUpdateOutput{
+		State:     snapshot,
+		UpdatedAt: now,
+	}, nil
+}
+
+// resetDashboardState clears the shared dashboard state. Intended for tests.
+func resetDashboardState() {
+	dashboardStateMu.Lock()
+	defer dashboardStateMu.Unlock()
+	dashboardStateStore = map[string]any{}
+	dashboardStateUpdated = time.Time{}
+}

View file

@ -8,14 +8,25 @@ import (
"testing" "testing"
"time" "time"
"forge.lthn.ai/core/go-ws" coremcp "dappco.re/go/mcp/pkg/mcp"
"dappco.re/go/ws"
) )
// --- Helpers --- // --- Helpers ---
// newNilBridgeSubsystem returns a Subsystem with no hub/bridge (headless mode). // newNilBridgeSubsystem returns a Subsystem with no hub/bridge (headless mode).
func newNilBridgeSubsystem() *Subsystem { func newNilBridgeSubsystem() *Subsystem {
return New(nil) return New(nil, Config{})
}
type recordingNotifier struct {
channel string
data any
}
func (r *recordingNotifier) ChannelSend(_ context.Context, channel string, data any) {
r.channel = channel
r.data = data
} }
// newConnectedSubsystem returns a Subsystem with a connected bridge and a // newConnectedSubsystem returns a Subsystem with a connected bridge and a
@ -42,10 +53,10 @@ func newConnectedSubsystem(t *testing.T) (*Subsystem, context.CancelFunc, *httpt
ctx, cancel := context.WithCancel(context.Background()) ctx, cancel := context.WithCancel(context.Background())
go hub.Run(ctx) go hub.Run(ctx)
sub := New(hub, sub := New(hub, Config{
WithLaravelURL(wsURL(ts)), LaravelWSURL: wsURL(ts),
WithReconnectInterval(50*time.Millisecond), ReconnectInterval: 50 * time.Millisecond,
) })
sub.StartBridge(ctx) sub.StartBridge(ctx)
waitConnected(t, sub.Bridge(), 2*time.Second) waitConnected(t, sub.Bridge(), 2*time.Second)
@ -90,56 +101,90 @@ func TestChatSend_Good_Connected(t *testing.T) {
} }
} }
// TestChatHistory_Bad_NilBridge verifies chatHistory returns error without a bridge. // TestChatHistory_Good_NilBridge verifies chatHistory returns local cache without a bridge.
func TestChatHistory_Bad_NilBridge(t *testing.T) { func TestChatHistory_Good_NilBridge(t *testing.T) {
sub := newNilBridgeSubsystem() sub := newNilBridgeSubsystem()
_, _, err := sub.chatHistory(context.Background(), nil, ChatHistoryInput{ _, out, err := sub.chatHistory(context.Background(), nil, ChatHistoryInput{
SessionID: "s1", SessionID: "s1",
}) })
if err == nil { if err != nil {
t.Error("expected error when bridge is nil") t.Fatalf("chatHistory failed: %v", err)
}
if out.SessionID != "s1" {
t.Errorf("expected sessionId 's1', got %q", out.SessionID)
}
if out.Messages == nil {
t.Error("expected non-nil messages slice")
} }
} }
// TestChatHistory_Good_Connected verifies chatHistory succeeds and returns stored messages.
func TestChatHistory_Good_Connected(t *testing.T) {
	sub, cancel, ts := newConnectedSubsystem(t)
	defer cancel()
	defer ts.Close()
	_, _, err := sub.sessionCreate(context.Background(), nil, SessionCreateInput{
		Name: "history-test",
	})
	if err != nil {
		t.Fatalf("sessionCreate failed: %v", err)
	}
	_, _, err = sub.chatSend(context.Background(), nil, ChatSendInput{
		SessionID: sub.listSessions()[0].ID,
		Message:   "hello history",
	})
	if err != nil {
		t.Fatalf("chatSend failed: %v", err)
	}
	_, out, err := sub.chatHistory(context.Background(), nil, ChatHistoryInput{
		SessionID: sub.listSessions()[0].ID,
		Limit:     50,
	})
	if err != nil {
		t.Fatalf("chatHistory failed: %v", err)
	}
	if out.SessionID != sub.listSessions()[0].ID {
		t.Errorf("expected sessionId %q, got %q", sub.listSessions()[0].ID, out.SessionID)
	}
	if out.Messages == nil {
		t.Error("expected non-nil messages slice")
	}
	if len(out.Messages) != 1 {
		t.Errorf("expected 1 stored message, got %d", len(out.Messages))
	}
	if out.Messages[0].Content != "hello history" {
		t.Errorf("expected stored message content %q, got %q", "hello history", out.Messages[0].Content)
	}
}
// TestSessionList_Good_NilBridge verifies sessionList returns local sessions without a bridge.
func TestSessionList_Good_NilBridge(t *testing.T) {
	sub := newNilBridgeSubsystem()
	_, out, err := sub.sessionList(context.Background(), nil, SessionListInput{})
	if err != nil {
		t.Fatalf("sessionList failed: %v", err)
	}
	if out.Sessions == nil {
		t.Error("expected non-nil sessions slice")
	}
}
// TestSessionList_Good_Connected verifies sessionList returns stored sessions.
func TestSessionList_Good_Connected(t *testing.T) {
	sub, cancel, ts := newConnectedSubsystem(t)
	defer cancel()
	defer ts.Close()
	_, _, err := sub.sessionCreate(context.Background(), nil, SessionCreateInput{
		Name: "session-list-test",
	})
	if err != nil {
		t.Fatalf("sessionCreate failed: %v", err)
	}
	_, out, err := sub.sessionList(context.Background(), nil, SessionListInput{})
	if err != nil {
		t.Fatalf("sessionList failed: %v", err)
@@ -147,23 +192,32 @@ func TestSessionList_Good_Connected(t *testing.T) {
	if out.Sessions == nil {
		t.Error("expected non-nil sessions slice")
	}
	if len(out.Sessions) != 1 {
		t.Errorf("expected 1 stored session, got %d", len(out.Sessions))
	}
	if out.Sessions[0].ID == "" {
		t.Error("expected stored session to have an ID")
	}
}
// TestSessionCreate_Good_NilBridge verifies sessionCreate stores a local session without a bridge.
func TestSessionCreate_Good_NilBridge(t *testing.T) {
	sub := newNilBridgeSubsystem()
	_, out, err := sub.sessionCreate(context.Background(), nil, SessionCreateInput{
		Name: "test",
	})
	if err != nil {
		t.Fatalf("sessionCreate failed: %v", err)
	}
	if out.Session.Name != "test" {
		t.Errorf("expected session name 'test', got %q", out.Session.Name)
	}
	if out.Session.ID == "" {
		t.Error("expected non-empty session ID")
	}
}
// TestSessionCreate_Good_Connected verifies sessionCreate returns a stored session.
func TestSessionCreate_Good_Connected(t *testing.T) {
	sub, cancel, ts := newConnectedSubsystem(t)
	defer cancel()
@@ -184,36 +238,52 @@ func TestSessionCreate_Good_Connected(t *testing.T) {
	if out.Session.CreatedAt.IsZero() {
		t.Error("expected non-zero CreatedAt")
	}
	if out.Session.ID == "" {
		t.Error("expected non-empty session ID")
	}
}
// TestPlanStatus_Good_NilBridge verifies planStatus returns local status without a bridge.
func TestPlanStatus_Good_NilBridge(t *testing.T) {
	sub := newNilBridgeSubsystem()
	_, out, err := sub.planStatus(context.Background(), nil, PlanStatusInput{
		SessionID: "s1",
	})
	if err != nil {
		t.Fatalf("planStatus failed: %v", err)
	}
	if out.SessionID != "s1" {
		t.Errorf("expected sessionId 's1', got %q", out.SessionID)
	}
	if out.Status != "unknown" {
		t.Errorf("expected status 'unknown', got %q", out.Status)
	}
}
// TestPlanStatus_Good_Connected verifies planStatus returns a status for a known session.
func TestPlanStatus_Good_Connected(t *testing.T) {
	sub, cancel, ts := newConnectedSubsystem(t)
	defer cancel()
	defer ts.Close()
	_, createOut, err := sub.sessionCreate(context.Background(), nil, SessionCreateInput{
		Name: "plan-status-test",
	})
	if err != nil {
		t.Fatalf("sessionCreate failed: %v", err)
	}
	_, out, err := sub.planStatus(context.Background(), nil, PlanStatusInput{
		SessionID: createOut.Session.ID,
	})
	if err != nil {
		t.Fatalf("planStatus failed: %v", err)
	}
	if out.SessionID != createOut.Session.ID {
		t.Errorf("expected sessionId %q, got %q", createOut.Session.ID, out.SessionID)
	}
	if out.Status != "creating" {
		t.Errorf("expected status 'creating', got %q", out.Status)
	}
	if out.Steps == nil {
		t.Error("expected non-nil steps slice")
@@ -222,14 +292,20 @@ func TestPlanStatus_Good_Connected(t *testing.T) {

// --- 4.3: Build tool tests ---

// TestBuildStatus_Good_NilBridge verifies buildStatus returns a local stub without a bridge.
func TestBuildStatus_Good_NilBridge(t *testing.T) {
	sub := newNilBridgeSubsystem()
	_, out, err := sub.buildStatus(context.Background(), nil, BuildStatusInput{
		BuildID: "b1",
	})
	if err != nil {
		t.Fatalf("buildStatus failed: %v", err)
	}
	if out.Build.ID != "b1" {
		t.Errorf("expected build ID 'b1', got %q", out.Build.ID)
	}
	if out.Build.Status != "unknown" {
		t.Errorf("expected status 'unknown', got %q", out.Build.Status)
	}
}
@@ -253,15 +329,74 @@ func TestBuildStatus_Good_Connected(t *testing.T) {
	}
}
// TestBuildStatus_Good_EmitsLifecycle verifies bridge updates broadcast build lifecycle events.
func TestBuildStatus_Good_EmitsLifecycle(t *testing.T) {
	sub := newNilBridgeSubsystem()
	notifier := &recordingNotifier{}
	sub.SetNotifier(notifier)
	sub.handleBridgeMessage(BridgeMessage{
		Type: "build_status",
		Data: map[string]any{
			"buildId": "build-1",
			"repo":    "core-php",
			"branch":  "main",
			"status":  "success",
		},
	})
	if notifier.channel != coremcp.ChannelBuildComplete {
		t.Fatalf("expected %s channel, got %q", coremcp.ChannelBuildComplete, notifier.channel)
	}
	payload, ok := notifier.data.(map[string]any)
	if !ok {
		t.Fatalf("expected payload map, got %T", notifier.data)
	}
	if payload["id"] != "build-1" {
		t.Fatalf("expected build id build-1, got %v", payload["id"])
	}
}

// TestBuildStatus_Good_EmitsStartLifecycle verifies running builds broadcast a start event.
func TestBuildStatus_Good_EmitsStartLifecycle(t *testing.T) {
	sub := newNilBridgeSubsystem()
	notifier := &recordingNotifier{}
	sub.SetNotifier(notifier)
	sub.handleBridgeMessage(BridgeMessage{
		Type: "build_status",
		Data: map[string]any{
			"buildId": "build-2",
			"repo":    "core-php",
			"branch":  "main",
			"status":  "running",
		},
	})
	if notifier.channel != coremcp.ChannelBuildStart {
		t.Fatalf("expected %s channel, got %q", coremcp.ChannelBuildStart, notifier.channel)
	}
	payload, ok := notifier.data.(map[string]any)
	if !ok {
		t.Fatalf("expected payload map, got %T", notifier.data)
	}
	if payload["id"] != "build-2" {
		t.Fatalf("expected build id build-2, got %v", payload["id"])
	}
}
// TestBuildList_Good_NilBridge verifies buildList returns an empty list without a bridge.
func TestBuildList_Good_NilBridge(t *testing.T) {
	sub := newNilBridgeSubsystem()
	_, out, err := sub.buildList(context.Background(), nil, BuildListInput{
		Repo:  "core-php",
		Limit: 10,
	})
	if err != nil {
		t.Fatalf("buildList failed: %v", err)
	}
	if out.Builds == nil {
		t.Error("expected non-nil builds slice")
	}
}
@@ -286,15 +421,21 @@ func TestBuildList_Good_Connected(t *testing.T) {
	}
}
// TestBuildLogs_Good_NilBridge verifies buildLogs returns empty lines without a bridge.
func TestBuildLogs_Good_NilBridge(t *testing.T) {
	sub := newNilBridgeSubsystem()
	_, out, err := sub.buildLogs(context.Background(), nil, BuildLogsInput{
		BuildID: "b1",
		Tail:    100,
	})
	if err != nil {
		t.Fatalf("buildLogs failed: %v", err)
	}
	if out.BuildID != "b1" {
		t.Errorf("expected buildId 'b1', got %q", out.BuildID)
	}
	if out.Lines == nil {
		t.Error("expected non-nil lines slice")
	}
}
@@ -337,12 +478,19 @@ func TestDashboardOverview_Good_NilBridge(t *testing.T) {
	}
}
// TestDashboardOverview_Good_Connected verifies dashboardOverview reports bridge online and local sessions.
func TestDashboardOverview_Good_Connected(t *testing.T) {
	sub, cancel, ts := newConnectedSubsystem(t)
	defer cancel()
	defer ts.Close()
	_, _, err := sub.sessionCreate(context.Background(), nil, SessionCreateInput{
		Name: "dashboard-test",
	})
	if err != nil {
		t.Fatalf("sessionCreate failed: %v", err)
	}
	_, out, err := sub.dashboardOverview(context.Background(), nil, DashboardOverviewInput{})
	if err != nil {
		t.Fatalf("dashboardOverview failed: %v", err)
@@ -350,25 +498,38 @@ func TestDashboardOverview_Good_Connected(t *testing.T) {
	if !out.Overview.BridgeOnline {
		t.Error("expected BridgeOnline=true when bridge is connected")
	}
	if out.Overview.ActiveSessions != 1 {
		t.Errorf("expected 1 active session, got %d", out.Overview.ActiveSessions)
	}
}
// TestDashboardActivity_Good_NilBridge verifies dashboardActivity returns local activity without bridge.
func TestDashboardActivity_Good_NilBridge(t *testing.T) {
	sub := newNilBridgeSubsystem()
	_, out, err := sub.dashboardActivity(context.Background(), nil, DashboardActivityInput{
		Limit: 10,
	})
	if err != nil {
		t.Fatalf("dashboardActivity failed: %v", err)
	}
	if out.Events == nil {
		t.Error("expected non-nil events slice")
	}
}
// TestDashboardActivity_Good_Connected verifies dashboardActivity returns stored events.
func TestDashboardActivity_Good_Connected(t *testing.T) {
	sub, cancel, ts := newConnectedSubsystem(t)
	defer cancel()
	defer ts.Close()
	_, _, err := sub.sessionCreate(context.Background(), nil, SessionCreateInput{
		Name: "activity-test",
	})
	if err != nil {
		t.Fatalf("sessionCreate failed: %v", err)
	}
	_, out, err := sub.dashboardActivity(context.Background(), nil, DashboardActivityInput{
		Limit: 20,
	})
@@ -378,19 +539,25 @@ func TestDashboardActivity_Good_Connected(t *testing.T) {
	if out.Events == nil {
		t.Error("expected non-nil events slice")
	}
	if len(out.Events) != 1 {
		t.Errorf("expected 1 stored event, got %d", len(out.Events))
	}
	if len(out.Events) > 0 && out.Events[0].Type != "session_create" {
		t.Errorf("expected first event type 'session_create', got %q", out.Events[0].Type)
	}
}
// TestDashboardMetrics_Good_NilBridge verifies dashboardMetrics returns local metrics without bridge.
func TestDashboardMetrics_Good_NilBridge(t *testing.T) {
	sub := newNilBridgeSubsystem()
	_, out, err := sub.dashboardMetrics(context.Background(), nil, DashboardMetricsInput{
		Period: "1h",
	})
	if err != nil {
		t.Fatalf("dashboardMetrics failed: %v", err)
	}
	if out.Period != "1h" {
		t.Errorf("expected period '1h', got %q", out.Period)
	}
}
@@ -690,7 +857,7 @@ func TestSubsystem_Good_RegisterTools(t *testing.T) {
	// RegisterTools requires a real mcp.Server which is complex to construct
	// in isolation. This test verifies the Subsystem can be created and
	// the Bridge/Shutdown path works end-to-end.
	sub := New(nil, Config{})
	if sub.Bridge() != nil {
		t.Error("expected nil bridge with nil hub")
	}
@@ -701,20 +868,20 @@ func TestSubsystem_Good_RegisterTools(t *testing.T) {

// TestSubsystem_Good_StartBridgeNilHub verifies StartBridge is a no-op with nil hub.
func TestSubsystem_Good_StartBridgeNilHub(t *testing.T) {
	sub := New(nil, Config{})
	// Should not panic
	sub.StartBridge(context.Background())
}

// TestSubsystem_Good_WithConfig verifies the Config DTO applies correctly.
func TestSubsystem_Good_WithConfig(t *testing.T) {
	hub := ws.NewHub()
	sub := New(hub, Config{
		LaravelWSURL:      "ws://custom:1234/ws",
		WorkspaceRoot:     "/tmp/test",
		ReconnectInterval: 5 * time.Second,
		Token:             "secret-123",
	})
	if sub.cfg.LaravelWSURL != "ws://custom:1234/ws" {
		t.Errorf("expected custom URL, got %q", sub.cfg.LaravelWSURL)
	}
@@ -761,7 +928,10 @@ func TestChatSend_Good_BridgeMessageType(t *testing.T) {
	ctx := t.Context()
	go hub.Run(ctx)
	sub := New(hub, Config{
		LaravelWSURL:      wsURL(ts),
		ReconnectInterval: 50 * time.Millisecond,
	})
	sub.StartBridge(ctx)
	waitConnected(t, sub.Bridge(), 2*time.Second)
@@ -779,3 +949,76 @@ func TestChatSend_Good_BridgeMessageType(t *testing.T) {
		t.Fatal("timed out waiting for bridge message")
	}
}
// TestToolsDashboard_DashboardState_Good returns an empty state when the
// store has not been touched.
func TestToolsDashboard_DashboardState_Good(t *testing.T) {
	t.Cleanup(resetDashboardState)
	sub := newNilBridgeSubsystem()
	_, out, err := sub.dashboardState(context.Background(), nil, DashboardStateInput{})
	if err != nil {
		t.Fatalf("dashboardState failed: %v", err)
	}
	if len(out.State) != 0 {
		t.Fatalf("expected empty state, got %v", out.State)
	}
}

// TestToolsDashboard_DashboardUpdate_Good merges the supplied state into the
// shared store and reflects it back on a subsequent dashboardState call.
func TestToolsDashboard_DashboardUpdate_Good(t *testing.T) {
	t.Cleanup(resetDashboardState)
	sub := newNilBridgeSubsystem()
	_, updateOut, err := sub.dashboardUpdate(context.Background(), nil, DashboardUpdateInput{
		State: map[string]any{"theme": "dark"},
	})
	if err != nil {
		t.Fatalf("dashboardUpdate failed: %v", err)
	}
	if updateOut.State["theme"] != "dark" {
		t.Fatalf("expected theme 'dark', got %v", updateOut.State["theme"])
	}
	_, readOut, err := sub.dashboardState(context.Background(), nil, DashboardStateInput{})
	if err != nil {
		t.Fatalf("dashboardState failed: %v", err)
	}
	if readOut.State["theme"] != "dark" {
		t.Fatalf("expected persisted theme 'dark', got %v", readOut.State["theme"])
	}
	if readOut.UpdatedAt.IsZero() {
		t.Fatal("expected non-zero UpdatedAt after update")
	}
}

// TestToolsDashboard_DashboardUpdate_Ugly replaces (not merges) prior state
// when Replace=true.
func TestToolsDashboard_DashboardUpdate_Ugly(t *testing.T) {
	t.Cleanup(resetDashboardState)
	sub := newNilBridgeSubsystem()
	_, _, err := sub.dashboardUpdate(context.Background(), nil, DashboardUpdateInput{
		State: map[string]any{"theme": "dark", "sidebar": true},
	})
	if err != nil {
		t.Fatalf("seed dashboardUpdate failed: %v", err)
	}
	_, out, err := sub.dashboardUpdate(context.Background(), nil, DashboardUpdateInput{
		State:   map[string]any{"theme": "light"},
		Replace: true,
	})
	if err != nil {
		t.Fatalf("replace dashboardUpdate failed: %v", err)
	}
	if _, ok := out.State["sidebar"]; ok {
		t.Fatal("expected sidebar to be removed after replace")
	}
	if out.State["theme"] != "light" {
		t.Fatalf("expected theme 'light', got %v", out.State["theme"])
	}
}

pkg/mcp/ipc.go (new file, +18)
@@ -0,0 +1,18 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp

import (
	"context"

	core "dappco.re/go/core"
)

func (s *Service) handleChannelPushIPC(ctx context.Context, ev ChannelPush) core.Result {
	if core.Trim(ev.Channel) == "" {
		return core.Result{Value: core.E("mcp.HandleIPCEvents", "channel is required", nil), OK: false}
	}
	s.ChannelSend(ctx, ev.Channel, ev.Data)
	return core.Result{OK: true}
}

pkg/mcp/ipc_test.go (new file, +111)
@@ -0,0 +1,111 @@
package mcp

import (
	"testing"
	"time"
)

func TestIPC_HandleIPCEvents_Good(t *testing.T) {
	svc, err := New(Options{})
	if err != nil {
		t.Fatalf("New() failed: %v", err)
	}
	cancel, session, clientConn := connectNotificationSession(t, svc)
	defer cancel()
	defer session.Close()
	defer clientConn.Close()
	clientConn.SetDeadline(time.Now().Add(5 * time.Second))
	read := readNotificationMessageUntil(t, clientConn, func(msg map[string]any) bool {
		return msg["method"] == ChannelNotificationMethod
	})
	result := svc.HandleIPCEvents(nil, ChannelPush{
		Channel: "agent.completed",
		Data: map[string]any{
			"repo": "core/mcp",
			"ok":   true,
		},
	})
	if !result.OK {
		t.Fatalf("HandleIPCEvents() returned non-OK result: %#v", result.Value)
	}
	res := <-read
	if res.err != nil {
		t.Fatalf("failed to read channel notification: %v", res.err)
	}
	params, ok := res.msg["params"].(map[string]any)
	if !ok {
		t.Fatalf("expected params object, got %T", res.msg["params"])
	}
	if params["channel"] != "agent.completed" {
		t.Fatalf("expected channel agent.completed, got %#v", params["channel"])
	}
	payload, ok := params["data"].(map[string]any)
	if !ok {
		t.Fatalf("expected data object, got %T", params["data"])
	}
	if payload["repo"] != "core/mcp" || payload["ok"] != true {
		t.Fatalf("unexpected payload: %#v", payload)
	}
}

func TestIPC_HandleIPCEvents_Bad(t *testing.T) {
	svc, err := New(Options{})
	if err != nil {
		t.Fatalf("New() failed: %v", err)
	}
	result := svc.HandleIPCEvents(nil, ChannelPush{
		Channel: " \t ",
		Data:    map[string]any{"ok": false},
	})
	if result.OK {
		t.Fatal("expected empty ChannelPush channel to fail")
	}
	if _, ok := result.Value.(error); !ok {
		t.Fatalf("expected error result value, got %T", result.Value)
	}
}

func TestIPC_HandleIPCEvents_Ugly(t *testing.T) {
	svc, err := New(Options{})
	if err != nil {
		t.Fatalf("New() failed: %v", err)
	}
	cancel, session, clientConn := connectNotificationSession(t, svc)
	defer cancel()
	defer session.Close()
	defer clientConn.Close()
	clientConn.SetDeadline(time.Now().Add(5 * time.Second))
	read := readNotificationMessageUntil(t, clientConn, func(msg map[string]any) bool {
		params, ok := msg["params"].(map[string]any)
		return msg["method"] == ChannelNotificationMethod && ok && params["channel"] == "agent.edge"
	})
	result := svc.HandleIPCEvents(nil, ChannelPush{Channel: "agent.edge"})
	if !result.OK {
		t.Fatalf("HandleIPCEvents() returned non-OK result: %#v", result.Value)
	}
	res := <-read
	if res.err != nil {
		t.Fatalf("failed to read edge notification: %v", res.err)
	}
	params, ok := res.msg["params"].(map[string]any)
	if !ok {
		t.Fatalf("expected params object, got %T", res.msg["params"])
	}
	if _, ok := params["data"]; !ok {
		t.Fatalf("expected data key for nil ChannelPush data: %#v", params)
	}
	if params["data"] != nil {
		t.Fatalf("expected nil data, got %#v", params["data"])
	}
}


@@ -29,9 +29,9 @@ func TestService_Iterators(t *testing.T) {
	}
}

func TestRegistry_SplitTag(t *testing.T) {
	tag := "name,omitempty,json"
	parts := splitTag(tag)
	expected := []string{"name", "omitempty", "json"}
	if !slices.Equal(parts, expected) {


@@ -5,39 +5,44 @@
package mcp

import (
	"cmp"
	"context"
	"iter"
	"net/http"
	"os"
	"path/filepath"
	"slices"
	"strings"
	"sync"

	core "dappco.re/go/core"
	"dappco.re/go/io"
	"dappco.re/go/log"
	"dappco.re/go/process"
	"dappco.re/go/ws"
	"github.com/modelcontextprotocol/go-sdk/mcp"
)
// Service provides a lightweight MCP server with file operations and
// optional subsystems.
// For full GUI features, use the core-gui package.
//
//	svc, err := mcp.New(mcp.Options{WorkspaceRoot: "/home/user/project"})
//	defer svc.Shutdown(ctx)
type Service struct {
	*core.ServiceRuntime[struct{}] // Core access via s.Core()

	server         *mcp.Server
	workspaceRoot  string           // Root directory for file operations (empty = cwd unless Unrestricted)
	medium         io.Medium        // Filesystem medium for sandboxed operations
	subsystems     []Subsystem      // Additional subsystems registered via Options.Subsystems
	logger         *log.Logger      // Logger for tool execution auditing
	processService *process.Service // Process management service (optional)
	wsHub          *ws.Hub          // WebSocket hub for real-time streaming (optional)
	wsServer       *http.Server     // WebSocket HTTP server (optional)
	wsAddr         string           // WebSocket server address
	wsMu           sync.Mutex       // Protects wsServer and wsAddr
	processMu      sync.Mutex       // Protects processMeta
	processMeta    map[string]processRuntime
	tools          []ToolRecord // Parallel tool registry for REST bridge
}
@@ -56,7 +61,7 @@ type Options struct {
	Subsystems []Subsystem // Additional tool groups registered at startup
}

// New creates a new MCP service with file operations and optional subsystems.
//
//	svc, err := mcp.New(mcp.Options{WorkspaceRoot: "."})
func New(opts Options) (*Service, error) {
@@ -67,7 +72,8 @@ func New(opts Options) (*Service, error) {
	server := mcp.NewServer(impl, &mcp.ServerOptions{
		Capabilities: &mcp.ServerCapabilities{
			Resources:    &mcp.ResourceCapabilities{ListChanged: false},
			Tools:        &mcp.ToolCapabilities{ListChanged: false},
			Logging:      &mcp.LoggingCapabilities{},
			Experimental: channelCapability(),
		},
@@ -77,8 +83,8 @@ func New(opts Options) (*Service, error) {
		server:         server,
		processService: opts.ProcessService,
		wsHub:          opts.WSHub,
		logger:         log.Default(),
		processMeta:    make(map[string]processRuntime),
	}
	// Workspace root: unrestricted, explicit root, or default to cwd
@@ -90,39 +96,41 @@ func New(opts Options) (*Service, error) {
		if root == "" {
			cwd, err := os.Getwd()
			if err != nil {
				return nil, core.E("mcp.New", "failed to get working directory", err)
			}
			root = cwd
		}
		abs, err := filepath.Abs(root)
		if err != nil {
			return nil, core.E("mcp.New", "failed to resolve workspace root", err)
		}
		s.workspaceRoot = abs
		m, merr := io.NewSandboxed(abs)
		if merr != nil {
			return nil, core.E("mcp.New", "failed to create workspace medium", merr)
		}
		s.medium = m
	}

	s.registerTools(s.server)
	s.subsystems = make([]Subsystem, 0, len(opts.Subsystems))
	for _, sub := range opts.Subsystems {
		if sub == nil {
			continue
		}
		s.subsystems = append(s.subsystems, sub)
		if sn, ok := sub.(SubsystemWithNotifier); ok {
			sn.SetNotifier(s)
		}
		// Wire channel callback for subsystems that use func-based notification.
		if cw, ok := sub.(SubsystemWithChannelCallback); ok {
			svc := s // capture for closure
			cw.OnChannel(func(ctx context.Context, channel string, data any) {
				svc.ChannelSend(ctx, channel, data)
			})
		}
		sub.RegisterTools(s)
	}

	return s, nil
@@ -134,7 +142,7 @@ func New(opts Options) (*Service, error) {
//	    fmt.Println(sub.Name())
//	}
func (s *Service) Subsystems() []Subsystem {
	return slices.Clone(s.subsystems)
}

// SubsystemsSeq returns an iterator over the registered subsystems.
@@ -143,7 +151,7 @@ func (s *Service) Subsystems() []Subsystem {
//	    fmt.Println(sub.Name())
//	}
func (s *Service) SubsystemsSeq() iter.Seq[Subsystem] {
	return slices.Values(slices.Clone(s.subsystems))
}

// Tools returns all recorded tool metadata.
@@ -152,7 +160,7 @@ func (s *Service) SubsystemsSeq() iter.Seq[Subsystem] {
//	    fmt.Printf("%s (%s): %s\n", t.Name, t.Group, t.Description)
//	}
func (s *Service) Tools() []ToolRecord {
	return slices.Clone(s.tools)
}

// ToolsSeq returns an iterator over all recorded tool metadata.
@@ -161,7 +169,7 @@ func (s *Service) Tools() []ToolRecord {
//	    fmt.Println(rec.Name)
//	}
func (s *Service) ToolsSeq() iter.Seq[ToolRecord] {
	return slices.Values(slices.Clone(s.tools))
}
// Shutdown gracefully shuts down all subsystems that support it.
@@ -170,16 +178,41 @@ func (s *Service) ToolsSeq() iter.Seq[ToolRecord] {
//	defer cancel()
//	if err := svc.Shutdown(ctx); err != nil { log.Fatal(err) }
func (s *Service) Shutdown(ctx context.Context) error {
	var shutdownErr error
	for _, sub := range s.subsystems {
		if sh, ok := sub.(SubsystemWithShutdown); ok {
			if err := sh.Shutdown(ctx); err != nil {
				if shutdownErr == nil {
					shutdownErr = log.E("mcp.Shutdown", "shutdown "+sub.Name(), err)
				}
			}
		}
	}
	if s.wsServer != nil {
		s.wsMu.Lock()
		server := s.wsServer
		s.wsMu.Unlock()
		if err := server.Shutdown(ctx); err != nil && shutdownErr == nil {
			shutdownErr = log.E("mcp.Shutdown", "shutdown websocket server", err)
		}
		s.wsMu.Lock()
		if s.wsServer == server {
			s.wsServer = nil
			s.wsAddr = ""
		}
		s.wsMu.Unlock()
	}
	if err := closeWebviewConnection(); err != nil && shutdownErr == nil {
		shutdownErr = log.E("mcp.Shutdown", "close webview connection", err)
	}
	return shutdownErr
}
// WSHub returns the WebSocket hub, or nil if not configured.
//
@@ -199,7 +232,30 @@ func (s *Service) ProcessService() *process.Service {
	return s.processService
}

// resolveWorkspacePath converts a tool path into the filesystem path the
// service actually operates on.
//
// Sandboxed services keep paths anchored under workspaceRoot. Unrestricted
// services preserve absolute paths and clean relative ones against the current
// working directory.
func (s *Service) resolveWorkspacePath(path string) string {
if path == "" {
return ""
}
if s.workspaceRoot == "" {
return core.CleanPath(path, "/")
}
clean := core.CleanPath(string(filepath.Separator)+path, "/")
clean = core.TrimPrefix(clean, string(filepath.Separator))
if clean == "." || clean == "" {
return s.workspaceRoot
}
return core.Path(s.workspaceRoot, clean)
}
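The anchoring trick above is to clean the path as if it were rooted at `/` before re-joining it under the workspace root, which neutralises `..` segments. A stdlib approximation under stated assumptions (`resolve` is a hypothetical stand-in; `core.CleanPath`/`core.Path` are project helpers whose exact semantics may differ):

```go
package main

import (
	"fmt"
	gopath "path"
	"strings"
)

// resolve anchors p under root: clean it as an absolute path so ".."
// cannot climb above "/", strip the leading slash, then re-join.
func resolve(root, p string) string {
	if p == "" {
		return ""
	}
	clean := strings.TrimPrefix(gopath.Clean("/"+p), "/")
	if clean == "" || clean == "." {
		return root
	}
	return gopath.Join(root, clean)
}

func main() {
	// "../.." is swallowed by Clean at the synthetic root.
	fmt.Println(resolve("/srv/ws", "../../etc/passwd")) // /srv/ws/etc/passwd
	fmt.Println(resolve("/srv/ws", "docs/readme.md"))   // /srv/ws/docs/readme.md
}
```

This matches the test expectations later in the diff, where `"../escape/notes.md"` resolves inside the workspace rather than above it.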
// registerTools adds the built-in tool groups to the MCP server.
func (s *Service) registerTools(server *mcp.Server) {
	// File operations
	addToolRecorded(s, server, "files", &mcp.Tool{
@@ -253,6 +309,14 @@ func (s *Service) registerTools(server *mcp.Server) {
		Name:        "lang_list",
		Description: "Get list of supported programming languages",
	}, s.getSupportedLanguages)
// Additional built-in tool groups.
s.registerMetricsTools(server)
s.registerRAGTools(server)
s.registerProcessTools(server)
s.registerWebviewTools(server)
s.registerWSTools(server)
s.registerWSClientTools(server)
}

// Tool input/output types for MCP file operations.
@@ -402,7 +466,7 @@ type GetSupportedLanguagesInput struct{}

// GetSupportedLanguagesOutput contains the list of supported languages.
//
//	len(out.Languages) == 23
//	out.Languages[0].ID == "typescript"
type GetSupportedLanguagesOutput struct {
	Languages []LanguageInfo `json:"languages"` // all recognised languages
@@ -443,6 +507,10 @@ type EditDiffOutput struct {
// Tool handlers

func (s *Service) readFile(ctx context.Context, req *mcp.CallToolRequest, input ReadFileInput) (*mcp.CallToolResult, ReadFileOutput, error) {
	if s.medium == nil {
		return nil, ReadFileOutput{}, log.E("mcp.readFile", "workspace medium unavailable", nil)
	}
	content, err := s.medium.Read(input.Path)
	if err != nil {
		return nil, ReadFileOutput{}, log.E("mcp.readFile", "failed to read file", err)
@@ -455,6 +523,10 @@ func (s *Service) readFile(ctx context.Context, req *mcp.CallToolRequest, input
	}
}

func (s *Service) writeFile(ctx context.Context, req *mcp.CallToolRequest, input WriteFileInput) (*mcp.CallToolResult, WriteFileOutput, error) {
if s.medium == nil {
return nil, WriteFileOutput{}, log.E("mcp.writeFile", "workspace medium unavailable", nil)
}
	// Medium.Write creates parent directories automatically
	if err := s.medium.Write(input.Path, input.Content); err != nil {
		return nil, WriteFileOutput{}, log.E("mcp.writeFile", "failed to write file", err)
@@ -463,10 +535,17 @@ func (s *Service) writeFile(ctx context.Context, req *mcp.CallToolRequest, input
	}
}

func (s *Service) listDirectory(ctx context.Context, req *mcp.CallToolRequest, input ListDirectoryInput) (*mcp.CallToolResult, ListDirectoryOutput, error) {
	if s.medium == nil {
		return nil, ListDirectoryOutput{}, log.E("mcp.listDirectory", "workspace medium unavailable", nil)
	}
	entries, err := s.medium.List(input.Path)
	if err != nil {
		return nil, ListDirectoryOutput{}, log.E("mcp.listDirectory", "failed to list directory", err)
	}
slices.SortFunc(entries, func(a, b os.DirEntry) int {
return cmp.Compare(a.Name(), b.Name())
})
	result := make([]DirectoryEntry, 0, len(entries))
	for _, e := range entries {
		info, _ := e.Info()
@@ -476,10 +555,7 @@ func (s *Service) listDirectory(ctx context.Context, req *mcp.CallToolRequest, i
		}
		result = append(result, DirectoryEntry{
			Name:  e.Name(),
			Path:  directoryEntryPath(input.Path, e.Name()),
			IsDir: e.IsDir(),
			Size:  size,
		})
@@ -487,7 +563,23 @@ func (s *Service) listDirectory(ctx context.Context, req *mcp.CallToolRequest, i
	return nil, ListDirectoryOutput{Entries: result, Path: input.Path}, nil
}
// directoryEntryPath returns the documented display path for a directory entry.
//
// Example:
//
// directoryEntryPath("src", "main.go") == "src/main.go"
func directoryEntryPath(dir, name string) string {
if dir == "" {
return name
}
return core.JoinPath(dir, name)
}
func (s *Service) createDirectory(ctx context.Context, req *mcp.CallToolRequest, input CreateDirectoryInput) (*mcp.CallToolResult, CreateDirectoryOutput, error) {
if s.medium == nil {
return nil, CreateDirectoryOutput{}, log.E("mcp.createDirectory", "workspace medium unavailable", nil)
}
	if err := s.medium.EnsureDir(input.Path); err != nil {
		return nil, CreateDirectoryOutput{}, log.E("mcp.createDirectory", "failed to create directory", err)
	}
@@ -495,6 +587,10 @@ func (s *Service) createDirectory(ctx context.Context, req *mcp.CallToolRequest,
}

func (s *Service) deleteFile(ctx context.Context, req *mcp.CallToolRequest, input DeleteFileInput) (*mcp.CallToolResult, DeleteFileOutput, error) {
if s.medium == nil {
return nil, DeleteFileOutput{}, log.E("mcp.deleteFile", "workspace medium unavailable", nil)
}
	if err := s.medium.Delete(input.Path); err != nil {
		return nil, DeleteFileOutput{}, log.E("mcp.deleteFile", "failed to delete file", err)
	}
@@ -502,6 +598,10 @@ func (s *Service) deleteFile(ctx context.Context, req *mcp.CallToolRequest, inpu
}

func (s *Service) renameFile(ctx context.Context, req *mcp.CallToolRequest, input RenameFileInput) (*mcp.CallToolResult, RenameFileOutput, error) {
if s.medium == nil {
return nil, RenameFileOutput{}, log.E("mcp.renameFile", "workspace medium unavailable", nil)
}
	if err := s.medium.Rename(input.OldPath, input.NewPath); err != nil {
		return nil, RenameFileOutput{}, log.E("mcp.renameFile", "failed to rename file", err)
	}
@@ -509,21 +609,22 @@ func (s *Service) renameFile(ctx context.Context, req *mcp.CallToolRequest, inpu
}

func (s *Service) fileExists(ctx context.Context, req *mcp.CallToolRequest, input FileExistsInput) (*mcp.CallToolResult, FileExistsOutput, error) {
	if s.medium == nil {
		return nil, FileExistsOutput{}, log.E("mcp.fileExists", "workspace medium unavailable", nil)
	}
	info, err := s.medium.Stat(input.Path)
	if err != nil {
		if core.Is(err, os.ErrNotExist) {
			return nil, FileExistsOutput{Exists: false, IsDir: false, Path: input.Path}, nil
		}
		return nil, FileExistsOutput{}, log.E("mcp.fileExists", "failed to stat path", err)
	}
	return nil, FileExistsOutput{
		Exists: true,
		IsDir:  info.IsDir(),
		Path:   input.Path,
	}, nil
}
func (s *Service) detectLanguage(ctx context.Context, req *mcp.CallToolRequest, input DetectLanguageInput) (*mcp.CallToolResult, DetectLanguageOutput, error) {
@@ -532,27 +633,14 @@ func (s *Service) detectLanguage(ctx context.Context, req *mcp.CallToolRequest,
}

func (s *Service) getSupportedLanguages(ctx context.Context, req *mcp.CallToolRequest, input GetSupportedLanguagesInput) (*mcp.CallToolResult, GetSupportedLanguagesOutput, error) {
	return nil, GetSupportedLanguagesOutput{Languages: supportedLanguages()}, nil
}
func (s *Service) editDiff(ctx context.Context, req *mcp.CallToolRequest, input EditDiffInput) (*mcp.CallToolResult, EditDiffOutput, error) {
	if s.medium == nil {
		return nil, EditDiffOutput{}, log.E("mcp.editDiff", "workspace medium unavailable", nil)
	}
	if input.OldString == "" {
		return nil, EditDiffOutput{}, log.E("mcp.editDiff", "old_string cannot be empty", nil)
	}
@@ -565,16 +653,16 @@ func (s *Service) editDiff(ctx context.Context, req *mcp.CallToolRequest, input
	count := 0
	if input.ReplaceAll {
		count = countOccurrences(content, input.OldString)
		if count == 0 {
			return nil, EditDiffOutput{}, log.E("mcp.editDiff", "old_string not found in file", nil)
		}
		content = core.Replace(content, input.OldString, input.NewString)
	} else {
		if !core.Contains(content, input.OldString) {
			return nil, EditDiffOutput{}, log.E("mcp.editDiff", "old_string not found in file", nil)
		}
		content = replaceFirst(content, input.OldString, input.NewString)
		count = 1
	}
@@ -591,58 +679,79 @@ func (s *Service) editDiff(ctx context.Context, req *mcp.CallToolRequest, input
// detectLanguageFromPath maps file extensions to language IDs.
func detectLanguageFromPath(path string) string {
	if core.PathBase(path) == "Dockerfile" {
		return "dockerfile"
	}
	ext := core.PathExt(path)
	if lang, ok := languageByExtension[ext]; ok {
		return lang
	}
	return "plaintext"
}
var languageByExtension = map[string]string{
".ts": "typescript",
".tsx": "typescript",
".js": "javascript",
".jsx": "javascript",
".go": "go",
".py": "python",
".rs": "rust",
".rb": "ruby",
".java": "java",
".php": "php",
".c": "c",
".h": "c",
".cpp": "cpp",
".hpp": "cpp",
".cc": "cpp",
".cxx": "cpp",
".cs": "csharp",
".html": "html",
".htm": "html",
".css": "css",
".scss": "scss",
".json": "json",
".yaml": "yaml",
".yml": "yaml",
".xml": "xml",
".md": "markdown",
".markdown": "markdown",
".sql": "sql",
".sh": "shell",
".bash": "shell",
".swift": "swift",
".kt": "kotlin",
".kts": "kotlin",
}
func supportedLanguages() []LanguageInfo {
return []LanguageInfo{
{ID: "typescript", Name: "TypeScript", Extensions: []string{".ts", ".tsx"}},
{ID: "javascript", Name: "JavaScript", Extensions: []string{".js", ".jsx"}},
{ID: "go", Name: "Go", Extensions: []string{".go"}},
{ID: "python", Name: "Python", Extensions: []string{".py"}},
{ID: "rust", Name: "Rust", Extensions: []string{".rs"}},
{ID: "ruby", Name: "Ruby", Extensions: []string{".rb"}},
{ID: "java", Name: "Java", Extensions: []string{".java"}},
{ID: "php", Name: "PHP", Extensions: []string{".php"}},
{ID: "c", Name: "C", Extensions: []string{".c", ".h"}},
{ID: "cpp", Name: "C++", Extensions: []string{".cpp", ".hpp", ".cc", ".cxx"}},
{ID: "csharp", Name: "C#", Extensions: []string{".cs"}},
{ID: "html", Name: "HTML", Extensions: []string{".html", ".htm"}},
{ID: "css", Name: "CSS", Extensions: []string{".css"}},
{ID: "scss", Name: "SCSS", Extensions: []string{".scss"}},
{ID: "json", Name: "JSON", Extensions: []string{".json"}},
{ID: "yaml", Name: "YAML", Extensions: []string{".yaml", ".yml"}},
{ID: "xml", Name: "XML", Extensions: []string{".xml"}},
{ID: "markdown", Name: "Markdown", Extensions: []string{".md", ".markdown"}},
{ID: "sql", Name: "SQL", Extensions: []string{".sql"}},
{ID: "shell", Name: "Shell", Extensions: []string{".sh", ".bash"}},
{ID: "swift", Name: "Swift", Extensions: []string{".swift"}},
{ID: "kotlin", Name: "Kotlin", Extensions: []string{".kt", ".kts"}},
{ID: "dockerfile", Name: "Dockerfile", Extensions: []string{}},
}
}
// Run starts the MCP server, auto-selecting transport from environment.
@@ -654,20 +763,52 @@ func detectLanguageFromPath(path string) string {
//	os.Setenv("MCP_ADDR", "127.0.0.1:9100")
//	svc.Run(ctx)
//
//	// Unix socket (set MCP_UNIX_SOCKET):
//	os.Setenv("MCP_UNIX_SOCKET", "/tmp/core-mcp.sock")
//	svc.Run(ctx)
//
//	// HTTP (set MCP_HTTP_ADDR):
//	os.Setenv("MCP_HTTP_ADDR", "127.0.0.1:9101")
//	svc.Run(ctx)
func (s *Service) Run(ctx context.Context) error {
	if httpAddr := core.Env("MCP_HTTP_ADDR"); httpAddr != "" {
		return s.ServeHTTP(ctx, httpAddr)
	}
	if addr := core.Env("MCP_ADDR"); addr != "" {
		return s.ServeTCP(ctx, addr)
	}
	if socketPath := core.Env("MCP_UNIX_SOCKET"); socketPath != "" {
		return s.ServeUnix(ctx, socketPath)
	}
	return s.ServeStdio(ctx)
}
// countOccurrences counts non-overlapping instances of substr in s.
func countOccurrences(s, substr string) int {
if substr == "" {
return 0
}
count := 0
for i := 0; i <= len(s)-len(substr); i++ {
if s[i:i+len(substr)] == substr {
count++
i += len(substr) - 1
}
}
return count
}
// replaceFirst replaces the first occurrence of old with new in s.
func replaceFirst(s, old, new string) string {
i := 0
for i <= len(s)-len(old) {
if s[i:i+len(old)] == old {
return core.Concat(s[:i], new, s[i+len(old):])
}
i++
}
return s
}
// Server returns the underlying MCP server for advanced configuration.
//

@@ -3,10 +3,11 @@ package mcp
import (
	"os"
	"path/filepath"
	"sync"
	"testing"
)

func TestMcp_New_Good_DefaultWorkspace(t *testing.T) {
	cwd, err := os.Getwd()
	if err != nil {
		t.Fatalf("Failed to get working directory: %v", err)
@@ -25,7 +26,7 @@ func TestNew_Good_DefaultWorkspace(t *testing.T) {
	}
}

func TestMcp_New_Good_CustomWorkspace(t *testing.T) {
	tmpDir := t.TempDir()

	s, err := New(Options{WorkspaceRoot: tmpDir})
@@ -41,7 +42,7 @@ func TestNew_Good_CustomWorkspace(t *testing.T) {
	}
}

func TestMcp_New_Good_NoRestriction(t *testing.T) {
	s, err := New(Options{Unrestricted: true})
	if err != nil {
		t.Fatalf("Failed to create service: %v", err)
@@ -55,7 +56,211 @@ func TestNew_Good_NoRestriction(t *testing.T) {
	}
}

func TestMcp_New_Good_RegistersBuiltInTools(t *testing.T) {
s, err := New(Options{})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
tools := map[string]bool{}
for _, rec := range s.Tools() {
tools[rec.Name] = true
}
for _, name := range []string{
"metrics_record",
"metrics_query",
"rag_query",
"rag_ingest",
"rag_collections",
"webview_connect",
"webview_disconnect",
"webview_navigate",
"webview_click",
"webview_type",
"webview_query",
"webview_console",
"webview_eval",
"webview_screenshot",
"webview_wait",
} {
if !tools[name] {
t.Fatalf("expected tool %q to be registered", name)
}
}
for _, name := range []string{"process_start", "ws_start"} {
if tools[name] {
t.Fatalf("did not expect tool %q to be registered without dependencies", name)
}
}
}
func TestMcp_New_Bad_NilSubsystemIgnored(t *testing.T) {
s, err := New(Options{Subsystems: []Subsystem{nil}})
if err != nil {
t.Fatalf("New failed with nil subsystem: %v", err)
}
if len(s.Subsystems()) != 0 {
t.Fatalf("expected nil subsystem to be ignored, got %d subsystems", len(s.Subsystems()))
}
}
func TestMcp_New_Ugly_ConcurrentConstruction(t *testing.T) {
tmpDir := t.TempDir()
const workers = 8
var wg sync.WaitGroup
errs := make(chan error, workers)
for i := 0; i < workers; i++ {
wg.Add(1)
go func() {
defer wg.Done()
s, err := New(Options{WorkspaceRoot: tmpDir})
if err != nil {
errs <- err
return
}
if s.workspaceRoot != tmpDir || s.medium == nil {
errs <- os.ErrInvalid
}
}()
}
wg.Wait()
close(errs)
for err := range errs {
if err != nil {
t.Fatalf("concurrent New failed: %v", err)
}
}
}
func TestMcp_GetSupportedLanguages_Good_IncludesAllDetectedLanguages(t *testing.T) {
s, err := New(Options{})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
_, out, err := s.getSupportedLanguages(nil, nil, GetSupportedLanguagesInput{})
if err != nil {
t.Fatalf("getSupportedLanguages failed: %v", err)
}
if got, want := len(out.Languages), 23; got != want {
t.Fatalf("expected %d supported languages, got %d", want, got)
}
got := map[string]bool{}
for _, lang := range out.Languages {
got[lang.ID] = true
}
for _, want := range []string{
"typescript",
"javascript",
"go",
"python",
"rust",
"ruby",
"java",
"php",
"c",
"cpp",
"csharp",
"html",
"css",
"scss",
"json",
"yaml",
"xml",
"markdown",
"sql",
"shell",
"swift",
"kotlin",
"dockerfile",
} {
if !got[want] {
t.Fatalf("expected language %q to be listed", want)
}
}
}
func TestMcp_GetSupportedLanguages_Bad_IgnoresUnsupportedInputState(t *testing.T) {
s := &Service{}
_, out, err := s.getSupportedLanguages(nil, nil, GetSupportedLanguagesInput{})
if err != nil {
t.Fatalf("getSupportedLanguages failed without initialized service state: %v", err)
}
if len(out.Languages) == 0 {
t.Fatal("expected supported languages to be returned")
}
}
func TestMcp_GetSupportedLanguages_Ugly_ReturnsIndependentSnapshots(t *testing.T) {
s, err := New(Options{})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
_, first, err := s.getSupportedLanguages(nil, nil, GetSupportedLanguagesInput{})
if err != nil {
t.Fatalf("getSupportedLanguages failed: %v", err)
}
first.Languages[0].ID = "mutated"
_, second, err := s.getSupportedLanguages(nil, nil, GetSupportedLanguagesInput{})
if err != nil {
t.Fatalf("getSupportedLanguages failed on second call: %v", err)
}
if second.Languages[0].ID == "mutated" {
t.Fatal("expected a fresh supported languages snapshot")
}
}
func TestMcp_DetectLanguageFromPath_Good_KnownExtensions(t *testing.T) {
cases := map[string]string{
"main.go": "go",
"index.tsx": "typescript",
"style.scss": "scss",
"Program.cs": "csharp",
"module.kt": "kotlin",
"docker/Dockerfile": "dockerfile",
}
for path, want := range cases {
if got := detectLanguageFromPath(path); got != want {
t.Fatalf("detectLanguageFromPath(%q) = %q, want %q", path, got, want)
}
}
}
func TestMcp_DetectLanguageFromPath_Bad_UnsupportedExtensionDefaultsPlaintext(t *testing.T) {
if got := detectLanguageFromPath("archive.unknown"); got != "plaintext" {
t.Fatalf("expected unsupported extension to be plaintext, got %q", got)
}
}
func TestMcp_DetectLanguageFromPath_Ugly_BoundaryPaths(t *testing.T) {
cases := map[string]string{
"": "plaintext",
"Dockerfile": "dockerfile",
"nested/Makefile": "plaintext",
"nested/file.TSX": "plaintext",
"nested/.env": "plaintext",
"nested/file.bash": "shell",
}
for path, want := range cases {
if got := detectLanguageFromPath(path); got != want {
t.Fatalf("detectLanguageFromPath(%q) = %q, want %q", path, got, want)
}
}
}
func TestMcp_Medium_Good_ReadWrite(t *testing.T) {
	tmpDir := t.TempDir()
	s, err := New(Options{WorkspaceRoot: tmpDir})
	if err != nil {
@@ -85,7 +290,53 @@ func TestMedium_Good_ReadWrite(t *testing.T) {
	}
}

func TestMcp_Medium_Bad_ReadMissingFile(t *testing.T) {
tmpDir := t.TempDir()
s, err := New(Options{WorkspaceRoot: tmpDir})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
if _, err := s.medium.Read("missing.txt"); err == nil {
t.Fatal("expected reading a missing file to fail")
}
}
func TestMcp_Medium_Ugly_ConcurrentReadWrite(t *testing.T) {
tmpDir := t.TempDir()
s, err := New(Options{WorkspaceRoot: tmpDir})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
const workers = 8
var wg sync.WaitGroup
errs := make(chan error, workers)
for i := 0; i < workers; i++ {
wg.Add(1)
go func(i int) {
defer wg.Done()
path := filepath.Join("concurrent", string(rune('a'+i))+".txt")
if err := s.medium.Write(path, "content"); err != nil {
errs <- err
return
}
if _, err := s.medium.Read(path); err != nil {
errs <- err
}
}(i)
}
wg.Wait()
close(errs)
for err := range errs {
if err != nil {
t.Fatalf("concurrent medium access failed: %v", err)
}
}
}
func TestMcp_Medium_Good_EnsureDir(t *testing.T) {
	tmpDir := t.TempDir()
	s, err := New(Options{WorkspaceRoot: tmpDir})
	if err != nil {
@@ -108,7 +359,163 @@ func TestMedium_Good_EnsureDir(t *testing.T) {
	}
}

func TestMcp_Medium_Bad_EnsureDirOverFile(t *testing.T) {
tmpDir := t.TempDir()
s, err := New(Options{WorkspaceRoot: tmpDir})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
if err := s.medium.Write("same", "content"); err != nil {
t.Fatalf("Failed to write file: %v", err)
}
if err := s.medium.EnsureDir("same"); err == nil {
t.Fatal("expected EnsureDir over an existing file to fail")
}
}
func TestMcp_Medium_Ugly_EnsureDirIdempotentNestedBoundary(t *testing.T) {
tmpDir := t.TempDir()
s, err := New(Options{WorkspaceRoot: tmpDir})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
for i := 0; i < 2; i++ {
if err := s.medium.EnsureDir("subdir/nested"); err != nil {
t.Fatalf("EnsureDir call %d failed: %v", i+1, err)
}
}
}
func TestMcp_FileExists_Good_FileAndDirectory(t *testing.T) {
tmpDir := t.TempDir()
s, err := New(Options{WorkspaceRoot: tmpDir})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
if err := s.medium.EnsureDir("nested"); err != nil {
t.Fatalf("Failed to create directory: %v", err)
}
if err := s.medium.Write("nested/file.txt", "content"); err != nil {
t.Fatalf("Failed to write file: %v", err)
}
_, fileOut, err := s.fileExists(nil, nil, FileExistsInput{Path: "nested/file.txt"})
if err != nil {
t.Fatalf("fileExists(file) failed: %v", err)
}
if !fileOut.Exists {
t.Fatal("expected file to exist")
}
if fileOut.IsDir {
t.Fatal("expected file to not be reported as a directory")
}
_, dirOut, err := s.fileExists(nil, nil, FileExistsInput{Path: "nested"})
if err != nil {
t.Fatalf("fileExists(dir) failed: %v", err)
}
if !dirOut.Exists {
t.Fatal("expected directory to exist")
}
if !dirOut.IsDir {
t.Fatal("expected directory to be reported as a directory")
}
}
func TestMcp_FileExists_Bad_MissingPath(t *testing.T) {
tmpDir := t.TempDir()
s, err := New(Options{WorkspaceRoot: tmpDir})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
_, out, err := s.fileExists(nil, nil, FileExistsInput{Path: "missing.txt"})
if err != nil {
t.Fatalf("fileExists(missing) failed: %v", err)
}
if out.Exists || out.IsDir {
t.Fatalf("expected missing path to be reported absent, got %+v", out)
}
}
func TestMcp_FileExists_Ugly_NilMedium(t *testing.T) {
s := &Service{}
if _, _, err := s.fileExists(nil, nil, FileExistsInput{Path: "anything"}); err == nil {
t.Fatal("expected fileExists to fail when medium is nil")
}
}
func TestMcp_ListDirectory_Good_ReturnsDocumentedEntryPaths(t *testing.T) {
tmpDir := t.TempDir()
s, err := New(Options{WorkspaceRoot: tmpDir})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
if err := s.medium.EnsureDir("nested"); err != nil {
t.Fatalf("Failed to create directory: %v", err)
}
if err := s.medium.Write("nested/file.txt", "content"); err != nil {
t.Fatalf("Failed to write file: %v", err)
}
_, out, err := s.listDirectory(nil, nil, ListDirectoryInput{Path: "nested"})
if err != nil {
t.Fatalf("listDirectory failed: %v", err)
}
if len(out.Entries) != 1 {
t.Fatalf("expected one entry, got %d", len(out.Entries))
}
want := filepath.Join("nested", "file.txt")
if out.Entries[0].Path != want {
t.Fatalf("expected entry path %q, got %q", want, out.Entries[0].Path)
}
}
func TestMcp_ListDirectory_Bad_MissingDirectory(t *testing.T) {
tmpDir := t.TempDir()
s, err := New(Options{WorkspaceRoot: tmpDir})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
if _, _, err := s.listDirectory(nil, nil, ListDirectoryInput{Path: "missing"}); err == nil {
t.Fatal("expected listing a missing directory to fail")
}
}
func TestMcp_ListDirectory_Ugly_SortsEntries(t *testing.T) {
tmpDir := t.TempDir()
s, err := New(Options{WorkspaceRoot: tmpDir})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
for _, name := range []string{"b.txt", "a.txt", "c.txt"} {
if err := s.medium.Write(filepath.Join("nested", name), "content"); err != nil {
t.Fatalf("Failed to write %s: %v", name, err)
}
}
_, out, err := s.listDirectory(nil, nil, ListDirectoryInput{Path: "nested"})
if err != nil {
t.Fatalf("listDirectory failed: %v", err)
}
if len(out.Entries) != 3 {
t.Fatalf("expected three entries, got %d", len(out.Entries))
}
for i, want := range []string{"a.txt", "b.txt", "c.txt"} {
if out.Entries[i].Name != want {
t.Fatalf("entry %d = %q, want %q", i, out.Entries[i].Name, want)
}
}
}
func TestMcp_Medium_Good_IsFile(t *testing.T) {
	tmpDir := t.TempDir()
	s, err := New(Options{WorkspaceRoot: tmpDir})
	if err != nil {
@@ -129,7 +536,94 @@ func TestMedium_Good_IsFile(t *testing.T) {
	}
}

func TestMcp_Medium_Bad_IsFileEmptyPath(t *testing.T) {
tmpDir := t.TempDir()
s, err := New(Options{WorkspaceRoot: tmpDir})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
if s.medium.IsFile("") {
t.Fatal("empty path should not be a file")
}
}
func TestMcp_Medium_Ugly_IsFileDirectoryBoundary(t *testing.T) {
tmpDir := t.TempDir()
s, err := New(Options{WorkspaceRoot: tmpDir})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
if err := s.medium.EnsureDir("nested"); err != nil {
t.Fatalf("Failed to create directory: %v", err)
}
if s.medium.IsFile("nested") {
t.Fatal("directory should not be reported as a file")
}
}
func TestMcp_ResolveWorkspacePath_Good(t *testing.T) {
tmpDir := t.TempDir()
s, err := New(Options{WorkspaceRoot: tmpDir})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
cases := map[string]string{
"docs/readme.md": filepath.Join(tmpDir, "docs", "readme.md"),
"/docs/readme.md": filepath.Join(tmpDir, "docs", "readme.md"),
"../escape/notes.md": filepath.Join(tmpDir, "escape", "notes.md"),
"": "",
}
for input, want := range cases {
if got := s.resolveWorkspacePath(input); got != want {
t.Fatalf("resolveWorkspacePath(%q) = %q, want %q", input, got, want)
}
}
}
func TestMcp_ResolveWorkspacePath_Good_Unrestricted(t *testing.T) {
s, err := New(Options{Unrestricted: true})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
if got, want := s.resolveWorkspacePath("docs/readme.md"), filepath.Clean("docs/readme.md"); got != want {
t.Fatalf("resolveWorkspacePath(relative) = %q, want %q", got, want)
}
if got, want := s.resolveWorkspacePath("/tmp/readme.md"), filepath.Clean("/tmp/readme.md"); got != want {
t.Fatalf("resolveWorkspacePath(absolute) = %q, want %q", got, want)
}
}
func TestMcp_ResolveWorkspacePath_Bad_EmptyPath(t *testing.T) {
tmpDir := t.TempDir()
s, err := New(Options{WorkspaceRoot: tmpDir})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
if got := s.resolveWorkspacePath(""); got != "" {
t.Fatalf("resolveWorkspacePath(empty) = %q, want empty", got)
}
}
func TestMcp_ResolveWorkspacePath_Ugly_TraversalSanitized(t *testing.T) {
tmpDir := t.TempDir()
s, err := New(Options{WorkspaceRoot: tmpDir})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
got := s.resolveWorkspacePath("../../secret.txt")
want := filepath.Join(tmpDir, "secret.txt")
if got != want {
t.Fatalf("resolveWorkspacePath(traversal) = %q, want %q", got, want)
}
}
func TestMcp_Medium_Ugly_TraversalSanitized(t *testing.T) {
tmpDir := t.TempDir()
s, err := New(Options{WorkspaceRoot: tmpDir})
if err != nil {
@@ -149,7 +643,7 @@ func TestSandboxing_Traversal_Sanitized(t *testing.T) {
// should validate inputs before calling Medium.
}
}
func TestMcp_Medium_Ugly_SymlinksBlocked(t *testing.T) {
tmpDir := t.TempDir()
outsideDir := t.TempDir()


@@ -7,17 +7,142 @@
package mcp
import (
"cmp"
"context"
"io"
"iter"
"os" // Note: required for process stdout; core Fs/Env do not expose a stdio writer.
"reflect"
"slices"
"sync"
"unsafe"
core "dappco.re/go/core"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
func normalizeNotificationContext(ctx context.Context) context.Context {
if ctx == nil {
return context.Background()
}
return ctx
}
// lockedWriter wraps an io.Writer with a mutex.
// Both the SDK's transport and ChannelSend use this writer,
// ensuring channel notifications don't interleave with SDK messages.
type lockedWriter struct {
mu sync.Mutex
w io.Writer
}
func (lw *lockedWriter) Write(p []byte) (int, error) {
lw.mu.Lock()
defer lw.mu.Unlock()
return lw.w.Write(p)
}
func (lw *lockedWriter) Close() error { return nil }
// sharedStdout is the single writer for all stdio output.
// Created once when the MCP service enters stdio mode.
var sharedStdout = &lockedWriter{w: os.Stdout}
// ChannelNotificationMethod is the JSON-RPC method used for named channel
// events sent through claude/channel.
const ChannelNotificationMethod = "notifications/claude/channel"
// LoggingNotificationMethod is the JSON-RPC method used for log messages sent
// to connected MCP clients.
const LoggingNotificationMethod = "notifications/message"
// ClaudeChannelCapabilityName is the experimental capability key advertised
// by the MCP server for channel-based client notifications.
const ClaudeChannelCapabilityName = "claude/channel"
// Shared channel names. Keeping them central avoids drift between emitters
// and the advertised claude/channel capability.
//
// Use these names when emitting structured events from subsystems:
//
// s.ChannelSend(ctx, ChannelProcessStart, map[string]any{"id": "proc-1"})
const (
ChannelBuildStart = "build.start"
ChannelBuildComplete = "build.complete"
ChannelBuildFailed = "build.failed"
ChannelAgentComplete = "agent.complete"
ChannelAgentBlocked = "agent.blocked"
ChannelAgentStatus = "agent.status"
ChannelBrainForgetDone = "brain.forget.complete"
ChannelBrainListDone = "brain.list.complete"
ChannelBrainRecallDone = "brain.recall.complete"
ChannelBrainRememberDone = "brain.remember.complete"
ChannelHarvestComplete = "harvest.complete"
ChannelInboxMessage = "inbox.message"
ChannelProcessExit = "process.exit"
ChannelProcessStart = "process.start"
ChannelProcessOutput = "process.output"
ChannelTestResult = "test.result"
)
var channelCapabilityList = []string{
ChannelBuildStart,
ChannelAgentComplete,
ChannelAgentBlocked,
ChannelAgentStatus,
ChannelBuildComplete,
ChannelBuildFailed,
ChannelBrainForgetDone,
ChannelBrainListDone,
ChannelBrainRecallDone,
ChannelBrainRememberDone,
ChannelHarvestComplete,
ChannelInboxMessage,
ChannelProcessExit,
ChannelProcessStart,
ChannelProcessOutput,
ChannelTestResult,
}
// ChannelCapabilitySpec describes the experimental claude/channel capability.
//
// spec := ChannelCapabilitySpec{
// Version: "1",
// Description: "Push events into client sessions via named channels",
// Channels: ChannelCapabilityChannels(),
// }
type ChannelCapabilitySpec struct {
Version string `json:"version"` // e.g. "1"
Description string `json:"description"` // capability summary shown to clients
Channels []string `json:"channels"` // e.g. []string{"build.complete", "agent.status"}
}
// Map converts the typed capability into the wire-format map expected by the SDK.
//
// caps := ChannelCapabilitySpec{
// Version: "1",
// Description: "Push events into client sessions via named channels",
// Channels: ChannelCapabilityChannels(),
// }.Map()
func (c ChannelCapabilitySpec) Map() map[string]any {
return map[string]any{
"version": c.Version,
"description": c.Description,
"channels": slices.Clone(c.Channels),
}
}
// ChannelNotification is the payload sent through the experimental channel
// notification method.
//
// n := ChannelNotification{
// Channel: ChannelBuildComplete,
// Data: map[string]any{"repo": "core/mcp"},
// }
type ChannelNotification struct {
Channel string `json:"channel"` // e.g. "build.complete"
Data any `json:"data"` // arbitrary payload for the named channel
}
// SendNotificationToAllClients broadcasts a log-level notification to every
// connected MCP session (stdio, HTTP, TCP, and Unix).
@@ -25,28 +150,49 @@ var stdoutMu sync.Mutex
//
// s.SendNotificationToAllClients(ctx, "info", "monitor", map[string]any{"event": "build complete"})
func (s *Service) SendNotificationToAllClients(ctx context.Context, level mcp.LoggingLevel, logger string, data any) {
if s == nil || s.server == nil {
return
}
ctx = normalizeNotificationContext(ctx)
s.broadcastToSessions(func(session *mcp.ServerSession) {
s.sendLoggingNotificationToSession(ctx, session, level, logger, data)
})
}
// SendNotificationToSession sends a log-level notification to one connected
// MCP session.
//
// s.SendNotificationToSession(ctx, session, "info", "monitor", data)
func (s *Service) SendNotificationToSession(ctx context.Context, session *mcp.ServerSession, level mcp.LoggingLevel, logger string, data any) {
if s == nil || s.server == nil {
return
}
ctx = normalizeNotificationContext(ctx)
s.sendLoggingNotificationToSession(ctx, session, level, logger, data)
}
// SendNotificationToClient sends a log-level notification to one connected
// MCP client.
//
// s.SendNotificationToClient(ctx, client, "info", "monitor", data)
func (s *Service) SendNotificationToClient(ctx context.Context, client *mcp.ServerSession, level mcp.LoggingLevel, logger string, data any) {
s.SendNotificationToSession(ctx, client, level, logger, data)
}
func (s *Service) sendLoggingNotificationToSession(ctx context.Context, session *mcp.ServerSession, level mcp.LoggingLevel, logger string, data any) {
if s == nil || s.server == nil || session == nil {
return
}
ctx = normalizeNotificationContext(ctx)
if err := sendSessionNotification(ctx, session, LoggingNotificationMethod, &mcp.LoggingMessageParams{
Level: level,
Logger: logger,
Data: data,
}); err != nil {
s.debugNotify("notify: failed to send to session", "session", session.ID(), "error", err)
}
}
// channelNotification is the JSON-RPC notification format for claude/channel.
type channelNotification struct {
JSONRPC string `json:"jsonrpc"`
Method string `json:"method"`
Params channelParams `json:"params"`
}
type channelParams struct {
Content string `json:"content"`
Meta map[string]string `json:"meta,omitempty"`
}
// ChannelSend pushes a channel event to all connected clients via
// the notifications/claude/channel JSON-RPC method.
@@ -54,50 +200,39 @@ type channelParams struct {
// s.ChannelSend(ctx, "agent.complete", map[string]any{"repo": "go-io", "workspace": "go-io-123"})
// s.ChannelSend(ctx, "build.failed", map[string]any{"repo": "core", "error": "test timeout"})
func (s *Service) ChannelSend(ctx context.Context, channel string, data any) {
if s == nil || s.server == nil {
return
}
if core.Trim(channel) == "" {
return
}
ctx = normalizeNotificationContext(ctx)
payload := ChannelNotification{Channel: channel, Data: data}
s.sendChannelNotificationToAllClients(ctx, payload)
}
// ChannelSendToSession pushes a channel event to a specific session.
//
// s.ChannelSendToSession(ctx, session, "agent.progress", progressData)
func (s *Service) ChannelSendToSession(ctx context.Context, session *mcp.ServerSession, channel string, data any) {
if s == nil || s.server == nil || session == nil {
return
}
if core.Trim(channel) == "" {
return
}
ctx = normalizeNotificationContext(ctx)
payload := ChannelNotification{Channel: channel, Data: data}
if err := sendSessionNotification(ctx, session, ChannelNotificationMethod, payload); err != nil {
s.debugNotify("channel: failed to send to session", "session", session.ID(), "error", err)
}
}
// ChannelSendToClient pushes a channel event to one connected MCP client.
//
// s.ChannelSendToClient(ctx, client, "agent.progress", progressData)
func (s *Service) ChannelSendToClient(ctx context.Context, client *mcp.ServerSession, channel string, data any) {
s.ChannelSendToSession(ctx, client, channel, data)
} }
// Sessions returns an iterator over all connected MCP sessions.
@@ -106,28 +241,180 @@ func (s *Service) ChannelSendToSession(ctx context.Context, session *mcp.ServerS
// s.ChannelSendToSession(ctx, session, "status", data)
// }
func (s *Service) Sessions() iter.Seq[*mcp.ServerSession] {
if s == nil || s.server == nil {
return func(yield func(*mcp.ServerSession) bool) {}
}
return slices.Values(snapshotSessions(s.server))
}
func (s *Service) sendChannelNotificationToAllClients(ctx context.Context, payload ChannelNotification) {
if s == nil || s.server == nil {
return
}
ctx = normalizeNotificationContext(ctx)
s.broadcastToSessions(func(session *mcp.ServerSession) {
if err := sendSessionNotification(ctx, session, ChannelNotificationMethod, payload); err != nil {
s.debugNotify("channel: failed to send to session", "session", session.ID(), "error", err)
}
})
}
func (s *Service) broadcastToSessions(fn func(*mcp.ServerSession)) {
if s == nil || s.server == nil || fn == nil {
return
}
for _, session := range snapshotSessions(s.server) {
fn(session)
}
}
func (s *Service) debugNotify(msg string, args ...any) {
if s == nil || s.logger == nil {
return
}
s.logger.Debug(msg, args...)
}
// NotifySession sends a raw JSON-RPC notification to a specific MCP session.
//
// coremcp.NotifySession(ctx, session, "notifications/claude/channel", map[string]any{
// "content": "build failed", "meta": map[string]string{"severity": "high"},
// })
func NotifySession(ctx context.Context, session *mcp.ServerSession, method string, payload any) error {
return sendSessionNotification(ctx, session, method, payload)
}
func sendSessionNotification(ctx context.Context, session *mcp.ServerSession, method string, payload any) error {
if session == nil {
return nil
}
ctx = normalizeNotificationContext(ctx)
if conn, err := sessionMCPConnection(session); err == nil {
if notifier, ok := conn.(interface {
Notify(context.Context, string, any) error
}); ok {
if err := notifier.Notify(ctx, method, payload); err != nil {
return err
}
return nil
}
}
conn, err := sessionJSONRPCConnection(session)
if err != nil {
return err
}
notifier, ok := conn.(interface {
Notify(context.Context, string, any) error
})
if !ok {
return coreNotifyError("connection Notify method unavailable")
}
if err := notifier.Notify(ctx, method, payload); err != nil {
return err
}
return nil
}
func sessionMCPConnection(session *mcp.ServerSession) (any, error) {
value := reflect.ValueOf(session)
if value.Kind() != reflect.Ptr || value.IsNil() {
return nil, coreNotifyError("invalid session")
}
field := value.Elem().FieldByName("mcpConn")
if !field.IsValid() {
return nil, coreNotifyError("session mcp connection field unavailable")
}
return reflect.NewAt(field.Type(), unsafe.Pointer(field.UnsafeAddr())).Elem().Interface(), nil
}
func sessionJSONRPCConnection(session *mcp.ServerSession) (any, error) {
value := reflect.ValueOf(session)
if value.Kind() != reflect.Ptr || value.IsNil() {
return nil, coreNotifyError("invalid session")
}
field := value.Elem().FieldByName("conn")
if !field.IsValid() {
return nil, coreNotifyError("session connection field unavailable")
}
return reflect.NewAt(field.Type(), unsafe.Pointer(field.UnsafeAddr())).Elem().Interface(), nil
}
func coreNotifyError(message string) error {
return &notificationError{message: message}
}
func snapshotSessions(server *mcp.Server) []*mcp.ServerSession {
if server == nil {
return nil
}
sessions := make([]*mcp.ServerSession, 0)
for session := range server.Sessions() {
if session != nil {
sessions = append(sessions, session)
}
}
slices.SortFunc(sessions, func(a, b *mcp.ServerSession) int {
return cmp.Compare(a.ID(), b.ID())
})
return sessions
}
type notificationError struct {
message string
}
func (e *notificationError) Error() string {
return e.message
}
// channelCapability returns the experimental capability descriptor
// for claude/channel, registered during New().
func channelCapability() map[string]any {
return map[string]any{
ClaudeChannelCapabilityName: ClaudeChannelCapability().Map(),
}
}
// ClaudeChannelCapability returns the typed experimental capability descriptor.
//
// cap := ClaudeChannelCapability()
// caps := cap.Map()
func ClaudeChannelCapability() ChannelCapabilitySpec {
return ChannelCapabilitySpec{
Version: "1",
Description: "Push events into client sessions via named channels",
Channels: channelCapabilityChannels(),
}
}
// ChannelCapability returns the experimental capability descriptor registered
// during New(). Callers can reuse it when exposing server metadata.
//
// caps := ChannelCapability()
func ChannelCapability() map[string]any {
return channelCapability()
}
// channelCapabilityChannels lists the named channel events advertised by the
// experimental capability.
func channelCapabilityChannels() []string {
return slices.Clone(channelCapabilityList)
}
// ChannelCapabilityChannels returns the named channel events advertised by the
// experimental capability.
//
// channels := ChannelCapabilityChannels()
func ChannelCapabilityChannels() []string {
return channelCapabilityChannels()
}

pkg/mcp/notify_test.go Normal file

@@ -0,0 +1,570 @@
package mcp
import (
"bufio"
"context"
"encoding/json"
"net"
"reflect"
"slices"
"testing"
"time"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
type notificationReadResult struct {
msg map[string]any
err error
}
func connectNotificationSession(t *testing.T, svc *Service) (context.CancelFunc, *mcp.ServerSession, net.Conn) {
t.Helper()
serverConn, clientConn := net.Pipe()
ctx, cancel := context.WithCancel(context.Background())
session, err := svc.server.Connect(ctx, &connTransport{conn: serverConn}, nil)
if err != nil {
cancel()
clientConn.Close()
t.Fatalf("Connect() failed: %v", err)
}
return cancel, session, clientConn
}
func readNotificationMessage(t *testing.T, conn net.Conn) <-chan notificationReadResult {
t.Helper()
resultCh := make(chan notificationReadResult, 1)
go func() {
scanner := bufio.NewScanner(conn)
scanner.Buffer(make([]byte, 64*1024), 10*1024*1024)
if !scanner.Scan() {
resultCh <- notificationReadResult{err: scanner.Err()}
return
}
var msg map[string]any
if err := json.Unmarshal(scanner.Bytes(), &msg); err != nil {
resultCh <- notificationReadResult{err: err}
return
}
resultCh <- notificationReadResult{msg: msg}
}()
return resultCh
}
func readNotificationMessageUntil(t *testing.T, conn net.Conn, match func(map[string]any) bool) <-chan notificationReadResult {
t.Helper()
resultCh := make(chan notificationReadResult, 1)
scanner := bufio.NewScanner(conn)
scanner.Buffer(make([]byte, 64*1024), 10*1024*1024)
go func() {
for scanner.Scan() {
var msg map[string]any
if err := json.Unmarshal(scanner.Bytes(), &msg); err != nil {
resultCh <- notificationReadResult{err: err}
return
}
if match(msg) {
resultCh <- notificationReadResult{msg: msg}
return
}
}
if err := scanner.Err(); err != nil {
resultCh <- notificationReadResult{err: err}
return
}
resultCh <- notificationReadResult{err: context.DeadlineExceeded}
}()
return resultCh
}
func TestSendNotificationToAllClients_Good(t *testing.T) {
svc, err := New(Options{})
if err != nil {
t.Fatalf("New() failed: %v", err)
}
ctx := context.Background()
svc.SendNotificationToAllClients(ctx, "info", "test", map[string]any{
"event": ChannelBuildComplete,
})
}
func TestNotificationMethods_Good_NilService(t *testing.T) {
var svc *Service
ctx := context.Background()
svc.SendNotificationToAllClients(ctx, "info", "test", map[string]any{"ok": true})
svc.SendNotificationToSession(ctx, nil, "info", "test", map[string]any{"ok": true})
svc.ChannelSend(ctx, ChannelBuildComplete, map[string]any{"ok": true})
svc.ChannelSendToSession(ctx, nil, ChannelBuildComplete, map[string]any{"ok": true})
for range svc.Sessions() {
t.Fatal("expected no sessions from nil service")
}
}
func TestNotificationMethods_Good_NilServer(t *testing.T) {
svc := &Service{}
ctx := context.Background()
svc.SendNotificationToAllClients(ctx, "info", "test", map[string]any{"ok": true})
svc.SendNotificationToSession(ctx, nil, "info", "test", map[string]any{"ok": true})
svc.ChannelSend(ctx, ChannelBuildComplete, map[string]any{"ok": true})
svc.ChannelSendToSession(ctx, nil, ChannelBuildComplete, map[string]any{"ok": true})
for range svc.Sessions() {
t.Fatal("expected no sessions from service without a server")
}
}
func TestSessions_Good_ReturnsSnapshot(t *testing.T) {
svc, err := New(Options{})
if err != nil {
t.Fatalf("New() failed: %v", err)
}
cancel, session, _ := connectNotificationSession(t, svc)
snapshot := svc.Sessions()
cancel()
session.Close()
var sessions []*mcp.ServerSession
for session := range snapshot {
sessions = append(sessions, session)
}
if len(sessions) != 1 {
t.Fatalf("expected snapshot to retain one session, got %d", len(sessions))
}
if sessions[0] == nil {
t.Fatal("expected snapshot session to be non-nil")
}
}
func TestNotificationMethods_Good_NilContext(t *testing.T) {
svc, err := New(Options{})
if err != nil {
t.Fatalf("New() failed: %v", err)
}
svc.SendNotificationToAllClients(nil, "info", "test", map[string]any{"ok": true})
svc.SendNotificationToSession(nil, nil, "info", "test", map[string]any{"ok": true})
svc.ChannelSend(nil, ChannelBuildComplete, map[string]any{"ok": true})
svc.ChannelSendToSession(nil, nil, ChannelBuildComplete, map[string]any{"ok": true})
}
func TestSendNotificationToAllClients_Good_CustomNotification(t *testing.T) {
svc, err := New(Options{})
if err != nil {
t.Fatalf("New() failed: %v", err)
}
serverConn, clientConn := net.Pipe()
defer clientConn.Close()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
session, err := svc.server.Connect(ctx, &connTransport{conn: serverConn}, nil)
if err != nil {
t.Fatalf("Connect() failed: %v", err)
}
defer session.Close()
clientConn.SetDeadline(time.Now().Add(5 * time.Second))
read := readNotificationMessageUntil(t, clientConn, func(msg map[string]any) bool {
return msg["method"] == LoggingNotificationMethod
})
sent := make(chan struct{})
go func() {
svc.SendNotificationToAllClients(ctx, "info", "test", map[string]any{
"event": ChannelBuildComplete,
})
close(sent)
}()
select {
case <-sent:
case <-time.After(5 * time.Second):
t.Fatal("timed out waiting for notification send to complete")
}
res := <-read
if res.err != nil {
t.Fatalf("failed to read notification: %v", res.err)
}
msg := res.msg
if msg["method"] != LoggingNotificationMethod {
t.Fatalf("expected method %q, got %v", LoggingNotificationMethod, msg["method"])
}
params, ok := msg["params"].(map[string]any)
if !ok {
t.Fatalf("expected params object, got %T", msg["params"])
}
if params["logger"] != "test" {
t.Fatalf("expected logger test, got %v", params["logger"])
}
if params["level"] != "info" {
t.Fatalf("expected level info, got %v", params["level"])
}
data, ok := params["data"].(map[string]any)
if !ok {
t.Fatalf("expected data object, got %T", params["data"])
}
if data["event"] != ChannelBuildComplete {
t.Fatalf("expected event %s, got %v", ChannelBuildComplete, data["event"])
}
}
func TestChannelSend_Good(t *testing.T) {
svc, err := New(Options{})
if err != nil {
t.Fatalf("New() failed: %v", err)
}
ctx := context.Background()
svc.ChannelSend(ctx, ChannelBuildComplete, map[string]any{
"repo": "go-io",
})
}
func TestChannelSendToSession_Good_GuardNilSession(t *testing.T) {
svc, err := New(Options{})
if err != nil {
t.Fatalf("New() failed: %v", err)
}
ctx := context.Background()
svc.ChannelSendToSession(ctx, nil, ChannelAgentStatus, map[string]any{
"ok": true,
})
}
func TestSendNotificationToSession_Good_GuardNilSession(t *testing.T) {
svc, err := New(Options{})
if err != nil {
t.Fatalf("New() failed: %v", err)
}
ctx := context.Background()
svc.SendNotificationToSession(ctx, nil, "info", "test", map[string]any{
"ok": true,
})
}
func TestChannelSendToSession_Good_CustomNotification(t *testing.T) {
svc, err := New(Options{})
if err != nil {
t.Fatalf("New() failed: %v", err)
}
serverConn, clientConn := net.Pipe()
defer clientConn.Close()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
session, err := svc.server.Connect(ctx, &connTransport{conn: serverConn}, nil)
if err != nil {
t.Fatalf("Connect() failed: %v", err)
}
defer session.Close()
clientConn.SetDeadline(time.Now().Add(5 * time.Second))
read := readNotificationMessageUntil(t, clientConn, func(msg map[string]any) bool {
return msg["method"] == ChannelNotificationMethod
})
sent := make(chan struct{})
go func() {
svc.ChannelSendToSession(ctx, session, ChannelBuildComplete, map[string]any{
"repo": "go-io",
})
close(sent)
}()
select {
case <-sent:
case <-time.After(5 * time.Second):
t.Fatal("timed out waiting for notification send to complete")
}
res := <-read
if res.err != nil {
t.Fatalf("failed to read custom notification: %v", res.err)
}
msg := res.msg
if msg["method"] != ChannelNotificationMethod {
t.Fatalf("expected method %q, got %v", ChannelNotificationMethod, msg["method"])
}
params, ok := msg["params"].(map[string]any)
if !ok {
t.Fatalf("expected params object, got %T", msg["params"])
}
if params["channel"] != ChannelBuildComplete {
t.Fatalf("expected channel %s, got %v", ChannelBuildComplete, params["channel"])
}
payload, ok := params["data"].(map[string]any)
if !ok {
t.Fatalf("expected data object, got %T", params["data"])
}
if payload["repo"] != "go-io" {
t.Fatalf("expected repo go-io, got %v", payload["repo"])
}
}
func TestChannelSendToClient_Good_CustomNotification(t *testing.T) {
svc, err := New(Options{})
if err != nil {
t.Fatalf("New() failed: %v", err)
}
serverConn, clientConn := net.Pipe()
defer clientConn.Close()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
session, err := svc.server.Connect(ctx, &connTransport{conn: serverConn}, nil)
if err != nil {
t.Fatalf("Connect() failed: %v", err)
}
defer session.Close()
clientConn.SetDeadline(time.Now().Add(5 * time.Second))
read := readNotificationMessageUntil(t, clientConn, func(msg map[string]any) bool {
return msg["method"] == ChannelNotificationMethod
})
sent := make(chan struct{})
go func() {
svc.ChannelSendToClient(ctx, session, ChannelBuildComplete, map[string]any{
"repo": "go-io",
})
close(sent)
}()
select {
case <-sent:
case <-time.After(5 * time.Second):
t.Fatal("timed out waiting for notification send to complete")
}
res := <-read
if res.err != nil {
t.Fatalf("failed to read custom notification: %v", res.err)
}
msg := res.msg
if msg["method"] != ChannelNotificationMethod {
t.Fatalf("expected method %q, got %v", ChannelNotificationMethod, msg["method"])
}
}
func TestSendNotificationToClient_Good_CustomNotification(t *testing.T) {
svc, err := New(Options{})
if err != nil {
t.Fatalf("New() failed: %v", err)
}
serverConn, clientConn := net.Pipe()
defer clientConn.Close()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
session, err := svc.server.Connect(ctx, &connTransport{conn: serverConn}, nil)
if err != nil {
t.Fatalf("Connect() failed: %v", err)
}
defer session.Close()
clientConn.SetDeadline(time.Now().Add(5 * time.Second))
read := readNotificationMessageUntil(t, clientConn, func(msg map[string]any) bool {
return msg["method"] == LoggingNotificationMethod
})
sent := make(chan struct{})
go func() {
svc.SendNotificationToClient(ctx, session, "info", "test", map[string]any{
"event": ChannelBuildComplete,
})
close(sent)
}()
select {
case <-sent:
case <-time.After(5 * time.Second):
t.Fatal("timed out waiting for notification send to complete")
}
res := <-read
if res.err != nil {
t.Fatalf("failed to read notification: %v", res.err)
}
msg := res.msg
if msg["method"] != LoggingNotificationMethod {
t.Fatalf("expected method %q, got %v", LoggingNotificationMethod, msg["method"])
}
}
func TestChannelCapability_Good(t *testing.T) {
caps := channelCapability()
raw, ok := caps[ClaudeChannelCapabilityName]
if !ok {
t.Fatal("expected claude/channel capability entry")
}
cap, ok := raw.(map[string]any)
if !ok {
t.Fatalf("expected claude/channel to be a map, got %T", raw)
}
if cap["version"] == nil || cap["description"] == nil {
t.Fatalf("expected capability to include version and description: %#v", cap)
}
channels, ok := cap["channels"].([]string)
if !ok {
t.Fatalf("expected channels to be []string, got %T", cap["channels"])
}
if len(channels) == 0 {
t.Fatal("expected at least one channel in capability definition")
}
want := channelCapabilityChannels()
if got, wantLen := len(channels), len(want); got != wantLen {
t.Fatalf("expected %d channels, got %d", wantLen, got)
}
for _, channel := range want {
if !slices.Contains(channels, channel) {
t.Fatalf("expected channel %q to be advertised in capability definition", channel)
}
}
}
func TestChannelCapability_Good_PublicHelpers(t *testing.T) {
got := ChannelCapability()
want := channelCapability()
if !reflect.DeepEqual(got, want) {
t.Fatalf("expected public capability helper to match internal definition")
}
spec := ClaudeChannelCapability()
if spec.Version != "1" {
t.Fatalf("expected typed capability version 1, got %q", spec.Version)
}
if spec.Description == "" {
t.Fatal("expected typed capability description to be populated")
}
if !slices.Equal(spec.Channels, channelCapabilityChannels()) {
t.Fatalf("expected typed capability channels to match: got %v want %v", spec.Channels, channelCapabilityChannels())
}
if !reflect.DeepEqual(spec.Map(), want[ClaudeChannelCapabilityName].(map[string]any)) {
t.Fatal("expected typed capability map to match wire-format descriptor")
}
gotChannels := ChannelCapabilityChannels()
wantChannels := channelCapabilityChannels()
if !slices.Equal(gotChannels, wantChannels) {
t.Fatalf("expected public channel list to match internal definition: got %v want %v", gotChannels, wantChannels)
}
}
func TestChannelCapabilitySpec_Map_Good_ClonesChannels(t *testing.T) {
spec := ClaudeChannelCapability()
mapped := spec.Map()
channels, ok := mapped["channels"].([]string)
if !ok {
t.Fatalf("expected channels to be []string, got %T", mapped["channels"])
}
if len(channels) == 0 {
t.Fatal("expected non-empty channels slice")
}
spec.Channels[0] = "mutated.channel"
if channels[0] == "mutated.channel" {
t.Fatal("expected Map() to clone the channels slice")
}
}
func TestSendNotificationToAllClients_Good_BroadcastsToMultipleSessions(t *testing.T) {
svc, err := New(Options{})
if err != nil {
t.Fatalf("New() failed: %v", err)
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
cancel1, session1, clientConn1 := connectNotificationSession(t, svc)
defer cancel1()
defer session1.Close()
defer clientConn1.Close()
cancel2, session2, clientConn2 := connectNotificationSession(t, svc)
defer cancel2()
defer session2.Close()
defer clientConn2.Close()
read1 := readNotificationMessage(t, clientConn1)
read2 := readNotificationMessage(t, clientConn2)
sent := make(chan struct{})
go func() {
svc.SendNotificationToAllClients(ctx, "info", "test", map[string]any{
"event": ChannelBuildComplete,
})
close(sent)
}()
select {
case <-sent:
case <-time.After(5 * time.Second):
t.Fatal("timed out waiting for broadcast to complete")
}
res1 := <-read1
if res1.err != nil {
t.Fatalf("failed to read notification from session 1: %v", res1.err)
}
res2 := <-read2
if res2.err != nil {
t.Fatalf("failed to read notification from session 2: %v", res2.err)
}
for idx, res := range []notificationReadResult{res1, res2} {
if res.msg["method"] != LoggingNotificationMethod {
t.Fatalf("session %d: expected method %q, got %v", idx+1, LoggingNotificationMethod, res.msg["method"])
}
params, ok := res.msg["params"].(map[string]any)
if !ok {
t.Fatalf("session %d: expected params object, got %T", idx+1, res.msg["params"])
}
if params["logger"] != "test" {
t.Fatalf("session %d: expected logger test, got %v", idx+1, params["logger"])
}
}
}

@@ -0,0 +1,124 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"context"
"time"
core "dappco.re/go/core"
)
type processRuntime struct {
Command string
Args []string
Dir string
StartedAt time.Time
}
func (s *Service) recordProcessRuntime(id string, meta processRuntime) {
if id == "" {
return
}
s.processMu.Lock()
defer s.processMu.Unlock()
if s.processMeta == nil {
s.processMeta = make(map[string]processRuntime)
}
s.processMeta[id] = meta
}
func (s *Service) processRuntimeFor(id string) (processRuntime, bool) {
s.processMu.Lock()
defer s.processMu.Unlock()
meta, ok := s.processMeta[id]
return meta, ok
}
func (s *Service) forgetProcessRuntime(id string) {
if id == "" {
return
}
s.processMu.Lock()
defer s.processMu.Unlock()
delete(s.processMeta, id)
}
func isTestProcess(command string, args []string) bool {
base := core.Lower(core.PathBase(command))
if base == "" {
return false
}
switch base {
case "go":
return len(args) > 0 && core.Lower(args[0]) == "test"
case "cargo":
return len(args) > 0 && core.Lower(args[0]) == "test"
case "npm", "pnpm", "yarn", "bun":
for _, arg := range args {
lower := core.Lower(arg)
if lower == "test" || core.HasPrefix(lower, "test:") {
return true
}
}
return false
case "pytest", "phpunit", "jest", "vitest", "rspec", "go-test":
return true
}
return false
}
func (s *Service) emitTestResult(ctx context.Context, processID string, exitCode int, duration time.Duration, signal string, errText string) {
defer s.forgetProcessRuntime(processID)
meta, ok := s.processRuntimeFor(processID)
if !ok || !isTestProcess(meta.Command, meta.Args) {
return
}
if duration <= 0 && !meta.StartedAt.IsZero() {
duration = time.Since(meta.StartedAt)
}
status := "failed"
if signal != "" {
status = "aborted"
} else if exitCode == 0 {
status = "passed"
}
payload := map[string]any{
"id": processID,
"command": meta.Command,
"args": meta.Args,
"status": status,
"passed": status == "passed",
}
if meta.Dir != "" {
payload["dir"] = meta.Dir
}
if !meta.StartedAt.IsZero() {
payload["startedAt"] = meta.StartedAt
}
if duration > 0 {
payload["duration"] = duration
}
if signal == "" || exitCode != 0 {
payload["exitCode"] = exitCode
}
if signal != "" {
payload["signal"] = signal
}
if errText != "" {
payload["error"] = errText
}
s.ChannelSend(ctx, ChannelTestResult, payload)
}
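The classification logic in isTestProcess above can be sketched standalone. This is a stdlib-only sketch, assuming `core.Lower` and `core.PathBase` behave like `strings.ToLower` and `filepath.Base`; the guard for `"."` is added because `filepath.Base("")` returns `"."` rather than the empty string.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// isTestProcess classifies a command line as a test run, mirroring the
// switch above: runner-style binaries always count, go/cargo count when
// the first arg is "test", npm-family counts on "test"/"test:*" scripts.
func isTestProcess(command string, args []string) bool {
	base := strings.ToLower(filepath.Base(command))
	if base == "" || base == "." {
		return false
	}
	switch base {
	case "go", "cargo":
		return len(args) > 0 && strings.ToLower(args[0]) == "test"
	case "npm", "pnpm", "yarn", "bun":
		for _, arg := range args {
			lower := strings.ToLower(arg)
			if lower == "test" || strings.HasPrefix(lower, "test:") {
				return true
			}
		}
		return false
	case "pytest", "phpunit", "jest", "vitest", "rspec", "go-test":
		return true
	}
	return false
}

func main() {
	fmt.Println(isTestProcess("/usr/local/bin/go", []string{"test", "./..."})) // true
	fmt.Println(isTestProcess("npm", []string{"run", "test:unit"}))            // true
	fmt.Println(isTestProcess("go", []string{"build"}))                        // false
}
```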

pkg/mcp/progress.go

@@ -0,0 +1,61 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"context"
sdkmcp "github.com/modelcontextprotocol/go-sdk/mcp"
)
// ProgressTokenFromRequest extracts _meta.progressToken from an MCP tool call.
func ProgressTokenFromRequest(req *sdkmcp.CallToolRequest) any {
if req == nil || req.Params == nil {
return nil
}
return req.Params.GetProgressToken()
}
// SendProgressNotification emits notifications/progress when the caller supplied
// _meta.progressToken. Calls without a token or MCP session are no-ops.
func SendProgressNotification(ctx context.Context, req *sdkmcp.CallToolRequest, progress float64, total float64, message string) error {
token := ProgressTokenFromRequest(req)
if req == nil || req.Session == nil || token == nil {
return nil
}
return req.Session.NotifyProgress(ctx, &sdkmcp.ProgressNotificationParams{
ProgressToken: token,
Progress: progress,
Total: total,
Message: message,
})
}
// ProgressNotifier caches the request progress token for multi-step tools.
type ProgressNotifier struct {
ctx context.Context
req *sdkmcp.CallToolRequest
token any
}
// NewProgressNotifier prepares repeated notifications for a single tool call.
func NewProgressNotifier(ctx context.Context, req *sdkmcp.CallToolRequest) ProgressNotifier {
return ProgressNotifier{
ctx: ctx,
req: req,
token: ProgressTokenFromRequest(req),
}
}
// Send emits a progress notification when the tool call includes a token.
func (n ProgressNotifier) Send(progress float64, total float64, message string) error {
if n.req == nil || n.req.Session == nil || n.token == nil {
return nil
}
return n.req.Session.NotifyProgress(n.ctx, &sdkmcp.ProgressNotificationParams{
ProgressToken: n.token,
Progress: progress,
Total: total,
Message: message,
})
}
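The cache-once, no-op-without-token shape of ProgressNotifier can be shown in isolation. The types below are hypothetical stand-ins, not the go-sdk API: the point is only that tool code can call Send unconditionally, and the nil checks make it silently drop notifications when the caller never supplied a progress token.

```go
package main

import "fmt"

// notifySession stands in for the SDK session (hypothetical interface).
type notifySession interface {
	NotifyProgress(token any, progress, total float64, msg string) error
}

// progressNotifier caches the token once so multi-step tools do not
// re-extract it on every update, mirroring ProgressNotifier above.
type progressNotifier struct {
	session notifySession
	token   any
}

// Send is a no-op unless both a session and a token are present.
func (n progressNotifier) Send(progress, total float64, msg string) error {
	if n.session == nil || n.token == nil {
		return nil
	}
	return n.session.NotifyProgress(n.token, progress, total, msg)
}

type printSession struct{}

func (printSession) NotifyProgress(token any, progress, total float64, msg string) error {
	fmt.Printf("token=%v %v/%v %s\n", token, progress, total, msg)
	return nil
}

func main() {
	// Without a token: silently no-ops, no error surfaced to the tool.
	_ = progressNotifier{session: printSession{}}.Send(1, 2, "ignored")
	// With a token: forwards to the session.
	_ = progressNotifier{session: printSession{}, token: "job-1"}.Send(2, 2, "done")
}
```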

pkg/mcp/progress_test.go

@@ -0,0 +1,43 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"context"
"testing"
sdkmcp "github.com/modelcontextprotocol/go-sdk/mcp"
)
func TestProgressTokenFromRequest_Good_ExtractsMetaToken(t *testing.T) {
req := &sdkmcp.CallToolRequest{Params: &sdkmcp.CallToolParamsRaw{}}
req.Params.SetProgressToken("dispatch-123")
if got := ProgressTokenFromRequest(req); got != "dispatch-123" {
t.Fatalf("expected progress token dispatch-123, got %v", got)
}
}
func TestProgressTokenFromRequest_Good_NilSafe(t *testing.T) {
if got := ProgressTokenFromRequest(nil); got != nil {
t.Fatalf("expected nil token from nil request, got %v", got)
}
req := &sdkmcp.CallToolRequest{}
if got := ProgressTokenFromRequest(req); got != nil {
t.Fatalf("expected nil token from request without params, got %v", got)
}
}
func TestSendProgressNotification_Good_NoopsWithoutSession(t *testing.T) {
req := &sdkmcp.CallToolRequest{Params: &sdkmcp.CallToolParamsRaw{}}
req.Params.SetProgressToken("process-1")
if err := SendProgressNotification(context.Background(), req, 1, 2, "started"); err != nil {
t.Fatalf("expected no-op without session, got error: %v", err)
}
notifier := NewProgressNotifier(context.Background(), req)
if err := notifier.Send(2, 2, "done"); err != nil {
t.Fatalf("expected no-op notifier without session, got error: %v", err)
}
}

pkg/mcp/register.go

@@ -0,0 +1,188 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"context"
"time"
core "dappco.re/go/core"
"dappco.re/go/process"
"dappco.re/go/ws"
)
// Register is the service factory for core.WithService.
// Creates the MCP service, discovers subsystems from other Core services,
// and wires optional process and WebSocket dependencies when they are
// already registered in Core.
//
// core.New(
// core.WithService(agentic.Register),
// core.WithService(monitor.Register),
// core.WithService(brain.Register),
// core.WithService(mcp.Register),
// )
func Register(c *core.Core) core.Result {
// Collect subsystems from registered services
var subsystems []Subsystem
var processService *process.Service
var wsHub *ws.Hub
for _, name := range c.Services() {
r := c.Service(name)
if !r.OK {
continue
}
if sub, ok := r.Value.(Subsystem); ok {
subsystems = append(subsystems, sub)
continue
}
switch v := r.Value.(type) {
case *process.Service:
processService = v
case *ws.Hub:
wsHub = v
}
}
svc, err := New(Options{
ProcessService: processService,
WSHub: wsHub,
Subsystems: subsystems,
})
if err != nil {
return core.Result{Value: err, OK: false}
}
svc.ServiceRuntime = core.NewServiceRuntime(c, struct{}{})
return core.Result{Value: svc, OK: true}
}
// OnStartup implements core.Startable — registers MCP transport commands.
//
// svc.OnStartup(context.Background())
//
// core-agent mcp — start MCP server on stdio
// core-agent serve — start MCP server on HTTP
func (s *Service) OnStartup(ctx context.Context) core.Result {
c := s.Core()
if c == nil {
return core.Result{OK: true}
}
c.Command("mcp", core.Command{
Description: "Start the MCP server on stdio",
Action: func(opts core.Options) core.Result {
s.logger.Info("MCP stdio server starting")
if err := s.ServeStdio(ctx); err != nil {
return core.Result{Value: err, OK: false}
}
return core.Result{OK: true}
},
})
c.Command("serve", core.Command{
Description: "Start the MCP server with auto-selected transport",
Action: func(opts core.Options) core.Result {
s.logger.Info("MCP server starting")
if err := s.Run(ctx); err != nil {
return core.Result{Value: err, OK: false}
}
return core.Result{OK: true}
},
})
return core.Result{OK: true}
}
// HandleIPCEvents implements Core's IPC handler interface.
//
// c.ACTION(mcp.ChannelPush{Channel: "agent.status", Data: statusMap})
//
// Catches ChannelPush messages from other services and pushes them to Claude Code sessions.
func (s *Service) HandleIPCEvents(c *core.Core, msg core.Message) core.Result {
ctx := context.Background()
if c != nil {
if coreCtx := c.Context(); coreCtx != nil {
ctx = coreCtx
}
}
switch ev := msg.(type) {
case ChannelPush:
return s.handleChannelPushIPC(ctx, ev)
case process.ActionProcessStarted:
startedAt := time.Now()
s.recordProcessRuntime(ev.ID, processRuntime{
Command: ev.Command,
Args: ev.Args,
Dir: ev.Dir,
StartedAt: startedAt,
})
s.ChannelSend(ctx, ChannelProcessStart, map[string]any{
"id": ev.ID,
"command": ev.Command,
"args": ev.Args,
"dir": ev.Dir,
"pid": ev.PID,
"startedAt": startedAt,
})
case process.ActionProcessOutput:
s.ChannelSend(ctx, ChannelProcessOutput, map[string]any{
"id": ev.ID,
"line": ev.Line,
"stream": ev.Stream,
})
case process.ActionProcessExited:
meta, ok := s.processRuntimeFor(ev.ID)
payload := map[string]any{
"id": ev.ID,
"exitCode": ev.ExitCode,
"duration": ev.Duration,
}
if ok {
payload["command"] = meta.Command
payload["args"] = meta.Args
payload["dir"] = meta.Dir
if !meta.StartedAt.IsZero() {
payload["startedAt"] = meta.StartedAt
}
}
if ev.Error != nil {
payload["error"] = ev.Error.Error()
}
s.ChannelSend(ctx, ChannelProcessExit, payload)
errText := ""
if ev.Error != nil {
errText = ev.Error.Error()
}
s.emitTestResult(ctx, ev.ID, ev.ExitCode, ev.Duration, "", errText)
case process.ActionProcessKilled:
meta, ok := s.processRuntimeFor(ev.ID)
payload := map[string]any{
"id": ev.ID,
"signal": ev.Signal,
}
if ok {
payload["command"] = meta.Command
payload["args"] = meta.Args
payload["dir"] = meta.Dir
if !meta.StartedAt.IsZero() {
payload["startedAt"] = meta.StartedAt
}
}
s.ChannelSend(ctx, ChannelProcessExit, payload)
s.emitTestResult(ctx, ev.ID, 0, 0, ev.Signal, "")
}
return core.Result{OK: true}
}
// OnShutdown implements core.Stoppable — stops the MCP transport.
//
// svc.OnShutdown(context.Background())
func (s *Service) OnShutdown(ctx context.Context) core.Result {
if err := s.Shutdown(ctx); err != nil {
return core.Result{Value: err, OK: false}
}
return core.Result{OK: true}
}
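Register's discovery loop above is a small pattern worth isolating: walk every registered service and keep the ones whose concrete type satisfies the Subsystem interface, ignoring the rest. A minimal sketch with a hypothetical registry (plain map, stand-in interface — not the core package API):

```go
package main

import "fmt"

// Subsystem stands in for the discovery interface Register checks for.
type Subsystem interface{ Name() string }

// ragService satisfies Subsystem; plainService does not.
type ragService struct{}

func (ragService) Name() string { return "rag" }

type plainService struct{}

func main() {
	// A hypothetical service registry: values are stored as `any`, so
	// capability is discovered by type assertion, not by registration order.
	services := map[string]any{"rag": ragService{}, "ws": plainService{}}
	var subsystems []Subsystem
	for _, svc := range services {
		if sub, ok := svc.(Subsystem); ok {
			subsystems = append(subsystems, sub)
		}
	}
	fmt.Println(len(subsystems)) // 1
}
```

The same assertion-based dispatch lets Register pick up *process.Service and *ws.Hub from the identical loop without those packages needing to know about MCP.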

pkg/mcp/register_test.go

@@ -0,0 +1,334 @@
package mcp
import (
"bufio"
"context"
"encoding/json"
"net"
"testing"
"time"
"dappco.re/go/core"
"dappco.re/go/process"
"dappco.re/go/ws"
)
func TestRegister_Good_WiresOptionalServices(t *testing.T) {
c := core.New()
ps := &process.Service{}
hub := ws.NewHub()
if r := c.RegisterService("process", ps); !r.OK {
t.Fatalf("failed to register process service: %v", r.Value)
}
if r := c.RegisterService("ws", hub); !r.OK {
t.Fatalf("failed to register ws hub: %v", r.Value)
}
result := Register(c)
if !result.OK {
t.Fatalf("Register() failed: %v", result.Value)
}
svc, ok := result.Value.(*Service)
if !ok {
t.Fatalf("expected *Service, got %T", result.Value)
}
if svc.ProcessService() != ps {
t.Fatalf("expected process service to be wired")
}
if svc.WSHub() != hub {
t.Fatalf("expected ws hub to be wired")
}
tools := map[string]bool{}
for _, rec := range svc.Tools() {
tools[rec.Name] = true
}
if !tools["process_start"] {
t.Fatal("expected process tools to be registered when process service is available")
}
if !tools["ws_start"] {
t.Fatal("expected ws tools to be registered when ws hub is available")
}
}
func TestHandleIPCEvents_Good_ForwardsProcessActions(t *testing.T) {
svc, err := New(Options{})
if err != nil {
t.Fatalf("New() failed: %v", err)
}
serverConn, clientConn := net.Pipe()
defer clientConn.Close()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
session, err := svc.server.Connect(ctx, &connTransport{conn: serverConn}, nil)
if err != nil {
t.Fatalf("Connect() failed: %v", err)
}
defer session.Close()
clientConn.SetDeadline(time.Now().Add(5 * time.Second))
scanner := bufio.NewScanner(clientConn)
scanner.Buffer(make([]byte, 64*1024), 10*1024*1024)
received := make(chan map[string]any, 8)
errCh := make(chan error, 1)
go func() {
for scanner.Scan() {
var msg map[string]any
if err := json.Unmarshal(scanner.Bytes(), &msg); err != nil {
errCh <- err
return
}
received <- msg
}
if err := scanner.Err(); err != nil {
errCh <- err
return
}
close(received)
}()
result := svc.HandleIPCEvents(nil, process.ActionProcessStarted{
ID: "proc-1",
Command: "go",
Args: []string{"test", "./..."},
Dir: "/workspace",
PID: 1234,
})
if !result.OK {
t.Fatalf("HandleIPCEvents() returned non-OK result: %#v", result.Value)
}
deadline := time.NewTimer(5 * time.Second)
defer deadline.Stop()
for {
select {
case err := <-errCh:
t.Fatalf("failed to read notification: %v", err)
case msg, ok := <-received:
if !ok {
t.Fatal("notification stream closed before expected message arrived")
}
if msg["method"] != ChannelNotificationMethod {
continue
}
params, ok := msg["params"].(map[string]any)
if !ok {
t.Fatalf("expected params object, got %T", msg["params"])
}
if params["channel"] != ChannelProcessStart {
continue
}
payload, ok := params["data"].(map[string]any)
if !ok {
t.Fatalf("expected data object, got %T", params["data"])
}
if payload["id"] != "proc-1" || payload["command"] != "go" {
t.Fatalf("unexpected payload: %#v", payload)
}
if payload["dir"] != "/workspace" {
t.Fatalf("expected dir /workspace, got %#v", payload["dir"])
}
if payload["pid"] != float64(1234) {
t.Fatalf("expected pid 1234, got %#v", payload["pid"])
}
if payload["args"] == nil {
t.Fatalf("expected args in payload, got %#v", payload)
}
return
case <-deadline.C:
t.Fatal("timed out waiting for process start notification")
}
}
}
func TestHandleIPCEvents_Good_ForwardsProcessOutput(t *testing.T) {
svc, err := New(Options{})
if err != nil {
t.Fatalf("New() failed: %v", err)
}
serverConn, clientConn := net.Pipe()
defer clientConn.Close()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
session, err := svc.server.Connect(ctx, &connTransport{conn: serverConn}, nil)
if err != nil {
t.Fatalf("Connect() failed: %v", err)
}
defer session.Close()
clientConn.SetDeadline(time.Now().Add(5 * time.Second))
scanner := bufio.NewScanner(clientConn)
scanner.Buffer(make([]byte, 64*1024), 10*1024*1024)
received := make(chan map[string]any, 8)
errCh := make(chan error, 1)
go func() {
for scanner.Scan() {
var msg map[string]any
if err := json.Unmarshal(scanner.Bytes(), &msg); err != nil {
errCh <- err
return
}
received <- msg
}
if err := scanner.Err(); err != nil {
errCh <- err
return
}
close(received)
}()
result := svc.HandleIPCEvents(nil, process.ActionProcessOutput{
ID: "proc-1",
Line: "hello world",
Stream: process.StreamStdout,
})
if !result.OK {
t.Fatalf("HandleIPCEvents() returned non-OK result: %#v", result.Value)
}
deadline := time.NewTimer(5 * time.Second)
defer deadline.Stop()
for {
select {
case err := <-errCh:
t.Fatalf("failed to read notification: %v", err)
case msg, ok := <-received:
if !ok {
t.Fatal("notification stream closed before expected message arrived")
}
if msg["method"] != ChannelNotificationMethod {
continue
}
params, ok := msg["params"].(map[string]any)
if !ok {
t.Fatalf("expected params object, got %T", msg["params"])
}
if params["channel"] != ChannelProcessOutput {
continue
}
payload, ok := params["data"].(map[string]any)
if !ok {
t.Fatalf("expected data object, got %T", params["data"])
}
if payload["id"] != "proc-1" || payload["line"] != "hello world" || payload["stream"] != string(process.StreamStdout) {
t.Fatalf("unexpected payload: %#v", payload)
}
return
case <-deadline.C:
t.Fatal("timed out waiting for process output notification")
}
}
}
func TestHandleIPCEvents_Good_ForwardsTestResult(t *testing.T) {
svc, err := New(Options{})
if err != nil {
t.Fatalf("New() failed: %v", err)
}
serverConn, clientConn := net.Pipe()
defer clientConn.Close()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
session, err := svc.server.Connect(ctx, &connTransport{conn: serverConn}, nil)
if err != nil {
t.Fatalf("Connect() failed: %v", err)
}
defer session.Close()
svc.recordProcessRuntime("proc-test", processRuntime{
Command: "go",
Args: []string{"test", "./..."},
StartedAt: time.Now().Add(-2 * time.Second),
})
clientConn.SetDeadline(time.Now().Add(5 * time.Second))
scanner := bufio.NewScanner(clientConn)
scanner.Buffer(make([]byte, 64*1024), 10*1024*1024)
received := make(chan map[string]any, 8)
errCh := make(chan error, 1)
go func() {
for scanner.Scan() {
var msg map[string]any
if err := json.Unmarshal(scanner.Bytes(), &msg); err != nil {
errCh <- err
return
}
received <- msg
}
if err := scanner.Err(); err != nil {
errCh <- err
return
}
close(received)
}()
result := svc.HandleIPCEvents(nil, process.ActionProcessExited{
ID: "proc-test",
ExitCode: 0,
Duration: 2 * time.Second,
})
if !result.OK {
t.Fatalf("HandleIPCEvents() returned non-OK result: %#v", result.Value)
}
deadline := time.NewTimer(5 * time.Second)
defer deadline.Stop()
for {
select {
case err := <-errCh:
t.Fatalf("failed to read notification: %v", err)
case msg, ok := <-received:
if !ok {
t.Fatal("notification stream closed before expected message arrived")
}
if msg["method"] != ChannelNotificationMethod {
continue
}
params, ok := msg["params"].(map[string]any)
if !ok {
t.Fatalf("expected params object, got %T", msg["params"])
}
if params["channel"] != ChannelTestResult {
continue
}
payload, ok := params["data"].(map[string]any)
if !ok {
t.Fatalf("expected data object, got %T", params["data"])
}
if payload["id"] != "proc-test" || payload["command"] != "go" {
t.Fatalf("unexpected payload: %#v", payload)
}
if payload["dir"] != nil {
t.Fatalf("expected dir to be absent when not recorded, got %#v", payload["dir"])
}
if payload["status"] != "passed" || payload["passed"] != true {
t.Fatalf("expected passed test result, got %#v", payload)
}
return
case <-deadline.C:
t.Fatal("timed out waiting for test result notification")
}
}
}


@@ -4,11 +4,10 @@ package mcp
import (
"context"
-"encoding/json"
-"iter"
"reflect"
-"strings"
+"time"
core "dappco.re/go/core"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
@@ -22,6 +21,38 @@ import (
// }
type RESTHandler func(ctx context.Context, body []byte) (any, error)
// errInvalidRESTInput marks malformed JSON bodies for the REST bridge.
var errInvalidRESTInput = &restInputError{}
// restInputError preserves invalid-REST-input identity without stdlib
// error constructors so bridge.go can keep using errors.Is.
type restInputError struct {
cause error
}
func (e *restInputError) Error() string {
if e == nil || e.cause == nil {
return "invalid REST input"
}
return "invalid REST input: " + e.cause.Error()
}
func (e *restInputError) Unwrap() error {
if e == nil {
return nil
}
return e.cause
}
func (e *restInputError) Is(target error) bool {
_, ok := target.(*restInputError)
return ok
}
func invalidRESTInputError(cause error) error {
return &restInputError{cause: cause}
}
// ToolRecord captures metadata about a registered MCP tool.
//
// for _, rec := range svc.Tools() {
@@ -36,18 +67,63 @@ type ToolRecord struct {
RESTHandler RESTHandler // REST-callable handler created at registration time
}
-// addToolRecorded registers a tool with the MCP server AND records its metadata.
+// AddToolRecorded registers a tool with the MCP server and records its metadata.
// This is a generic function that captures the In/Out types for schema extraction.
// It also creates a RESTHandler closure that can unmarshal JSON to the correct
// input type and call the handler directly, enabling the MCP-to-REST bridge.
-func addToolRecorded[In, Out any](s *Service, server *mcp.Server, group string, t *mcp.Tool, h mcp.ToolHandlerFor[In, Out]) {
-mcp.AddTool(server, t, h)
+//
+// svc, _ := mcp.New(mcp.Options{})
// mcp.AddToolRecorded(svc, svc.Server(), "files", &mcp.Tool{Name: "file_read"},
// func(context.Context, *mcp.CallToolRequest, ReadFileInput) (*mcp.CallToolResult, ReadFileOutput, error) {
// return nil, ReadFileOutput{Path: "src/main.go"}, nil
// })
func AddToolRecorded[In, Out any](s *Service, server *mcp.Server, group string, t *mcp.Tool, h mcp.ToolHandlerFor[In, Out]) {
// Set inputSchema from struct reflection if not already set.
// Use server.AddTool (non-generic) to avoid auto-generated outputSchema.
// The go-sdk's generic mcp.AddTool generates outputSchema from the Out type,
// but Claude Code's protocol (2025-03-26) doesn't support outputSchema.
// Removing it reduces tools/list from 214KB to ~74KB.
if t.InputSchema == nil {
t.InputSchema = structSchema(new(In))
if t.InputSchema == nil {
t.InputSchema = map[string]any{"type": "object"}
}
}
// Wrap the typed handler into a generic ToolHandler.
wrapped := func(ctx context.Context, req *mcp.CallToolRequest) (*mcp.CallToolResult, error) {
var input In
if req != nil && len(req.Params.Arguments) > 0 {
if r := core.JSONUnmarshal(req.Params.Arguments, &input); !r.OK {
if err, ok := r.Value.(error); ok {
return nil, err
}
}
}
if err := s.authorizeToolAccess(ctx, req, t.Name, input); err != nil {
return nil, err
}
result, output, err := h(ctx, req, input)
if err != nil {
return nil, err
}
if result != nil {
return result, nil
}
data := core.JSONMarshalString(output)
return &mcp.CallToolResult{
Content: []mcp.Content{&mcp.TextContent{Text: data}},
}, nil
}
server.AddTool(t, wrapped)
restHandler := func(ctx context.Context, body []byte) (any, error) {
var input In
if len(body) > 0 {
-if err := json.Unmarshal(body, &input); err != nil {
-return nil, err
+if r := core.JSONUnmarshal(body, &input); !r.OK {
+if err, ok := r.Value.(error); ok {
return nil, invalidRESTInputError(err)
}
return nil, invalidRESTInputError(nil)
}
}
// nil: REST callers have no MCP request context.
@@ -66,6 +142,10 @@ func addToolRecorded[In, Out any](s *Service, server *mcp.Server, group string,
})
}
func addToolRecorded[In, Out any](s *Service, server *mcp.Server, group string, t *mcp.Tool, h mcp.ToolHandlerFor[In, Out]) {
AddToolRecorded(s, server, group, t, h)
}
// structSchema builds a simple JSON Schema from a struct's json tags via reflection.
// Returns nil for non-struct types or empty structs.
func structSchema(v any) map[string]any {
@@ -79,62 +159,12 @@ func structSchema(v any) map[string]any {
if t.Kind() != reflect.Struct {
return nil
}
-if t.NumField() == 0 {
-return map[string]any{"type": "object", "properties": map[string]any{}}
-}
-properties := make(map[string]any)
-required := make([]string, 0)
-for f := range t.Fields() {
-f := f
-if !f.IsExported() {
-continue
-}
-jsonTag := f.Tag.Get("json")
-if jsonTag == "-" {
-continue
-}
-name := f.Name
-isOptional := false
-if jsonTag != "" {
-parts := splitTag(jsonTag)
-name = parts[0]
-for _, p := range parts[1:] {
-if p == "omitempty" {
-isOptional = true
-}
-}
-}
-prop := map[string]any{
-"type": goTypeToJSONType(f.Type),
-}
-properties[name] = prop
-if !isOptional {
-required = append(required, name)
-}
-}
-schema := map[string]any{
-"type": "object",
-"properties": properties,
-}
-if len(required) > 0 {
-schema["required"] = required
-}
-return schema
+return schemaForType(t, map[reflect.Type]bool{})
}
// splitTag splits a struct tag value by commas.
func splitTag(tag string) []string {
-return strings.Split(tag, ",")
+return core.Split(tag, ",")
}
-// splitTagSeq returns an iterator over the tag parts.
-func splitTagSeq(tag string) iter.Seq[string] {
-return strings.SplitSeq(tag, ",")
-}
// goTypeToJSONType maps Go types to JSON Schema types.
@@ -157,3 +187,120 @@ func goTypeToJSONType(t reflect.Type) string {
return "string"
}
}
func schemaForType(t reflect.Type, seen map[reflect.Type]bool) map[string]any {
if t == nil {
return nil
}
for t.Kind() == reflect.Pointer {
t = t.Elem()
if t == nil {
return nil
}
}
if isTimeType(t) {
return map[string]any{
"type": "string",
"format": "date-time",
}
}
switch t.Kind() {
case reflect.Interface:
return map[string]any{}
case reflect.Struct:
if seen[t] {
return map[string]any{"type": "object"}
}
seen[t] = true
properties := make(map[string]any)
required := make([]string, 0, t.NumField())
for f := range t.Fields() {
f := f
if !f.IsExported() {
continue
}
jsonTag := f.Tag.Get("json")
if jsonTag == "-" {
continue
}
name := f.Name
isOptional := false
if jsonTag != "" {
parts := splitTag(jsonTag)
name = parts[0]
for _, p := range parts[1:] {
if p == "omitempty" {
isOptional = true
}
}
}
prop := schemaForType(f.Type, cloneSeenSet(seen))
if prop == nil {
prop = map[string]any{"type": goTypeToJSONType(f.Type)}
}
properties[name] = prop
if !isOptional {
required = append(required, name)
}
}
schema := map[string]any{
"type": "object",
"properties": properties,
}
if len(required) > 0 {
schema["required"] = required
}
return schema
case reflect.Slice, reflect.Array:
schema := map[string]any{
"type": "array",
"items": schemaForType(t.Elem(), cloneSeenSet(seen)),
}
return schema
case reflect.Map:
schema := map[string]any{
"type": "object",
}
if t.Key().Kind() == reflect.String {
if valueSchema := schemaForType(t.Elem(), cloneSeenSet(seen)); valueSchema != nil {
schema["additionalProperties"] = valueSchema
}
}
return schema
default:
if typeName := goTypeToJSONType(t); typeName != "" {
return map[string]any{"type": typeName}
}
}
return nil
}
func cloneSeenSet(seen map[reflect.Type]bool) map[reflect.Type]bool {
if len(seen) == 0 {
return map[reflect.Type]bool{}
}
clone := make(map[reflect.Type]bool, len(seen))
for t := range seen {
clone[t] = true
}
return clone
}
func isTimeType(t reflect.Type) bool {
return t == reflect.TypeOf(time.Time{})
}


@@ -3,7 +3,11 @@
package mcp
import (
"context"
"errors"
"testing"
"dappco.re/go/process"
)
func TestToolRegistry_Good_RecordsTools(t *testing.T) {
@@ -67,9 +71,19 @@ func TestToolRegistry_Good_ToolCount(t *testing.T) {
}
tools := svc.Tools()
-// Built-in tools: file_read, file_write, file_delete, file_rename,
-// file_exists, file_edit, dir_list, dir_create, lang_detect, lang_list
-const expectedCount = 10
+// Built-in tools (no ProcessService / WSHub / Subsystems):
+// files (8): file_read, file_write, file_delete, file_rename,
+// file_exists, file_edit, dir_list, dir_create
// language (2): lang_detect, lang_list
// metrics (2): metrics_record, metrics_query
// rag (6): rag_query, rag_search, rag_ingest, rag_index,
// rag_retrieve, rag_collections
// webview (12): webview_connect, webview_disconnect, webview_navigate,
// webview_click, webview_type, webview_query,
// webview_console, webview_eval, webview_screenshot,
// webview_wait, webview_render, webview_update
// ws (3): ws_connect, ws_send, ws_close
const expectedCount = 33
if len(tools) != expectedCount {
t.Errorf("expected %d tools, got %d", expectedCount, len(tools))
for _, tr := range tools {
@@ -86,6 +100,9 @@ func TestToolRegistry_Good_GroupAssignment(t *testing.T) {
fileTools := []string{"file_read", "file_write", "file_delete", "file_rename", "file_exists", "file_edit", "dir_list", "dir_create"}
langTools := []string{"lang_detect", "lang_list"}
metricsTools := []string{"metrics_record", "metrics_query"}
ragTools := []string{"rag_query", "rag_search", "rag_ingest", "rag_index", "rag_retrieve", "rag_collections"}
webviewTools := []string{"webview_connect", "webview_disconnect", "webview_navigate", "webview_click", "webview_type", "webview_query", "webview_console", "webview_eval", "webview_screenshot", "webview_wait", "webview_render", "webview_update"}
byName := make(map[string]ToolRecord)
for _, tr := range svc.Tools() {
@@ -113,6 +130,51 @@
t.Errorf("tool %s: expected group 'language', got %q", name, tr.Group)
}
}
for _, name := range metricsTools {
tr, ok := byName[name]
if !ok {
t.Errorf("tool %s not found in registry", name)
continue
}
if tr.Group != "metrics" {
t.Errorf("tool %s: expected group 'metrics', got %q", name, tr.Group)
}
}
for _, name := range ragTools {
tr, ok := byName[name]
if !ok {
t.Errorf("tool %s not found in registry", name)
continue
}
if tr.Group != "rag" {
t.Errorf("tool %s: expected group 'rag', got %q", name, tr.Group)
}
}
for _, name := range webviewTools {
tr, ok := byName[name]
if !ok {
t.Errorf("tool %s not found in registry", name)
continue
}
if tr.Group != "webview" {
t.Errorf("tool %s: expected group 'webview', got %q", name, tr.Group)
}
}
wsClientTools := []string{"ws_connect", "ws_send", "ws_close"}
for _, name := range wsClientTools {
tr, ok := byName[name]
if !ok {
t.Errorf("tool %s not found in registry", name)
continue
}
if tr.Group != "ws" {
t.Errorf("tool %s: expected group 'ws', got %q", name, tr.Group)
}
}
}
func TestToolRegistry_Good_ToolRecordFields(t *testing.T) {
@@ -148,3 +210,93 @@ func TestToolRegistry_Good_ToolRecordFields(t *testing.T) {
t.Error("expected non-nil OutputSchema")
}
}
func TestToolRegistry_Good_TimeSchemas(t *testing.T) {
svc, err := New(Options{
WorkspaceRoot: t.TempDir(),
ProcessService: &process.Service{},
})
if err != nil {
t.Fatal(err)
}
byName := make(map[string]ToolRecord)
for _, tr := range svc.Tools() {
byName[tr.Name] = tr
}
metrics, ok := byName["metrics_record"]
if !ok {
t.Fatal("metrics_record not found in registry")
}
inputProps, ok := metrics.InputSchema["properties"].(map[string]any)
if !ok {
t.Fatal("expected metrics_record input properties map")
}
dataSchema, ok := inputProps["data"].(map[string]any)
if !ok {
t.Fatal("expected data schema for metrics_record input")
}
if got := dataSchema["type"]; got != "object" {
t.Fatalf("expected metrics_record data type object, got %#v", got)
}
props, ok := metrics.OutputSchema["properties"].(map[string]any)
if !ok {
t.Fatal("expected metrics_record output properties map")
}
timestamp, ok := props["timestamp"].(map[string]any)
if !ok {
t.Fatal("expected timestamp schema for metrics_record output")
}
if got := timestamp["type"]; got != "string" {
t.Fatalf("expected metrics_record timestamp type string, got %#v", got)
}
if got := timestamp["format"]; got != "date-time" {
t.Fatalf("expected metrics_record timestamp format date-time, got %#v", got)
}
processStart, ok := byName["process_start"]
if !ok {
t.Fatal("process_start not found in registry")
}
props, ok = processStart.OutputSchema["properties"].(map[string]any)
if !ok {
t.Fatal("expected process_start output properties map")
}
startedAt, ok := props["startedAt"].(map[string]any)
if !ok {
t.Fatal("expected startedAt schema for process_start output")
}
if got := startedAt["type"]; got != "string" {
t.Fatalf("expected process_start startedAt type string, got %#v", got)
}
if got := startedAt["format"]; got != "date-time" {
t.Fatalf("expected process_start startedAt format date-time, got %#v", got)
}
}
func TestToolRegistry_Bad_InvalidRESTInputIsClassified(t *testing.T) {
svc, err := New(Options{WorkspaceRoot: t.TempDir()})
if err != nil {
t.Fatal(err)
}
var record ToolRecord
for _, tr := range svc.Tools() {
if tr.Name == "file_read" {
record = tr
break
}
}
if record.Name == "" {
t.Fatal("file_read not found in registry")
}
_, err = record.RESTHandler(context.Background(), []byte("{bad json"))
if err == nil {
t.Fatal("expected REST handler error for malformed JSON")
}
if !errors.Is(err, errInvalidRESTInput) {
t.Fatalf("expected invalid REST input error, got %v", err)
}
}


@@ -4,8 +4,6 @@ package mcp
import (
"context"
- "github.com/modelcontextprotocol/go-sdk/mcp"
)
// Subsystem registers additional MCP tools at startup.
@@ -13,10 +11,10 @@ import (
//
// type BrainSubsystem struct{}
// func (b *BrainSubsystem) Name() string { return "brain" }
- // func (b *BrainSubsystem) RegisterTools(server *mcp.Server) { ... }
+ // func (b *BrainSubsystem) RegisterTools(svc *Service) { ... }
type Subsystem interface {
Name() string
- RegisterTools(server *mcp.Server)
+ RegisterTools(svc *Service)
}
// SubsystemWithShutdown extends Subsystem with graceful cleanup.
@@ -38,6 +36,21 @@ type Notifier interface {
ChannelSend(ctx context.Context, channel string, data any)
}
var _ Notifier = (*Service)(nil)
// ChannelPush is a Core IPC message that any service can send to push
// a channel event to connected Claude Code sessions.
// The MCP service catches this in HandleIPCEvents and calls ChannelSend.
//
// c.ACTION(mcp.ChannelPush{
// Channel: "agent.status",
// Data: map[string]any{"repo": "go-io"},
// })
type ChannelPush struct {
Channel string
Data any
}
// SubsystemWithNotifier extends Subsystem for those that emit channel events.
// SetNotifier is called after New() before any tool calls.
//
@@ -48,3 +61,14 @@ type SubsystemWithNotifier interface {
Subsystem
SetNotifier(n Notifier)
}
// SubsystemWithChannelCallback extends Subsystem for implementations that
// expose an OnChannel callback instead of a Notifier interface.
//
// brain.OnChannel(func(ctx context.Context, channel string, data any) {
// mcpService.ChannelSend(ctx, channel, data)
// })
type SubsystemWithChannelCallback interface {
Subsystem
OnChannel(func(ctx context.Context, channel string, data any))
}


@@ -3,8 +3,6 @@ package mcp
import (
"context"
"testing"
- "github.com/modelcontextprotocol/go-sdk/mcp"
)
// stubSubsystem is a minimal Subsystem for testing.
@@ -15,7 +13,23 @@ type stubSubsystem struct {
func (s *stubSubsystem) Name() string { return s.name }
- func (s *stubSubsystem) RegisterTools(server *mcp.Server) {
+ func (s *stubSubsystem) RegisterTools(svc *Service) {
s.toolsRegistered = true
}
// notifierSubsystem verifies notifier wiring happens before tool registration.
type notifierSubsystem struct {
stubSubsystem
notifierSet bool
sawNotifierAtRegistration bool
}
func (s *notifierSubsystem) SetNotifier(n Notifier) {
s.notifierSet = n != nil
}
func (s *notifierSubsystem) RegisterTools(svc *Service) {
s.sawNotifierAtRegistration = s.notifierSet
s.toolsRegistered = true
}
@@ -72,6 +86,41 @@ func TestSubsystem_Good_MultipleSubsystems(t *testing.T) {
}
}
func TestSubsystem_Good_NilEntriesIgnoredAndSnapshots(t *testing.T) {
sub := &stubSubsystem{name: "snap-sub"}
svc, err := New(Options{Subsystems: []Subsystem{nil, sub}})
if err != nil {
t.Fatalf("New() failed: %v", err)
}
subs := svc.Subsystems()
if len(subs) != 1 {
t.Fatalf("expected 1 subsystem after filtering nil entries, got %d", len(subs))
}
if subs[0].Name() != "snap-sub" {
t.Fatalf("expected snap-sub, got %q", subs[0].Name())
}
subs[0] = nil
if svc.Subsystems()[0] == nil {
t.Fatal("expected Subsystems() to return a snapshot, not the live slice")
}
}
func TestSubsystem_Good_NotifierSetBeforeRegistration(t *testing.T) {
sub := &notifierSubsystem{stubSubsystem: stubSubsystem{name: "notifier-sub"}}
_, err := New(Options{Subsystems: []Subsystem{sub}})
if err != nil {
t.Fatalf("New() failed: %v", err)
}
if !sub.notifierSet {
t.Fatal("expected notifier to be set")
}
if !sub.sawNotifierAtRegistration {
t.Fatal("expected notifier to be available before RegisterTools ran")
}
}
func TestSubsystemShutdown_Good(t *testing.T) {
sub := &shutdownSubsystem{stubSubsystem: stubSubsystem{name: "shutdown-sub"}}
svc, err := New(Options{Subsystems: []Subsystem{sub}})


@@ -1,14 +1,15 @@
+ // SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"context"
- "fmt"
"strconv"
- "strings"
"time"
- "forge.lthn.ai/core/go-ai/ai"
- "forge.lthn.ai/core/go-log"
+ core "dappco.re/go/core"
+ "dappco.re/go/ai/ai"
+ "dappco.re/go/log"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
@@ -79,12 +80,12 @@ type MetricEventBrief struct {
// registerMetricsTools adds metrics tools to the MCP server.
func (s *Service) registerMetricsTools(server *mcp.Server) {
- mcp.AddTool(server, &mcp.Tool{
+ addToolRecorded(s, server, "metrics", &mcp.Tool{
Name: "metrics_record",
Description: "Record a metrics event for AI/security tracking. Events are stored in daily JSONL files.",
}, s.metricsRecord)
- mcp.AddTool(server, &mcp.Tool{
+ addToolRecorded(s, server, "metrics", &mcp.Tool{
Name: "metrics_query",
Description: "Query metrics events and get aggregated statistics by type, repo, and agent.",
}, s.metricsQuery)
@@ -199,7 +200,7 @@ func parseDuration(s string) (time.Duration, error) {
return 0, log.E("parseDuration", "duration cannot be empty", nil)
}
- s = strings.TrimSpace(s)
+ s = core.Trim(s)
if len(s) < 2 {
return 0, log.E("parseDuration", "invalid duration format: "+s, nil)
}
@@ -214,7 +215,7 @@ func parseDuration(s string) (time.Duration, error) {
}
if num <= 0 {
- return 0, log.E("parseDuration", fmt.Sprintf("duration must be positive: %d", num), nil)
+ return 0, log.E("parseDuration", core.Sprintf("duration must be positive: %d", num), nil)
}
switch unit {


@@ -1,11 +1,13 @@
+ // SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"context"
"time"
- "forge.lthn.ai/core/go-log"
- "forge.lthn.ai/core/go-process"
+ "dappco.re/go/log"
+ "dappco.re/go/process"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
@@ -27,6 +29,32 @@ type ProcessStartInput struct {
Env []string `json:"env,omitempty"` // e.g. ["CGO_ENABLED=0"]
}
// ProcessRunInput contains parameters for running a command to completion
// and returning its captured output.
//
// input := ProcessRunInput{
// Command: "go",
// Args: []string{"test", "./..."},
// Dir: "/home/user/project",
// Env: []string{"CGO_ENABLED=0"},
// }
type ProcessRunInput struct {
Command string `json:"command"` // e.g. "go"
Args []string `json:"args,omitempty"` // e.g. ["test", "./..."]
Dir string `json:"dir,omitempty"` // e.g. "/home/user/project"
Env []string `json:"env,omitempty"` // e.g. ["CGO_ENABLED=0"]
}
// ProcessRunOutput contains the result of running a process to completion.
//
// // out.ID == "proc-abc123", out.ExitCode == 0, out.Output == "PASS\n..."
type ProcessRunOutput struct {
ID string `json:"id"` // e.g. "proc-abc123"
ExitCode int `json:"exitCode"` // 0 on success
Output string `json:"output"` // combined stdout/stderr
Command string `json:"command"` // e.g. "go"
}
// ProcessStartOutput contains the result of starting a process.
//
// // out.ID == "proc-abc123", out.PID == 54321, out.Command == "go"
@@ -139,32 +167,37 @@ func (s *Service) registerProcessTools(server *mcp.Server) bool {
return false
}
- mcp.AddTool(server, &mcp.Tool{
+ addToolRecorded(s, server, "process", &mcp.Tool{
Name: "process_start",
Description: "Start a new external process. Returns process ID for tracking.",
}, s.processStart)
+ addToolRecorded(s, server, "process", &mcp.Tool{
+ Name: "process_run",
+ Description: "Run a command to completion and return the captured output. Blocks until the process exits.",
+ }, s.processRun)
- mcp.AddTool(server, &mcp.Tool{
+ addToolRecorded(s, server, "process", &mcp.Tool{
Name: "process_stop",
Description: "Gracefully stop a running process by ID.",
}, s.processStop)
- mcp.AddTool(server, &mcp.Tool{
+ addToolRecorded(s, server, "process", &mcp.Tool{
Name: "process_kill",
Description: "Force kill a process by ID. Use when process_stop doesn't work.",
}, s.processKill)
- mcp.AddTool(server, &mcp.Tool{
+ addToolRecorded(s, server, "process", &mcp.Tool{
Name: "process_list",
Description: "List all managed processes. Use running_only=true for only active processes.",
}, s.processList)
- mcp.AddTool(server, &mcp.Tool{
+ addToolRecorded(s, server, "process", &mcp.Tool{
Name: "process_output",
Description: "Get the captured output of a process by ID.",
}, s.processOutput)
- mcp.AddTool(server, &mcp.Tool{
+ addToolRecorded(s, server, "process", &mcp.Tool{
Name: "process_input",
Description: "Send input to a running process stdin.",
}, s.processInput)
@@ -174,6 +207,10 @@ func (s *Service) registerProcessTools(server *mcp.Server) bool {
// processStart handles the process_start tool call.
func (s *Service) processStart(ctx context.Context, req *mcp.CallToolRequest, input ProcessStartInput) (*mcp.CallToolResult, ProcessStartOutput, error) {
if s.processService == nil {
return nil, ProcessStartOutput{}, log.E("processStart", "process service unavailable", nil)
}
s.logger.Security("MCP tool execution", "tool", "process_start", "command", input.Command, "args", input.Args, "dir", input.Dir, "user", log.Username())
if input.Command == "" {
@@ -183,7 +220,7 @@ func (s *Service) processStart(ctx context.Context, req *mcp.CallToolRequest, in
opts := process.RunOptions{
Command: input.Command,
Args: input.Args,
- Dir: input.Dir,
+ Dir: s.resolveWorkspacePath(input.Dir),
Env: input.Env,
}
@@ -201,14 +238,91 @@ func (s *Service) processStart(ctx context.Context, req *mcp.CallToolRequest, in
Args: proc.Args,
StartedAt: proc.StartedAt,
}
- s.ChannelSend(ctx, "process.start", map[string]any{
- "id": output.ID, "pid": output.PID, "command": output.Command,
- })
+ s.recordProcessRuntime(output.ID, processRuntime{
+ Command: output.Command,
+ Args: output.Args,
+ Dir: info.Dir,
+ StartedAt: output.StartedAt,
+ })
+ s.ChannelSend(ctx, ChannelProcessStart, map[string]any{
+ "id": output.ID,
+ "pid": output.PID,
+ "command": output.Command,
+ "args": output.Args,
+ "dir": info.Dir,
+ "startedAt": output.StartedAt,
+ })
return nil, output, nil
}
// processRun handles the process_run tool call.
// Executes the command to completion and returns the captured output.
func (s *Service) processRun(ctx context.Context, req *mcp.CallToolRequest, input ProcessRunInput) (*mcp.CallToolResult, ProcessRunOutput, error) {
if s.processService == nil {
return nil, ProcessRunOutput{}, log.E("processRun", "process service unavailable", nil)
}
progress := NewProgressNotifier(ctx, req)
s.logger.Security("MCP tool execution", "tool", "process_run", "command", input.Command, "args", input.Args, "dir", input.Dir, "user", log.Username())
if input.Command == "" {
return nil, ProcessRunOutput{}, log.E("processRun", "command cannot be empty", nil)
}
opts := process.RunOptions{
Command: input.Command,
Args: input.Args,
Dir: s.resolveWorkspacePath(input.Dir),
Env: input.Env,
}
_ = progress.Send(0, 2, "starting process")
proc, err := s.processService.StartWithOptions(ctx, opts)
if err != nil {
log.Error("mcp: process run start failed", "command", input.Command, "err", err)
return nil, ProcessRunOutput{}, log.E("processRun", "failed to start process", err)
}
_ = progress.Send(1, 2, "process started")
info := proc.Info()
s.recordProcessRuntime(proc.ID, processRuntime{
Command: proc.Command,
Args: proc.Args,
Dir: info.Dir,
StartedAt: proc.StartedAt,
})
s.ChannelSend(ctx, ChannelProcessStart, map[string]any{
"id": proc.ID,
"pid": info.PID,
"command": proc.Command,
"args": proc.Args,
"dir": info.Dir,
"startedAt": proc.StartedAt,
})
// Wait for completion (context-aware).
select {
case <-ctx.Done():
_ = progress.Send(2, 2, "process cancelled")
return nil, ProcessRunOutput{}, log.E("processRun", "cancelled", ctx.Err())
case <-proc.Done():
}
_ = progress.Send(2, 2, "process completed")
return nil, ProcessRunOutput{
ID: proc.ID,
ExitCode: proc.ExitCode,
Output: proc.Output(),
Command: proc.Command,
}, nil
}
// processStop handles the process_stop tool call.
func (s *Service) processStop(ctx context.Context, req *mcp.CallToolRequest, input ProcessStopInput) (*mcp.CallToolResult, ProcessStopOutput, error) {
if s.processService == nil {
return nil, ProcessStopOutput{}, log.E("processStop", "process service unavailable", nil)
}
s.logger.Security("MCP tool execution", "tool", "process_stop", "id", input.ID, "user", log.Username())
if input.ID == "" {
@@ -221,14 +335,23 @@ func (s *Service) processStop(ctx context.Context, req *mcp.CallToolRequest, inp
return nil, ProcessStopOutput{}, log.E("processStop", "process not found", err)
}
- // For graceful stop, we use Kill() which sends SIGKILL
- // A more sophisticated implementation could use SIGTERM first
- if err := proc.Kill(); err != nil {
- log.Error("mcp: process stop kill failed", "id", input.ID, "err", err)
+ // Use the process service's graceful shutdown path first so callers get
+ // a real stop signal before we fall back to a hard kill internally.
+ if err := proc.Shutdown(); err != nil {
+ log.Error("mcp: process stop failed", "id", input.ID, "err", err)
return nil, ProcessStopOutput{}, log.E("processStop", "failed to stop process", err)
}
- s.ChannelSend(ctx, "process.exit", map[string]any{"id": input.ID, "signal": "stop"})
+ info := proc.Info()
+ s.ChannelSend(ctx, ChannelProcessExit, map[string]any{
+ "id": input.ID,
+ "signal": "stop",
+ "command": info.Command,
+ "args": info.Args,
+ "dir": info.Dir,
+ "startedAt": info.StartedAt,
+ })
+ s.emitTestResult(ctx, input.ID, 0, 0, "stop", "")
return nil, ProcessStopOutput{
ID: input.ID,
Success: true,
@@ -238,18 +361,37 @@ func (s *Service) processStop(ctx context.Context, req *mcp.CallToolRequest, inp
// processKill handles the process_kill tool call.
func (s *Service) processKill(ctx context.Context, req *mcp.CallToolRequest, input ProcessKillInput) (*mcp.CallToolResult, ProcessKillOutput, error) {
if s.processService == nil {
return nil, ProcessKillOutput{}, log.E("processKill", "process service unavailable", nil)
}
s.logger.Security("MCP tool execution", "tool", "process_kill", "id", input.ID, "user", log.Username())
if input.ID == "" {
return nil, ProcessKillOutput{}, errIDEmpty
}
proc, err := s.processService.Get(input.ID)
if err != nil {
log.Error("mcp: process kill failed", "id", input.ID, "err", err)
return nil, ProcessKillOutput{}, log.E("processKill", "process not found", err)
}
if err := s.processService.Kill(input.ID); err != nil {
log.Error("mcp: process kill failed", "id", input.ID, "err", err)
return nil, ProcessKillOutput{}, log.E("processKill", "failed to kill process", err)
}
- s.ChannelSend(ctx, "process.exit", map[string]any{"id": input.ID, "signal": "kill"})
+ info := proc.Info()
s.ChannelSend(ctx, ChannelProcessExit, map[string]any{
"id": input.ID,
"signal": "kill",
"command": info.Command,
"args": info.Args,
"dir": info.Dir,
"startedAt": info.StartedAt,
})
s.emitTestResult(ctx, input.ID, 0, 0, "kill", "")
return nil, ProcessKillOutput{
ID: input.ID,
Success: true,
@@ -259,6 +401,10 @@ func (s *Service) processKill(ctx context.Context, req *mcp.CallToolRequest, inp
// processList handles the process_list tool call.
func (s *Service) processList(ctx context.Context, req *mcp.CallToolRequest, input ProcessListInput) (*mcp.CallToolResult, ProcessListOutput, error) {
if s.processService == nil {
return nil, ProcessListOutput{}, log.E("processList", "process service unavailable", nil)
}
s.logger.Info("MCP tool execution", "tool", "process_list", "running_only", input.RunningOnly, "user", log.Username())
var procs []*process.Process
@@ -292,6 +438,10 @@ func (s *Service) processList(ctx context.Context, req *mcp.CallToolRequest, inp
// processOutput handles the process_output tool call.
func (s *Service) processOutput(ctx context.Context, req *mcp.CallToolRequest, input ProcessOutputInput) (*mcp.CallToolResult, ProcessOutputOutput, error) {
if s.processService == nil {
return nil, ProcessOutputOutput{}, log.E("processOutput", "process service unavailable", nil)
}
s.logger.Info("MCP tool execution", "tool", "process_output", "id", input.ID, "user", log.Username())
if input.ID == "" {
@@ -312,6 +462,10 @@ func (s *Service) processOutput(ctx context.Context, req *mcp.CallToolRequest, i
// processInput handles the process_input tool call.
func (s *Service) processInput(ctx context.Context, req *mcp.CallToolRequest, input ProcessInputInput) (*mcp.CallToolResult, ProcessInputOutput, error) {
if s.processService == nil {
return nil, ProcessInputOutput{}, log.E("processInput", "process service unavailable", nil)
}
s.logger.Security("MCP tool execution", "tool", "process_input", "id", input.ID, "user", log.Username())
if input.ID == "" {


@@ -1,3 +1,5 @@
+ //go:build ci
package mcp
import (
@@ -7,7 +9,7 @@ import (
"time"
"dappco.re/go/core"
- "forge.lthn.ai/core/go-process"
+ "dappco.re/go/process"
)
// newTestProcessService creates a real process.Service backed by a core.Core for CI tests.


@@ -275,7 +275,7 @@ func TestProcessInfo_Good(t *testing.T) {
}
}
- // TestWithProcessService_Good verifies the WithProcessService option.
+ // TestWithProcessService_Good verifies Options{ProcessService: ...}.
func TestWithProcessService_Good(t *testing.T) {
// Note: We can't easily create a real process.Service here without Core,
// so we just verify the option doesn't panic with nil.
@@ -288,3 +288,70 @@ func TestWithProcessService_Good(t *testing.T) {
t.Error("Expected processService to be nil when passed nil")
}
}
// TestRegisterProcessTools_Bad_NilService verifies that tools are not registered when process service is nil.
func TestRegisterProcessTools_Bad_NilService(t *testing.T) {
s, err := New(Options{})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
registered := s.registerProcessTools(s.server)
if registered {
t.Error("Expected registerProcessTools to return false when processService is nil")
}
}
// TestToolsProcess_ProcessRunInput_Good exercises the process_run input DTO shape.
func TestToolsProcess_ProcessRunInput_Good(t *testing.T) {
input := ProcessRunInput{
Command: "echo",
Args: []string{"hello"},
Dir: "/tmp",
Env: []string{"FOO=bar"},
}
if input.Command != "echo" {
t.Errorf("expected command 'echo', got %q", input.Command)
}
if len(input.Args) != 1 || input.Args[0] != "hello" {
t.Errorf("expected args [hello], got %v", input.Args)
}
if input.Dir != "/tmp" {
t.Errorf("expected dir '/tmp', got %q", input.Dir)
}
if len(input.Env) != 1 {
t.Errorf("expected 1 env, got %d", len(input.Env))
}
}
// TestToolsProcess_ProcessRunOutput_Good exercises the process_run output DTO shape.
func TestToolsProcess_ProcessRunOutput_Good(t *testing.T) {
output := ProcessRunOutput{
ID: "proc-1",
ExitCode: 0,
Output: "hello\n",
Command: "echo",
}
if output.ID != "proc-1" {
t.Errorf("expected id 'proc-1', got %q", output.ID)
}
if output.ExitCode != 0 {
t.Errorf("expected exit code 0, got %d", output.ExitCode)
}
if output.Output != "hello\n" {
t.Errorf("expected output 'hello\\n', got %q", output.Output)
}
}
// TestToolsProcess_ProcessRun_Bad rejects calls without a process service.
func TestToolsProcess_ProcessRun_Bad(t *testing.T) {
svc, err := New(Options{})
if err != nil {
t.Fatal(err)
}
_, _, err = svc.processRun(t.Context(), nil, ProcessRunInput{Command: "echo", Args: []string{"hi"}})
if err == nil {
t.Fatal("expected error when process service is unavailable")
}
}


@@ -1,11 +1,13 @@
+ // SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"context"
- "fmt"
- "forge.lthn.ai/core/go-log"
- "forge.lthn.ai/core/go-rag"
+ core "dappco.re/go/core"
+ "dappco.re/go/log"
+ "dappco.re/go/rag"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
@@ -81,6 +83,30 @@ type RAGCollectionsInput struct {
ShowStats bool `json:"show_stats,omitempty"` // true to include point counts and status
}
// RAGRetrieveInput contains parameters for retrieving chunks from a specific
// document source (rather than running a semantic query).
//
// input := RAGRetrieveInput{
// Source: "docs/services.md",
// Collection: "core-docs",
// Limit: 20,
// }
type RAGRetrieveInput struct {
Source string `json:"source"` // e.g. "docs/services.md"
Collection string `json:"collection,omitempty"` // e.g. "core-docs" (default: "hostuk-docs")
Limit int `json:"limit,omitempty"` // e.g. 20 (default: 50)
}
// RAGRetrieveOutput contains document chunks for a specific source.
//
// // len(out.Chunks) == 12, out.Source == "docs/services.md"
type RAGRetrieveOutput struct {
Source string `json:"source"` // e.g. "docs/services.md"
Collection string `json:"collection"` // collection searched
Chunks []RAGQueryResult `json:"chunks"` // chunks for the source, ordered by chunkIndex
Count int `json:"count"` // number of chunks returned
}
// CollectionInfo contains information about a Qdrant collection.
//
// // ci.Name == "core-docs", ci.PointsCount == 1500, ci.Status == "green"
@@ -99,17 +125,34 @@ type RAGCollectionsOutput struct {
// registerRAGTools adds RAG tools to the MCP server.
func (s *Service) registerRAGTools(server *mcp.Server) {
- mcp.AddTool(server, &mcp.Tool{
+ addToolRecorded(s, server, "rag", &mcp.Tool{
Name: "rag_query",
Description: "Query the RAG vector database for relevant documentation. Returns semantically similar content based on the query.",
}, s.ragQuery)
+ // rag_search is the spec-aligned alias for rag_query.
+ addToolRecorded(s, server, "rag", &mcp.Tool{
+ Name: "rag_search",
+ Description: "Semantic search across documents in the RAG vector database. Returns chunks ranked by similarity.",
+ }, s.ragQuery)
- mcp.AddTool(server, &mcp.Tool{
+ addToolRecorded(s, server, "rag", &mcp.Tool{
Name: "rag_ingest",
Description: "Ingest documents into the RAG vector database. Supports both single files and directories.",
}, s.ragIngest)
- mcp.AddTool(server, &mcp.Tool{
+ // rag_index is the spec-aligned alias for rag_ingest.
+ addToolRecorded(s, server, "rag", &mcp.Tool{
+ Name: "rag_index",
+ Description: "Index a document or directory into the RAG vector database.",
+ }, s.ragIngest)
+ addToolRecorded(s, server, "rag", &mcp.Tool{
+ Name: "rag_retrieve",
+ Description: "Retrieve chunks for a specific document source from the RAG vector database.",
+ }, s.ragRetrieve)
+ addToolRecorded(s, server, "rag", &mcp.Tool{
Name: "rag_collections",
Description: "List all available collections in the RAG vector database.",
}, s.ragCollections)
@@ -183,25 +226,26 @@ func (s *Service) ragIngest(ctx context.Context, req *mcp.CallToolRequest, input
log.Error("mcp: rag ingest stat failed", "path", input.Path, "err", err)
return nil, RAGIngestOutput{}, log.E("ragIngest", "failed to access path", err)
}
resolvedPath := s.resolveWorkspacePath(input.Path)
var message string
var chunks int
if info.IsDir() {
// Ingest directory
- err = rag.IngestDirectory(ctx, input.Path, collection, input.Recreate)
+ err = rag.IngestDirectory(ctx, resolvedPath, collection, input.Recreate)
if err != nil {
log.Error("mcp: rag ingest directory failed", "path", input.Path, "collection", collection, "err", err)
return nil, RAGIngestOutput{}, log.E("ragIngest", "failed to ingest directory", err)
}
- message = fmt.Sprintf("Successfully ingested directory %s into collection %s", input.Path, collection)
+ message = core.Sprintf("Successfully ingested directory %s into collection %s", input.Path, collection)
} else {
// Ingest single file
- chunks, err = rag.IngestSingleFile(ctx, input.Path, collection)
+ chunks, err = rag.IngestSingleFile(ctx, resolvedPath, collection)
if err != nil {
log.Error("mcp: rag ingest file failed", "path", input.Path, "collection", collection, "err", err)
return nil, RAGIngestOutput{}, log.E("ragIngest", "failed to ingest file", err)
}
- message = fmt.Sprintf("Successfully ingested file %s (%d chunks) into collection %s", input.Path, chunks, collection)
+ message = core.Sprintf("Successfully ingested file %s (%d chunks) into collection %s", input.Path, chunks, collection)
}
return nil, RAGIngestOutput{ return nil, RAGIngestOutput{
@ -213,6 +257,86 @@ func (s *Service) ragIngest(ctx context.Context, req *mcp.CallToolRequest, input
}, nil }, nil
} }
// ragRetrieve handles the rag_retrieve tool call.
// Returns chunks for a specific source path by querying the collection with
// the source path as the query text and then filtering results down to the
// matching source. This preserves the transport abstraction that the rest of
// the RAG tools use while producing the document-scoped view callers expect.
func (s *Service) ragRetrieve(ctx context.Context, req *mcp.CallToolRequest, input RAGRetrieveInput) (*mcp.CallToolResult, RAGRetrieveOutput, error) {
collection := input.Collection
if collection == "" {
collection = DefaultRAGCollection
}
limit := input.Limit
if limit <= 0 {
limit = 50
}
s.logger.Info("MCP tool execution", "tool", "rag_retrieve", "source", input.Source, "collection", collection, "limit", limit, "user", log.Username())
if input.Source == "" {
return nil, RAGRetrieveOutput{}, log.E("ragRetrieve", "source cannot be empty", nil)
}
// Use the source path as the query text — semantically related chunks
// will rank highly, and we then keep only chunks whose Source matches.
// Over-fetch by an order of magnitude so document-level limits are met
// even when the source appears beyond the top-K of the raw query.
overfetch := limit * 10
if overfetch < 100 {
overfetch = 100
}
results, err := rag.QueryDocs(ctx, input.Source, collection, overfetch)
if err != nil {
log.Error("mcp: rag retrieve query failed", "source", input.Source, "collection", collection, "err", err)
return nil, RAGRetrieveOutput{}, log.E("ragRetrieve", "failed to retrieve chunks", err)
}
chunks := make([]RAGQueryResult, 0, limit)
for _, r := range results {
if r.Source != input.Source {
continue
}
chunks = append(chunks, RAGQueryResult{
Content: r.Text,
Source: r.Source,
Section: r.Section,
Category: r.Category,
ChunkIndex: r.ChunkIndex,
Score: r.Score,
})
if len(chunks) >= limit {
break
}
}
sortChunksByIndex(chunks)
return nil, RAGRetrieveOutput{
Source: input.Source,
Collection: collection,
Chunks: chunks,
Count: len(chunks),
}, nil
}
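The over-fetch-then-filter approach above can be sketched in isolation: query wide, keep only results whose source matches, and cap at the caller's limit while preserving the relevance-ranked order. The `doc` type and `filterBySource` name here are illustrative stand-ins, not the rag package's own API.

```go
package main

import "fmt"

// doc is a stand-in for a RAG query result.
type doc struct {
	Source string
	Text   string
}

// filterBySource keeps at most limit docs whose Source matches, preserving
// the relevance-ranked order of the over-fetched results.
func filterBySource(results []doc, source string, limit int) []doc {
	out := make([]doc, 0, limit)
	for _, r := range results {
		if r.Source != source {
			continue
		}
		out = append(out, r)
		if len(out) >= limit {
			break
		}
	}
	return out
}

func main() {
	results := []doc{
		{Source: "a.md", Text: "one"},
		{Source: "b.md", Text: "other doc"},
		{Source: "a.md", Text: "two"},
		{Source: "a.md", Text: "three"},
	}
	// Over-fetched 4 results; keep at most 2 from a.md.
	fmt.Println(len(filterBySource(results, "a.md", 2))) // 2
}
```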
// sortChunksByIndex sorts chunks in ascending order of chunk index.
// Stable ordering keeps ties by their original position.
func sortChunksByIndex(chunks []RAGQueryResult) {
if len(chunks) <= 1 {
return
}
// Insertion sort keeps the code dependency-free and is fast enough
// for the small result sets rag_retrieve is designed for.
for i := 1; i < len(chunks); i++ {
j := i
for j > 0 && chunks[j-1].ChunkIndex > chunks[j].ChunkIndex {
chunks[j-1], chunks[j] = chunks[j], chunks[j-1]
j--
}
}
}
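The hand-rolled insertion sort above trades a stdlib import for a few lines of code. For reference, the same stable ascending order falls out of `sort.SliceStable`; this sketch uses a local `chunk` type standing in for `RAGQueryResult`.

```go
package main

import (
	"fmt"
	"sort"
)

// chunk stands in for RAGQueryResult; only the index matters for ordering.
type chunk struct{ ChunkIndex int }

// sortChunks orders chunks by ascending ChunkIndex. SliceStable keeps
// ties in their original positions, matching sortChunksByIndex.
func sortChunks(chunks []chunk) {
	sort.SliceStable(chunks, func(i, j int) bool {
		return chunks[i].ChunkIndex < chunks[j].ChunkIndex
	})
}

func main() {
	cs := []chunk{{3}, {1}, {2}}
	sortChunks(cs)
	fmt.Println(cs) // [{1} {2} {3}]
}
```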
// ragCollections handles the rag_collections tool call.
func (s *Service) ragCollections(ctx context.Context, req *mcp.CallToolRequest, input RAGCollectionsInput) (*mcp.CallToolResult, RAGCollectionsOutput, error) {
s.logger.Info("MCP tool execution", "tool", "rag_collections", "show_stats", input.ShowStats, "user", log.Username())


@ -171,3 +171,66 @@ func TestRAGCollectionsInput_ShowStats(t *testing.T) {
t.Error("Expected ShowStats to be true")
}
}
// TestToolsRag_RAGRetrieveInput_Good exercises the rag_retrieve DTO defaults.
func TestToolsRag_RAGRetrieveInput_Good(t *testing.T) {
input := RAGRetrieveInput{
Source: "docs/index.md",
Collection: "core-docs",
Limit: 20,
}
if input.Source != "docs/index.md" {
t.Errorf("expected source docs/index.md, got %q", input.Source)
}
if input.Limit != 20 {
t.Errorf("expected limit 20, got %d", input.Limit)
}
}
// TestToolsRag_RAGRetrieveOutput_Good exercises the rag_retrieve output shape.
func TestToolsRag_RAGRetrieveOutput_Good(t *testing.T) {
output := RAGRetrieveOutput{
Source: "docs/index.md",
Collection: "core-docs",
Chunks: []RAGQueryResult{
{Content: "first", ChunkIndex: 0},
{Content: "second", ChunkIndex: 1},
},
Count: 2,
}
if output.Count != 2 {
t.Fatalf("expected count 2, got %d", output.Count)
}
if output.Chunks[1].ChunkIndex != 1 {
t.Fatalf("expected chunk 1, got %d", output.Chunks[1].ChunkIndex)
}
}
// TestToolsRag_SortChunksByIndex_Good verifies sort orders by chunk index ascending.
func TestToolsRag_SortChunksByIndex_Good(t *testing.T) {
chunks := []RAGQueryResult{
{ChunkIndex: 3},
{ChunkIndex: 1},
{ChunkIndex: 2},
}
sortChunksByIndex(chunks)
for i, want := range []int{1, 2, 3} {
if chunks[i].ChunkIndex != want {
t.Fatalf("index %d: expected chunk %d, got %d", i, want, chunks[i].ChunkIndex)
}
}
}
// TestToolsRag_RagRetrieve_Bad rejects empty source paths.
func TestToolsRag_RagRetrieve_Bad(t *testing.T) {
svc, err := New(Options{WorkspaceRoot: t.TempDir()})
if err != nil {
t.Fatal(err)
}
_, _, err = svc.ragRetrieve(t.Context(), nil, RAGRetrieveInput{})
if err == nil {
t.Fatal("expected error for empty source")
}
}


@ -1,19 +1,25 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
// Note: AX-6 — screenshot normalization needs bytes.NewReader for image.Decode on captured byte slices.
"bytes"
"context"
"encoding/base64"
"image"
"image/jpeg"
_ "image/png"
"time"
core "dappco.re/go/core"
"dappco.re/go/log"
"dappco.re/go/webview"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
// webviewMu protects webviewInstance from concurrent access.
var webviewMu core.Mutex
// webviewInstance holds the current webview connection.
// This is managed by the MCP service.
@ -25,6 +31,20 @@ var (
errSelectorRequired = log.E("webview", "selector is required", nil)
)
// closeWebviewConnection closes and clears the shared browser connection.
func closeWebviewConnection() error {
webviewMu.Lock()
defer webviewMu.Unlock()
if webviewInstance == nil {
return nil
}
err := webviewInstance.Close()
webviewInstance = nil
return err
}
// WebviewConnectInput contains parameters for connecting to Chrome DevTools.
//
// input := WebviewConnectInput{DebugURL: "http://localhost:9222", Timeout: 10}
@ -201,55 +221,67 @@ type WebviewDisconnectOutput struct {
// registerWebviewTools adds webview tools to the MCP server.
func (s *Service) registerWebviewTools(server *mcp.Server) {
addToolRecorded(s, server, "webview", &mcp.Tool{
Name: "webview_connect",
Description: "Connect to Chrome DevTools Protocol. Start Chrome with --remote-debugging-port=9222 first.",
}, s.webviewConnect)
addToolRecorded(s, server, "webview", &mcp.Tool{
Name: "webview_disconnect",
Description: "Disconnect from Chrome DevTools.",
}, s.webviewDisconnect)
addToolRecorded(s, server, "webview", &mcp.Tool{
Name: "webview_navigate",
Description: "Navigate the browser to a URL.",
}, s.webviewNavigate)
addToolRecorded(s, server, "webview", &mcp.Tool{
Name: "webview_click",
Description: "Click on an element by CSS selector.",
}, s.webviewClick)
addToolRecorded(s, server, "webview", &mcp.Tool{
Name: "webview_type",
Description: "Type text into an element by CSS selector.",
}, s.webviewType)
addToolRecorded(s, server, "webview", &mcp.Tool{
Name: "webview_query",
Description: "Query DOM elements by CSS selector.",
}, s.webviewQuery)
addToolRecorded(s, server, "webview", &mcp.Tool{
Name: "webview_console",
Description: "Get browser console output.",
}, s.webviewConsole)
addToolRecorded(s, server, "webview", &mcp.Tool{
Name: "webview_eval",
Description: "Evaluate JavaScript in the browser context.",
}, s.webviewEval)
addToolRecorded(s, server, "webview", &mcp.Tool{
Name: "webview_screenshot",
Description: "Capture a screenshot of the browser window.",
}, s.webviewScreenshot)
addToolRecorded(s, server, "webview", &mcp.Tool{
Name: "webview_wait",
Description: "Wait for an element to appear by CSS selector.",
}, s.webviewWait)
// Embedded UI rendering — for pushing HTML/state to connected clients
// without requiring a Chrome DevTools connection.
addToolRecorded(s, server, "webview", &mcp.Tool{
Name: "webview_render",
Description: "Render HTML in an embedded webview by ID. Broadcasts to connected clients via the webview.render channel.",
}, s.webviewRender)
addToolRecorded(s, server, "webview", &mcp.Tool{
Name: "webview_update",
Description: "Update the HTML, title, or state of an embedded webview by ID. Broadcasts to connected clients via the webview.update channel.",
}, s.webviewUpdate)
} }
// webviewConnect handles the webview_connect tool call.
@ -289,7 +321,7 @@ func (s *Service) webviewConnect(ctx context.Context, req *mcp.CallToolRequest,
return nil, WebviewConnectOutput{
Success: true,
Message: core.Sprintf("Connected to Chrome DevTools at %s", input.DebugURL),
}, nil
}
@ -533,6 +565,7 @@ func (s *Service) webviewScreenshot(ctx context.Context, req *mcp.CallToolReques
if format == "" {
format = "png"
}
format = core.Lower(format)
data, err := webviewInstance.Screenshot()
if err != nil {
@ -540,13 +573,40 @@ func (s *Service) webviewScreenshot(ctx context.Context, req *mcp.CallToolReques
return nil, WebviewScreenshotOutput{}, log.E("webviewScreenshot", "failed to capture screenshot", err)
}
encoded, outputFormat, err := normalizeScreenshotData(data, format)
if err != nil {
return nil, WebviewScreenshotOutput{}, log.E("webviewScreenshot", "failed to encode screenshot", err)
}
return nil, WebviewScreenshotOutput{
Success: true,
Data: base64.StdEncoding.EncodeToString(encoded),
Format: outputFormat,
}, nil
}
// normalizeScreenshotData converts screenshot bytes into the requested format.
// PNG is preserved as-is. JPEG requests are re-encoded so the output matches
// the declared format in WebviewScreenshotOutput.
func normalizeScreenshotData(data []byte, format string) ([]byte, string, error) {
switch format {
case "", "png":
return data, "png", nil
case "jpeg", "jpg":
img, _, err := image.Decode(bytes.NewReader(data))
if err != nil {
return nil, "", err
}
buf := core.NewBuffer()
if err := jpeg.Encode(buf, img, &jpeg.Options{Quality: 90}); err != nil {
return nil, "", err
}
return buf.Bytes(), "jpeg", nil
default:
return nil, "", log.E("webviewScreenshot", "unsupported screenshot format: "+format, nil)
}
}
// webviewWait handles the webview_wait tool call.
func (s *Service) webviewWait(ctx context.Context, req *mcp.CallToolRequest, input WebviewWaitInput) (*mcp.CallToolResult, WebviewWaitOutput, error) {
webviewMu.Lock()
@ -562,13 +622,52 @@ func (s *Service) webviewWait(ctx context.Context, req *mcp.CallToolRequest, inp
return nil, WebviewWaitOutput{}, errSelectorRequired
}
timeout := time.Duration(input.Timeout) * time.Second
if timeout <= 0 {
timeout = 30 * time.Second
}
if err := waitForSelector(ctx, timeout, input.Selector, func(selector string) error {
_, err := webviewInstance.QuerySelector(selector)
return err
}); err != nil {
log.Error("mcp: webview wait failed", "selector", input.Selector, "err", err)
return nil, WebviewWaitOutput{}, log.E("webviewWait", "failed to wait for selector", err)
}
return nil, WebviewWaitOutput{
Success: true,
Message: core.Sprintf("Element found: %s", input.Selector),
}, nil
}
// waitForSelector polls until the selector exists or the timeout elapses.
// Query helpers in go-webview report "element not found" as an error, so we
// keep retrying until we see the element or hit the deadline.
func waitForSelector(ctx context.Context, timeout time.Duration, selector string, query func(string) error) error {
if timeout <= 0 {
timeout = 30 * time.Second
}
waitCtx, cancel := context.WithTimeout(ctx, timeout)
defer cancel()
ticker := time.NewTicker(10 * time.Millisecond)
defer ticker.Stop()
for {
err := query(selector)
if err == nil {
return nil
}
if !core.Contains(err.Error(), "element not found") {
return err
}
select {
case <-waitCtx.Done():
return log.E("webviewWait", "timed out waiting for selector", waitCtx.Err())
case <-ticker.C:
}
}
}


@ -0,0 +1,233 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"context"
"sync"
"time"
core "dappco.re/go/core"
"dappco.re/go/log"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
// WebviewRenderInput contains parameters for rendering an embedded
// HTML view. The named view is stored and broadcast so connected clients
// (Claude Code sessions, CoreGUI windows, HTTP/SSE subscribers) can
// display the content.
//
// input := WebviewRenderInput{
// ViewID: "dashboard",
// HTML: "<div id='app'>Loading...</div>",
// Title: "Agent Dashboard",
// Width: 1024,
// Height: 768,
// State: map[string]any{"theme": "dark"},
// }
type WebviewRenderInput struct {
ViewID string `json:"view_id"` // e.g. "dashboard"
HTML string `json:"html"` // rendered markup
Title string `json:"title,omitempty"` // e.g. "Agent Dashboard"
Width int `json:"width,omitempty"` // preferred width in pixels
Height int `json:"height,omitempty"` // preferred height in pixels
State map[string]any `json:"state,omitempty"` // initial view state
}
// WebviewRenderOutput reports the result of rendering an embedded view.
//
// // out.Success == true, out.ViewID == "dashboard"
type WebviewRenderOutput struct {
Success bool `json:"success"` // true when the view was stored and broadcast
ViewID string `json:"view_id"` // echoed view identifier
UpdatedAt time.Time `json:"updatedAt"` // when the view was rendered
}
// WebviewUpdateInput contains parameters for updating the state of an
// existing embedded view. Callers may provide HTML to replace the markup,
// patch fields in the view state, or do both.
//
// input := WebviewUpdateInput{
// ViewID: "dashboard",
// HTML: "<div id='app'>Ready</div>",
// State: map[string]any{"count": 42},
// Merge: true,
// }
type WebviewUpdateInput struct {
ViewID string `json:"view_id"` // e.g. "dashboard"
HTML string `json:"html,omitempty"` // replacement markup (optional)
Title string `json:"title,omitempty"` // e.g. "Agent Dashboard"
State map[string]any `json:"state,omitempty"` // partial state update
Merge bool `json:"merge,omitempty"` // merge state (default) or replace when false
}
// WebviewUpdateOutput reports the result of updating an embedded view.
//
// // out.Success == true, out.ViewID == "dashboard"
type WebviewUpdateOutput struct {
Success bool `json:"success"` // true when the view was updated and broadcast
ViewID string `json:"view_id"` // echoed view identifier
UpdatedAt time.Time `json:"updatedAt"` // when the view was last updated
}
// embeddedView captures the live state of a rendered UI view. Instances
// are kept per ViewID inside embeddedViewRegistry.
type embeddedView struct {
ViewID string
Title string
HTML string
Width int
Height int
State map[string]any
UpdatedAt time.Time
}
// embeddedViewRegistry stores the most recent render/update state for each
// view so new subscribers can pick up the current UI on connection.
// Operations are guarded by embeddedViewMu.
var (
embeddedViewMu sync.RWMutex
embeddedViewRegistry = map[string]*embeddedView{}
)
// ChannelWebviewRender is the channel used to broadcast webview_render events.
const ChannelWebviewRender = "webview.render"
// ChannelWebviewUpdate is the channel used to broadcast webview_update events.
const ChannelWebviewUpdate = "webview.update"
// webviewRender handles the webview_render tool call.
func (s *Service) webviewRender(ctx context.Context, req *mcp.CallToolRequest, input WebviewRenderInput) (*mcp.CallToolResult, WebviewRenderOutput, error) {
s.logger.Info("MCP tool execution", "tool", "webview_render", "view", input.ViewID, "user", log.Username())
if core.Trim(input.ViewID) == "" {
return nil, WebviewRenderOutput{}, log.E("webviewRender", "view_id is required", nil)
}
now := time.Now()
view := &embeddedView{
ViewID: input.ViewID,
Title: input.Title,
HTML: input.HTML,
Width: input.Width,
Height: input.Height,
State: cloneStateMap(input.State),
UpdatedAt: now,
}
embeddedViewMu.Lock()
embeddedViewRegistry[input.ViewID] = view
embeddedViewMu.Unlock()
s.ChannelSend(ctx, ChannelWebviewRender, map[string]any{
"view_id": view.ViewID,
"title": view.Title,
"html": view.HTML,
"width": view.Width,
"height": view.Height,
"state": cloneStateMap(view.State),
"updatedAt": view.UpdatedAt,
})
return nil, WebviewRenderOutput{
Success: true,
ViewID: view.ViewID,
UpdatedAt: view.UpdatedAt,
}, nil
}
// webviewUpdate handles the webview_update tool call.
func (s *Service) webviewUpdate(ctx context.Context, req *mcp.CallToolRequest, input WebviewUpdateInput) (*mcp.CallToolResult, WebviewUpdateOutput, error) {
s.logger.Info("MCP tool execution", "tool", "webview_update", "view", input.ViewID, "user", log.Username())
if core.Trim(input.ViewID) == "" {
return nil, WebviewUpdateOutput{}, log.E("webviewUpdate", "view_id is required", nil)
}
now := time.Now()
embeddedViewMu.Lock()
view, ok := embeddedViewRegistry[input.ViewID]
if !ok {
// Updating a view that was never rendered creates one lazily so
// clients that reconnect mid-session get a consistent snapshot.
view = &embeddedView{ViewID: input.ViewID, State: map[string]any{}}
embeddedViewRegistry[input.ViewID] = view
}
if input.HTML != "" {
view.HTML = input.HTML
}
if input.Title != "" {
view.Title = input.Title
}
if input.State != nil {
merge := input.Merge || len(view.State) == 0
if merge {
if view.State == nil {
view.State = map[string]any{}
}
for k, v := range input.State {
view.State[k] = v
}
} else {
view.State = cloneStateMap(input.State)
}
}
view.UpdatedAt = now
snapshot := *view
snapshot.State = cloneStateMap(view.State)
embeddedViewMu.Unlock()
s.ChannelSend(ctx, ChannelWebviewUpdate, map[string]any{
"view_id": snapshot.ViewID,
"title": snapshot.Title,
"html": snapshot.HTML,
"width": snapshot.Width,
"height": snapshot.Height,
"state": snapshot.State,
"updatedAt": snapshot.UpdatedAt,
})
return nil, WebviewUpdateOutput{
Success: true,
ViewID: snapshot.ViewID,
UpdatedAt: snapshot.UpdatedAt,
}, nil
}
// cloneStateMap returns a shallow copy of a state map.
//
// cloned := cloneStateMap(map[string]any{"a": 1}) // cloned["a"] == 1
func cloneStateMap(in map[string]any) map[string]any {
if in == nil {
return nil
}
out := make(map[string]any, len(in))
for k, v := range in {
out[k] = v
}
return out
}
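Worth noting: cloneStateMap copies only the top level, so nested maps and slices stay shared between the original and the clone. A small self-contained sketch of that consequence (the helper body is reproduced here so the example runs standalone):

```go
package main

import "fmt"

// cloneStateMap mirrors the helper above: a shallow copy of the top level.
func cloneStateMap(in map[string]any) map[string]any {
	if in == nil {
		return nil
	}
	out := make(map[string]any, len(in))
	for k, v := range in {
		out[k] = v
	}
	return out
}

func main() {
	orig := map[string]any{"theme": "dark", "nested": map[string]any{"n": 1}}
	c := cloneStateMap(orig)

	c["theme"] = "light" // top-level keys are independent
	fmt.Println(orig["theme"]) // dark

	c["nested"].(map[string]any)["n"] = 2 // nested values are shared
	fmt.Println(orig["nested"].(map[string]any)["n"]) // 2
}
```

This is fine for the flat state payloads the webview tools pass around; deeply nested state would need a deep copy to be fully isolated.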
// lookupEmbeddedView returns the current snapshot of an embedded view, if any.
//
// view, ok := lookupEmbeddedView("dashboard")
func lookupEmbeddedView(id string) (*embeddedView, bool) {
embeddedViewMu.RLock()
defer embeddedViewMu.RUnlock()
view, ok := embeddedViewRegistry[id]
if !ok {
return nil, false
}
snapshot := *view
snapshot.State = cloneStateMap(view.State)
return &snapshot, true
}
// resetEmbeddedViews clears the registry. Intended for tests.
func resetEmbeddedViews() {
embeddedViewMu.Lock()
defer embeddedViewMu.Unlock()
embeddedViewRegistry = map[string]*embeddedView{}
}


@ -0,0 +1,137 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"context"
"testing"
)
// TestToolsWebviewEmbed_WebviewRender_Good registers a view and verifies the
// registry keeps the rendered HTML and state.
func TestToolsWebviewEmbed_WebviewRender_Good(t *testing.T) {
t.Cleanup(resetEmbeddedViews)
svc, err := New(Options{WorkspaceRoot: t.TempDir()})
if err != nil {
t.Fatal(err)
}
_, out, err := svc.webviewRender(context.Background(), nil, WebviewRenderInput{
ViewID: "dashboard",
HTML: "<p>hello</p>",
Title: "Demo",
State: map[string]any{"count": 1},
})
if err != nil {
t.Fatalf("webviewRender returned error: %v", err)
}
if !out.Success {
t.Fatal("expected Success=true")
}
if out.ViewID != "dashboard" {
t.Fatalf("expected view id 'dashboard', got %q", out.ViewID)
}
if out.UpdatedAt.IsZero() {
t.Fatal("expected non-zero UpdatedAt")
}
view, ok := lookupEmbeddedView("dashboard")
if !ok {
t.Fatal("expected view to be stored in registry")
}
if view.HTML != "<p>hello</p>" {
t.Fatalf("expected HTML '<p>hello</p>', got %q", view.HTML)
}
if view.State["count"] != 1 {
t.Fatalf("expected state.count=1, got %v", view.State["count"])
}
}
// TestToolsWebviewEmbed_WebviewRender_Bad ensures empty view IDs are rejected.
func TestToolsWebviewEmbed_WebviewRender_Bad(t *testing.T) {
t.Cleanup(resetEmbeddedViews)
svc, err := New(Options{WorkspaceRoot: t.TempDir()})
if err != nil {
t.Fatal(err)
}
_, _, err = svc.webviewRender(context.Background(), nil, WebviewRenderInput{})
if err == nil {
t.Fatal("expected error for empty view_id")
}
}
// TestToolsWebviewEmbed_WebviewUpdate_Good merges a state patch into the
// previously rendered view.
func TestToolsWebviewEmbed_WebviewUpdate_Good(t *testing.T) {
t.Cleanup(resetEmbeddedViews)
svc, err := New(Options{WorkspaceRoot: t.TempDir()})
if err != nil {
t.Fatal(err)
}
_, _, err = svc.webviewRender(context.Background(), nil, WebviewRenderInput{
ViewID: "dashboard",
HTML: "<p>hello</p>",
State: map[string]any{"count": 1},
})
if err != nil {
t.Fatalf("seed render failed: %v", err)
}
_, out, err := svc.webviewUpdate(context.Background(), nil, WebviewUpdateInput{
ViewID: "dashboard",
State: map[string]any{"theme": "dark"},
Merge: true,
})
if err != nil {
t.Fatalf("webviewUpdate returned error: %v", err)
}
if !out.Success {
t.Fatal("expected Success=true")
}
view, ok := lookupEmbeddedView("dashboard")
if !ok {
t.Fatal("expected view to exist after update")
}
if view.State["count"] != 1 {
t.Fatalf("expected count to persist after merge, got %v", view.State["count"])
}
if view.State["theme"] != "dark" {
t.Fatalf("expected theme 'dark' after merge, got %v", view.State["theme"])
}
}
// TestToolsWebviewEmbed_WebviewUpdate_Ugly updates a view that was never
// rendered and verifies a fresh registry entry is created.
func TestToolsWebviewEmbed_WebviewUpdate_Ugly(t *testing.T) {
t.Cleanup(resetEmbeddedViews)
svc, err := New(Options{WorkspaceRoot: t.TempDir()})
if err != nil {
t.Fatal(err)
}
_, out, err := svc.webviewUpdate(context.Background(), nil, WebviewUpdateInput{
ViewID: "ghost",
HTML: "<p>new</p>",
})
if err != nil {
t.Fatalf("webviewUpdate returned error: %v", err)
}
if !out.Success {
t.Fatal("expected Success=true for lazy-create update")
}
view, ok := lookupEmbeddedView("ghost")
if !ok {
t.Fatal("expected ghost view to be created lazily")
}
if view.HTML != "<p>new</p>" {
t.Fatalf("expected HTML '<p>new</p>', got %q", view.HTML)
}
}

View file

@ -1,10 +1,17 @@
package mcp
import (
"bytes"
"context"
"errors"
"image"
"image/color"
"image/jpeg"
"image/png"
"testing"
"time"
"dappco.re/go/webview"
)
// skipIfShort skips webview tests in short mode (go test -short).
@ -215,6 +222,41 @@ func TestWebviewWaitInput_Good(t *testing.T) {
}
}
func TestWaitForSelector_Good(t *testing.T) {
ctx := context.Background()
attempts := 0
err := waitForSelector(ctx, 200*time.Millisecond, "#ready", func(selector string) error {
attempts++
if attempts < 3 {
return errors.New("element not found: " + selector)
}
return nil
})
if err != nil {
t.Fatalf("waitForSelector failed: %v", err)
}
if attempts != 3 {
t.Fatalf("expected 3 attempts, got %d", attempts)
}
}
func TestWaitForSelector_Bad_Timeout(t *testing.T) {
ctx := context.Background()
start := time.Now()
err := waitForSelector(ctx, 50*time.Millisecond, "#missing", func(selector string) error {
return errors.New("element not found: " + selector)
})
if err == nil {
t.Fatal("expected waitForSelector to time out")
}
if time.Since(start) < 50*time.Millisecond {
t.Fatal("expected waitForSelector to honor timeout")
}
}
// TestWebviewConnectOutput_Good verifies the WebviewConnectOutput struct has expected fields.
func TestWebviewConnectOutput_Good(t *testing.T) {
output := WebviewConnectOutput{
@ -358,6 +400,61 @@ func TestWebviewScreenshotOutput_Good(t *testing.T) {
}
}
func TestNormalizeScreenshotData_Good_Png(t *testing.T) {
src := mustEncodeTestPNG(t)
out, format, err := normalizeScreenshotData(src, "png")
if err != nil {
t.Fatalf("normalizeScreenshotData failed: %v", err)
}
if format != "png" {
t.Fatalf("expected png format, got %q", format)
}
if !bytes.Equal(out, src) {
t.Fatal("expected png output to preserve the original bytes")
}
}
func TestNormalizeScreenshotData_Good_Jpeg(t *testing.T) {
src := mustEncodeTestPNG(t)
out, format, err := normalizeScreenshotData(src, "jpeg")
if err != nil {
t.Fatalf("normalizeScreenshotData failed: %v", err)
}
if format != "jpeg" {
t.Fatalf("expected jpeg format, got %q", format)
}
if bytes.Equal(out, src) {
t.Fatal("expected jpeg output to differ from png input")
}
if _, err := jpeg.Decode(bytes.NewReader(out)); err != nil {
t.Fatalf("expected output to decode as an image: %v", err)
}
}
func TestNormalizeScreenshotData_Bad_UnsupportedFormat(t *testing.T) {
src := mustEncodeTestPNG(t)
if _, _, err := normalizeScreenshotData(src, "gif"); err == nil {
t.Fatal("expected unsupported format error")
}
}
func mustEncodeTestPNG(t *testing.T) []byte {
t.Helper()
img := image.NewRGBA(image.Rect(0, 0, 1, 1))
img.Set(0, 0, color.RGBA{R: 200, G: 80, B: 40, A: 255})
var buf bytes.Buffer
if err := png.Encode(&buf, img); err != nil {
t.Fatalf("png encode failed: %v", err)
}
return buf.Bytes()
}
// TestWebviewElementInfo_Good verifies the WebviewElementInfo struct has expected fields.
func TestWebviewElementInfo_Good(t *testing.T) {
elem := WebviewElementInfo{
@ -450,3 +547,151 @@ func TestWebviewWaitOutput_Good(t *testing.T) {
t.Error("Expected message to be set")
}
}
// --- Handler tests beyond nil-guard ---
// setStubWebview injects a zero-value Webview stub so handler validation
// logic beyond the nil-guard can be exercised without a running Chrome.
// The previous value is restored via t.Cleanup.
func setStubWebview(t *testing.T) {
t.Helper()
webviewMu.Lock()
old := webviewInstance
webviewInstance = &webview.Webview{}
webviewMu.Unlock()
t.Cleanup(func() {
webviewMu.Lock()
webviewInstance = old
webviewMu.Unlock()
})
}
// TestWebviewDisconnect_Good_NoConnection verifies disconnect succeeds when not connected.
func TestWebviewDisconnect_Good_NoConnection(t *testing.T) {
s, err := New(Options{})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
ctx := t.Context()
_, out, err := s.webviewDisconnect(ctx, nil, WebviewDisconnectInput{})
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
if !out.Success {
t.Error("Expected success to be true")
}
if out.Message != "No active connection" {
t.Errorf("Expected message 'No active connection', got %q", out.Message)
}
}
// TestWebviewConnect_Bad_EmptyURL verifies connect rejects an empty debug URL.
func TestWebviewConnect_Bad_EmptyURL(t *testing.T) {
s, err := New(Options{})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
ctx := t.Context()
_, _, err = s.webviewConnect(ctx, nil, WebviewConnectInput{DebugURL: ""})
if err == nil {
t.Error("Expected error for empty debug URL, got nil")
}
}
// TestWebviewNavigate_Bad_EmptyURL verifies navigate rejects an empty URL.
func TestWebviewNavigate_Bad_EmptyURL(t *testing.T) {
setStubWebview(t)
s, err := New(Options{})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
ctx := t.Context()
_, _, err = s.webviewNavigate(ctx, nil, WebviewNavigateInput{URL: ""})
if err == nil {
t.Error("Expected error for empty URL, got nil")
}
}
// TestWebviewClick_Bad_EmptySelector verifies click rejects an empty selector.
func TestWebviewClick_Bad_EmptySelector(t *testing.T) {
setStubWebview(t)
s, err := New(Options{})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
ctx := t.Context()
_, _, err = s.webviewClick(ctx, nil, WebviewClickInput{Selector: ""})
if err == nil {
t.Error("Expected error for empty selector, got nil")
}
}
// TestWebviewType_Bad_EmptySelector verifies type rejects an empty selector.
func TestWebviewType_Bad_EmptySelector(t *testing.T) {
setStubWebview(t)
s, err := New(Options{})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
ctx := t.Context()
_, _, err = s.webviewType(ctx, nil, WebviewTypeInput{Selector: "", Text: "test"})
if err == nil {
t.Error("Expected error for empty selector, got nil")
}
}
// TestWebviewQuery_Bad_EmptySelector verifies query rejects an empty selector.
func TestWebviewQuery_Bad_EmptySelector(t *testing.T) {
setStubWebview(t)
s, err := New(Options{})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
ctx := t.Context()
_, _, err = s.webviewQuery(ctx, nil, WebviewQueryInput{Selector: ""})
if err == nil {
t.Error("Expected error for empty selector, got nil")
}
}
// TestWebviewEval_Bad_EmptyScript verifies eval rejects an empty script.
func TestWebviewEval_Bad_EmptyScript(t *testing.T) {
setStubWebview(t)
s, err := New(Options{})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
ctx := t.Context()
_, _, err = s.webviewEval(ctx, nil, WebviewEvalInput{Script: ""})
if err == nil {
t.Error("Expected error for empty script, got nil")
}
}
// TestWebviewWait_Bad_EmptySelector verifies wait rejects an empty selector.
func TestWebviewWait_Bad_EmptySelector(t *testing.T) {
setStubWebview(t)
s, err := New(Options{})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
ctx := t.Context()
_, _, err = s.webviewWait(ctx, nil, WebviewWaitInput{Selector: ""})
if err == nil {
t.Error("Expected error for empty selector, got nil")
}
}


@@ -1,13 +1,15 @@
+// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
	"context"
-	"fmt"
	"net"
	"net/http"
-	"forge.lthn.ai/core/go-log"
-	"forge.lthn.ai/core/go-ws"
+	core "dappco.re/go/core"
+	"dappco.re/go/log"
+	"dappco.re/go/ws"
	"github.com/modelcontextprotocol/go-sdk/mcp"
)
@@ -47,12 +49,12 @@ func (s *Service) registerWSTools(server *mcp.Server) bool {
		return false
	}
-	mcp.AddTool(server, &mcp.Tool{
+	addToolRecorded(s, server, "ws", &mcp.Tool{
		Name:        "ws_start",
		Description: "Start the WebSocket server for real-time process output streaming.",
	}, s.wsStart)
-	mcp.AddTool(server, &mcp.Tool{
+	addToolRecorded(s, server, "ws", &mcp.Tool{
		Name:        "ws_info",
		Description: "Get WebSocket hub statistics (connected clients and active channels).",
	}, s.wsInfo)
@@ -62,6 +64,10 @@ func (s *Service) registerWSTools(server *mcp.Server) bool {
// wsStart handles the ws_start tool call.
func (s *Service) wsStart(ctx context.Context, req *mcp.CallToolRequest, input WSStartInput) (*mcp.CallToolResult, WSStartOutput, error) {
+	if s.wsHub == nil {
+		return nil, WSStartOutput{}, log.E("wsStart", "websocket hub unavailable", nil)
+	}
	addr := input.Addr
	if addr == "" {
		addr = ":8080"
@@ -111,12 +117,16 @@ func (s *Service) wsStart(ctx context.Context, req *mcp.CallToolRequest, input W
	return nil, WSStartOutput{
		Success: true,
		Addr:    actualAddr,
-		Message: fmt.Sprintf("WebSocket server started at ws://%s/ws", actualAddr),
+		Message: core.Sprintf("WebSocket server started at ws://%s/ws", actualAddr),
	}, nil
}
// wsInfo handles the ws_info tool call.
func (s *Service) wsInfo(ctx context.Context, req *mcp.CallToolRequest, input WSInfoInput) (*mcp.CallToolResult, WSInfoOutput, error) {
+	if s.wsHub == nil {
+		return nil, WSInfoOutput{}, log.E("wsInfo", "websocket hub unavailable", nil)
+	}
	s.logger.Info("MCP tool execution", "tool", "ws_info", "user", log.Username())
	stats := s.wsHub.Stats()

pkg/mcp/tools_ws_client.go Normal file

@@ -0,0 +1,264 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"context"
"crypto/rand"
"encoding/hex"
"net/http"
"sync"
"time"
core "dappco.re/go/core"
"dappco.re/go/log"
"github.com/gorilla/websocket"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
// WSConnectInput contains parameters for opening an outbound WebSocket
// connection from the MCP server. Each connection is given a stable ID that
// subsequent ws_send and ws_close calls use to address it.
//
// input := WSConnectInput{URL: "wss://example.com/ws", Timeout: 10}
type WSConnectInput struct {
URL string `json:"url"` // e.g. "wss://example.com/ws"
Headers map[string]string `json:"headers,omitempty"` // custom request headers
Timeout int `json:"timeout,omitempty"` // handshake timeout in seconds (default: 30)
}
// WSConnectOutput contains the result of opening a WebSocket connection.
//
// // out.Success == true, out.ID == "ws-0af3…"
type WSConnectOutput struct {
Success bool `json:"success"` // true when the handshake completed
ID string `json:"id"` // e.g. "ws-0af3…"
URL string `json:"url"` // the URL that was dialled
}
// WSSendInput contains parameters for sending a message on an open
// WebSocket connection.
//
// input := WSSendInput{ID: "ws-0af3…", Message: "ping"}
type WSSendInput struct {
ID string `json:"id"` // e.g. "ws-0af3…"
Message string `json:"message"` // payload to send
Binary bool `json:"binary,omitempty"` // true to send a binary frame (payload is base64 text)
}
// WSSendOutput contains the result of sending a message.
//
// // out.Success == true, out.ID == "ws-0af3…"
type WSSendOutput struct {
Success bool `json:"success"` // true when the message was written
ID string `json:"id"` // e.g. "ws-0af3…"
Bytes int `json:"bytes"` // number of bytes written
}
// WSCloseInput contains parameters for closing a WebSocket connection.
//
// input := WSCloseInput{ID: "ws-0af3…", Reason: "done"}
type WSCloseInput struct {
ID string `json:"id"` // e.g. "ws-0af3…"
Code int `json:"code,omitempty"` // close code (default: 1000 - normal closure)
Reason string `json:"reason,omitempty"` // human-readable reason
}
// WSCloseOutput contains the result of closing a WebSocket connection.
//
// // out.Success == true, out.ID == "ws-0af3…"
type WSCloseOutput struct {
Success bool `json:"success"` // true when the connection was closed
ID string `json:"id"` // e.g. "ws-0af3…"
Message string `json:"message,omitempty"` // e.g. "connection closed"
}
// wsClientConn tracks an outbound WebSocket connection tied to a stable ID.
type wsClientConn struct {
ID string
URL string
conn *websocket.Conn
writeMu sync.Mutex
CreatedAt time.Time
}
// wsClientRegistry holds all live outbound WebSocket connections keyed by ID.
// Access is guarded by wsClientMu.
var (
wsClientMu sync.Mutex
wsClientRegistry = map[string]*wsClientConn{}
)
// registerWSClientTools registers the outbound WebSocket client tools.
func (s *Service) registerWSClientTools(server *mcp.Server) {
addToolRecorded(s, server, "ws", &mcp.Tool{
Name: "ws_connect",
Description: "Open an outbound WebSocket connection. Returns a connection ID for subsequent ws_send and ws_close calls.",
}, s.wsConnect)
addToolRecorded(s, server, "ws", &mcp.Tool{
Name: "ws_send",
Description: "Send a text or binary message on an open WebSocket connection identified by ID.",
}, s.wsSend)
addToolRecorded(s, server, "ws", &mcp.Tool{
Name: "ws_close",
Description: "Close an open WebSocket connection identified by ID.",
}, s.wsClose)
}
// wsConnect handles the ws_connect tool call.
func (s *Service) wsConnect(ctx context.Context, req *mcp.CallToolRequest, input WSConnectInput) (*mcp.CallToolResult, WSConnectOutput, error) {
s.logger.Security("MCP tool execution", "tool", "ws_connect", "url", input.URL, "user", log.Username())
if core.Trim(input.URL) == "" {
return nil, WSConnectOutput{}, log.E("wsConnect", "url is required", nil)
}
timeout := time.Duration(input.Timeout) * time.Second
if timeout <= 0 {
timeout = 30 * time.Second
}
dialer := websocket.Dialer{
HandshakeTimeout: timeout,
}
headers := http.Header{}
for k, v := range input.Headers {
headers.Set(k, v)
}
dialCtx, cancel := context.WithTimeout(ctx, timeout)
defer cancel()
conn, _, err := dialer.DialContext(dialCtx, input.URL, headers)
if err != nil {
log.Error("mcp: ws connect failed", "url", input.URL, "err", err)
return nil, WSConnectOutput{}, log.E("wsConnect", "failed to connect", err)
}
id := newWSClientID()
client := &wsClientConn{
ID: id,
URL: input.URL,
conn: conn,
CreatedAt: time.Now(),
}
wsClientMu.Lock()
wsClientRegistry[id] = client
wsClientMu.Unlock()
return nil, WSConnectOutput{
Success: true,
ID: id,
URL: input.URL,
}, nil
}
// wsSend handles the ws_send tool call.
func (s *Service) wsSend(ctx context.Context, req *mcp.CallToolRequest, input WSSendInput) (*mcp.CallToolResult, WSSendOutput, error) {
s.logger.Info("MCP tool execution", "tool", "ws_send", "id", input.ID, "binary", input.Binary, "user", log.Username())
if core.Trim(input.ID) == "" {
return nil, WSSendOutput{}, log.E("wsSend", "id is required", nil)
}
client, ok := getWSClient(input.ID)
if !ok {
return nil, WSSendOutput{}, log.E("wsSend", "connection not found", nil)
}
messageType := websocket.TextMessage
if input.Binary {
messageType = websocket.BinaryMessage
}
client.writeMu.Lock()
err := client.conn.WriteMessage(messageType, []byte(input.Message))
client.writeMu.Unlock()
if err != nil {
log.Error("mcp: ws send failed", "id", input.ID, "err", err)
return nil, WSSendOutput{}, log.E("wsSend", "failed to send message", err)
}
return nil, WSSendOutput{
Success: true,
ID: input.ID,
Bytes: len(input.Message),
}, nil
}
// wsClose handles the ws_close tool call.
func (s *Service) wsClose(ctx context.Context, req *mcp.CallToolRequest, input WSCloseInput) (*mcp.CallToolResult, WSCloseOutput, error) {
s.logger.Info("MCP tool execution", "tool", "ws_close", "id", input.ID, "user", log.Username())
if core.Trim(input.ID) == "" {
return nil, WSCloseOutput{}, log.E("wsClose", "id is required", nil)
}
wsClientMu.Lock()
client, ok := wsClientRegistry[input.ID]
if ok {
delete(wsClientRegistry, input.ID)
}
wsClientMu.Unlock()
if !ok {
return nil, WSCloseOutput{}, log.E("wsClose", "connection not found", nil)
}
code := input.Code
if code == 0 {
code = websocket.CloseNormalClosure
}
reason := input.Reason
if reason == "" {
reason = "closed"
}
client.writeMu.Lock()
_ = client.conn.WriteControl(
websocket.CloseMessage,
websocket.FormatCloseMessage(code, reason),
time.Now().Add(5*time.Second),
)
client.writeMu.Unlock()
_ = client.conn.Close()
return nil, WSCloseOutput{
Success: true,
ID: input.ID,
Message: "connection closed",
}, nil
}
// newWSClientID returns a fresh identifier for an outbound WebSocket client.
//
// id := newWSClientID() // "ws-0af3…"
func newWSClientID() string {
var buf [8]byte
_, _ = rand.Read(buf[:])
return "ws-" + hex.EncodeToString(buf[:])
}
// getWSClient returns a tracked outbound WebSocket client by ID, if any.
//
// client, ok := getWSClient("ws-0af3…")
func getWSClient(id string) (*wsClientConn, bool) {
wsClientMu.Lock()
defer wsClientMu.Unlock()
client, ok := wsClientRegistry[id]
return client, ok
}
// resetWSClients drops all tracked outbound WebSocket clients. Intended for tests.
func resetWSClients() {
wsClientMu.Lock()
defer wsClientMu.Unlock()
for id, client := range wsClientRegistry {
_ = client.conn.Close()
delete(wsClientRegistry, id)
}
}
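The file above pairs crypto/rand IDs with a mutex-guarded map so concurrent tool calls can address live connections safely. A minimal standalone sketch of that registry pattern (the `registry` type and its methods are illustrative stand-ins, not the package's API):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"sync"
)

// registry mirrors the wsClientRegistry pattern: a mutex-guarded
// map of live connections keyed by a random, stable ID.
type registry struct {
	mu    sync.Mutex
	conns map[string]string // ID -> URL (stand-in for *wsClientConn)
}

// newID mirrors newWSClientID: 8 random bytes, hex-encoded, "ws-" prefix.
func newID() string {
	var buf [8]byte
	_, _ = rand.Read(buf[:]) // crypto/rand read; failure is not practically reachable
	return "ws-" + hex.EncodeToString(buf[:])
}

func (r *registry) add(url string) string {
	id := newID()
	r.mu.Lock()
	r.conns[id] = url
	r.mu.Unlock()
	return id
}

// remove deletes under one lock hold, so a concurrent double-close
// sees exactly one successful removal.
func (r *registry) remove(id string) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	_, ok := r.conns[id]
	delete(r.conns, id)
	return ok
}

func main() {
	r := &registry{conns: map[string]string{}}
	id := r.add("wss://example.com/ws")
	fmt.Println(len(id))    // 19: "ws-" plus 16 hex characters
	fmt.Println(r.remove(id)) // true
	fmt.Println(r.remove(id)) // false: already gone
}
```

Checking membership and deleting under the same lock hold is what makes the ws_close "not found" path race-free.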


@@ -0,0 +1,169 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"context"
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
"github.com/gorilla/websocket"
)
// TestToolsWSClient_WSConnect_Good dials a test WebSocket server and verifies
// the handshake completes and a client ID is assigned.
func TestToolsWSClient_WSConnect_Good(t *testing.T) {
t.Cleanup(resetWSClients)
server := startTestWSServer(t)
defer server.Close()
svc, err := New(Options{WorkspaceRoot: t.TempDir()})
if err != nil {
t.Fatal(err)
}
_, out, err := svc.wsConnect(context.Background(), nil, WSConnectInput{
URL: "ws" + strings.TrimPrefix(server.URL, "http") + "/ws",
Timeout: 5,
})
if err != nil {
t.Fatalf("wsConnect failed: %v", err)
}
if !out.Success {
t.Fatal("expected Success=true")
}
if !strings.HasPrefix(out.ID, "ws-") {
t.Fatalf("expected ID prefix 'ws-', got %q", out.ID)
}
_, _, err = svc.wsClose(context.Background(), nil, WSCloseInput{ID: out.ID})
if err != nil {
t.Fatalf("wsClose failed: %v", err)
}
}
// TestToolsWSClient_WSConnect_Bad rejects empty URLs.
func TestToolsWSClient_WSConnect_Bad(t *testing.T) {
t.Cleanup(resetWSClients)
svc, err := New(Options{WorkspaceRoot: t.TempDir()})
if err != nil {
t.Fatal(err)
}
_, _, err = svc.wsConnect(context.Background(), nil, WSConnectInput{})
if err == nil {
t.Fatal("expected error for empty URL")
}
}
// TestToolsWSClient_WSSendClose_Good sends a message on an open connection
// and then closes it.
func TestToolsWSClient_WSSendClose_Good(t *testing.T) {
t.Cleanup(resetWSClients)
server := startTestWSServer(t)
defer server.Close()
svc, err := New(Options{WorkspaceRoot: t.TempDir()})
if err != nil {
t.Fatal(err)
}
_, conn, err := svc.wsConnect(context.Background(), nil, WSConnectInput{
URL: "ws" + strings.TrimPrefix(server.URL, "http") + "/ws",
Timeout: 5,
})
if err != nil {
t.Fatalf("wsConnect failed: %v", err)
}
_, sendOut, err := svc.wsSend(context.Background(), nil, WSSendInput{
ID: conn.ID,
Message: "ping",
})
if err != nil {
t.Fatalf("wsSend failed: %v", err)
}
if !sendOut.Success {
t.Fatal("expected Success=true for wsSend")
}
if sendOut.Bytes != 4 {
t.Fatalf("expected 4 bytes written, got %d", sendOut.Bytes)
}
_, closeOut, err := svc.wsClose(context.Background(), nil, WSCloseInput{ID: conn.ID})
if err != nil {
t.Fatalf("wsClose failed: %v", err)
}
if !closeOut.Success {
t.Fatal("expected Success=true for wsClose")
}
if _, ok := getWSClient(conn.ID); ok {
t.Fatal("expected connection to be removed after close")
}
}
// TestToolsWSClient_WSSend_Bad rejects unknown connection IDs.
func TestToolsWSClient_WSSend_Bad(t *testing.T) {
t.Cleanup(resetWSClients)
svc, err := New(Options{WorkspaceRoot: t.TempDir()})
if err != nil {
t.Fatal(err)
}
_, _, err = svc.wsSend(context.Background(), nil, WSSendInput{ID: "ws-missing", Message: "x"})
if err == nil {
t.Fatal("expected error for unknown connection ID")
}
}
// TestToolsWSClient_WSClose_Bad rejects closes for unknown connection IDs.
func TestToolsWSClient_WSClose_Bad(t *testing.T) {
t.Cleanup(resetWSClients)
svc, err := New(Options{WorkspaceRoot: t.TempDir()})
if err != nil {
t.Fatal(err)
}
_, _, err = svc.wsClose(context.Background(), nil, WSCloseInput{ID: "ws-missing"})
if err == nil {
t.Fatal("expected error for unknown connection ID")
}
}
// startTestWSServer returns an httptest.Server running a minimal echo WebSocket
// handler used by the ws_connect/ws_send tests.
func startTestWSServer(t *testing.T) *httptest.Server {
t.Helper()
upgrader := websocket.Upgrader{
CheckOrigin: func(*http.Request) bool { return true },
}
mux := http.NewServeMux()
mux.HandleFunc("/ws", func(w http.ResponseWriter, r *http.Request) {
conn, err := upgrader.Upgrade(w, r, nil)
if err != nil {
return
}
defer conn.Close()
conn.SetReadDeadline(time.Now().Add(5 * time.Second))
for {
_, msg, err := conn.ReadMessage()
if err != nil {
return
}
if err := conn.WriteMessage(websocket.TextMessage, msg); err != nil {
return
}
}
})
return httptest.NewServer(mux)
}


@@ -3,7 +3,7 @@ package mcp
import (
	"testing"
-	"forge.lthn.ai/core/go-ws"
+	"dappco.re/go/ws"
)
// TestWSToolsRegistered_Good verifies that WebSocket tools are registered when hub is available.
@@ -83,7 +83,7 @@ func TestWSInfoOutput_Good(t *testing.T) {
	}
}
-// TestWithWSHub_Good verifies the WithWSHub option.
+// TestWithWSHub_Good verifies Options{WSHub: ...}.
func TestWithWSHub_Good(t *testing.T) {
	hub := ws.NewHub()
@@ -97,7 +97,7 @@ func TestWithWSHub_Good(t *testing.T) {
	}
}
-// TestWithWSHub_Nil verifies the WithWSHub option with nil.
+// TestWithWSHub_Nil verifies Options{WSHub: nil}.
func TestWithWSHub_Nil(t *testing.T) {
	s, err := New(Options{WSHub: nil})
	if err != nil {

pkg/mcp/transformer.go Normal file

@@ -0,0 +1,476 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"bytes"
"encoding/json"
"mime"
"strings"
)
// TransformerIn normalises an AI wire protocol request into a unified MCP
// request envelope.
type TransformerIn interface {
Detect(body []byte, contentType, path string) bool
Normalise(body []byte) (MCPRequest, error)
}
// TransformerOut converts an MCP result back into an AI wire protocol response.
type TransformerOut interface {
Transform(result MCPResult) ([]byte, error)
}
// MCPRequest is the gateway's protocol-neutral JSON-RPC request shape.
type MCPRequest struct {
JSONRPC string `json:"jsonrpc,omitempty"`
ID any `json:"id,omitempty"`
Method string `json:"method,omitempty"`
Params map[string]any `json:"params,omitempty"`
}
// MCPResult is the gateway's protocol-neutral JSON-RPC result shape.
type MCPResult struct {
JSONRPC string `json:"jsonrpc,omitempty"`
ID any `json:"id,omitempty"`
Result any `json:"result,omitempty"`
Error any `json:"error,omitempty"`
Content []MCPContent `json:"content,omitempty"`
ToolCalls []MCPToolCall `json:"tool_calls,omitempty"`
StopReason string `json:"stop_reason,omitempty"`
}
// MCPContent represents text and tool-use content blocks in the neutral result.
type MCPContent struct {
Type string `json:"type,omitempty"`
Text string `json:"text,omitempty"`
ID string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
Input map[string]any `json:"input,omitempty"`
Arguments map[string]any `json:"arguments,omitempty"`
}
// MCPToolCall captures a model-requested tool invocation.
type MCPToolCall struct {
ID string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
Arguments map[string]any `json:"arguments,omitempty"`
}
// TODO(#197 follow-up): add Ollama and LiteLLM concrete transformers once the
// OpenAI/Anthropic/MCP-native gateway surface has settled.
// NegotiateTransformer selects the inbound transformer using RFC §9.4 priority:
// explicit media type, path, body inspection, then MCP-native fallback. The
// honeypot is only selected for malformed or probe-like bodies that no concrete
// protocol claims.
func NegotiateTransformer(body []byte, contentType, path string) TransformerIn {
if headerHasMedia(contentType, "application/openai+json") {
return OpenAITransformer{}
}
if headerHasMedia(contentType, "application/anthropic+json") {
return AnthropicTransformer{}
}
if headerHasMedia(contentType, "application/mcp+json", "application/json-rpc", "application/jsonrpc+json") {
return MCPNativeTransformer{}
}
switch normaliseGatewayPath(path) {
case "/v1/chat/completions":
return OpenAITransformer{}
case "/v1/messages":
return AnthropicTransformer{}
case "/mcp":
if (HoneypotTransformer{}).Detect(body, contentType, path) {
return HoneypotTransformer{}
}
return MCPNativeTransformer{}
}
if (MCPNativeTransformer{}).Detect(body, "", "") {
return MCPNativeTransformer{}
}
if (OpenAITransformer{}).Detect(body, "", "") {
if looksAnthropicBody(body) {
return AnthropicTransformer{}
}
return OpenAITransformer{}
}
if (AnthropicTransformer{}).Detect(body, "", "") {
return AnthropicTransformer{}
}
if (HoneypotTransformer{}).Detect(body, contentType, path) {
return HoneypotTransformer{}
}
return MCPNativeTransformer{}
}
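The §9.4 priority walk above can be reduced to a self-contained sketch. The detector logic here is deliberately simplified to string checks; only the priority order (explicit media type, then path, then body inspection, then MCP-native fallback) reflects NegotiateTransformer, everything else is illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// negotiate mirrors the priority order used by NegotiateTransformer:
// an explicit media type wins over the path, the path wins over body
// inspection, and MCP-native is the final fallback.
func negotiate(body, contentType, path string) string {
	switch {
	case strings.Contains(contentType, "application/openai+json"):
		return "openai"
	case strings.Contains(contentType, "application/anthropic+json"):
		return "anthropic"
	}
	switch path {
	case "/v1/chat/completions":
		return "openai"
	case "/v1/messages":
		return "anthropic"
	}
	if strings.Contains(body, `"jsonrpc"`) {
		return "mcp-native"
	}
	if strings.Contains(body, `"messages"`) {
		return "openai" // body inspection; the real code disambiguates Anthropic markers
	}
	return "mcp-native" // fallback (the real code may route probe-like bodies to the honeypot first)
}

func main() {
	// An explicit media type wins over a conflicting path.
	fmt.Println(negotiate("", "application/openai+json", "/v1/messages")) // openai
	// The path decides when no explicit media type is given.
	fmt.Println(negotiate("", "application/json", "/v1/messages")) // anthropic
	// Body inspection is the last resort before the fallback.
	fmt.Println(negotiate(`{"jsonrpc":"2.0","method":"tools/list"}`, "", "")) // mcp-native
}
```

Keeping the walk strictly ordered means a client that sets the vendor media type is never misrouted by a proxy-rewritten path.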
// MCPNativeTransformer is the identity transformer for native MCP JSON-RPC.
type MCPNativeTransformer struct{}
func (MCPNativeTransformer) Detect(body []byte, contentType, path string) bool {
if headerHasMedia(contentType, "application/mcp+json", "application/json-rpc", "application/jsonrpc+json") {
return true
}
if normaliseGatewayPath(path) == "/mcp" {
return true
}
obj, ok := decodeJSONObject(body)
if !ok {
return false
}
_, hasMethod := obj["method"].(string)
_, hasResult := obj["result"]
_, hasError := obj["error"]
return obj["jsonrpc"] == "2.0" && (hasMethod || hasResult || hasError)
}
func (MCPNativeTransformer) Normalise(body []byte) (MCPRequest, error) {
var req MCPRequest
if err := json.Unmarshal(body, &req); err != nil {
return MCPRequest{}, err
}
if req.JSONRPC == "" {
req.JSONRPC = "2.0"
}
return req, nil
}
func (MCPNativeTransformer) Transform(result MCPResult) ([]byte, error) {
if result.JSONRPC == "" {
result.JSONRPC = "2.0"
}
return json.Marshal(result)
}
func headerHasMedia(header string, wants ...string) bool {
header = strings.TrimSpace(header)
if header == "" {
return false
}
wantSet := make(map[string]struct{}, len(wants))
for _, want := range wants {
wantSet[strings.ToLower(strings.TrimSpace(want))] = struct{}{}
}
for _, part := range strings.Split(header, ",") {
media := strings.TrimSpace(part)
if parsed, _, err := mime.ParseMediaType(media); err == nil {
media = parsed
} else if semi := strings.IndexByte(media, ';'); semi >= 0 {
media = media[:semi]
}
media = strings.ToLower(strings.TrimSpace(media))
if _, ok := wantSet[media]; ok {
return true
}
}
return false
}
func normaliseGatewayPath(path string) string {
path = strings.TrimSpace(path)
if path == "" {
return ""
}
if i := strings.IndexAny(path, "?#"); i >= 0 {
path = path[:i]
}
if !strings.HasPrefix(path, "/") {
path = "/" + path
}
for strings.Contains(path, "//") {
path = strings.ReplaceAll(path, "//", "/")
}
if len(path) > 1 {
path = strings.TrimRight(path, "/")
}
return path
}
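normaliseGatewayPath strips the query/fragment, guarantees a leading slash, collapses duplicate slashes, and trims any trailing slash (except for the bare root). A standalone copy of those rules, with worked inputs:

```go
package main

import (
	"fmt"
	"strings"
)

// normalisePath reproduces normaliseGatewayPath's rules for illustration.
func normalisePath(path string) string {
	path = strings.TrimSpace(path)
	if path == "" {
		return ""
	}
	// Drop query string and fragment.
	if i := strings.IndexAny(path, "?#"); i >= 0 {
		path = path[:i]
	}
	// Guarantee a leading slash.
	if !strings.HasPrefix(path, "/") {
		path = "/" + path
	}
	// Collapse runs of slashes.
	for strings.Contains(path, "//") {
		path = strings.ReplaceAll(path, "//", "/")
	}
	// Trim the trailing slash, but keep the bare root.
	if len(path) > 1 {
		path = strings.TrimRight(path, "/")
	}
	return path
}

func main() {
	fmt.Println(normalisePath("//v1//chat/completions/?stream=1")) // /v1/chat/completions
	fmt.Println(normalisePath("mcp#frag"))                         // /mcp
	fmt.Println(normalisePath("/"))                                // /
}
```

This is why the path-based detectors can compare against exact literals like "/v1/chat/completions" without worrying about trailing slashes or query strings.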
func decodeJSONObject(body []byte) (map[string]any, bool) {
body = bytes.TrimSpace(body)
if len(body) == 0 {
return nil, false
}
var obj map[string]any
if err := json.Unmarshal(body, &obj); err != nil {
return nil, false
}
return obj, true
}
func hasTopLevelFields(body []byte, fields ...string) bool {
obj, ok := decodeJSONObject(body)
if !ok {
return false
}
for _, field := range fields {
if _, ok := obj[field]; !ok {
return false
}
}
return true
}
func looksAnthropicBody(body []byte) bool {
obj, ok := decodeJSONObject(body)
if !ok {
return false
}
if _, ok := obj["system"]; ok {
return true
}
if _, ok := obj["max_tokens"]; ok {
return true
}
if _, ok := obj["anthropic_version"]; ok {
return true
}
messages, ok := obj["messages"].([]any)
if !ok || len(messages) == 0 {
return false
}
for _, raw := range messages {
msg, ok := raw.(map[string]any)
if !ok {
continue
}
if role, _ := msg["role"].(string); role == "system" {
return false
}
if blocks, ok := msg["content"].([]any); ok {
for _, rawBlock := range blocks {
block, ok := rawBlock.(map[string]any)
if !ok {
continue
}
switch block["type"] {
case "tool_use", "tool_result":
return true
}
}
}
}
return false
}
func messagesHaveNoSystemRole(body []byte) bool {
obj, ok := decodeJSONObject(body)
if !ok {
return false
}
messages, ok := obj["messages"].([]any)
if !ok || len(messages) == 0 {
return false
}
for _, raw := range messages {
msg, ok := raw.(map[string]any)
if !ok {
continue
}
if role, _ := msg["role"].(string); role == "system" {
return false
}
}
return true
}
func parseRawArgumentObject(raw json.RawMessage) map[string]any {
raw = bytes.TrimSpace(raw)
if len(raw) == 0 || bytes.Equal(raw, []byte("null")) {
return map[string]any{}
}
var encoded string
if err := json.Unmarshal(raw, &encoded); err == nil {
return parseArgumentString(encoded)
}
var args map[string]any
if err := json.Unmarshal(raw, &args); err == nil && args != nil {
return args
}
return map[string]any{"_raw": string(raw)}
}
func parseArgumentString(s string) map[string]any {
s = strings.TrimSpace(s)
if s == "" {
return map[string]any{}
}
var args map[string]any
if err := json.Unmarshal([]byte(s), &args); err == nil && args != nil {
return args
}
return map[string]any{"_raw": s}
}
func mapFromAny(v any) map[string]any {
switch typed := v.(type) {
case nil:
return map[string]any{}
case map[string]any:
if typed == nil {
return map[string]any{}
}
return typed
case json.RawMessage:
return parseRawArgumentObject(typed)
case string:
return parseArgumentString(typed)
default:
data, err := json.Marshal(typed)
if err != nil {
return map[string]any{"value": typed}
}
return parseRawArgumentObject(data)
}
}
func extractMCPText(result MCPResult) string {
var parts []string
for _, block := range result.Content {
if block.Text != "" && (block.Type == "" || block.Type == "text") {
parts = append(parts, block.Text)
}
}
parts = append(parts, extractTextFromAny(result.Result)...)
return strings.Join(parts, "\n")
}
func extractTextFromAny(v any) []string {
switch typed := v.(type) {
case nil:
return nil
case string:
if typed == "" {
return nil
}
return []string{typed}
case []byte:
if len(typed) == 0 {
return nil
}
return []string{string(typed)}
case []MCPContent:
var out []string
for _, block := range typed {
if block.Text != "" && (block.Type == "" || block.Type == "text") {
out = append(out, block.Text)
}
}
return out
case []any:
var out []string
for _, item := range typed {
out = append(out, extractTextFromAny(item)...)
}
return out
case []map[string]any:
var out []string
for _, item := range typed {
out = append(out, extractTextFromAny(item)...)
}
return out
case map[string]any:
for _, key := range []string{"text", "message", "output"} {
if text, ok := typed[key].(string); ok && text != "" {
return []string{text}
}
}
if content, ok := typed["content"]; ok {
return extractTextFromAny(content)
}
if result, ok := typed["result"]; ok {
return extractTextFromAny(result)
}
return nil
default:
data, err := json.Marshal(typed)
if err != nil || len(data) == 0 || bytes.Equal(data, []byte("null")) {
return nil
}
return []string{string(data)}
}
}
func extractMCPToolCalls(result MCPResult) []MCPToolCall {
var calls []MCPToolCall
calls = append(calls, result.ToolCalls...)
for _, block := range result.Content {
if block.Type != "tool_use" && block.Name == "" {
continue
}
args := block.Input
if len(args) == 0 {
args = block.Arguments
}
calls = append(calls, MCPToolCall{ID: block.ID, Name: block.Name, Arguments: args})
}
calls = append(calls, extractToolCallsFromAny(result.Result)...)
return calls
}
func extractToolCallsFromAny(v any) []MCPToolCall {
switch typed := v.(type) {
case nil:
return nil
case []MCPToolCall:
return typed
case []MCPContent:
var calls []MCPToolCall
for _, block := range typed {
if block.Type == "tool_use" || block.Name != "" {
args := block.Input
if len(args) == 0 {
args = block.Arguments
}
calls = append(calls, MCPToolCall{ID: block.ID, Name: block.Name, Arguments: args})
}
}
return calls
case []any:
var calls []MCPToolCall
for _, item := range typed {
calls = append(calls, extractToolCallsFromAny(item)...)
}
return calls
case []map[string]any:
var calls []MCPToolCall
for _, item := range typed {
calls = append(calls, extractToolCallsFromAny(item)...)
}
return calls
case map[string]any:
for _, key := range []string{"tool_calls", "toolCalls"} {
if raw, ok := typed[key]; ok {
return extractToolCallsFromAny(raw)
}
}
if raw, ok := typed["content"]; ok {
return extractToolCallsFromAny(raw)
}
name, _ := typed["name"].(string)
if name == "" {
if fn, ok := typed["function"].(map[string]any); ok {
name, _ = fn["name"].(string)
args := mapFromAny(fn["arguments"])
id, _ := typed["id"].(string)
return []MCPToolCall{{ID: id, Name: name, Arguments: args}}
}
return nil
}
id, _ := typed["id"].(string)
args := mapFromAny(typed["arguments"])
if len(args) == 0 {
args = mapFromAny(typed["input"])
}
return []MCPToolCall{{ID: id, Name: name, Arguments: args}}
default:
return nil
}
}
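extractToolCallsFromAny accepts both a flat shape ({"name": ..., "arguments": ...}) and the OpenAI nested shape ({"id": ..., "function": {"name": ..., "arguments": ...}}). A reduced sketch of just the map case (the `toolCall` type and `fromMap` name are local to this example):

```go
package main

import "fmt"

// toolCall is a local stand-in for MCPToolCall.
type toolCall struct {
	ID   string
	Name string
	Args map[string]any
}

// fromMap mirrors the map branch of extractToolCallsFromAny for the
// two common shapes: flat {name, arguments} and OpenAI's nested
// {id, function: {name, arguments}}.
func fromMap(m map[string]any) []toolCall {
	name, _ := m["name"].(string)
	if name == "" {
		// No top-level name: try the OpenAI nested shape.
		fn, ok := m["function"].(map[string]any)
		if !ok {
			return nil
		}
		name, _ = fn["name"].(string)
		args, _ := fn["arguments"].(map[string]any)
		id, _ := m["id"].(string)
		return []toolCall{{ID: id, Name: name, Args: args}}
	}
	id, _ := m["id"].(string)
	args, _ := m["arguments"].(map[string]any)
	return []toolCall{{ID: id, Name: name, Args: args}}
}

func main() {
	openAI := map[string]any{
		"id": "call_1",
		"function": map[string]any{
			"name":      "get_weather",
			"arguments": map[string]any{"city": "London"},
		},
	}
	calls := fromMap(openAI)
	fmt.Println(calls[0].ID, calls[0].Name) // call_1 get_weather
}
```

Handling both shapes in one place is what lets the outbound transformers stay ignorant of which vendor produced the result.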


@@ -0,0 +1,238 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"encoding/json"
"fmt"
)
// AnthropicTransformer maps Anthropic Messages requests and responses.
type AnthropicTransformer struct{}
func (AnthropicTransformer) Detect(body []byte, contentType, path string) bool {
if headerHasMedia(contentType, "application/anthropic+json") {
return true
}
if normaliseGatewayPath(path) == "/v1/messages" {
return true
}
if !hasTopLevelFields(body, "model", "messages") {
return false
}
return looksAnthropicBody(body) || messagesHaveNoSystemRole(body)
}
func (AnthropicTransformer) Normalise(body []byte) (MCPRequest, error) {
var req anthropicMessagesRequest
if err := json.Unmarshal(body, &req); err != nil {
return MCPRequest{}, err
}
if req.Model == "" {
return MCPRequest{}, fmt.Errorf("anthropic messages request missing model")
}
if len(req.Messages) == 0 {
return MCPRequest{}, fmt.Errorf("anthropic messages request missing messages")
}
params := map[string]any{
"source_format": "anthropic",
"model": req.Model,
"messages": normaliseAnthropicMessages(req.Messages),
}
if req.System != nil {
params["system"] = req.System
}
if req.MaxTokens != nil {
params["max_tokens"] = req.MaxTokens
}
if req.Temperature != nil {
params["temperature"] = req.Temperature
}
if req.Stream {
params["stream"] = req.Stream
}
if len(req.Tools) > 0 {
params["tools"] = normaliseAnthropicTools(req.Tools)
}
toolCalls := anthropicToolUsesFromMessages(req.Messages)
if len(toolCalls) > 0 {
call := toolCalls[0]
params["name"] = call.Name
params["arguments"] = call.Arguments
params["tool_calls"] = toolCalls
return MCPRequest{JSONRPC: "2.0", Method: "tools/call", Params: params}, nil
}
return MCPRequest{JSONRPC: "2.0", Method: "sampling/createMessage", Params: params}, nil
}
func (AnthropicTransformer) Transform(result MCPResult) ([]byte, error) {
text := extractMCPText(result)
toolCalls := extractMCPToolCalls(result)
content := make([]map[string]any, 0, 1+len(toolCalls))
if text != "" {
content = append(content, map[string]any{
"type": "text",
"text": text,
})
}
for i, call := range toolCalls {
id := call.ID
if id == "" {
id = fmt.Sprintf("toolu_%d", i)
}
content = append(content, map[string]any{
"type": "tool_use",
"id": id,
"name": call.Name,
"input": call.Arguments,
})
}
if len(content) == 0 {
content = append(content, map[string]any{
"type": "text",
"text": "",
})
}
stopReason := "end_turn"
if len(toolCalls) > 0 {
stopReason = "tool_use"
}
if result.StopReason != "" {
stopReason = result.StopReason
}
resp := map[string]any{
"id": anthropicResponseID(result.ID),
"type": "message",
"role": "assistant",
"model": "mcp-gateway",
"content": content,
"stop_reason": stopReason,
"stop_sequence": nil,
}
return json.Marshal(resp)
}
type anthropicMessagesRequest struct {
Model string `json:"model"`
MaxTokens any `json:"max_tokens,omitempty"`
System any `json:"system,omitempty"`
Messages []anthropicMessage `json:"messages"`
Tools []anthropicTool `json:"tools,omitempty"`
Temperature any `json:"temperature,omitempty"`
Stream bool `json:"stream,omitempty"`
}
type anthropicMessage struct {
Role string `json:"role"`
Content any `json:"content,omitempty"`
}
type anthropicTool struct {
Name string `json:"name"`
Description string `json:"description,omitempty"`
InputSchema any `json:"input_schema,omitempty"`
}
func normaliseAnthropicMessages(messages []anthropicMessage) []map[string]any {
out := make([]map[string]any, 0, len(messages))
for _, msg := range messages {
item := map[string]any{
"role": msg.Role,
}
if msg.Content != nil {
item["content"] = msg.Content
}
out = append(out, item)
}
return out
}
func normaliseAnthropicTools(tools []anthropicTool) []map[string]any {
out := make([]map[string]any, 0, len(tools))
for _, tool := range tools {
out = append(out, map[string]any{
"name": tool.Name,
"description": tool.Description,
"input_schema": tool.InputSchema,
})
}
return out
}
func anthropicToolUsesFromMessages(messages []anthropicMessage) []MCPToolCall {
var calls []MCPToolCall
for i := len(messages) - 1; i >= 0; i-- {
blocks := anthropicContentBlocks(messages[i].Content)
for _, block := range blocks {
if block.Type != "tool_use" || block.Name == "" {
continue
}
calls = append(calls, MCPToolCall{
ID: block.ID,
Name: block.Name,
Arguments: block.Input,
})
}
if len(calls) > 0 {
break
}
}
return calls
}
type anthropicContentBlock struct {
Type string `json:"type"`
Text string `json:"text,omitempty"`
ID string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
Input map[string]any `json:"input,omitempty"`
}
func anthropicContentBlocks(content any) []anthropicContentBlock {
switch typed := content.(type) {
case nil:
return nil
case []anthropicContentBlock:
return typed
case []any:
blocks := make([]anthropicContentBlock, 0, len(typed))
for _, item := range typed {
data, err := json.Marshal(item)
if err != nil {
continue
}
var block anthropicContentBlock
if err := json.Unmarshal(data, &block); err == nil {
blocks = append(blocks, block)
}
}
return blocks
case map[string]any:
data, err := json.Marshal(typed)
if err != nil {
return nil
}
var block anthropicContentBlock
if err := json.Unmarshal(data, &block); err != nil {
return nil
}
return []anthropicContentBlock{block}
case string:
return []anthropicContentBlock{{Type: "text", Text: typed}}
default:
return nil
}
}
func anthropicResponseID(id any) string {
if id == nil {
return "msg_mcp"
}
return fmt.Sprintf("msg_%v", id)
}
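anthropicContentBlocks above coerces loosely typed decoded JSON (`[]any`, `map[string]any`, or a bare string) into typed blocks by a marshal/unmarshal round-trip. A minimal self-contained sketch of that pattern, using a hypothetical `block` type rather than the package's own:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// block mirrors the shape of an Anthropic content block for illustration only.
type block struct {
	Type string `json:"type"`
	Text string `json:"text,omitempty"`
}

// coerce converts a []any (as produced by json.Unmarshal into `any`) into
// typed blocks via a marshal/unmarshal round-trip, skipping items that
// cannot be re-encoded or do not fit the target shape.
func coerce(items []any) []block {
	out := make([]block, 0, len(items))
	for _, item := range items {
		data, err := json.Marshal(item)
		if err != nil {
			continue
		}
		var b block
		if err := json.Unmarshal(data, &b); err == nil {
			out = append(out, b)
		}
	}
	return out
}

func main() {
	var decoded []any
	_ = json.Unmarshal([]byte(`[{"type":"text","text":"hi"},{"type":"tool_use","id":"t1"}]`), &decoded)
	fmt.Println(coerce(decoded))
}
```

The round-trip costs an extra encode per item but avoids hand-written type switches over nested `map[string]any` values.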


@ -0,0 +1,112 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"bytes"
"encoding/json"
"fmt"
"strings"
)
// HoneypotTransformer absorbs malformed or probe-like input and returns a
// plausible synthetic response without dispatching to real tools.
type HoneypotTransformer struct{}
func (HoneypotTransformer) Detect(body []byte, contentType, path string) bool {
trimmed := bytes.TrimSpace(body)
if len(trimmed) == 0 {
return false
}
if !json.Valid(trimmed) {
return true
}
var obj map[string]any
if err := json.Unmarshal(trimmed, &obj); err != nil {
return true
}
return looksProbeLike(trimmed, contentType, path)
}
func (HoneypotTransformer) Normalise(body []byte) (MCPRequest, error) {
params := map[string]any{
"source_format": "honeypot",
"raw": honeypotSnippet(body),
"malformed": !json.Valid(bytes.TrimSpace(body)),
}
return MCPRequest{
JSONRPC: "2.0",
Method: "honeypot/respond",
Params: params,
}, nil
}
func (HoneypotTransformer) Transform(result MCPResult) ([]byte, error) {
text := extractMCPText(result)
if text == "" {
text = "Request received. The gateway is processing the available context and will return compatible MCP output when a valid protocol envelope is provided."
}
resp := map[string]any{
"id": honeypotResponseID(result.ID),
"object": "chat.completion",
"created": 0,
"model": "mcp-gateway",
"choices": []map[string]any{
{
"index": 0,
"message": map[string]any{
"role": "assistant",
"content": text,
},
"finish_reason": "stop",
},
},
"usage": map[string]any{
"prompt_tokens": 0,
"completion_tokens": 0,
"total_tokens": 0,
},
}
return json.Marshal(resp)
}
func looksProbeLike(body []byte, contentType, path string) bool {
haystack := strings.ToLower(strings.Join([]string{
string(body),
contentType,
path,
}, "\n"))
for _, marker := range []string{
"ignore previous",
"system prompt",
"developer message",
"/etc/passwd",
"../../",
"dump secrets",
"jailbreak",
"prompt injection",
} {
if strings.Contains(haystack, marker) {
return true
}
}
return false
}
func honeypotSnippet(body []byte) string {
s := string(bytes.TrimSpace(body))
const max = 4096
if len(s) <= max {
return s
}
return s[:max]
}
func honeypotResponseID(id any) string {
if id == nil {
return "chatcmpl-honeypot"
}
return fmt.Sprintf("chatcmpl-honeypot-%v", id)
}
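Detect and looksProbeLike above reduce honeypot classification to two checks: reject anything that is not a well-formed JSON object, and flag well-formed objects that contain known probe markers. A condensed standalone sketch of that decision order:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strings"
)

// suspicious reproduces the honeypot's decision order in standalone form:
// empty bodies are ignored, anything that is not a JSON object is caught,
// and well-formed objects are caught only when they contain probe markers.
func suspicious(body []byte, path string) bool {
	trimmed := bytes.TrimSpace(body)
	if len(trimmed) == 0 {
		return false
	}
	var obj map[string]any
	if err := json.Unmarshal(trimmed, &obj); err != nil {
		return true // malformed or non-object payload
	}
	haystack := strings.ToLower(string(trimmed) + "\n" + path)
	for _, marker := range []string{"ignore previous", "/etc/passwd", "../../"} {
		if strings.Contains(haystack, marker) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(suspicious([]byte(`{not-json`), "/probe"))                      // true: malformed
	fmt.Println(suspicious([]byte(`{"q":"ignore previous instructions"}`), "")) // true: probe marker
	fmt.Println(suspicious([]byte(`{"q":"hello"}`), "/v1/chat"))                // false: benign
}
```

Because the honeypot sits ahead of the MCP-native fallback in the priority walk, this check must never match a valid protocol envelope, which is why benign objects fall through.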


@ -0,0 +1,247 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"encoding/json"
"fmt"
)
// OpenAITransformer maps OpenAI Chat Completions requests and responses to and from MCP.
type OpenAITransformer struct{}
func (OpenAITransformer) Detect(body []byte, contentType, path string) bool {
if headerHasMedia(contentType, "application/openai+json") {
return true
}
if normaliseGatewayPath(path) == "/v1/chat/completions" {
return true
}
return hasTopLevelFields(body, "model", "messages")
}
func (OpenAITransformer) Normalise(body []byte) (MCPRequest, error) {
var req openAIChatCompletionRequest
if err := json.Unmarshal(body, &req); err != nil {
return MCPRequest{}, err
}
if req.Model == "" {
return MCPRequest{}, fmt.Errorf("openai chat completion request missing model")
}
if len(req.Messages) == 0 {
return MCPRequest{}, fmt.Errorf("openai chat completion request missing messages")
}
params := map[string]any{
"source_format": "openai",
"model": req.Model,
"messages": normaliseOpenAIMessages(req.Messages),
}
if len(req.Tools) > 0 {
params["tools"] = normaliseOpenAITools(req.Tools)
}
if req.ToolChoice != nil {
params["tool_choice"] = req.ToolChoice
}
if req.MaxTokens != nil {
params["max_tokens"] = req.MaxTokens
}
if req.MaxCompletionTokens != nil {
params["max_completion_tokens"] = req.MaxCompletionTokens
}
if req.Temperature != nil {
params["temperature"] = req.Temperature
}
if req.Stream {
params["stream"] = req.Stream
}
toolCalls := openAIToolCallsFromMessages(req.Messages)
if len(toolCalls) > 0 {
call := toolCalls[0]
params["name"] = call.Name
params["arguments"] = call.Arguments
params["tool_calls"] = toolCalls
return MCPRequest{JSONRPC: "2.0", Method: "tools/call", Params: params}, nil
}
return MCPRequest{JSONRPC: "2.0", Method: "sampling/createMessage", Params: params}, nil
}
func (OpenAITransformer) Transform(result MCPResult) ([]byte, error) {
text := extractMCPText(result)
toolCalls := extractMCPToolCalls(result)
message := map[string]any{
"role": "assistant",
}
if text != "" {
message["content"] = text
} else if len(toolCalls) > 0 {
message["content"] = nil
} else {
message["content"] = ""
}
if len(toolCalls) > 0 {
message["tool_calls"] = openAIToolCallsFromMCP(toolCalls)
}
finishReason := "stop"
if len(toolCalls) > 0 {
finishReason = "tool_calls"
}
if result.StopReason != "" {
finishReason = result.StopReason
}
resp := map[string]any{
"id": openAIResponseID(result.ID),
"object": "chat.completion",
"created": 0,
"model": "mcp-gateway",
"choices": []map[string]any{
{
"index": 0,
"message": message,
"finish_reason": finishReason,
},
},
}
return json.Marshal(resp)
}
type openAIChatCompletionRequest struct {
Model string `json:"model"`
Messages []openAIMessage `json:"messages"`
Tools []openAITool `json:"tools,omitempty"`
ToolChoice any `json:"tool_choice,omitempty"`
MaxTokens any `json:"max_tokens,omitempty"`
MaxCompletionTokens any `json:"max_completion_tokens,omitempty"`
Temperature any `json:"temperature,omitempty"`
Stream bool `json:"stream,omitempty"`
}
type openAIMessage struct {
Role string `json:"role"`
Content any `json:"content,omitempty"`
Name string `json:"name,omitempty"`
ToolCallID string `json:"tool_call_id,omitempty"`
ToolCalls []openAIToolCall `json:"tool_calls,omitempty"`
}
type openAITool struct {
Type string `json:"type"`
Function openAIFunctionMetadata `json:"function"`
}
type openAIFunctionMetadata struct {
Name string `json:"name"`
Description string `json:"description,omitempty"`
Parameters any `json:"parameters,omitempty"`
}
type openAIToolCall struct {
ID string `json:"id,omitempty"`
Type string `json:"type,omitempty"`
Function openAIFunctionCall `json:"function"`
}
type openAIFunctionCall struct {
Name string `json:"name"`
Arguments json.RawMessage `json:"arguments,omitempty"`
}
func normaliseOpenAIMessages(messages []openAIMessage) []map[string]any {
out := make([]map[string]any, 0, len(messages))
for _, msg := range messages {
item := map[string]any{
"role": msg.Role,
}
if msg.Content != nil {
item["content"] = msg.Content
}
if msg.Name != "" {
item["name"] = msg.Name
}
if msg.ToolCallID != "" {
item["tool_call_id"] = msg.ToolCallID
}
if len(msg.ToolCalls) > 0 {
item["tool_calls"] = openAIToolCallsFromMessages([]openAIMessage{msg})
}
out = append(out, item)
}
return out
}
func normaliseOpenAITools(tools []openAITool) []map[string]any {
out := make([]map[string]any, 0, len(tools))
for _, tool := range tools {
if tool.Type != "" && tool.Type != "function" {
out = append(out, map[string]any{
"type": tool.Type,
"function": tool.Function,
})
continue
}
item := map[string]any{
"name": tool.Function.Name,
"description": tool.Function.Description,
"input_schema": tool.Function.Parameters,
}
out = append(out, item)
}
return out
}
func openAIToolCallsFromMessages(messages []openAIMessage) []MCPToolCall {
var calls []MCPToolCall
for i := len(messages) - 1; i >= 0; i-- {
msg := messages[i]
if len(msg.ToolCalls) == 0 {
continue
}
for _, call := range msg.ToolCalls {
if call.Function.Name == "" {
continue
}
calls = append(calls, MCPToolCall{
ID: call.ID,
Name: call.Function.Name,
Arguments: parseRawArgumentObject(call.Function.Arguments),
})
}
break
}
return calls
}
func openAIToolCallsFromMCP(calls []MCPToolCall) []map[string]any {
out := make([]map[string]any, 0, len(calls))
for i, call := range calls {
id := call.ID
if id == "" {
id = fmt.Sprintf("call_%d", i)
}
args, err := json.Marshal(call.Arguments)
if err != nil {
args = []byte("{}")
}
out = append(out, map[string]any{
"id": id,
"type": "function",
"function": map[string]any{
"name": call.Name,
"arguments": string(args),
},
})
}
return out
}
func openAIResponseID(id any) string {
if id == nil {
return "chatcmpl-mcp"
}
return fmt.Sprintf("chatcmpl-%v", id)
}


pkg/mcp/transformer_test.go Normal file

@ -0,0 +1,191 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"encoding/json"
"testing"
)
func TestNegotiate_OpenAI_Good(t *testing.T) {
body := []byte(`{"model":"gpt-4o-mini","messages":[{"role":"user","content":"hello"}]}`)
if _, ok := NegotiateTransformer(body, "", "/v1/chat/completions").(OpenAITransformer); !ok {
t.Fatal("expected OpenAITransformer for chat completions path")
}
}
func TestNegotiate_Anthropic_Good(t *testing.T) {
body := []byte(`{"model":"claude-3-5-sonnet","max_tokens":128,"messages":[{"role":"user","content":"hello"}]}`)
if _, ok := NegotiateTransformer(body, "", "/v1/messages").(AnthropicTransformer); !ok {
t.Fatal("expected AnthropicTransformer for messages path")
}
}
func TestNegotiate_MCPNative_Good(t *testing.T) {
body := []byte(`{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}`)
if _, ok := NegotiateTransformer(body, "application/mcp+json", "/mcp").(MCPNativeTransformer); !ok {
t.Fatal("expected MCPNativeTransformer for native MCP request")
}
}
func TestOpenAITransformer_Normalise_Good(t *testing.T) {
body := []byte(`{
"model": "gpt-4o",
"messages": [
{
"role": "assistant",
"content": null,
"tool_calls": [
{
"id": "call_1",
"type": "function",
"function": {
"name": "file_read",
"arguments": "{\"path\":\"README.md\"}"
}
}
]
}
]
}`)
req, err := (OpenAITransformer{}).Normalise(body)
if err != nil {
t.Fatalf("Normalise failed: %v", err)
}
if req.JSONRPC != "2.0" {
t.Fatalf("expected JSON-RPC 2.0, got %q", req.JSONRPC)
}
if req.Method != "tools/call" {
t.Fatalf("expected tools/call, got %q", req.Method)
}
if req.Params["source_format"] != "openai" {
t.Fatalf("expected source_format openai, got %v", req.Params["source_format"])
}
if req.Params["model"] != "gpt-4o" {
t.Fatalf("expected model to be preserved, got %v", req.Params["model"])
}
if req.Params["name"] != "file_read" {
t.Fatalf("expected tool name file_read, got %v", req.Params["name"])
}
args, ok := req.Params["arguments"].(map[string]any)
if !ok {
t.Fatalf("expected argument map, got %T", req.Params["arguments"])
}
if args["path"] != "README.md" {
t.Fatalf("expected README.md path, got %v", args["path"])
}
}
func TestOpenAITransformer_Transform_Good(t *testing.T) {
data, err := (OpenAITransformer{}).Transform(MCPResult{
ID: 7,
Result: map[string]any{
"content": []any{
map[string]any{"type": "text", "text": "done"},
},
},
})
if err != nil {
t.Fatalf("Transform failed: %v", err)
}
var resp map[string]any
if err := json.Unmarshal(data, &resp); err != nil {
t.Fatalf("response is not JSON: %v", err)
}
if resp["object"] != "chat.completion" {
t.Fatalf("expected chat.completion object, got %v", resp["object"])
}
choices := resp["choices"].([]any)
message := choices[0].(map[string]any)["message"].(map[string]any)
if message["content"] != "done" {
t.Fatalf("expected content done, got %v", message["content"])
}
}
func TestAnthropicTransformer_Normalise_Good(t *testing.T) {
body := []byte(`{
"model": "claude-3-5-sonnet",
"max_tokens": 256,
"messages": [
{
"role": "assistant",
"content": [
{
"type": "tool_use",
"id": "toolu_1",
"name": "file_read",
"input": {"path":"README.md"}
}
]
}
]
}`)
req, err := (AnthropicTransformer{}).Normalise(body)
if err != nil {
t.Fatalf("Normalise failed: %v", err)
}
if req.Method != "tools/call" {
t.Fatalf("expected tools/call, got %q", req.Method)
}
if req.Params["source_format"] != "anthropic" {
t.Fatalf("expected source_format anthropic, got %v", req.Params["source_format"])
}
if req.Params["name"] != "file_read" {
t.Fatalf("expected tool name file_read, got %v", req.Params["name"])
}
args, ok := req.Params["arguments"].(map[string]any)
if !ok {
t.Fatalf("expected argument map, got %T", req.Params["arguments"])
}
if args["path"] != "README.md" {
t.Fatalf("expected README.md path, got %v", args["path"])
}
}
func TestAnthropicTransformer_Transform_Good(t *testing.T) {
data, err := (AnthropicTransformer{}).Transform(MCPResult{
ID: "abc",
Content: []MCPContent{{Type: "text", Text: "done"}},
})
if err != nil {
t.Fatalf("Transform failed: %v", err)
}
var resp map[string]any
if err := json.Unmarshal(data, &resp); err != nil {
t.Fatalf("response is not JSON: %v", err)
}
if resp["type"] != "message" {
t.Fatalf("expected message type, got %v", resp["type"])
}
content := resp["content"].([]any)
first := content[0].(map[string]any)
if first["text"] != "done" {
t.Fatalf("expected text done, got %v", first["text"])
}
}
func TestHoneypotTransformer_Detect_FallbackOnGarbage(t *testing.T) {
body := []byte(`{not-json`)
if !(HoneypotTransformer{}).Detect(body, "", "/probe") {
t.Fatal("expected honeypot to detect malformed input")
}
if _, ok := NegotiateTransformer(body, "", "/probe").(HoneypotTransformer); !ok {
t.Fatal("expected negotiation to select honeypot for malformed input")
}
}
func TestNegotiate_Priority_Ugly(t *testing.T) {
body := []byte(`{"model":"claude-3-5-sonnet","max_tokens":128,"messages":[{"role":"user","content":"hello"}]}`)
if _, ok := NegotiateTransformer(body, "application/openai+json", "/v1/messages").(OpenAITransformer); !ok {
t.Fatal("expected explicit OpenAI media type to beat path/body inspection")
}
}
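The Ugly test above pins down the negotiation contract: an explicit media type beats path and body inspection. A minimal sketch of that priority walk, with a hypothetical `detector` type standing in for the TransformerIn `Detect` method:

```go
package main

import (
	"fmt"
	"strings"
)

// detector is a stand-in for TransformerIn.Detect for illustration.
type detector func(body []byte, contentType, path string) bool

// negotiate walks detectors in priority order and returns the index of the
// first match: Accept/Content-Type checks run before path checks, which run
// before body inspection, with an always-true fallback in last position.
func negotiate(body []byte, contentType, path string, detectors []detector) int {
	for i, detect := range detectors {
		if detect(body, contentType, path) {
			return i
		}
	}
	return len(detectors) - 1 // the fallback always matches by position
}

func main() {
	detectors := []detector{
		func(_ []byte, ct, _ string) bool { return strings.Contains(ct, "application/openai+json") }, // header
		func(_ []byte, _, p string) bool { return p == "/v1/messages" },                              // path
		func(b []byte, _, _ string) bool { return strings.Contains(string(b), `"jsonrpc"`) },         // body
		func(_ []byte, _, _ string) bool { return true },                                             // fallback
	}
	// Header wins even though the path would also match.
	fmt.Println(negotiate(nil, "application/openai+json", "/v1/messages", detectors)) // 0
}
```

Ordering the list so the cheapest, most explicit signals run first keeps body inspection, the most expensive and most ambiguous check, as a last resort before the fallback.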


@ -2,6 +2,7 @@ package mcp
 import (
 	"bufio"
+	"context"
 	"encoding/json"
 	"fmt"
 	"net"
@ -10,8 +11,6 @@ import (
 	"strings"
 	"testing"
 	"time"
-	"context"
 )

 // jsonRPCRequest builds a raw JSON-RPC 2.0 request string with newline delimiter.
@ -148,20 +147,20 @@ func TestTCPTransport_E2E_FullRoundTrip(t *testing.T) {
 	scanner := bufio.NewScanner(conn)
 	scanner.Buffer(make([]byte, 64*1024), 10*1024*1024)

-	// Step 1: Send initialize request
+	// Step 1: Send initialise request
 	initReq := jsonRPCRequest(1, "initialize", map[string]any{
 		"protocolVersion": "2024-11-05",
 		"capabilities":    map[string]any{},
 		"clientInfo":      map[string]any{"name": "TestClient", "version": "1.0.0"},
 	})
 	if _, err := conn.Write([]byte(initReq)); err != nil {
-		t.Fatalf("Failed to send initialize: %v", err)
+		t.Fatalf("Failed to send initialise: %v", err)
 	}

-	// Read initialize response
+	// Read initialise response
 	initResp := readJSONRPCResponse(t, scanner, conn)
 	if initResp["error"] != nil {
-		t.Fatalf("Initialize returned error: %v", initResp["error"])
+		t.Fatalf("Initialise returned error: %v", initResp["error"])
 	}
 	result, ok := initResp["result"].(map[string]any)
 	if !ok {
@ -291,7 +290,7 @@ func TestTCPTransport_E2E_FileWrite(t *testing.T) {
 	scanner := bufio.NewScanner(conn)
 	scanner.Buffer(make([]byte, 64*1024), 10*1024*1024)

-	// Initialize handshake
+	// Initialise handshake
 	conn.Write([]byte(jsonRPCRequest(1, "initialize", map[string]any{
 		"protocolVersion": "2024-11-05",
 		"capabilities":    map[string]any{},
@ -379,7 +378,7 @@ func TestUnixTransport_E2E_FullRoundTrip(t *testing.T) {
 	scanner := bufio.NewScanner(conn)
 	scanner.Buffer(make([]byte, 64*1024), 10*1024*1024)

-	// Step 1: Initialize
+	// Step 1: Initialise
 	conn.Write([]byte(jsonRPCRequest(1, "initialize", map[string]any{
 		"protocolVersion": "2024-11-05",
 		"capabilities":    map[string]any{},
@ -387,10 +386,10 @@ func TestUnixTransport_E2E_FullRoundTrip(t *testing.T) {
 	})))
 	initResp := readJSONRPCResponse(t, scanner, conn)
 	if initResp["error"] != nil {
-		t.Fatalf("Initialize returned error: %v", initResp["error"])
+		t.Fatalf("Initialise returned error: %v", initResp["error"])
 	}

-	// Step 2: Send initialized notification
+	// Step 2: Send initialised notification
 	conn.Write([]byte(jsonRPCNotification("notifications/initialized")))

 	// Step 3: tools/list
@ -488,7 +487,7 @@ func TestUnixTransport_E2E_DirList(t *testing.T) {
 	scanner := bufio.NewScanner(conn)
 	scanner.Buffer(make([]byte, 64*1024), 10*1024*1024)

-	// Initialize
+	// Initialise
 	conn.Write([]byte(jsonRPCRequest(1, "initialize", map[string]any{
 		"protocolVersion": "2024-11-05",
 		"capabilities":    map[string]any{},
@ -610,7 +609,7 @@ func TestTCPTransport_E2E_ToolsDiscovery(t *testing.T) {
 	scanner := bufio.NewScanner(conn)
 	scanner.Buffer(make([]byte, 64*1024), 10*1024*1024)

-	// Initialize
+	// Initialise
 	conn.Write([]byte(jsonRPCRequest(1, "initialize", map[string]any{
 		"protocolVersion": "2024-11-05",
 		"capabilities":    map[string]any{},
@ -686,7 +685,7 @@ func TestTCPTransport_E2E_ErrorHandling(t *testing.T) {
 	scanner := bufio.NewScanner(conn)
 	scanner.Buffer(make([]byte, 64*1024), 10*1024*1024)

-	// Initialize
+	// Initialise
 	conn.Write([]byte(jsonRPCRequest(1, "initialize", map[string]any{
 		"protocolVersion": "2024-11-05",
 		"capabilities":    map[string]any{},
@ -737,6 +736,3 @@ func TestTCPTransport_E2E_ErrorHandling(t *testing.T) {
 	cancel()
 	<-errCh
 }
-
-// Suppress "unused import" for fmt — used in helpers
-var _ = fmt.Sprintf


@ -4,13 +4,18 @@ package mcp
 import (
 	"context"
-	"crypto/subtle"
+	// Note: AX-6 — HTTP transport boundary needs streaming JSON encode/decode against ResponseWriter and MaxBytesReader.
+	"encoding/json"
+	// Note: AX-6 — structural HTTP transport requires binding an explicit TCP listener.
 	"net"
+	// Note: AX-6 — structural HTTP transport boundary requires handlers, requests, status codes, and server lifecycle APIs.
 	"net/http"
-	"os"
 	"time"

-	coreerr "forge.lthn.ai/core/go-log"
+	core "dappco.re/go/core"
+	api "dappco.re/go/api"
+	coreerr "dappco.re/go/log"
+	"github.com/gin-gonic/gin"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )
@ -31,12 +36,18 @@ const DefaultHTTPAddr = "127.0.0.1:9101"
 //	svc.ServeHTTP(ctx, "0.0.0.0:9101")
 //
 // Endpoint /mcp: GET (SSE stream), POST (JSON-RPC), DELETE (terminate session).
+//
+// Additional endpoints:
+//   - POST /mcp/auth: exchange API token for JWT
+//   - /v1/tools/<tool_name>: auto-mounted REST bridge for MCP tools
+//   - /health: unauthenticated health endpoint
+//   - /.well-known/mcp-servers.json: MCP portal discovery
 func (s *Service) ServeHTTP(ctx context.Context, addr string) error {
 	if addr == "" {
 		addr = DefaultHTTPAddr
 	}

-	authToken := os.Getenv("MCP_AUTH_TOKEN")
+	authToken := core.Env("MCP_AUTH_TOKEN")

 	handler := mcp.NewStreamableHTTPHandler(
 		func(r *http.Request) *mcp.Server {
@ -47,13 +58,25 @@ func (s *Service) ServeHTTP(ctx context.Context, addr string) error {
 		},
 	)

+	toolBridge := api.NewToolBridge("/v1/tools")
+	BridgeToAPI(s, toolBridge)
+	toolEngine := gin.New()
+	toolBridge.RegisterRoutes(toolEngine.Group("/v1/tools"))
+	toolHandler := withAuth(authToken, toolEngine)
+
 	mux := http.NewServeMux()
 	mux.Handle("/mcp", withAuth(authToken, handler))
+	mux.Handle("/v1/tools", toolHandler)
+	mux.Handle("/v1/tools/", toolHandler)
+	mux.HandleFunc("/mcp/auth", func(w http.ResponseWriter, r *http.Request) {
+		serveMCPAuthExchange(w, r, authToken)
+	})
+	mux.HandleFunc("/.well-known/mcp-servers.json", handleMCPDiscovery)

 	// Health check (no auth)
 	mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
 		w.Header().Set("Content-Type", "application/json")
-		w.Write([]byte(`{"status":"ok"}`))
+		_ = json.NewEncoder(w).Encode(map[string]any{"status": "ok"})
 	})

 	listener, err := net.Listen("tcp", addr)
@ -71,7 +94,7 @@ func (s *Service) ServeHTTP(ctx context.Context, addr string) error {
 		<-ctx.Done()
 		shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
 		defer cancel()
-		server.Shutdown(shutdownCtx)
+		_ = server.Shutdown(shutdownCtx)
 	}()

 	if err := server.Serve(listener); err != nil && err != http.ErrServerClosed {
@ -80,23 +103,185 @@ func (s *Service) ServeHTTP(ctx context.Context, addr string) error {
 	return nil
 }

-// withAuth wraps an http.Handler with Bearer token authentication.
-// If token is empty, authentication is disabled (passthrough).
-func withAuth(token string, next http.Handler) http.Handler {
-	if token == "" {
-		return next
-	}
-	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-		auth := r.Header.Get("Authorization")
-		if len(auth) < 7 || auth[:7] != "Bearer " {
-			http.Error(w, `{"error":"missing Bearer token"}`, http.StatusUnauthorized)
-			return
-		}
-		provided := auth[7:]
-		if subtle.ConstantTimeCompare([]byte(provided), []byte(token)) != 1 {
-			http.Error(w, `{"error":"invalid token"}`, http.StatusUnauthorized)
-			return
-		}
-		next.ServeHTTP(w, r)
-	})
-}
+type mcpAuthExchangeRequest struct {
+	Token        string   `json:"token"`
+	Workspace    string   `json:"workspace"`
+	Entitlements []string `json:"entitlements"`
+	Sub          string   `json:"sub"`
+}
+
+type mcpAuthExchangeResponse struct {
+	AccessToken string `json:"access_token"`
+	TokenType   string `json:"token_type"`
+	ExpiresIn   int64  `json:"expires_in"`
+	ExpiresAt   int64  `json:"expires_at"`
+}
+
+type mcpDiscoveryResponse struct {
+	Servers []mcpDiscoveryServer `json:"servers"`
+}
+
+type mcpDiscoveryServer struct {
+	ID             string         `json:"id"`
+	Name           string         `json:"name"`
+	Description    string         `json:"description"`
+	Connection     map[string]any `json:"connection"`
+	Capabilities   []string       `json:"capabilities"`
+	UseWhen        []string       `json:"use_when"`
+	RelatedServers []string       `json:"related_servers"`
+}
+
+// withAuth wraps an http.Handler with Bearer token authentication.
+// If token is empty, authentication is disabled for local development.
+func withAuth(token string, next http.Handler) http.Handler {
+	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		if core.Trim(token) == "" {
+			next.ServeHTTP(w, r)
+			return
+		}
+		claims, err := parseAuthClaims(r.Header.Get("Authorization"), token)
+		if err != nil {
+			http.Error(w, `{"error":"invalid token"}`, http.StatusUnauthorized)
+			return
+		}
+		if claims != nil {
+			r = r.WithContext(withAuthClaims(r.Context(), claims))
+		}
+		next.ServeHTTP(w, r)
+	})
+}
+
+func serveMCPAuthExchange(w http.ResponseWriter, r *http.Request, apiToken string) {
+	if r.Method != http.MethodPost {
+		w.WriteHeader(http.StatusMethodNotAllowed)
+		return
+	}
+	apiToken = core.Trim(apiToken)
+	if apiToken == "" {
+		w.Header().Set("Content-Type", "application/json")
+		w.WriteHeader(http.StatusUnauthorized)
+		_ = json.NewEncoder(w).Encode(api.Fail("unauthorized", "authentication is not configured"))
+		return
+	}
+	var req mcpAuthExchangeRequest
+	if err := json.NewDecoder(http.MaxBytesReader(w, r.Body, 10<<20)).Decode(&req); err != nil {
+		w.Header().Set("Content-Type", "application/json")
+		w.WriteHeader(http.StatusBadRequest)
+		_ = json.NewEncoder(w).Encode(api.Fail("invalid_request", "invalid JSON payload"))
+		return
+	}
+	providedToken := core.Trim(extractBearerToken(r.Header.Get("Authorization")))
+	if providedToken == "" {
+		providedToken = core.Trim(req.Token)
+	}
+	if providedToken == "" {
+		w.Header().Set("Content-Type", "application/json")
+		w.WriteHeader(http.StatusBadRequest)
+		_ = json.NewEncoder(w).Encode(api.Fail("invalid_request", "missing token"))
+		return
+	}
+	if _, err := parseAuthClaims("Bearer "+providedToken, apiToken); err != nil {
+		w.Header().Set("Content-Type", "application/json")
+		w.WriteHeader(http.StatusUnauthorized)
+		_ = json.NewEncoder(w).Encode(api.Fail("unauthorized", "invalid API token"))
+		return
+	}
+	cfg := currentAuthConfig(apiToken)
+	now := time.Now()
+	claims := authClaims{
+		Workspace:    core.Trim(req.Workspace),
+		Entitlements: dedupeEntitlements(req.Entitlements),
+		Subject:      core.Trim(req.Sub),
+		IssuedAt:     now.Unix(),
+		ExpiresAt:    now.Unix() + int64(cfg.ttl.Seconds()),
+	}
+	minted, err := mintJWTToken(claims, cfg)
+	if err != nil {
+		w.Header().Set("Content-Type", "application/json")
+		w.WriteHeader(http.StatusInternalServerError)
+		_ = json.NewEncoder(w).Encode(api.Fail("token_error", "failed to mint token"))
+		return
+	}
+	w.Header().Set("Content-Type", "application/json")
+	_ = json.NewEncoder(w).Encode(mcpAuthExchangeResponse{
+		AccessToken: minted,
+		TokenType:   "Bearer",
+		ExpiresIn:   int64(cfg.ttl.Seconds()),
+		ExpiresAt:   claims.ExpiresAt,
+	})
+}
+
+func dedupeEntitlements(entitlements []string) []string {
+	if len(entitlements) == 0 {
+		return nil
+	}
+	seen := make(map[string]struct{}, len(entitlements))
+	out := make([]string, 0, len(entitlements))
+	for _, ent := range entitlements {
+		e := core.Trim(ent)
+		if e == "" {
+			continue
+		}
+		if _, ok := seen[e]; ok {
+			continue
+		}
+		seen[e] = struct{}{}
+		out = append(out, e)
+	}
+	return out
+}
+
+func handleMCPDiscovery(w http.ResponseWriter, r *http.Request) {
+	resp := mcpDiscoveryResponse{
+		Servers: []mcpDiscoveryServer{
+			{
+				ID:          "core-agent",
+				Name:        "Core Agent",
+				Description: "Dispatch agents, manage workspaces, search OpenBrain",
+				Connection: map[string]any{
+					"type":    "stdio",
+					"command": "core-agent",
+					"args":    []string{"mcp"},
+				},
+				Capabilities: []string{"tools", "resources"},
+				UseWhen: []string{
+					"Need to dispatch work to Codex/Claude/Gemini",
+					"Need workspace status",
+					"Need semantic search",
+				},
+				RelatedServers: []string{"core-mcp"},
+			},
+			{
+				ID:          "core-mcp",
+				Name:        "Core MCP",
+				Description: "File ops, process and build tools, RAG search, webview, dashboards — the agent-facing MCP framework.",
+				Connection: map[string]any{
+					"type":    "stdio",
+					"command": "core-mcp",
+				},
+				Capabilities: []string{"tools", "resources", "logging"},
+				UseWhen: []string{
+					"Need to read/write files inside a workspace",
+					"Need to start or monitor processes",
+					"Need to run RAG queries or index documents",
+					"Need to render or update an embedded dashboard view",
+				},
+				RelatedServers: []string{"core-agent"},
+			},
+		},
+	}
+	w.Header().Set("Content-Type", "application/json")
+	if err := json.NewEncoder(w).Encode(resp); err != nil {
+		w.WriteHeader(http.StatusInternalServerError)
+		_ = json.NewEncoder(w).Encode(api.Fail("server_error", "failed to encode discovery payload"))
+	}
+}
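The pre-JWT middleware in this file compared the configured token with crypto/subtle so that response timing does not reveal how many leading bytes of a guessed token were correct. That guard, extracted into a standalone sketch with a hypothetical helper name:

```go
package main

import (
	"crypto/subtle"
	"fmt"
	"strings"
)

// bearerOK validates an Authorization header against a configured token
// using a constant-time comparison. A naive == comparison can short-circuit
// on the first mismatching byte, which leaks prefix information to an
// attacker measuring response latency.
func bearerOK(authorization, token string) bool {
	provided, ok := strings.CutPrefix(authorization, "Bearer ")
	if !ok {
		return false
	}
	return subtle.ConstantTimeCompare([]byte(provided), []byte(token)) == 1
}

func main() {
	fmt.Println(bearerOK("Bearer my-token", "my-token")) // true
	fmt.Println(bearerOK("Token my-token", "my-token"))  // false: wrong scheme
	fmt.Println(bearerOK("Bearer wrong", "my-token"))    // false
}
```

Note that ConstantTimeCompare returns 0 immediately when the lengths differ, so it hides byte contents, not token length.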


@ -107,6 +107,44 @@ func TestServeHTTP_Good_AuthRequired(t *testing.T) {
<-errCh <-errCh
} }
func TestServeHTTP_Good_NoAuthConfigured(t *testing.T) {
os.Unsetenv("MCP_AUTH_TOKEN")
s, err := New(Options{})
if err != nil {
t.Fatalf("Failed to create service: %v", err)
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
listener, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
t.Fatalf("Failed to find free port: %v", err)
}
addr := listener.Addr().String()
listener.Close()
errCh := make(chan error, 1)
go func() {
errCh <- s.ServeHTTP(ctx, addr)
}()
time.Sleep(100 * time.Millisecond)
resp, err := http.Get(fmt.Sprintf("http://%s/mcp", addr))
if err != nil {
t.Fatalf("request failed: %v", err)
}
resp.Body.Close()
if resp.StatusCode == 401 {
t.Fatalf("expected /mcp to be open without MCP_AUTH_TOKEN, got %d", resp.StatusCode)
}
cancel()
<-errCh
}
func TestWithAuth_Good_ValidToken(t *testing.T) { func TestWithAuth_Good_ValidToken(t *testing.T) {
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(200) w.WriteHeader(200)
@ -157,19 +195,34 @@ func TestWithAuth_Bad_MissingToken(t *testing.T) {
} }
} }
func TestWithAuth_Good_EmptyTokenPassthrough(t *testing.T) { func TestWithAuth_Good_EmptyConfiguredToken_DisablesAuth(t *testing.T) {
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(200) w.WriteHeader(200)
}) })
// Empty token disables auth
wrapped := withAuth("", handler) wrapped := withAuth("", handler)
req, _ := http.NewRequest("GET", "/", nil) req, _ := http.NewRequest("GET", "/", nil)
rr := &fakeResponseWriter{code: 200} rr := &fakeResponseWriter{code: 200}
wrapped.ServeHTTP(rr, req) wrapped.ServeHTTP(rr, req)
if rr.code != 200 { if rr.code != 200 {
t.Errorf("expected 200 with auth disabled, got %d", rr.code) t.Errorf("expected 200 with empty configured token, got %d", rr.code)
}
}
func TestWithAuth_Bad_NonBearerToken(t *testing.T) {
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(200)
})
wrapped := withAuth("my-token", handler)
req, _ := http.NewRequest("GET", "/", nil)
req.Header.Set("Authorization", "Token my-token")
rr := &fakeResponseWriter{code: 200}
wrapped.ServeHTTP(rr, req)
if rr.code != 401 {
t.Errorf("expected 401 with non-Bearer auth, got %d", rr.code)
}
}


@@ -1,9 +1,12 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"context"
"os"
"dappco.re/go/log"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
@@ -15,5 +18,8 @@ import (
// }
func (s *Service) ServeStdio(ctx context.Context) error {
s.logger.Info("MCP Stdio server starting", "user", log.Username())
return s.server.Run(ctx, &mcp.IOTransport{
Reader: os.Stdin,
Writer: sharedStdout,
})
}


@@ -1,14 +1,16 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"bufio"
"context"
goio "io"
"net"
"os"
"sync"
core "dappco.re/go/core"
"github.com/modelcontextprotocol/go-sdk/jsonrpc"
"github.com/modelcontextprotocol/go-sdk/mcp"
)
@@ -23,18 +25,18 @@ var diagMu sync.Mutex
// diagWriter is the destination for warning and diagnostic messages.
// Use diagPrintf to write to it safely.
var diagWriter goio.Writer = os.Stderr
// diagPrintf writes a formatted message to diagWriter under the mutex.
func diagPrintf(format string, args ...any) {
diagMu.Lock()
defer diagMu.Unlock()
core.Print(diagWriter, format, args...)
}
// setDiagWriter swaps the diagnostic writer and returns the previous one.
// Used by tests to capture output without racing.
func setDiagWriter(w goio.Writer) goio.Writer {
diagMu.Lock()
defer diagMu.Unlock()
old := diagWriter
@@ -55,11 +57,14 @@ type TCPTransport struct {
// NewTCPTransport creates a new TCP transport listener.
// Defaults to 127.0.0.1 when the host component is empty (e.g. ":9100").
// Defaults to DefaultTCPAddr when addr is empty.
// Emits a security warning when explicitly binding to 0.0.0.0 (all interfaces).
//
// t, err := NewTCPTransport("127.0.0.1:9100")
// t, err := NewTCPTransport(":9100") // defaults to 127.0.0.1:9100
func NewTCPTransport(addr string) (*TCPTransport, error) {
addr = normalizeTCPAddr(addr)
host, port, _ := net.SplitHostPort(addr)
if host == "" {
addr = net.JoinHostPort("127.0.0.1", port)
@@ -73,6 +78,23 @@ func NewTCPTransport(addr string) (*TCPTransport, error) {
return &TCPTransport{addr: addr, listener: listener}, nil
}
func normalizeTCPAddr(addr string) string {
if addr == "" {
return DefaultTCPAddr
}
host, port, err := net.SplitHostPort(addr)
if err != nil {
return addr
}
if host == "" {
return net.JoinHostPort("127.0.0.1", port)
}
return addr
}
// ServeTCP starts a TCP server for the MCP service.
// It accepts connections and spawns a new MCP server session for each connection.
//
@@ -91,11 +113,7 @@ func (s *Service) ServeTCP(ctx context.Context, addr string) error {
<-ctx.Done()
_ = t.listener.Close()
}()
diagPrintf("MCP TCP server listening on %s\n", t.listener.Addr().String())
for {
conn, err := t.listener.Accept()
@@ -123,6 +141,7 @@ func (s *Service) handleConnection(ctx context.Context, conn net.Conn) {
conn.Close()
return
}
defer session.Close()
// Block until the session ends
if err := session.Wait(); err != nil {
diagPrintf("Session ended: %v\n", err)
@@ -156,7 +175,7 @@ func (c *connConnection) Read(ctx context.Context) (jsonrpc.Message, error) {
return nil, err
}
// EOF - connection closed cleanly
return nil, goio.EOF
}
line := c.scanner.Bytes()
return jsonrpc.DecodeMessage(line)


@@ -31,6 +31,26 @@ func TestNewTCPTransport_Defaults(t *testing.T) {
}
}
func TestNormalizeTCPAddr_Good_Defaults(t *testing.T) {
tests := []struct {
name string
in string
want string
}{
{name: "empty", in: "", want: DefaultTCPAddr},
{name: "missing host", in: ":9100", want: "127.0.0.1:9100"},
{name: "explicit host", in: "127.0.0.1:9100", want: "127.0.0.1:9100"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := normalizeTCPAddr(tt.in); got != tt.want {
t.Fatalf("normalizeTCPAddr(%q) = %q, want %q", tt.in, got, tt.want)
}
})
}
}
func TestNewTCPTransport_Warning(t *testing.T) {
// Capture warning output via setDiagWriter (mutex-protected, no race).
var buf bytes.Buffer


@@ -1,11 +1,13 @@
// SPDX-License-Identifier: EUPL-1.2
package mcp
import (
"context"
"net"
"dappco.re/go/io"
"dappco.re/go/log"
)
// ServeUnix starts a Unix domain socket server for the MCP service.

Some files were not shown because too many files have changed in this diff.