diff --git a/docs/plans/2026-02-05-core-ide-job-runner-plan.md b/docs/plans/2026-02-05-core-ide-job-runner-plan.md deleted file mode 100644 index 4b3be94..0000000 --- a/docs/plans/2026-02-05-core-ide-job-runner-plan.md +++ /dev/null @@ -1,2116 +0,0 @@ -# Core-IDE Job Runner Implementation Plan - -> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. - -**Goal:** Turn core-ide into an autonomous job runner that polls GitHub for pipeline work, executes it via typed handlers, and captures JSONL training data. - -**Architecture:** Go workspace (`go.work`) linking root module + core-ide module. Pluggable `JobSource` interface with GitHub as first adapter. `JobHandler` interface for each pipeline action (publish draft, resolve threads, etc.). `Poller` orchestrates discovery and dispatch. `Journal` writes JSONL. Headless mode reuses existing `pkg/cli.Daemon` infrastructure. Handlers live in `pkg/jobrunner/` (root module), core-ide imports them via workspace. - -**Tech Stack:** Go 1.25, GitHub REST API (via `oauth2`), `pkg/cli.Daemon` for headless, `testify/assert` + `httptest` for tests. - ---- - -### Task 0: Set Up Go Workspace (`go.work`) - -**Files:** -- Create: `go.work` - -**Context:** The repo has two real modules — the root (`forge.lthn.ai/core/cli`) and core-ide (`forge.lthn.ai/core/cli/internal/core-ide`). Without a workspace, core-ide can't import `pkg/jobrunner` from the root module during local development without fragile `replace` directives. A `go.work` file makes cross-module imports resolve locally, keeps each module's `go.mod` clean, and lets CI build each variant independently. - -**Step 1: Create the workspace file** - -```bash -cd /Users/snider/Code/host-uk/core -go work init . ./internal/core-ide -``` - -This generates `go.work`: -``` -go 1.25.5 - -use ( - . 
- ./internal/core-ide
-)
-```
-
-**Step 2: Sync dependency versions across modules**
-
-```bash
-go work sync
-```
-
-This aligns shared dependency versions between the two modules.
-
-**Step 3: Verify the workspace**
-
-Run: `go build ./...`
-Expected: Root module builds successfully.
-
-Run: `cd internal/core-ide && go build .`
-Expected: core-ide builds successfully.
-
-Run: `go test ./pkg/... -count=1`
-Expected: All existing tests pass (workspace doesn't change behaviour, just resolution).
-
-**Step 4: Add go.work.sum to gitignore**
-
-`go.work.sum` is generated by `go work sync`; like `go.sum` it records dependency checksums, but it belongs to the workspace rather than to either module, and we keep generated workspace state out of version control. Check if `.gitignore` already excludes it:
-
-```bash
-grep -q 'go.work.sum' .gitignore || echo 'go.work.sum' >> .gitignore
-```
-
-**Note:** Whether to commit `go.work` itself is a choice. Committing it means all developers and CI share the same workspace layout. Since the module layout is fixed (root + core-ide), committing it is the right call — it documents the build variants explicitly.
- -**Step 5: Commit** - -```bash -git add go.work .gitignore -git commit -m "build: add Go workspace for root + core-ide modules" -``` - ---- - -### Task 1: Core Types (`pkg/jobrunner/types.go`) - -**Files:** -- Create: `pkg/jobrunner/types.go` -- Test: `pkg/jobrunner/types_test.go` - -**Step 1: Write the test file** - -```go -package jobrunner - -import ( - "testing" - "time" - - "github.com/stretchr/testify/assert" -) - -func TestPipelineSignal_RepoFullName_Good(t *testing.T) { - s := &PipelineSignal{RepoOwner: "host-uk", RepoName: "core"} - assert.Equal(t, "host-uk/core", s.RepoFullName()) -} - -func TestPipelineSignal_HasUnresolvedThreads_Good(t *testing.T) { - s := &PipelineSignal{ThreadsTotal: 5, ThreadsResolved: 3} - assert.True(t, s.HasUnresolvedThreads()) -} - -func TestPipelineSignal_HasUnresolvedThreads_Bad_AllResolved(t *testing.T) { - s := &PipelineSignal{ThreadsTotal: 5, ThreadsResolved: 5} - assert.False(t, s.HasUnresolvedThreads()) -} - -func TestActionResult_JSON_Good(t *testing.T) { - r := &ActionResult{ - Action: "publish_draft", - RepoOwner: "host-uk", - RepoName: "core", - PRNumber: 315, - Success: true, - Timestamp: time.Date(2026, 2, 5, 12, 0, 0, 0, time.UTC), - } - assert.Equal(t, "publish_draft", r.Action) - assert.True(t, r.Success) -} -``` - -**Step 2: Run test to verify it fails** - -Run: `go test ./pkg/jobrunner/ -v -count=1` -Expected: FAIL — package does not exist yet. - -**Step 3: Write the types** - -```go -package jobrunner - -import ( - "context" - "time" -) - -// PipelineSignal is the structural snapshot of a child issue/PR. -// Never contains comment bodies or free text — structural signals only. 
-type PipelineSignal struct { - EpicNumber int - ChildNumber int - PRNumber int - RepoOwner string - RepoName string - PRState string // OPEN, MERGED, CLOSED - IsDraft bool - Mergeable string // MERGEABLE, CONFLICTING, UNKNOWN - CheckStatus string // SUCCESS, FAILURE, PENDING - ThreadsTotal int - ThreadsResolved int - LastCommitSHA string - LastCommitAt time.Time - LastReviewAt time.Time -} - -// RepoFullName returns "owner/repo". -func (s *PipelineSignal) RepoFullName() string { - return s.RepoOwner + "/" + s.RepoName -} - -// HasUnresolvedThreads returns true if there are unresolved review threads. -func (s *PipelineSignal) HasUnresolvedThreads() bool { - return s.ThreadsTotal > s.ThreadsResolved -} - -// ActionResult carries the outcome of a handler execution. -type ActionResult struct { - Action string `json:"action"` - RepoOwner string `json:"repo_owner"` - RepoName string `json:"repo_name"` - EpicNumber int `json:"epic"` - ChildNumber int `json:"child"` - PRNumber int `json:"pr"` - Success bool `json:"success"` - Error string `json:"error,omitempty"` - Timestamp time.Time `json:"ts"` - Duration time.Duration `json:"duration_ms"` - Cycle int `json:"cycle"` -} - -// JobSource discovers actionable work from an external system. -type JobSource interface { - Name() string - Poll(ctx context.Context) ([]*PipelineSignal, error) - Report(ctx context.Context, result *ActionResult) error -} - -// JobHandler processes a single pipeline signal. -type JobHandler interface { - Name() string - Match(signal *PipelineSignal) bool - Execute(ctx context.Context, signal *PipelineSignal) (*ActionResult, error) -} -``` - -**Step 4: Run tests** - -Run: `go test ./pkg/jobrunner/ -v -count=1` -Expected: PASS (4 tests). 
- -**Step 5: Commit** - -```bash -git add pkg/jobrunner/types.go pkg/jobrunner/types_test.go -git commit -m "feat(jobrunner): add core types — PipelineSignal, ActionResult, JobSource, JobHandler" -``` - ---- - -### Task 2: Journal JSONL Writer (`pkg/jobrunner/journal.go`) - -**Files:** -- Create: `pkg/jobrunner/journal.go` -- Test: `pkg/jobrunner/journal_test.go` - -**Step 1: Write the test** - -```go -package jobrunner - -import ( - "encoding/json" - "os" - "path/filepath" - "strings" - "testing" - "time" - - "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/require" -) - -func TestJournal_Append_Good(t *testing.T) { - dir := t.TempDir() - j, err := NewJournal(dir) - require.NoError(t, err) - - signal := &PipelineSignal{ - EpicNumber: 299, - ChildNumber: 212, - PRNumber: 316, - RepoOwner: "host-uk", - RepoName: "core", - PRState: "OPEN", - IsDraft: true, - CheckStatus: "SUCCESS", - } - - result := &ActionResult{ - Action: "publish_draft", - RepoOwner: "host-uk", - RepoName: "core", - PRNumber: 316, - Success: true, - Timestamp: time.Date(2026, 2, 5, 12, 0, 0, 0, time.UTC), - Duration: 340 * time.Millisecond, - Cycle: 1, - } - - err = j.Append(signal, result) - require.NoError(t, err) - - // Read the file back - pattern := filepath.Join(dir, "host-uk", "core", "*.jsonl") - files, _ := filepath.Glob(pattern) - require.Len(t, files, 1) - - data, err := os.ReadFile(files[0]) - require.NoError(t, err) - - var entry JournalEntry - err = json.Unmarshal([]byte(strings.TrimSpace(string(data))), &entry) - require.NoError(t, err) - - assert.Equal(t, "publish_draft", entry.Action) - assert.Equal(t, 316, entry.PR) - assert.Equal(t, 299, entry.Epic) - assert.True(t, entry.Result.Success) -} - -func TestJournal_Append_Bad_NilSignal(t *testing.T) { - dir := t.TempDir() - j, err := NewJournal(dir) - require.NoError(t, err) - - err = j.Append(nil, &ActionResult{}) - assert.Error(t, err) -} -``` - -**Step 2: Run test to verify it fails** - -Run: `go test 
./pkg/jobrunner/ -run TestJournal -v -count=1`
-Expected: FAIL — `NewJournal` undefined.
-
-**Step 3: Write the implementation**
-
-```go
-package jobrunner
-
-import (
-	"encoding/json"
-	"fmt"
-	"os"
-	"path/filepath"
-	"sync"
-	"time"
-)
-
-// JournalEntry is a single JSONL record for training data.
-type JournalEntry struct {
-	Timestamp time.Time      `json:"ts"`
-	Epic      int            `json:"epic"`
-	Child     int            `json:"child"`
-	PR        int            `json:"pr"`
-	Repo      string         `json:"repo"`
-	Action    string         `json:"action"`
-	Signals   SignalSnapshot `json:"signals"`
-	Result    ResultSnapshot `json:"result"`
-	Cycle     int            `json:"cycle"`
-}
-
-// SignalSnapshot captures the structural state at action time.
-type SignalSnapshot struct {
-	PRState         string `json:"pr_state"`
-	IsDraft         bool   `json:"is_draft"`
-	CheckStatus     string `json:"check_status"`
-	Mergeable       string `json:"mergeable"`
-	ThreadsTotal    int    `json:"threads_total"`
-	ThreadsResolved int    `json:"threads_resolved"`
-}
-
-// ResultSnapshot captures the action outcome.
-type ResultSnapshot struct {
-	Success    bool   `json:"success"`
-	Error      string `json:"error,omitempty"`
-	DurationMs int64  `json:"duration_ms"`
-}
-
-// Journal writes append-only JSONL files organised by repo and date.
-type Journal struct {
-	baseDir string
-	mu      sync.Mutex
-}
-
-// NewJournal creates a journal writer rooted at baseDir.
-// Files are written to baseDir/<owner>/<repo>/YYYY-MM-DD.jsonl.
-func NewJournal(baseDir string) (*Journal, error) {
-	if baseDir == "" {
-		return nil, fmt.Errorf("journal base directory is required")
-	}
-	return &Journal{baseDir: baseDir}, nil
-}
-
-// Append writes a journal entry for the given signal and result.
-func (j *Journal) Append(signal *PipelineSignal, result *ActionResult) error { - if signal == nil { - return fmt.Errorf("signal is required") - } - if result == nil { - return fmt.Errorf("result is required") - } - - entry := JournalEntry{ - Timestamp: result.Timestamp, - Epic: signal.EpicNumber, - Child: signal.ChildNumber, - PR: signal.PRNumber, - Repo: signal.RepoFullName(), - Action: result.Action, - Signals: SignalSnapshot{ - PRState: signal.PRState, - IsDraft: signal.IsDraft, - CheckStatus: signal.CheckStatus, - Mergeable: signal.Mergeable, - ThreadsTotal: signal.ThreadsTotal, - ThreadsResolved: signal.ThreadsResolved, - }, - Result: ResultSnapshot{ - Success: result.Success, - Error: result.Error, - DurationMs: result.Duration.Milliseconds(), - }, - Cycle: result.Cycle, - } - - data, err := json.Marshal(entry) - if err != nil { - return fmt.Errorf("marshal journal entry: %w", err) - } - data = append(data, '\n') - - // Build path: baseDir/owner/repo/YYYY-MM-DD.jsonl - date := result.Timestamp.UTC().Format("2006-01-02") - dir := filepath.Join(j.baseDir, signal.RepoOwner, signal.RepoName) - - j.mu.Lock() - defer j.mu.Unlock() - - if err := os.MkdirAll(dir, 0o755); err != nil { - return fmt.Errorf("create journal directory: %w", err) - } - - path := filepath.Join(dir, date+".jsonl") - f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644) - if err != nil { - return fmt.Errorf("open journal file: %w", err) - } - defer f.Close() - - _, err = f.Write(data) - return err -} -``` - -**Step 4: Run tests** - -Run: `go test ./pkg/jobrunner/ -v -count=1` -Expected: PASS (all tests including Task 1). 
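For reference, the entry appended in `TestJournal_Append_Good` serialises to a single JSONL line shaped like this (values taken from that test; `mergeable` is empty because the test signal never sets it):

```json
{"ts":"2026-02-05T12:00:00Z","epic":299,"child":212,"pr":316,"repo":"host-uk/core","action":"publish_draft","signals":{"pr_state":"OPEN","is_draft":true,"check_status":"SUCCESS","mergeable":"","threads_total":0,"threads_resolved":0},"result":{"success":true,"duration_ms":340},"cycle":1}
```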
- -**Step 5: Commit** - -```bash -git add pkg/jobrunner/journal.go pkg/jobrunner/journal_test.go -git commit -m "feat(jobrunner): add JSONL journal writer for training data" -``` - ---- - -### Task 3: Poller and Dispatcher (`pkg/jobrunner/poller.go`) - -**Files:** -- Create: `pkg/jobrunner/poller.go` -- Test: `pkg/jobrunner/poller_test.go` - -**Step 1: Write the test** - -```go -package jobrunner - -import ( - "context" - "sync" - "testing" - "time" - - "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/require" -) - -type mockSource struct { - name string - signals []*PipelineSignal - reports []*ActionResult - mu sync.Mutex -} - -func (m *mockSource) Name() string { return m.name } -func (m *mockSource) Poll(_ context.Context) ([]*PipelineSignal, error) { - return m.signals, nil -} -func (m *mockSource) Report(_ context.Context, r *ActionResult) error { - m.mu.Lock() - m.reports = append(m.reports, r) - m.mu.Unlock() - return nil -} - -type mockHandler struct { - name string - matchFn func(*PipelineSignal) bool - executed []*PipelineSignal - mu sync.Mutex -} - -func (m *mockHandler) Name() string { return m.name } -func (m *mockHandler) Match(s *PipelineSignal) bool { - if m.matchFn != nil { - return m.matchFn(s) - } - return true -} -func (m *mockHandler) Execute(_ context.Context, s *PipelineSignal) (*ActionResult, error) { - m.mu.Lock() - m.executed = append(m.executed, s) - m.mu.Unlock() - return &ActionResult{ - Action: m.name, - Success: true, - Timestamp: time.Now().UTC(), - }, nil -} - -func TestPoller_RunOnce_Good(t *testing.T) { - signal := &PipelineSignal{ - PRNumber: 315, - RepoOwner: "host-uk", - RepoName: "core", - IsDraft: true, - PRState: "OPEN", - } - - source := &mockSource{name: "test", signals: []*PipelineSignal{signal}} - handler := &mockHandler{name: "publish_draft"} - journal, err := NewJournal(t.TempDir()) - require.NoError(t, err) - - p := NewPoller(PollerConfig{ - Sources: []JobSource{source}, - Handlers: 
[]JobHandler{handler},
-		Journal:      journal,
-		PollInterval: time.Second,
-	})
-
-	err = p.RunOnce(context.Background())
-	require.NoError(t, err)
-
-	handler.mu.Lock()
-	assert.Len(t, handler.executed, 1)
-	handler.mu.Unlock()
-}
-
-func TestPoller_RunOnce_Good_NoSignals(t *testing.T) {
-	source := &mockSource{name: "test", signals: nil}
-	handler := &mockHandler{name: "noop"}
-	journal, err := NewJournal(t.TempDir())
-	require.NoError(t, err)
-
-	p := NewPoller(PollerConfig{
-		Sources:  []JobSource{source},
-		Handlers: []JobHandler{handler},
-		Journal:  journal,
-	})
-
-	err = p.RunOnce(context.Background())
-	require.NoError(t, err)
-
-	handler.mu.Lock()
-	assert.Len(t, handler.executed, 0)
-	handler.mu.Unlock()
-}
-
-func TestPoller_RunOnce_Good_NoMatchingHandler(t *testing.T) {
-	signal := &PipelineSignal{PRNumber: 1, RepoOwner: "a", RepoName: "b"}
-	source := &mockSource{name: "test", signals: []*PipelineSignal{signal}}
-	handler := &mockHandler{
-		name:    "never_match",
-		matchFn: func(*PipelineSignal) bool { return false },
-	}
-	journal, err := NewJournal(t.TempDir())
-	require.NoError(t, err)
-
-	p := NewPoller(PollerConfig{
-		Sources:  []JobSource{source},
-		Handlers: []JobHandler{handler},
-		Journal:  journal,
-	})
-
-	err = p.RunOnce(context.Background())
-	require.NoError(t, err)
-
-	handler.mu.Lock()
-	assert.Len(t, handler.executed, 0)
-	handler.mu.Unlock()
-}
-```
-
-**Step 2: Run test to verify it fails**
-
-Run: `go test ./pkg/jobrunner/ -run TestPoller -v -count=1`
-Expected: FAIL — `NewPoller` undefined.
-
-**Step 3: Write the implementation**
-
-```go
-package jobrunner
-
-import (
-	"context"
-	"time"
-
-	"forge.lthn.ai/core/cli/pkg/log"
-)
-
-// PollerConfig configures the job runner poller.
-type PollerConfig struct {
-	Sources      []JobSource
-	Handlers     []JobHandler
-	Journal      *Journal
-	PollInterval time.Duration
-	DryRun       bool
-}
-
-// Poller discovers and dispatches pipeline work.
-type Poller struct { - cfg PollerConfig - cycle int -} - -// NewPoller creates a poller with the given configuration. -func NewPoller(cfg PollerConfig) *Poller { - if cfg.PollInterval == 0 { - cfg.PollInterval = 60 * time.Second - } - return &Poller{cfg: cfg} -} - -// Run starts the polling loop. Blocks until context is cancelled. -func (p *Poller) Run(ctx context.Context) error { - ticker := time.NewTicker(p.cfg.PollInterval) - defer ticker.Stop() - - // Run once immediately - if err := p.RunOnce(ctx); err != nil { - log.Info("poller", "cycle_error", err) - } - - for { - select { - case <-ctx.Done(): - return ctx.Err() - case <-ticker.C: - if err := p.RunOnce(ctx); err != nil { - log.Info("poller", "cycle_error", err) - } - } - } -} - -// RunOnce performs a single poll-dispatch cycle across all sources. -func (p *Poller) RunOnce(ctx context.Context) error { - p.cycle++ - - for _, source := range p.cfg.Sources { - if err := ctx.Err(); err != nil { - return err - } - - signals, err := source.Poll(ctx) - if err != nil { - log.Info("poller", "source", source.Name(), "poll_error", err) - continue - } - - for _, signal := range signals { - if err := ctx.Err(); err != nil { - return err - } - p.dispatch(ctx, source, signal) - } - } - - return nil -} - -// dispatch finds the first matching handler and executes it. -// One action per signal per cycle. 
-func (p *Poller) dispatch(ctx context.Context, source JobSource, signal *PipelineSignal) {
-	for _, handler := range p.cfg.Handlers {
-		if !handler.Match(signal) {
-			continue
-		}
-
-		if p.cfg.DryRun {
-			log.Info("poller",
-				"dry_run", handler.Name(),
-				"repo", signal.RepoFullName(),
-				"pr", signal.PRNumber,
-			)
-			return
-		}
-
-		start := time.Now()
-		result, err := handler.Execute(ctx, signal)
-		if err != nil {
-			log.Info("poller",
-				"handler", handler.Name(),
-				"error", err,
-				"repo", signal.RepoFullName(),
-				"pr", signal.PRNumber,
-			)
-			return
-		}
-
-		result.Cycle = p.cycle
-		result.EpicNumber = signal.EpicNumber
-		result.ChildNumber = signal.ChildNumber
-		result.Duration = time.Since(start)
-
-		// Write to journal
-		if p.cfg.Journal != nil {
-			if err := p.cfg.Journal.Append(signal, result); err != nil {
-				log.Info("poller", "journal_error", err)
-			}
-		}
-
-		// Report back to source
-		if err := source.Report(ctx, result); err != nil {
-			log.Info("poller", "report_error", err)
-		}
-
-		return // one action per signal per cycle
-	}
-}
-
-// Cycle returns the current cycle count.
-func (p *Poller) Cycle() int {
-	return p.cycle
-}
-
-// DryRun returns whether the poller is in dry-run mode.
-func (p *Poller) DryRun() bool {
-	return p.cfg.DryRun
-}
-
-// SetDryRun enables or disables dry-run mode.
-func (p *Poller) SetDryRun(v bool) {
-	p.cfg.DryRun = v
-}
-
-// AddSource appends a job source to the poller.
-func (p *Poller) AddSource(s JobSource) {
-	p.cfg.Sources = append(p.cfg.Sources, s)
-}
-
-// AddHandler appends a job handler to the poller.
-func (p *Poller) AddHandler(h JobHandler) {
-	p.cfg.Handlers = append(p.cfg.Handlers, h)
-}
-```
- -**Step 4: Run tests** - -Run: `go test ./pkg/jobrunner/ -v -count=1` -Expected: PASS (all tests). - -**Step 5: Commit** - -```bash -git add pkg/jobrunner/poller.go pkg/jobrunner/poller_test.go -git commit -m "feat(jobrunner): add Poller with multi-source dispatch and journal integration" -``` - ---- - -### Task 4: GitHub Source — Signal Builder (`pkg/jobrunner/github/`) - -**Files:** -- Create: `pkg/jobrunner/github/source.go` -- Create: `pkg/jobrunner/github/signals.go` -- Test: `pkg/jobrunner/github/source_test.go` - -**Context:** This package lives in the root go.mod (`forge.lthn.ai/core/cli`), NOT in the core-ide module. It uses `oauth2` and the GitHub REST API (same pattern as `internal/cmd/updater/github.go`). Uses conditional requests (ETag/If-None-Match) to conserve rate limit. - -**Step 1: Write the test** - -```go -package github - -import ( - "context" - "encoding/json" - "net/http" - "net/http/httptest" - "testing" - - "forge.lthn.ai/core/cli/pkg/jobrunner" - "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/require" -) - -func TestGitHubSource_Poll_Good(t *testing.T) { - // Mock GitHub API: return one open PR that's a draft with passing checks - mux := http.NewServeMux() - - // GET /repos/host-uk/core/issues?labels=epic&state=open - mux.HandleFunc("/repos/host-uk/core/issues", func(w http.ResponseWriter, r *http.Request) { - if r.URL.Query().Get("labels") == "epic" { - json.NewEncoder(w).Encode([]map[string]any{ - { - "number": 299, - "body": "- [ ] #212\n- [x] #213", - "state": "open", - }, - }) - return - } - json.NewEncoder(w).Encode([]map[string]any{}) - }) - - // GET /repos/host-uk/core/pulls?state=open - mux.HandleFunc("/repos/host-uk/core/pulls", func(w http.ResponseWriter, r *http.Request) { - json.NewEncoder(w).Encode([]map[string]any{ - { - "number": 316, - "state": "open", - "draft": true, - "mergeable_state": "clean", - "body": "Closes #212", - "head": map[string]any{"sha": "abc123"}, - }, - }) - }) - - // GET 
/repos/host-uk/core/commits/abc123/check-suites - mux.HandleFunc("/repos/host-uk/core/commits/", func(w http.ResponseWriter, r *http.Request) { - json.NewEncoder(w).Encode(map[string]any{ - "check_suites": []map[string]any{ - {"conclusion": "success", "status": "completed"}, - }, - }) - }) - - server := httptest.NewServer(mux) - defer server.Close() - - src := NewGitHubSource(Config{ - Repos: []string{"host-uk/core"}, - APIURL: server.URL, - }) - - signals, err := src.Poll(context.Background()) - require.NoError(t, err) - require.NotEmpty(t, signals) - - assert.Equal(t, 316, signals[0].PRNumber) - assert.True(t, signals[0].IsDraft) - assert.Equal(t, "host-uk", signals[0].RepoOwner) - assert.Equal(t, "core", signals[0].RepoName) -} - -func TestGitHubSource_Name_Good(t *testing.T) { - src := NewGitHubSource(Config{Repos: []string{"host-uk/core"}}) - assert.Equal(t, "github", src.Name()) -} -``` - -**Step 2: Run test to verify it fails** - -Run: `go test ./pkg/jobrunner/github/ -v -count=1` -Expected: FAIL — package does not exist. - -**Step 3: Write `signals.go`** — PR/issue data structures and signal extraction - -```go -package github - -import ( - "regexp" - "strconv" - "strings" - "time" - - "forge.lthn.ai/core/cli/pkg/jobrunner" -) - -// ghIssue is the minimal structure from GitHub Issues API. -type ghIssue struct { - Number int `json:"number"` - Body string `json:"body"` - State string `json:"state"` -} - -// ghPR is the minimal structure from GitHub Pull Requests API. -type ghPR struct { - Number int `json:"number"` - State string `json:"state"` - Draft bool `json:"draft"` - MergeableState string `json:"mergeable_state"` - Body string `json:"body"` - Head ghRef `json:"head"` - UpdatedAt time.Time `json:"updated_at"` -} - -type ghRef struct { - SHA string `json:"sha"` -} - -// ghCheckSuites is the response from /commits/:sha/check-suites. 
-type ghCheckSuites struct {
-	CheckSuites []ghCheckSuite `json:"check_suites"`
-}
-
-type ghCheckSuite struct {
-	Conclusion string `json:"conclusion"`
-	Status     string `json:"status"`
-}
-
-// ghReviewCounts holds review-thread counts (from GraphQL, or approximated
-// from review comments).
-type ghReviewCounts struct {
-	Total    int
-	Resolved int
-}
-
-// parseEpicChildren extracts unchecked child issue numbers from an epic body.
-// Matches: - [ ] #123
-var checklistRe = regexp.MustCompile(`- \[( |x)\] #(\d+)`)
-
-func parseEpicChildren(body string) (unchecked []int, checked []int) {
-	matches := checklistRe.FindAllStringSubmatch(body, -1)
-	for _, m := range matches {
-		num, _ := strconv.Atoi(m[2])
-		if m[1] == "x" {
-			checked = append(checked, num)
-		} else {
-			unchecked = append(unchecked, num)
-		}
-	}
-	return
-}
-
-// findLinkedPR finds a PR whose body closes the given issue number.
-// Matches: Closes #123, Fixes #123, Resolves #123 (case-insensitive), with a
-// trailing word boundary so #123 does not also match #1234.
-func findLinkedPR(prs []ghPR, issueNumber int) *ghPR {
-	re := regexp.MustCompile(`(?i)\b(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)\s+#` + strconv.Itoa(issueNumber) + `\b`)
-	for i := range prs {
-		if re.MatchString(prs[i].Body) {
-			return &prs[i]
-		}
-	}
-	return nil
-}
-
-// aggregateCheckStatus returns the overall check status from check suites.
-func aggregateCheckStatus(suites []ghCheckSuite) string {
-	if len(suites) == 0 {
-		return "PENDING"
-	}
-	for _, s := range suites {
-		if s.Status != "completed" {
-			return "PENDING"
-		}
-		if s.Conclusion == "failure" || s.Conclusion == "timed_out" || s.Conclusion == "cancelled" {
-			return "FAILURE"
-		}
-	}
-	return "SUCCESS"
-}
-
-// mergeableToString normalises GitHub's mergeable_state to our enum.
-func mergeableToString(state string) string {
-	switch state {
-	case "clean", "has_hooks", "unstable":
-		return "MERGEABLE"
-	case "dirty":
-		return "CONFLICTING"
-	default:
-		return "UNKNOWN"
-	}
-}
-
-// buildSignal creates a PipelineSignal from GitHub API data.
-func buildSignal(owner, repo string, epic ghIssue, childNum int, pr ghPR, checks ghCheckSuites) *jobrunner.PipelineSignal { - return &jobrunner.PipelineSignal{ - EpicNumber: epic.Number, - ChildNumber: childNum, - PRNumber: pr.Number, - RepoOwner: owner, - RepoName: repo, - PRState: strings.ToUpper(pr.State), - IsDraft: pr.Draft, - Mergeable: mergeableToString(pr.MergeableState), - CheckStatus: aggregateCheckStatus(checks.CheckSuites), - LastCommitSHA: pr.Head.SHA, - LastCommitAt: pr.UpdatedAt, - } -} -``` - -**Step 4: Write `source.go`** — GitHubSource implementing JobSource - -```go -package github - -import ( - "context" - "encoding/json" - "fmt" - "net/http" - "os" - "strings" - - "forge.lthn.ai/core/cli/pkg/jobrunner" - "forge.lthn.ai/core/cli/pkg/log" - "golang.org/x/oauth2" -) - -// Config for the GitHub job source. -type Config struct { - Repos []string // "owner/repo" format - APIURL string // override for testing (default: https://api.github.com) -} - -// GitHubSource polls GitHub for pipeline signals. -type GitHubSource struct { - cfg Config - client *http.Client - etags map[string]string // URL -> ETag for conditional requests -} - -// NewGitHubSource creates a GitHub job source. -func NewGitHubSource(cfg Config) *GitHubSource { - if cfg.APIURL == "" { - cfg.APIURL = "https://api.github.com" - } - - var client *http.Client - token := os.Getenv("GITHUB_TOKEN") - if token != "" { - ts := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: token}) - client = oauth2.NewClient(context.Background(), ts) - } else { - client = http.DefaultClient - } - - return &GitHubSource{ - cfg: cfg, - client: client, - etags: make(map[string]string), - } -} - -func (g *GitHubSource) Name() string { return "github" } - -// Poll scans all configured repos for actionable pipeline signals. 
-func (g *GitHubSource) Poll(ctx context.Context) ([]*jobrunner.PipelineSignal, error) { - var all []*jobrunner.PipelineSignal - - for _, repoSpec := range g.cfg.Repos { - parts := strings.SplitN(repoSpec, "/", 2) - if len(parts) != 2 { - continue - } - owner, repo := parts[0], parts[1] - - signals, err := g.pollRepo(ctx, owner, repo) - if err != nil { - log.Info("github_source", "repo", repoSpec, "error", err) - continue - } - all = append(all, signals...) - } - - return all, nil -} - -func (g *GitHubSource) pollRepo(ctx context.Context, owner, repo string) ([]*jobrunner.PipelineSignal, error) { - // 1. Fetch epic issues - epics, err := g.fetchEpics(ctx, owner, repo) - if err != nil { - return nil, err - } - - // 2. Fetch open PRs - prs, err := g.fetchPRs(ctx, owner, repo) - if err != nil { - return nil, err - } - - var signals []*jobrunner.PipelineSignal - - for _, epic := range epics { - unchecked, _ := parseEpicChildren(epic.Body) - for _, childNum := range unchecked { - pr := findLinkedPR(prs, childNum) - if pr == nil { - continue // no PR yet for this child - } - - checks, err := g.fetchCheckSuites(ctx, owner, repo, pr.Head.SHA) - if err != nil { - log.Info("github_source", "pr", pr.Number, "check_error", err) - checks = ghCheckSuites{} - } - - signals = append(signals, buildSignal(owner, repo, epic, childNum, *pr, checks)) - } - } - - return signals, nil -} - -func (g *GitHubSource) fetchEpics(ctx context.Context, owner, repo string) ([]ghIssue, error) { - url := fmt.Sprintf("%s/repos/%s/%s/issues?labels=epic&state=open&per_page=100", g.cfg.APIURL, owner, repo) - var issues []ghIssue - return issues, g.getJSON(ctx, url, &issues) -} - -func (g *GitHubSource) fetchPRs(ctx context.Context, owner, repo string) ([]ghPR, error) { - url := fmt.Sprintf("%s/repos/%s/%s/pulls?state=open&per_page=100", g.cfg.APIURL, owner, repo) - var prs []ghPR - return prs, g.getJSON(ctx, url, &prs) -} - -func (g *GitHubSource) fetchCheckSuites(ctx context.Context, owner, repo, sha 
string) (ghCheckSuites, error) { - url := fmt.Sprintf("%s/repos/%s/%s/commits/%s/check-suites", g.cfg.APIURL, owner, repo, sha) - var result ghCheckSuites - return result, g.getJSON(ctx, url, &result) -} - -// getJSON performs a GET with conditional request support. -func (g *GitHubSource) getJSON(ctx context.Context, url string, out any) error { - req, err := http.NewRequestWithContext(ctx, "GET", url, nil) - if err != nil { - return err - } - req.Header.Set("Accept", "application/vnd.github+json") - - if etag, ok := g.etags[url]; ok { - req.Header.Set("If-None-Match", etag) - } - - resp, err := g.client.Do(req) - if err != nil { - return err - } - defer resp.Body.Close() - - // Store ETag for next request - if etag := resp.Header.Get("ETag"); etag != "" { - g.etags[url] = etag - } - - if resp.StatusCode == http.StatusNotModified { - return nil // no change since last poll - } - - if resp.StatusCode != http.StatusOK { - return fmt.Errorf("HTTP %d for %s", resp.StatusCode, url) - } - - return json.NewDecoder(resp.Body).Decode(out) -} - -// Report is a no-op for GitHub (actions are performed directly via API). -func (g *GitHubSource) Report(_ context.Context, _ *jobrunner.ActionResult) error { - return nil -} -``` - -**Step 5: Run tests** - -Run: `go test ./pkg/jobrunner/github/ -v -count=1` -Expected: PASS. - -**Step 6: Commit** - -```bash -git add pkg/jobrunner/github/ -git commit -m "feat(jobrunner): add GitHub source adapter with ETag conditional requests" -``` - ---- - -### Task 5: Publish Draft Handler (`pkg/jobrunner/handlers/`) - -**Files:** -- Create: `pkg/jobrunner/handlers/publish_draft.go` -- Test: `pkg/jobrunner/handlers/publish_draft_test.go` - -**Context:** Handlers live in `pkg/jobrunner/handlers/` (root module). They use `net/http` to call GitHub REST API directly. Each handler implements `jobrunner.JobHandler`. 
- -**Step 1: Write the test** - -```go -package handlers - -import ( - "context" - "net/http" - "net/http/httptest" - "testing" - - "forge.lthn.ai/core/cli/pkg/jobrunner" - "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/require" -) - -func TestPublishDraft_Match_Good(t *testing.T) { - h := NewPublishDraft(nil) - signal := &jobrunner.PipelineSignal{ - IsDraft: true, - PRState: "OPEN", - CheckStatus: "SUCCESS", - } - assert.True(t, h.Match(signal)) -} - -func TestPublishDraft_Match_Bad_NotDraft(t *testing.T) { - h := NewPublishDraft(nil) - signal := &jobrunner.PipelineSignal{ - IsDraft: false, - PRState: "OPEN", - CheckStatus: "SUCCESS", - } - assert.False(t, h.Match(signal)) -} - -func TestPublishDraft_Match_Bad_ChecksFailing(t *testing.T) { - h := NewPublishDraft(nil) - signal := &jobrunner.PipelineSignal{ - IsDraft: true, - PRState: "OPEN", - CheckStatus: "FAILURE", - } - assert.False(t, h.Match(signal)) -} - -func TestPublishDraft_Execute_Good(t *testing.T) { - var calledURL string - var calledMethod string - server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - calledURL = r.URL.Path - calledMethod = r.Method - w.WriteHeader(http.StatusOK) - w.Write([]byte(`{"number":316}`)) - })) - defer server.Close() - - h := NewPublishDraft(&http.Client{}) - h.apiURL = server.URL - - signal := &jobrunner.PipelineSignal{ - PRNumber: 316, - RepoOwner: "host-uk", - RepoName: "core", - IsDraft: true, - PRState: "OPEN", - } - - result, err := h.Execute(context.Background(), signal) - require.NoError(t, err) - assert.True(t, result.Success) - assert.Equal(t, "publish_draft", result.Action) - assert.Equal(t, "/repos/host-uk/core/pulls/316", calledURL) - assert.Equal(t, "PATCH", calledMethod) -} -``` - -**Step 2: Run test to verify it fails** - -Run: `go test ./pkg/jobrunner/handlers/ -run TestPublishDraft -v -count=1` -Expected: FAIL — package does not exist. 
- -**Step 3: Write the implementation** - -```go -package handlers - -import ( - "bytes" - "context" - "fmt" - "net/http" - "time" - - "forge.lthn.ai/core/cli/pkg/jobrunner" -) - -// PublishDraft marks a draft PR as ready for review. -type PublishDraft struct { - client *http.Client - apiURL string -} - -// NewPublishDraft creates a publish_draft handler. -// Pass nil client to use http.DefaultClient. -func NewPublishDraft(client *http.Client) *PublishDraft { - if client == nil { - client = http.DefaultClient - } - return &PublishDraft{ - client: client, - apiURL: "https://api.github.com", - } -} - -func (h *PublishDraft) Name() string { return "publish_draft" } - -// Match returns true for open draft PRs with passing checks. -func (h *PublishDraft) Match(s *jobrunner.PipelineSignal) bool { - return s.IsDraft && s.PRState == "OPEN" && s.CheckStatus == "SUCCESS" -} - -// Execute calls PATCH /repos/:owner/:repo/pulls/:number with draft=false. -func (h *PublishDraft) Execute(ctx context.Context, s *jobrunner.PipelineSignal) (*jobrunner.ActionResult, error) { - url := fmt.Sprintf("%s/repos/%s/%s/pulls/%d", h.apiURL, s.RepoOwner, s.RepoName, s.PRNumber) - body := bytes.NewBufferString(`{"draft":false}`) - - req, err := http.NewRequestWithContext(ctx, "PATCH", url, body) - if err != nil { - return nil, err - } - req.Header.Set("Accept", "application/vnd.github+json") - req.Header.Set("Content-Type", "application/json") - - resp, err := h.client.Do(req) - if err != nil { - return nil, err - } - defer resp.Body.Close() - - result := &jobrunner.ActionResult{ - Action: "publish_draft", - RepoOwner: s.RepoOwner, - RepoName: s.RepoName, - PRNumber: s.PRNumber, - Timestamp: time.Now().UTC(), - } - - if resp.StatusCode >= 200 && resp.StatusCode < 300 { - result.Success = true - } else { - result.Error = fmt.Sprintf("HTTP %d", resp.StatusCode) - } - - return result, nil -} -``` - -**Step 4: Run tests** - -Run: `go test ./pkg/jobrunner/handlers/ -v -count=1` -Expected: PASS. 
- -**Step 5: Commit** - -```bash -git add pkg/jobrunner/handlers/ -git commit -m "feat(jobrunner): add publish_draft handler" -``` - ---- - -### Task 6: Send Fix Command Handler - -**Files:** -- Create: `pkg/jobrunner/handlers/send_fix_command.go` -- Test: `pkg/jobrunner/handlers/send_fix_command_test.go` - -**Step 1: Write the test** - -```go -package handlers - -import ( - "context" - "encoding/json" - "net/http" - "net/http/httptest" - "testing" - - "forge.lthn.ai/core/cli/pkg/jobrunner" - "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/require" -) - -func TestSendFixCommand_Match_Good_Conflicting(t *testing.T) { - h := NewSendFixCommand(nil) - signal := &jobrunner.PipelineSignal{ - PRState: "OPEN", - Mergeable: "CONFLICTING", - } - assert.True(t, h.Match(signal)) -} - -func TestSendFixCommand_Match_Good_UnresolvedThreads(t *testing.T) { - h := NewSendFixCommand(nil) - signal := &jobrunner.PipelineSignal{ - PRState: "OPEN", - Mergeable: "MERGEABLE", - ThreadsTotal: 3, - ThreadsResolved: 1, - CheckStatus: "FAILURE", - } - assert.True(t, h.Match(signal)) -} - -func TestSendFixCommand_Match_Bad_Clean(t *testing.T) { - h := NewSendFixCommand(nil) - signal := &jobrunner.PipelineSignal{ - PRState: "OPEN", - Mergeable: "MERGEABLE", - CheckStatus: "SUCCESS", - ThreadsTotal: 0, - ThreadsResolved: 0, - } - assert.False(t, h.Match(signal)) -} - -func TestSendFixCommand_Execute_Good_Conflict(t *testing.T) { - var postedBody map[string]string - server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - json.NewDecoder(r.Body).Decode(&postedBody) - w.WriteHeader(http.StatusCreated) - w.Write([]byte(`{"id":1}`)) - })) - defer server.Close() - - h := NewSendFixCommand(&http.Client{}) - h.apiURL = server.URL - - signal := &jobrunner.PipelineSignal{ - PRNumber: 296, - RepoOwner: "host-uk", - RepoName: "core", - PRState: "OPEN", - Mergeable: "CONFLICTING", - } - - result, err := h.Execute(context.Background(), signal) - 
require.NoError(t, err) - assert.True(t, result.Success) - assert.Contains(t, postedBody["body"], "fix the merge conflict") -} -``` - -**Step 2: Run test to verify it fails** - -Run: `go test ./pkg/jobrunner/handlers/ -run TestSendFixCommand -v -count=1` -Expected: FAIL — `NewSendFixCommand` undefined. - -**Step 3: Write the implementation** - -```go -package handlers - -import ( - "bytes" - "context" - "encoding/json" - "fmt" - "net/http" - "time" - - "forge.lthn.ai/core/cli/pkg/jobrunner" -) - -// SendFixCommand comments on a PR to request a fix. -type SendFixCommand struct { - client *http.Client - apiURL string -} - -func NewSendFixCommand(client *http.Client) *SendFixCommand { - if client == nil { - client = http.DefaultClient - } - return &SendFixCommand{client: client, apiURL: "https://api.github.com"} -} - -func (h *SendFixCommand) Name() string { return "send_fix_command" } - -// Match returns true for open PRs that are conflicting OR have unresolved -// review threads with failing checks (indicating reviews need fixing). -func (h *SendFixCommand) Match(s *jobrunner.PipelineSignal) bool { - if s.PRState != "OPEN" { - return false - } - if s.Mergeable == "CONFLICTING" { - return true - } - if s.HasUnresolvedThreads() && s.CheckStatus == "FAILURE" { - return true - } - return false -} - -// Execute posts a comment with the appropriate fix command. -func (h *SendFixCommand) Execute(ctx context.Context, s *jobrunner.PipelineSignal) (*jobrunner.ActionResult, error) { - msg := "Can you fix the code reviews?" - if s.Mergeable == "CONFLICTING" { - msg = "Can you fix the merge conflict?" 
	}

	url := fmt.Sprintf("%s/repos/%s/%s/issues/%d/comments", h.apiURL, s.RepoOwner, s.RepoName, s.PRNumber)
	payload, _ := json.Marshal(map[string]string{"body": msg})

	req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept", "application/vnd.github+json")
	req.Header.Set("Content-Type", "application/json")

	resp, err := h.client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	result := &jobrunner.ActionResult{
		Action:    "send_fix_command",
		RepoOwner: s.RepoOwner,
		RepoName:  s.RepoName,
		PRNumber:  s.PRNumber,
		Timestamp: time.Now().UTC(),
	}

	if resp.StatusCode == http.StatusCreated {
		result.Success = true
	} else {
		result.Error = fmt.Sprintf("HTTP %d", resp.StatusCode)
	}

	return result, nil
}
```

**Step 4: Run tests**

Run: `go test ./pkg/jobrunner/handlers/ -v -count=1`
Expected: PASS.

**Step 5: Commit**

```bash
git add pkg/jobrunner/handlers/send_fix_command.go pkg/jobrunner/handlers/send_fix_command_test.go
git commit -m "feat(jobrunner): add send_fix_command handler"
```

---

### Task 7: Remaining Handlers (enable_auto_merge, tick_parent, close_child)

**Files:**
- Create: `pkg/jobrunner/handlers/enable_auto_merge.go` + test
- Create: `pkg/jobrunner/handlers/tick_parent.go` + test
- Create: `pkg/jobrunner/handlers/close_child.go` + test

**Context:** Same pattern as Tasks 5-6: Match checks signal conditions, Execute performs the action. Unlike Tasks 5-6, these three handlers shell out to the `gh` CLI rather than calling the REST API directly, so their tests mock the command factory (see Testability below) instead of using httptest.

**Step 1: Write tests for all three** (one test file per handler, same pattern as above)

**enable_auto_merge:**
- Match: `PRState=OPEN && Mergeable=MERGEABLE && CheckStatus=SUCCESS && !IsDraft && ThreadsTotal==ThreadsResolved`
- Execute: GitHub has no standard REST endpoint for enabling auto-merge; it is exposed only through the GraphQL API:

```graphql
mutation { enablePullRequestAutoMerge(input: {pullRequestId: "..."}) { ... } }
```

**Simpler approach:** Shell out to `gh pr merge --auto -R owner/repo`, which wraps that mutation. This is what the pipeline flow does today, so use `os/exec` with the `gh` CLI.

```go
// enable_auto_merge.go
package handlers

import (
	"context"
	"fmt"
	"os/exec"
	"time"

	"forge.lthn.ai/core/cli/pkg/jobrunner"
)

// execCommand is a swappable factory so tests can mock the gh CLI.
var execCommand = exec.CommandContext

type EnableAutoMerge struct{}

func NewEnableAutoMerge() *EnableAutoMerge { return &EnableAutoMerge{} }

func (h *EnableAutoMerge) Name() string { return "enable_auto_merge" }

func (h *EnableAutoMerge) Match(s *jobrunner.PipelineSignal) bool {
	return s.PRState == "OPEN" &&
		!s.IsDraft &&
		s.Mergeable == "MERGEABLE" &&
		s.CheckStatus == "SUCCESS" &&
		!s.HasUnresolvedThreads()
}

func (h *EnableAutoMerge) Execute(ctx context.Context, s *jobrunner.PipelineSignal) (*jobrunner.ActionResult, error) {
	cmd := execCommand(ctx, "gh", "pr", "merge", "--auto",
		fmt.Sprintf("%d", s.PRNumber),
		"-R", s.RepoFullName(),
	)
	output, err := cmd.CombinedOutput()

	result := &jobrunner.ActionResult{
		Action:    "enable_auto_merge",
		RepoOwner: s.RepoOwner,
		RepoName:  s.RepoName,
		PRNumber:  s.PRNumber,
		Timestamp: time.Now().UTC(),
	}

	if err != nil {
		result.Error = fmt.Sprintf("%v: %s", err, string(output))
	} else {
		result.Success = true
	}

	return result, nil
}
```

**tick_parent and close_child** follow the same `gh` CLI pattern:
- `tick_parent`: Reads the epic issue body, ticks the child's checkbox, updates via `gh issue edit`
- `close_child`: `gh issue close -R owner/repo`

**Step 2-5:** Same TDD cycle as Tasks 5-6: write the test, verify it fails, implement, verify it passes, commit.

For brevity, the exact test code follows the same pattern. Key test assertions:
- `tick_parent`: verify `gh issue edit` is called with the updated body
- `close_child`: verify `gh issue close` is called
- `enable_auto_merge`: verify `gh pr merge --auto` is called

**Testability:** Use a command factory variable for mocking `exec.CommandContext`. Declare the variable once in the `handlers` package (duplicating it in each handler file would be a redeclaration error):

```go
// Declared once in the handlers package, shared by the gh-based handlers:
var execCommand = exec.CommandContext

// In tests: swap in a fake that records the invocation and returns a
// harmless command instead of running the real gh binary.
originalExecCommand := execCommand
defer func() { execCommand = originalExecCommand }()

var gotArgs []string
execCommand = func(ctx context.Context, name string, args ...string) *exec.Cmd {
	gotArgs = append([]string{name}, args...)
	// `true` exits 0 with no side effects; assert on gotArgs afterwards.
	return exec.CommandContext(ctx, "true")
}
```

**Step 6: Commit**

```bash
git add pkg/jobrunner/handlers/enable_auto_merge.go pkg/jobrunner/handlers/enable_auto_merge_test.go
git add pkg/jobrunner/handlers/tick_parent.go pkg/jobrunner/handlers/tick_parent_test.go
git add pkg/jobrunner/handlers/close_child.go pkg/jobrunner/handlers/close_child_test.go
git commit -m "feat(jobrunner): add enable_auto_merge, tick_parent, close_child handlers"
```

---

### Task 8: Resolve Threads Handler

**Files:**
- Create: `pkg/jobrunner/handlers/resolve_threads.go`
- Test: `pkg/jobrunner/handlers/resolve_threads_test.go`

**Context:** This handler is special — it needs GraphQL to resolve review threads (no REST endpoint exists). Use a minimal GraphQL client (raw `net/http` POST to `https://api.github.com/graphql`).
**Step 1: Write the test**

```go
package handlers

import (
	"context"
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"

	"forge.lthn.ai/core/cli/pkg/jobrunner"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestResolveThreads_Match_Good(t *testing.T) {
	h := NewResolveThreads(nil)
	signal := &jobrunner.PipelineSignal{
		PRState:         "OPEN",
		ThreadsTotal:    3,
		ThreadsResolved: 1,
	}
	assert.True(t, h.Match(signal))
}

func TestResolveThreads_Match_Bad_AllResolved(t *testing.T) {
	h := NewResolveThreads(nil)
	signal := &jobrunner.PipelineSignal{
		PRState:         "OPEN",
		ThreadsTotal:    3,
		ThreadsResolved: 3,
	}
	assert.False(t, h.Match(signal))
}

func TestResolveThreads_Execute_Good(t *testing.T) {
	callCount := 0
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		callCount++
		var req map[string]any
		json.NewDecoder(r.Body).Decode(&req)

		// Dispatch on call order; the decoded body is not asserted on here.
		// First call: fetch threads
		if callCount == 1 {
			json.NewEncoder(w).Encode(map[string]any{
				"data": map[string]any{
					"repository": map[string]any{
						"pullRequest": map[string]any{
							"reviewThreads": map[string]any{
								"nodes": []map[string]any{
									{"id": "PRRT_1", "isResolved": false},
									{"id": "PRRT_2", "isResolved": true},
								},
							},
						},
					},
				},
			})
			return
		}

		// Subsequent calls: resolve thread
		json.NewEncoder(w).Encode(map[string]any{
			"data": map[string]any{
				"resolveReviewThread": map[string]any{
					"thread": map[string]any{"isResolved": true},
				},
			},
		})
	}))
	defer server.Close()

	h := NewResolveThreads(&http.Client{})
	h.graphqlURL = server.URL

	signal := &jobrunner.PipelineSignal{
		PRNumber:        315,
		RepoOwner:       "host-uk",
		RepoName:        "core",
		PRState:         "OPEN",
		ThreadsTotal:    2,
		ThreadsResolved: 1,
	}

	result, err := h.Execute(context.Background(), signal)
	require.NoError(t, err)
	assert.True(t, result.Success)
assert.Equal(t, 2, callCount) // 1 fetch + 1 resolve (only PRRT_1 unresolved) -} -``` - -**Step 2: Run test to verify it fails** - -Run: `go test ./pkg/jobrunner/handlers/ -run TestResolveThreads -v -count=1` -Expected: FAIL — `NewResolveThreads` undefined. - -**Step 3: Write the implementation** - -```go -package handlers - -import ( - "bytes" - "context" - "encoding/json" - "fmt" - "net/http" - "time" - - "forge.lthn.ai/core/cli/pkg/jobrunner" -) - -// ResolveThreads resolves all unresolved review threads on a PR. -type ResolveThreads struct { - client *http.Client - graphqlURL string -} - -func NewResolveThreads(client *http.Client) *ResolveThreads { - if client == nil { - client = http.DefaultClient - } - return &ResolveThreads{ - client: client, - graphqlURL: "https://api.github.com/graphql", - } -} - -func (h *ResolveThreads) Name() string { return "resolve_threads" } - -func (h *ResolveThreads) Match(s *jobrunner.PipelineSignal) bool { - return s.PRState == "OPEN" && s.HasUnresolvedThreads() -} - -func (h *ResolveThreads) Execute(ctx context.Context, s *jobrunner.PipelineSignal) (*jobrunner.ActionResult, error) { - // 1. Fetch unresolved thread IDs - threadIDs, err := h.fetchUnresolvedThreads(ctx, s.RepoOwner, s.RepoName, s.PRNumber) - if err != nil { - return nil, fmt.Errorf("fetch threads: %w", err) - } - - // 2. 
Resolve each thread - resolved := 0 - for _, id := range threadIDs { - if err := h.resolveThread(ctx, id); err != nil { - // Log but continue — some threads may not be resolvable - continue - } - resolved++ - } - - result := &jobrunner.ActionResult{ - Action: "resolve_threads", - RepoOwner: s.RepoOwner, - RepoName: s.RepoName, - PRNumber: s.PRNumber, - Success: resolved > 0, - Timestamp: time.Now().UTC(), - } - - if resolved == 0 && len(threadIDs) > 0 { - result.Error = fmt.Sprintf("0/%d threads resolved", len(threadIDs)) - } - - return result, nil -} - -func (h *ResolveThreads) fetchUnresolvedThreads(ctx context.Context, owner, repo string, pr int) ([]string, error) { - query := fmt.Sprintf(`{ - repository(owner: %q, name: %q) { - pullRequest(number: %d) { - reviewThreads(first: 100) { - nodes { id isResolved } - } - } - } - }`, owner, repo, pr) - - resp, err := h.graphql(ctx, query) - if err != nil { - return nil, err - } - - type thread struct { - ID string `json:"id"` - IsResolved bool `json:"isResolved"` - } - var result struct { - Data struct { - Repository struct { - PullRequest struct { - ReviewThreads struct { - Nodes []thread `json:"nodes"` - } `json:"reviewThreads"` - } `json:"pullRequest"` - } `json:"repository"` - } `json:"data"` - } - - if err := json.Unmarshal(resp, &result); err != nil { - return nil, err - } - - var ids []string - for _, t := range result.Data.Repository.PullRequest.ReviewThreads.Nodes { - if !t.IsResolved { - ids = append(ids, t.ID) - } - } - return ids, nil -} - -func (h *ResolveThreads) resolveThread(ctx context.Context, threadID string) error { - mutation := fmt.Sprintf(`mutation { - resolveReviewThread(input: {threadId: %q}) { - thread { isResolved } - } - }`, threadID) - - _, err := h.graphql(ctx, mutation) - return err -} - -func (h *ResolveThreads) graphql(ctx context.Context, query string) (json.RawMessage, error) { - payload, _ := json.Marshal(map[string]string{"query": query}) - - req, err := 
http.NewRequestWithContext(ctx, "POST", h.graphqlURL, bytes.NewReader(payload)) - if err != nil { - return nil, err - } - req.Header.Set("Content-Type", "application/json") - - resp, err := h.client.Do(req) - if err != nil { - return nil, err - } - defer resp.Body.Close() - - if resp.StatusCode != http.StatusOK { - return nil, fmt.Errorf("GraphQL HTTP %d", resp.StatusCode) - } - - var raw json.RawMessage - err = json.NewDecoder(resp.Body).Decode(&raw) - return raw, err -} -``` - -**Step 4: Run tests** - -Run: `go test ./pkg/jobrunner/handlers/ -v -count=1` -Expected: PASS. - -**Step 5: Commit** - -```bash -git add pkg/jobrunner/handlers/resolve_threads.go pkg/jobrunner/handlers/resolve_threads_test.go -git commit -m "feat(jobrunner): add resolve_threads handler with GraphQL" -``` - ---- - -### Task 9: Headless Mode in core-ide - -**Files:** -- Modify: `internal/core-ide/main.go` - -**Context:** core-ide currently always creates a Wails app. We need to branch: headless starts the poller + MCP bridge directly; desktop mode keeps the existing Wails app with poller as an optional service. - -Note: core-ide has its own `go.mod` (`forge.lthn.ai/core/cli/internal/core-ide`). The jobrunner package lives in the root module. We need to add the root module as a dependency of core-ide, OR move the handler wiring into the root module. **Simplest approach:** core-ide imports `forge.lthn.ai/core/cli/pkg/jobrunner` — this requires adding the root module as a dependency in core-ide's go.mod. - -**Step 1: Update core-ide go.mod** - -Run: `cd /Users/snider/Code/host-uk/core/internal/core-ide && go get forge.lthn.ai/core/cli/pkg/jobrunner` - -If this fails because the package isn't published yet, use a `replace` directive temporarily: - -``` -replace forge.lthn.ai/core/cli => ../.. -``` - -Then `go mod tidy`. - -**Step 2: Modify main.go** - -Add `--headless` flag parsing, `hasDisplay()` detection, and the headless startup path. - -The headless path: -1. 
Create `cli.Daemon` with PID file + health server -2. Create `Journal` at `~/.core/journal/` -3. Create `GitHubSource` with repos from config/env -4. Create all handlers -5. Create `Poller` with sources + handlers + journal -6. Start daemon, run poller in goroutine, block on `daemon.Run(ctx)` - -The desktop path: -- Existing Wails app code, unchanged for now -- Poller can be added as a Wails service later - -```go -// At top of main(): -headless := false -for _, arg := range os.Args[1:] { - if arg == "--headless" { - headless = true - } -} - -if headless || !hasDisplay() { - startHeadless() - return -} -// ... existing Wails app code ... -``` - -**Step 3: Run core-ide with --headless --dry-run to verify** - -Run: `cd /Users/snider/Code/host-uk/core/internal/core-ide && go run . --headless --dry-run` -Expected: Starts, logs poll cycle, exits cleanly on Ctrl+C. - -**Step 4: Commit** - -```bash -git add internal/core-ide/main.go internal/core-ide/go.mod internal/core-ide/go.sum -git commit -m "feat(core-ide): add headless mode with job runner poller" -``` - ---- - -### Task 10: Register Handlers as MCP Tools - -**Files:** -- Modify: `internal/core-ide/mcp_bridge.go` - -**Context:** Register each JobHandler as an MCP tool so they're callable via the HTTP API (POST /mcp/call). This lets external tools invoke handlers manually. - -**Step 1: Add handler registration to MCPBridge** - -Add a `handlers` field and register them in `ServiceStartup`. Add a `job_*` prefix to distinguish from webview tools. - -```go -// In handleMCPTools — append job handler tools to the tool list -// In handleMCPCall — add a job_* dispatch path -``` - -**Step 2: Test via curl** - -Run: `curl -X POST http://localhost:9877/mcp/call -d '{"tool":"job_publish_draft","params":{"pr":316,"owner":"host-uk","repo":"core"}}'` -Expected: Returns handler result JSON. 
- -**Step 3: Commit** - -```bash -git add internal/core-ide/mcp_bridge.go -git commit -m "feat(core-ide): register job handlers as MCP tools" -``` - ---- - -### Task 11: Updater Integration in core-ide - -**Files:** -- Modify: `internal/core-ide/main.go` (headless startup path) - -**Context:** Wire the existing `internal/cmd/updater` package into core-ide's headless startup. Check for updates on startup, auto-apply in headless mode. - -**Step 1: Add updater to headless startup** - -```go -// In startHeadless(), before starting poller: -updaterSvc, err := updater.NewUpdateService(updater.UpdateServiceConfig{ - RepoURL: "https://forge.lthn.ai/core/cli", - Channel: "alpha", - CheckOnStartup: updater.CheckAndUpdateOnStartup, -}) -if err == nil { - _ = updaterSvc.Start() // will auto-update and restart if newer version exists -} -``` - -**Step 2: Test by running headless** - -Run: `core-ide --headless` — should check for updates on startup, then start polling. - -**Step 3: Commit** - -```bash -git add internal/core-ide/main.go -git commit -m "feat(core-ide): integrate updater for headless auto-update" -``` - ---- - -### Task 12: Systemd Service File - -**Files:** -- Create: `internal/core-ide/build/linux/core-ide.service` - -**Step 1: Write the systemd unit** - -```ini -[Unit] -Description=Core IDE Job Runner -After=network-online.target -Wants=network-online.target - -[Service] -Type=simple -ExecStart=/usr/local/bin/core-ide --headless -Restart=always -RestartSec=10 -Environment=CORE_DAEMON=1 -Environment=GITHUB_TOKEN= - -[Install] -WantedBy=multi-user.target -``` - -**Step 2: Add to nfpm.yaml** so it's included in the Linux package: - -In `internal/core-ide/build/linux/nfpm/nfpm.yaml`, add to `contents`: -```yaml -- src: ../core-ide.service - dst: /etc/systemd/system/core-ide.service - type: config -``` - -**Step 3: Commit** - -```bash -git add internal/core-ide/build/linux/core-ide.service internal/core-ide/build/linux/nfpm/nfpm.yaml -git commit -m "feat(core-ide): 
add systemd service for headless mode" -``` - ---- - -### Task 13: Run Full Test Suite - -**Step 1: Run all jobrunner tests** - -Run: `go test ./pkg/jobrunner/... -v -count=1` -Expected: All tests pass. - -**Step 2: Run core-ide build** - -Run: `cd /Users/snider/Code/host-uk/core/internal/core-ide && go build -o /dev/null .` -Expected: Builds without errors. - -**Step 3: Run dry-run integration test** - -Run: `cd /Users/snider/Code/host-uk/core/internal/core-ide && go run . --headless --dry-run` -Expected: Polls GitHub, logs signals, takes no actions, exits on Ctrl+C. - ---- - -## Batch Execution Plan - -| Batch | Tasks | Description | -|-------|-------|-------------| -| 0 | 0 | Go workspace setup | -| 1 | 1-2 | Core types + Journal | -| 2 | 3-4 | Poller + GitHub Source | -| 3 | 5-8 | All handlers | -| 4 | 9-11 | core-ide integration (headless, MCP, updater) | -| 5 | 12-13 | Systemd + verification | diff --git a/docs/plans/2026-02-05-mcp-integration.md b/docs/plans/2026-02-05-mcp-integration.md deleted file mode 100644 index c63285a..0000000 --- a/docs/plans/2026-02-05-mcp-integration.md +++ /dev/null @@ -1,849 +0,0 @@ -# MCP Integration Implementation Plan - -> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. - -**Goal:** Add `core mcp serve` command with RAG and metrics tools, then configure the agentic-flows plugin to use it. - -**Architecture:** Create a new `mcp` command package that starts the pkg/mcp server with extended tools. RAG tools call the existing exported functions in internal/cmd/rag. Metrics tools call pkg/ai directly. The agentic-flows plugin gets a `.mcp.json` that spawns `core mcp serve`. 
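A plausible shape for that `.mcp.json` (the `mcpServers` map follows the common MCP client convention; verify the exact schema against the client that will consume it):

```json
{
  "mcpServers": {
    "core": {
      "command": "core",
      "args": ["mcp", "serve"]
    }
  }
}
```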
- -**Tech Stack:** Go 1.25, github.com/modelcontextprotocol/go-sdk/mcp, pkg/rag, pkg/ai - ---- - -## Task 1: Add RAG tools to pkg/mcp - -**Files:** -- Create: `pkg/mcp/tools_rag.go` -- Modify: `pkg/mcp/mcp.go:99-101` (registerTools) -- Test: `pkg/mcp/tools_rag_test.go` - -**Step 1: Write the failing test** - -Create `pkg/mcp/tools_rag_test.go`: - -```go -package mcp - -import ( - "context" - "testing" - - "github.com/modelcontextprotocol/go-sdk/mcp" -) - -func TestRAGQueryTool_Good(t *testing.T) { - // This test verifies the tool is registered and callable. - // It doesn't require Qdrant/Ollama running - just checks structure. - s, err := New(WithWorkspaceRoot("")) - if err != nil { - t.Fatalf("New() error: %v", err) - } - - // Check that rag_query tool is registered - tools := s.Server().ListTools() - found := false - for _, tool := range tools { - if tool.Name == "rag_query" { - found = true - break - } - } - if !found { - t.Error("rag_query tool not registered") - } -} - -func TestRAGQueryInput_Good(t *testing.T) { - input := RAGQueryInput{ - Question: "how do I deploy?", - Collection: "hostuk-docs", - TopK: 5, - } - if input.Question == "" { - t.Error("Question should not be empty") - } -} -``` - -**Step 2: Run test to verify it fails** - -Run: `go test -run TestRAGQueryTool ./pkg/mcp/... -v` -Expected: FAIL with "rag_query tool not registered" - -**Step 3: Create tools_rag.go with types and tool registration** - -Create `pkg/mcp/tools_rag.go`: - -```go -package mcp - -import ( - "context" - "fmt" - - ragcmd "forge.lthn.ai/core/cli/internal/cmd/rag" - "forge.lthn.ai/core/cli/pkg/rag" - "github.com/modelcontextprotocol/go-sdk/mcp" -) - -// RAG tool input/output types - -// RAGQueryInput contains parameters for querying the vector database. -type RAGQueryInput struct { - Question string `json:"question"` - Collection string `json:"collection,omitempty"` - TopK int `json:"top_k,omitempty"` -} - -// RAGQueryOutput contains the query results. 
-type RAGQueryOutput struct { - Results []RAGResult `json:"results"` - Context string `json:"context"` -} - -// RAGResult represents a single search result. -type RAGResult struct { - Content string `json:"content"` - Score float32 `json:"score"` - Source string `json:"source"` - Metadata map[string]string `json:"metadata,omitempty"` -} - -// RAGIngestInput contains parameters for ingesting documents. -type RAGIngestInput struct { - Path string `json:"path"` - Collection string `json:"collection,omitempty"` - Recreate bool `json:"recreate,omitempty"` -} - -// RAGIngestOutput contains the ingestion results. -type RAGIngestOutput struct { - Success bool `json:"success"` - Path string `json:"path"` - Chunks int `json:"chunks"` - Message string `json:"message,omitempty"` -} - -// RAGCollectionsInput contains parameters for listing collections. -type RAGCollectionsInput struct { - ShowStats bool `json:"show_stats,omitempty"` -} - -// RAGCollectionsOutput contains the list of collections. -type RAGCollectionsOutput struct { - Collections []CollectionInfo `json:"collections"` -} - -// CollectionInfo describes a Qdrant collection. -type CollectionInfo struct { - Name string `json:"name"` - PointsCount uint64 `json:"points_count,omitempty"` - Status string `json:"status,omitempty"` -} - -// registerRAGTools adds RAG tools to the MCP server. 
-func (s *Service) registerRAGTools(server *mcp.Server) { - mcp.AddTool(server, &mcp.Tool{ - Name: "rag_query", - Description: "Query the vector database for relevant documents using semantic search", - }, s.ragQuery) - - mcp.AddTool(server, &mcp.Tool{ - Name: "rag_ingest", - Description: "Ingest a file or directory into the vector database", - }, s.ragIngest) - - mcp.AddTool(server, &mcp.Tool{ - Name: "rag_collections", - Description: "List available vector database collections", - }, s.ragCollections) -} - -func (s *Service) ragQuery(ctx context.Context, req *mcp.CallToolRequest, input RAGQueryInput) (*mcp.CallToolResult, RAGQueryOutput, error) { - s.logger.Info("MCP tool execution", "tool", "rag_query", "question", input.Question) - - collection := input.Collection - if collection == "" { - collection = "hostuk-docs" - } - topK := input.TopK - if topK <= 0 { - topK = 5 - } - - results, err := ragcmd.QueryDocs(ctx, input.Question, collection, topK) - if err != nil { - return nil, RAGQueryOutput{}, fmt.Errorf("query failed: %w", err) - } - - // Convert to output format - out := RAGQueryOutput{ - Results: make([]RAGResult, 0, len(results)), - Context: rag.FormatResultsContext(results), - } - for _, r := range results { - out.Results = append(out.Results, RAGResult{ - Content: r.Content, - Score: r.Score, - Source: r.Source, - Metadata: r.Metadata, - }) - } - - return nil, out, nil -} - -func (s *Service) ragIngest(ctx context.Context, req *mcp.CallToolRequest, input RAGIngestInput) (*mcp.CallToolResult, RAGIngestOutput, error) { - s.logger.Security("MCP tool execution", "tool", "rag_ingest", "path", input.Path) - - collection := input.Collection - if collection == "" { - collection = "hostuk-docs" - } - - // Check if path is a file or directory - info, err := s.medium.Stat(input.Path) - if err != nil { - return nil, RAGIngestOutput{}, fmt.Errorf("path not found: %w", err) - } - - if info.IsDir() { - err = ragcmd.IngestDirectory(ctx, input.Path, collection, 
input.Recreate) - if err != nil { - return nil, RAGIngestOutput{}, fmt.Errorf("ingest directory failed: %w", err) - } - return nil, RAGIngestOutput{ - Success: true, - Path: input.Path, - Message: fmt.Sprintf("Ingested directory into collection %s", collection), - }, nil - } - - chunks, err := ragcmd.IngestFile(ctx, input.Path, collection) - if err != nil { - return nil, RAGIngestOutput{}, fmt.Errorf("ingest file failed: %w", err) - } - - return nil, RAGIngestOutput{ - Success: true, - Path: input.Path, - Chunks: chunks, - Message: fmt.Sprintf("Ingested %d chunks into collection %s", chunks, collection), - }, nil -} - -func (s *Service) ragCollections(ctx context.Context, req *mcp.CallToolRequest, input RAGCollectionsInput) (*mcp.CallToolResult, RAGCollectionsOutput, error) { - s.logger.Info("MCP tool execution", "tool", "rag_collections") - - client, err := rag.NewQdrantClient(rag.DefaultQdrantConfig()) - if err != nil { - return nil, RAGCollectionsOutput{}, fmt.Errorf("connect to Qdrant: %w", err) - } - defer func() { _ = client.Close() }() - - names, err := client.ListCollections(ctx) - if err != nil { - return nil, RAGCollectionsOutput{}, fmt.Errorf("list collections: %w", err) - } - - out := RAGCollectionsOutput{ - Collections: make([]CollectionInfo, 0, len(names)), - } - - for _, name := range names { - info := CollectionInfo{Name: name} - if input.ShowStats { - cinfo, err := client.CollectionInfo(ctx, name) - if err == nil { - info.PointsCount = cinfo.PointsCount - info.Status = cinfo.Status.String() - } - } - out.Collections = append(out.Collections, info) - } - - return nil, out, nil -} -``` - -**Step 4: Update mcp.go to call registerRAGTools** - -In `pkg/mcp/mcp.go`, modify the `registerTools` function (around line 104) to add: - -```go -func (s *Service) registerTools(server *mcp.Server) { - // File operations (existing) - // ... existing code ... 
- - // RAG operations - s.registerRAGTools(server) -} -``` - -**Step 5: Run test to verify it passes** - -Run: `go test -run TestRAGQuery ./pkg/mcp/... -v` -Expected: PASS - -**Step 6: Commit** - -```bash -git add pkg/mcp/tools_rag.go pkg/mcp/tools_rag_test.go pkg/mcp/mcp.go -git commit -m "feat(mcp): add RAG tools (query, ingest, collections)" -``` - ---- - -## Task 2: Add metrics tools to pkg/mcp - -**Files:** -- Create: `pkg/mcp/tools_metrics.go` -- Modify: `pkg/mcp/mcp.go` (registerTools) -- Test: `pkg/mcp/tools_metrics_test.go` - -**Step 1: Write the failing test** - -Create `pkg/mcp/tools_metrics_test.go`: - -```go -package mcp - -import ( - "testing" -) - -func TestMetricsRecordTool_Good(t *testing.T) { - s, err := New(WithWorkspaceRoot("")) - if err != nil { - t.Fatalf("New() error: %v", err) - } - - tools := s.Server().ListTools() - found := false - for _, tool := range tools { - if tool.Name == "metrics_record" { - found = true - break - } - } - if !found { - t.Error("metrics_record tool not registered") - } -} - -func TestMetricsQueryTool_Good(t *testing.T) { - s, err := New(WithWorkspaceRoot("")) - if err != nil { - t.Fatalf("New() error: %v", err) - } - - tools := s.Server().ListTools() - found := false - for _, tool := range tools { - if tool.Name == "metrics_query" { - found = true - break - } - } - if !found { - t.Error("metrics_query tool not registered") - } -} -``` - -**Step 2: Run test to verify it fails** - -Run: `go test -run TestMetrics ./pkg/mcp/... -v` -Expected: FAIL - -**Step 3: Create tools_metrics.go** - -Create `pkg/mcp/tools_metrics.go`: - -```go -package mcp - -import ( - "context" - "fmt" - "time" - - "forge.lthn.ai/core/cli/pkg/ai" - "github.com/modelcontextprotocol/go-sdk/mcp" -) - -// Metrics tool input/output types - -// MetricsRecordInput contains parameters for recording a metric event. 
-type MetricsRecordInput struct {
-	Type    string         `json:"type"`
-	AgentID string         `json:"agent_id,omitempty"`
-	Repo    string         `json:"repo,omitempty"`
-	Data    map[string]any `json:"data,omitempty"`
-}
-
-// MetricsRecordOutput contains the result of recording.
-type MetricsRecordOutput struct {
-	Success   bool      `json:"success"`
-	Timestamp time.Time `json:"timestamp"`
-}
-
-// MetricsQueryInput contains parameters for querying metrics.
-type MetricsQueryInput struct {
-	Since string `json:"since,omitempty"` // e.g., "7d", "24h"
-}
-
-// MetricsQueryOutput contains the query results.
-type MetricsQueryOutput struct {
-	Total   int                `json:"total"`
-	ByType  []MetricCount      `json:"by_type"`
-	ByRepo  []MetricCount      `json:"by_repo"`
-	ByAgent []MetricCount      `json:"by_agent"`
-	Events  []MetricEventBrief `json:"events,omitempty"`
-}
-
-// MetricCount represents a count by key.
-type MetricCount struct {
-	Key   string `json:"key"`
-	Count int    `json:"count"`
-}
-
-// MetricEventBrief is a simplified event for output.
-type MetricEventBrief struct {
-	Type      string    `json:"type"`
-	Timestamp time.Time `json:"timestamp"`
-	AgentID   string    `json:"agent_id,omitempty"`
-	Repo      string    `json:"repo,omitempty"`
-}
-
-// registerMetricsTools adds metrics tools to the MCP server.
-func (s *Service) registerMetricsTools(server *mcp.Server) {
-	mcp.AddTool(server, &mcp.Tool{
-		Name:        "metrics_record",
-		Description: "Record a metric event (AI task, security scan, job creation, etc.)",
-	}, s.metricsRecord)
-
-	mcp.AddTool(server, &mcp.Tool{
-		Name:        "metrics_query",
-		Description: "Query recorded metrics with aggregation by type, repo, and agent",
-	}, s.metricsQuery)
-}
-
-func (s *Service) metricsRecord(ctx context.Context, req *mcp.CallToolRequest, input MetricsRecordInput) (*mcp.CallToolResult, MetricsRecordOutput, error) {
-	s.logger.Info("MCP tool execution", "tool", "metrics_record", "type", input.Type)
-
-	if input.Type == "" {
-		return nil, MetricsRecordOutput{}, fmt.Errorf("type is required")
-	}
-
-	event := ai.Event{
-		Type:      input.Type,
-		Timestamp: time.Now(),
-		AgentID:   input.AgentID,
-		Repo:      input.Repo,
-		Data:      input.Data,
-	}
-
-	if err := ai.Record(event); err != nil {
-		return nil, MetricsRecordOutput{}, fmt.Errorf("record event: %w", err)
-	}
-
-	return nil, MetricsRecordOutput{
-		Success:   true,
-		Timestamp: event.Timestamp,
-	}, nil
-}
-
-func (s *Service) metricsQuery(ctx context.Context, req *mcp.CallToolRequest, input MetricsQueryInput) (*mcp.CallToolResult, MetricsQueryOutput, error) {
-	s.logger.Info("MCP tool execution", "tool", "metrics_query", "since", input.Since)
-
-	since := input.Since
-	if since == "" {
-		since = "7d"
-	}
-
-	duration, err := parseDuration(since)
-	if err != nil {
-		return nil, MetricsQueryOutput{}, fmt.Errorf("invalid since value: %w", err)
-	}
-
-	sinceTime := time.Now().Add(-duration)
-	events, err := ai.ReadEvents(sinceTime)
-	if err != nil {
-		return nil, MetricsQueryOutput{}, fmt.Errorf("read events: %w", err)
-	}
-
-	summary := ai.Summary(events)
-
-	out := MetricsQueryOutput{
-		Total: summary["total"].(int),
-	}
-
-	// Convert by_type
-	if byType, ok := summary["by_type"].([]map[string]any); ok {
-		for _, entry := range byType {
-			out.ByType = append(out.ByType, MetricCount{
-				Key:   entry["key"].(string),
-				Count: entry["count"].(int),
-			})
-		}
-	}
-
-	// Convert by_repo
-	if byRepo, ok := summary["by_repo"].([]map[string]any); ok {
-		for _, entry := range byRepo {
-			out.ByRepo = append(out.ByRepo, MetricCount{
-				Key:   entry["key"].(string),
-				Count: entry["count"].(int),
-			})
-		}
-	}
-
-	// Convert by_agent
-	if byAgent, ok := summary["by_agent"].([]map[string]any); ok {
-		for _, entry := range byAgent {
-			out.ByAgent = append(out.ByAgent, MetricCount{
-				Key:   entry["key"].(string),
-				Count: entry["count"].(int),
-			})
-		}
-	}
-
-	// Include last 10 events for context
-	limit := 10
-	if len(events) < limit {
-		limit = len(events)
-	}
-	for i := len(events) - limit; i < len(events); i++ {
-		ev := events[i]
-		out.Events = append(out.Events, MetricEventBrief{
-			Type:      ev.Type,
-			Timestamp: ev.Timestamp,
-			AgentID:   ev.AgentID,
-			Repo:      ev.Repo,
-		})
-	}
-
-	return nil, out, nil
-}
-
-// parseDuration parses a human-friendly duration like "7d", "24h", "30d".
-func parseDuration(s string) (time.Duration, error) {
-	if len(s) < 2 {
-		return 0, fmt.Errorf("invalid duration: %s", s)
-	}
-
-	unit := s[len(s)-1]
-	value := s[:len(s)-1]
-
-	var n int
-	if _, err := fmt.Sscanf(value, "%d", &n); err != nil {
-		return 0, fmt.Errorf("invalid duration: %s", s)
-	}
-
-	if n <= 0 {
-		return 0, fmt.Errorf("duration must be positive: %s", s)
-	}
-
-	switch unit {
-	case 'd':
-		return time.Duration(n) * 24 * time.Hour, nil
-	case 'h':
-		return time.Duration(n) * time.Hour, nil
-	case 'm':
-		return time.Duration(n) * time.Minute, nil
-	default:
-		return 0, fmt.Errorf("unknown unit %c in duration: %s", unit, s)
-	}
-}
-```
-
-**Step 4: Update mcp.go to call registerMetricsTools**
-
-In `pkg/mcp/mcp.go`, add to `registerTools`:
-
-```go
-func (s *Service) registerTools(server *mcp.Server) {
-	// ... existing file operations ...
-
-	// RAG operations
-	s.registerRAGTools(server)
-
-	// Metrics operations
-	s.registerMetricsTools(server)
-}
-```
-
-**Step 5: Run test to verify it passes**
-
-Run: `go test -run TestMetrics ./pkg/mcp/... -v`
-Expected: PASS
-
-**Step 6: Commit**
-
-```bash
-git add pkg/mcp/tools_metrics.go pkg/mcp/tools_metrics_test.go pkg/mcp/mcp.go
-git commit -m "feat(mcp): add metrics tools (record, query)"
-```
-
----
-
-## Task 3: Create `core mcp serve` command
-
-**Files:**
-- Create: `internal/cmd/mcpcmd/cmd_mcp.go`
-- Modify: `internal/variants/full.go` (add import)
-- Test: Manual test via `core mcp serve`
-
-**Step 1: Create the mcp command package**
-
-Create `internal/cmd/mcpcmd/cmd_mcp.go`:
-
-```go
-package mcpcmd
-
-import (
-	"context"
-	"os"
-	"os/signal"
-	"syscall"
-
-	"forge.lthn.ai/core/cli/pkg/cli"
-	"forge.lthn.ai/core/cli/pkg/i18n"
-	"forge.lthn.ai/core/cli/pkg/mcp"
-)
-
-func init() {
-	cli.RegisterCommands(AddMCPCommands)
-}
-
-var (
-	mcpWorkspace string
-)
-
-var mcpCmd = &cli.Command{
-	Use:   "mcp",
-	Short: i18n.T("cmd.mcp.short"),
-	Long:  i18n.T("cmd.mcp.long"),
-}
-
-var serveCmd = &cli.Command{
-	Use:   "serve",
-	Short: i18n.T("cmd.mcp.serve.short"),
-	Long:  i18n.T("cmd.mcp.serve.long"),
-	RunE: func(cmd *cli.Command, args []string) error {
-		return runServe()
-	},
-}
-
-func AddMCPCommands(root *cli.Command) {
-	initMCPFlags()
-	mcpCmd.AddCommand(serveCmd)
-	root.AddCommand(mcpCmd)
-}
-
-func initMCPFlags() {
-	serveCmd.Flags().StringVar(&mcpWorkspace, "workspace", "", i18n.T("cmd.mcp.serve.flag.workspace"))
-}
-
-func runServe() error {
-	opts := []mcp.Option{}
-
-	if mcpWorkspace != "" {
-		opts = append(opts, mcp.WithWorkspaceRoot(mcpWorkspace))
-	} else {
-		// Default to unrestricted for MCP server
-		opts = append(opts, mcp.WithWorkspaceRoot(""))
-	}
-
-	svc, err := mcp.New(opts...)
-	if err != nil {
-		return cli.Wrap(err, "create MCP service")
-	}
-
-	ctx, cancel := context.WithCancel(context.Background())
-	defer cancel()
-
-	// Handle shutdown signals
-	sigCh := make(chan os.Signal, 1)
-	signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
-	go func() {
-		<-sigCh
-		cancel()
-	}()
-
-	return svc.Run(ctx)
-}
-```
-
-**Step 2: Add i18n strings**
-
-Add the strings through the existing i18n mechanism (e.g. `pkg/i18n/en.yaml`, if that is where strings live):
-
-```yaml
-cmd.mcp.short: "MCP (Model Context Protocol) server"
-cmd.mcp.long: "Start an MCP server for Claude Code integration with file, RAG, and metrics tools."
-cmd.mcp.serve.short: "Start the MCP server"
-cmd.mcp.serve.long: "Start the MCP server in stdio mode. Use MCP_ADDR env var for TCP mode."
-cmd.mcp.serve.flag.workspace: "Restrict file operations to this directory (empty = unrestricted)"
-```
-
-**Step 3: Add import to full.go**
-
-Modify `internal/variants/full.go` to add:
-
-```go
-import (
-	// ... existing imports ...
-	_ "forge.lthn.ai/core/cli/internal/cmd/mcpcmd"
-)
-```
-
-**Step 4: Build and test**
-
-Run: `go build && ./core mcp serve --help`
-Expected: Help output showing the serve command
-
-**Step 5: Test MCP server manually**
-
-Run: `echo '{"jsonrpc":"2.0","method":"tools/list","id":1}' | ./core mcp serve`
-Expected: JSON response listing all tools including rag_query, metrics_record, etc.
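Note that MCP stdio servers generally reject `tools/list` until the client has completed the `initialize` handshake, so the one-line echo above may return an error rather than the tool list. A fuller probe sequence can be sketched as follows; the JSON-RPC method names follow the MCP spec, while the `protocolVersion` value and client name are illustrative assumptions:

```bash
# Build the three-message MCP probe: initialize, the initialized
# notification, then tools/list. Piping it into the server binary
# (commented out below) should yield a tools/list result.
requests=$(printf '%s\n' \
  '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"probe","version":"0.0.0"}}}' \
  '{"jsonrpc":"2.0","method":"notifications/initialized"}' \
  '{"jsonrpc":"2.0","id":2,"method":"tools/list"}')
echo "$requests"
# echo "$requests" | ./core mcp serve
```

If the server answers the bare `tools/list` anyway, the SDK is being lenient; the handshake version is still the safer manual test.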
-
-**Step 6: Commit**
-
-```bash
-git add internal/cmd/mcpcmd/cmd_mcp.go internal/variants/full.go
-git commit -m "feat: add 'core mcp serve' command"
-```
-
----
-
-## Task 4: Configure agentic-flows plugin with .mcp.json
-
-**Files:**
-- Create: `/home/shared/hostuk/claude-plugins/plugins/agentic-flows/.mcp.json`
-- Modify: `/home/shared/hostuk/claude-plugins/plugins/agentic-flows/.claude-plugin/plugin.json` (optional, add mcpServers)
-
-**Step 1: Create .mcp.json**
-
-Create `/home/shared/hostuk/claude-plugins/plugins/agentic-flows/.mcp.json`:
-
-```json
-{
-  "core-cli": {
-    "command": "core",
-    "args": ["mcp", "serve"]
-  }
-}
-```
-
-Note: workspace restriction is controlled by the `--workspace` flag on `core mcp serve`; the command as written does not read an `MCP_WORKSPACE` environment variable, and an empty workspace already means unrestricted, so no `env` block is needed.
-
-**Step 2: Verify plugin loads**
-
-Restart Claude Code and run `/mcp` to verify the core-cli server appears.
-
-**Step 3: Test MCP tools**
-
-Test that tools are available:
-- `mcp__plugin_agentic-flows_core-cli__rag_query`
-- `mcp__plugin_agentic-flows_core-cli__rag_ingest`
-- `mcp__plugin_agentic-flows_core-cli__rag_collections`
-- `mcp__plugin_agentic-flows_core-cli__metrics_record`
-- `mcp__plugin_agentic-flows_core-cli__metrics_query`
-- `mcp__plugin_agentic-flows_core-cli__file_read`
-- etc.
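The flat names above follow the pattern `mcp__plugin_<plugin>_<server>__<tool>` (inferred from the examples; the exact prefix Claude Code uses may differ by version). A quick sketch that generates the expected names for spot-checking against `/mcp` output:

```bash
# Expand the namespaced tool names for this plugin/server pair.
# Pattern assumed from the examples listed in the step above.
plugin=agentic-flows
server=core-cli
names=""
for tool in rag_query rag_ingest rag_collections metrics_record metrics_query file_read; do
  names="$names mcp__plugin_${plugin}_${server}__${tool}"
done
echo "$names"
```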
-
-**Step 4: Commit plugin changes**
-
-```bash
-cd /home/shared/hostuk/claude-plugins
-git add plugins/agentic-flows/.mcp.json
-git commit -m "feat(agentic-flows): add MCP server configuration for core-cli"
-```
-
----
-
-## Task 5: Update documentation
-
-**Files:**
-- Modify: `/home/claude/.claude/projects/-home-claude/memory/MEMORY.md`
-- Modify: `/home/claude/.claude/projects/-home-claude/memory/plugin-dev-notes.md`
-
-**Step 1: Update MEMORY.md**
-
-Add under "Core CLI MCP Server" section:
-
-```markdown
-### Core CLI MCP Server
-- **Command:** `core mcp serve` (stdio mode) or `MCP_ADDR=:9000 core mcp serve` (TCP)
-- **Tools available:**
-  - File ops: file_read, file_write, file_edit, file_delete, file_rename, file_exists, dir_list, dir_create
-  - RAG: rag_query, rag_ingest, rag_collections
-  - Metrics: metrics_record, metrics_query
-  - Language: lang_detect, lang_list
-- **Plugin config:** `plugins/agentic-flows/.mcp.json`
-```
-
-**Step 2: Update plugin-dev-notes.md**
-
-Add section:
-
-````markdown
-## MCP Server (core mcp serve)
-
-### Available Tools
-| Tool | Description |
-|------|-------------|
-| file_read | Read file contents |
-| file_write | Write file contents |
-| file_edit | Edit file (replace string) |
-| file_delete | Delete file |
-| file_rename | Rename/move file |
-| file_exists | Check if file exists |
-| dir_list | List directory contents |
-| dir_create | Create directory |
-| rag_query | Query vector DB |
-| rag_ingest | Ingest file/directory |
-| rag_collections | List collections |
-| metrics_record | Record event |
-| metrics_query | Query events |
-| lang_detect | Detect file language |
-| lang_list | List supported languages |
-
-### Example .mcp.json
-```json
-{
-  "core-cli": {
-    "command": "core",
-    "args": ["mcp", "serve"]
-  }
-}
-```
-````
-
-**Step 3: Commit documentation**
-
-```bash
-git add ~/.claude/projects/-home-claude/memory/*.md
-git commit -m "docs: update memory with MCP server tools"
-```
-
----
-
-## Summary
-
-| Task | Files | Purpose |
-|------|-------|---------|
-| 1 | `pkg/mcp/tools_rag.go` | RAG tools (query, ingest, collections) |
-| 2 | `pkg/mcp/tools_metrics.go` | Metrics tools (record, query) |
-| 3 | `internal/cmd/mcpcmd/cmd_mcp.go` | `core mcp serve` command |
-| 4 | `plugins/agentic-flows/.mcp.json` | Plugin MCP configuration |
-| 5 | Memory docs | Documentation updates |
-
-## Services Required
-
-- **Qdrant:** localhost:6333 (verified running)
-- **Ollama:** localhost:11434 with nomic-embed-text (verified running)
-- **InfluxDB:** localhost:8086 (optional, for future time-series metrics)
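
The service list above can be turned into a quick preflight check. This is a sketch, assuming `curl` is on the PATH; it only reports whether each port answers HTTP and does not verify that the nomic-embed-text model is actually pulled in Ollama:

```bash
# Probe each required local service port and record ok/down per service.
status=""
for svc in Qdrant:6333 Ollama:11434 InfluxDB:8086; do
  name=${svc%%:*}
  port=${svc##*:}
  if curl -sf -o /dev/null --max-time 2 "http://localhost:${port}/"; then
    status="$status $name=ok"
  else
    status="$status $name=down"
  fi
done
echo "$status"
```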