feat(workspace): add Go-native prep command, align PHP to .core/ convention

Go cmd_prep.go mirrors PHP PrepWorkspaceCommand — pulls wiki KB, copies
specs, generates TODO from issue, recalls context from OpenBrain. PHP
output dir changed from ./workspace/ to ./.core/ with lowercase filenames.

Co-Authored-By: Virgil <virgil@lethean.io>
This commit is contained in:
Snider 2026-03-13 09:29:43 +00:00
parent 1aa1afcb0f
commit 63cb1e31bb
5 changed files with 693 additions and 169 deletions

274
CLAUDE.md
View file

@ -4,172 +4,160 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
## Overview
**core-agent** is a monorepo of Claude Code plugins for the Host UK federated monorepo. It contains multiple focused plugins that can be installed individually or together.
**core-agent** is a polyglot monorepo (Go + PHP) for AI agent orchestration. The Go side handles agent-side execution, CLI commands, and autonomous agent loops. The PHP side (Laravel package `lthn/agent`) provides the backend API, persistent storage, multi-provider AI services, and admin panel. They communicate via REST API.
## Plugins
The repo also contains Claude Code plugins (5), Codex plugins (13), a Gemini CLI extension, and two MCP servers.
| Plugin | Description | Install |
|--------|-------------|---------|
| **code** | Core development - hooks, scripts, data collection | `claude plugin add host-uk/core-agent/claude/code` |
| **review** | Code review automation | `claude plugin add host-uk/core-agent/claude/review` |
| **verify** | Work verification | `claude plugin add host-uk/core-agent/claude/verify` |
| **qa** | Quality assurance loops | `claude plugin add host-uk/core-agent/claude/qa` |
| **ci** | CI/CD integration | `claude plugin add host-uk/core-agent/claude/ci` |
## Core CLI — Always Use It
Or install all via marketplace:
```bash
claude plugin add host-uk/core-agent
```
## Repository Structure
```
core-agent/
├── .claude-plugin/
│ └── marketplace.json # Plugin registry (enables auto-updates)
├── claude/
│ ├── code/ # Core development plugin
│ │ ├── .claude-plugin/
│ │ │ └── plugin.json
│ │ ├── hooks.json
│ │ ├── hooks/
│ │ ├── scripts/
│ │ ├── commands/ # /code:remember, /code:yes
│ │ ├── skills/ # Data collection skills
│ │ └── collection/ # Collection event hooks
│ ├── review/ # Code review plugin
│ │ ├── .claude-plugin/
│ │ │ └── plugin.json
│ │ └── commands/ # /review:review
│ ├── verify/ # Verification plugin
│ │ ├── .claude-plugin/
│ │ │ └── plugin.json
│ │ └── commands/ # /verify:verify
│ ├── qa/ # QA plugin
│ │ ├── .claude-plugin/
│ │ │ └── plugin.json
│ │ ├── scripts/
│ │ └── commands/ # /qa:qa, /qa:fix
│ └── ci/ # CI plugin
│ ├── .claude-plugin/
│ │ └── plugin.json
│ └── commands/ # /ci:ci, /ci:workflow
├── CLAUDE.md
└── .gitignore
```
## Plugin Commands
### code
- `/code:remember <fact>` - Save context that persists across compaction
- `/code:yes <task>` - Auto-approve mode with commit requirement
### review
- `/review:review [range]` - Code review on staged changes or commits
### verify
- `/verify:verify [--quick|--full]` - Verify work is complete
### qa
- `/qa:qa` - Iterative QA fix loop (runs until all checks pass)
- `/qa:fix <issue>` - Fix a specific QA issue
### ci
- `/ci:ci [status|run|logs|fix]` - CI status and management
- `/ci:workflow <type>` - Generate GitHub Actions workflows
## Core CLI Philosophy
**Always use `core` CLI instead of raw commands.** The `core` binary handles the full E2E development lifecycle for Go and PHP ecosystems.
### Command Mappings
**Never use raw `go`, `php`, or `composer` commands.** The `core` CLI wraps both toolchains and is enforced by PreToolUse hooks that will block violations.
| Instead of... | Use... |
|---------------|--------|
| `go test` | `core go test` |
| `go build` | `core build` |
| `go fmt` | `core go fmt` |
| `go vet` | `core go vet` |
| `golangci-lint` | `core go lint` |
| `composer test` | `core php test` |
| `./vendor/bin/pint` | `core php fmt` |
| `composer test` / `./vendor/bin/pest` | `core php test` |
| `./vendor/bin/pint` / `composer lint` | `core php fmt` |
| `./vendor/bin/phpstan` | `core php stan` |
| `php artisan serve` | `core php dev` |
### Key Commands
## Build & Test Commands
```bash
# Development
core dev health # Status across repos
core dev work # Full workflow: status → commit → push
# Go
core go test # Run tests
core go qa # Full QA pipeline
core go test # Run all Go tests
core go test --run TestMemoryRegistry_Register_Good # Run single test
core go qa # Full QA: fmt + vet + lint + test
core go qa full # QA + race detector + vuln scan
core go cov # Test coverage
core build # Verify Go packages compile
# PHP
core php test # Run Pest tests
core php qa # Full QA pipeline
core php test # Run Pest suite
core php test --filter=AgenticManagerTest # Run specific test file
core php fmt # Format (Laravel Pint)
core php stan # Static analysis (PHPStan)
core php qa # Full PHP QA pipeline
# Building
core build # Auto-detect and build
# MCP servers (standalone builds)
cd cmd/mcp && go build -o agent-mcp . # Stdio MCP server
cd google/mcp && go build -o google-mcp . # HTTP MCP server (port 8080)
# AI
core ai task # Auto-select a task
core ai task:pr # Create PR for task
# Workspace
make setup # Full bootstrap (deps + core + clone repos)
core dev health # Status across repos
```
## code Plugin Features
## Architecture
### Hooks
| Hook | File | Purpose |
|------|------|---------|
| PreToolUse | `prefer-core.sh` | Block dangerous commands, enforce `core` CLI |
| PostToolUse | `php-format.sh` | Auto-format PHP |
| PostToolUse | `go-format.sh` | Auto-format Go |
| PostToolUse | `check-debug.sh` | Warn about debug statements |
| PreCompact | `pre-compact.sh` | Save state before compaction |
| SessionStart | `session-start.sh` | Restore context on startup |
### Blocked Patterns
**Destructive operations:**
- `rm -rf` / `rm -r` (except node_modules, vendor, .cache)
- `mv`/`cp` with wildcards
- `xargs` with rm/mv/cp
- `find -exec` with file operations
- `sed -i` (in-place editing)
**Raw commands (use core instead):**
- `go test/build/fmt/mod``core go *`
- `composer test``core php test`
### Data Collection Skills
| Skill | Purpose |
|-------|---------|
| `ledger-papers/` | 91+ distributed ledger whitepapers |
| `project-archaeology/` | Dead project excavation |
| `bitcointalk/` | Forum thread archival |
| `coinmarketcap/` | Historical price data |
| `github-history/` | Repository history preservation |
## Development
### Adding a new plugin
1. Create `claude/<name>/.claude-plugin/plugin.json`
2. Add commands to `claude/<name>/commands/`
3. Register in `.claude-plugin/marketplace.json`
### Testing hooks locally
```bash
echo '{"tool_input": {"command": "rm -rf /"}}' | bash ./claude/code/hooks/prefer-core.sh
```
Forgejo
|
[ForgejoSource polls]
|
v
+-- Go: jobrunner Poller --+ +-- PHP: Laravel Backend --+
| ForgejoSource | | AgentApiController |
| DispatchHandler ---------|----->| /v1/plans |
| CompletionHandler | | /v1/sessions |
| ResolveThreadsHandler | | /v1/plans/*/phases |
+--------------------------+ +-------------+------------+
|
[Eloquent models]
AgentPlan, AgentPhase,
AgentSession, BrainMemory
```
### Go Packages (`pkg/`)
- **`lifecycle/`** — Core domain layer. Task, AgentInfo, Plan, Phase, Session types. Agent registry (Memory/SQLite/Redis backends), task router (capability matching + load scoring), allowance system (quota enforcement), dispatcher (orchestrates dispatch with exponential backoff), event system, brain (vector store), context (git integration).
- **`loop/`** — Autonomous agent reasoning engine. Prompt-parse-execute cycle against any `inference.TextModel` with tool calling and streaming.
- **`orchestrator/`** — Clotho protocol for dual-run verification and agent orchestration.
- **`jobrunner/`** — Poll-dispatch engine for agent-side work execution. Polls Forgejo for work items, executes phases, reports results.
### Go Commands (`cmd/`)
- **`tasks/`** — `core ai tasks`, `core ai task [id]` — task management
- **`agent/`** — `core ai agent` — agent machine management (add, list, status, fleet)
- **`dispatch/`** — `core ai dispatch` — work queue processor (watch, run)
- **`workspace/`** — `core workspace task`, `core workspace agent` — git worktree isolation
- **`mcp/`** — Standalone stdio MCP server exposing `marketplace_list`, `marketplace_plugin_info`, `core_cli`, `ethics_check`
### PHP (`src/php/`)
- **Namespace**: `Core\Mod\Agentic\` (service provider: `Boot`)
- **Models/** — 19 Eloquent models (AgentPlan, AgentPhase, AgentSession, BrainMemory, Task, Prompt, etc.)
- **Services/** — AgenticManager (multi-provider: Claude/Gemini/OpenAI), BrainService (Ollama+Qdrant), ForgejoService, AI services with stream parsing and retry traits
- **Controllers/** — AgentApiController (REST endpoints)
- **Actions/** — Single-purpose action classes (Brain, Forge, Phase, Plan, Session, Task)
- **View/** — Livewire admin panel components (Dashboard, Plans, Sessions, ApiKeys, Templates, Playground, etc.)
- **Mcp/** — MCP tool implementations (Brain, Content, Phase, Plan, Session, State, Task, Template)
- **Migrations/** — 10 migrations (run automatically on boot)
## Claude Code Plugins (`claude/`)
Five plugins installable individually or via marketplace:
| Plugin | Commands |
|--------|----------|
| **code** | `/code:remember`, `/code:yes`, `/code:qa` |
| **review** | `/review:review`, `/review:security`, `/review:pr`, `/review:pipeline` |
| **verify** | `/verify:verify`, `/verify:ready`, `/verify:tests` |
| **qa** | `/qa:qa`, `/qa:fix`, `/qa:check`, `/qa:lint` |
| **ci** | `/ci:ci`, `/ci:workflow`, `/ci:fix`, `/ci:run`, `/ci:status` |
### Hooks (code plugin)
**PreToolUse**: `prefer-core.sh` blocks destructive operations (`rm -rf`, `sed -i`, `xargs rm`, `find -exec rm`, `grep -l | ...`, `mv/cp *`) and raw go/php commands. `block-docs.sh` prevents random `.md` file creation.
**PostToolUse**: Auto-formats Go (`gofmt`) and PHP (`pint`) after edits. Warns about debug statements (`dd()`, `dump()`, `fmt.Println()`).
**PreCompact**: Saves session state. **SessionStart**: Restores session context.
## Other Directories
- **`codex/`** — 13 Codex plugins mirroring Claude structure plus ethics, guardrails, perf, issue, coolify, awareness
- **`agents/`** — 13 specialist agent categories (design, engineering, marketing, product, testing, etc.) with example configs and system prompts
- **`google/gemini-cli/`** — Gemini CLI extension (TypeScript, `npm run build`)
- **`google/mcp/`** — HTTP MCP server exposing `core_go_test`, `core_dev_health`, `core_dev_commit`
- **`docs/`** — `architecture.md` (deep dive), `development.md` (comprehensive dev guide), `docs/plans/` (design documents)
- **`scripts/`** — Environment setup scripts (`install-core.sh`, `install-deps.sh`, `agent-runner.sh`, etc.)
## Testing Conventions
### Go
Uses `testify/assert` and `testify/require`. Name tests with suffixes:
- `_Good` — happy path
- `_Bad` — expected error conditions
- `_Ugly` — panics and edge cases
Use `require` for preconditions (stops on failure), `assert` for verifications (reports all failures).
### PHP
Pest with Orchestra Testbench. Feature tests use `RefreshDatabase`. Helpers: `createWorkspace()`, `createApiKey($workspace, ...)`.
## Coding Standards
- **UK English**: colour, organisation, centre
- **Shell scripts**: Use `#!/bin/bash`, read JSON with `jq`
- **Hook output**: JSON with `decision` (approve/block) and optional `message`
- **License**: EUPL-1.2 CIC
- **UK English**: colour, organisation, centre, licence, behaviour
- **Go**: standard `gofmt`, errors via `core.E("scope.Method", "what failed", err)`
- **PHP**: `declare(strict_types=1)`, full type hints, PSR-12 via Pint, Pest syntax for tests
- **Shell**: `#!/bin/bash`, JSON input via `jq`, output `{"decision": "approve"|"block", "message": "..."}`
- **Commits**: conventional — `type(scope): description` (e.g. `feat(lifecycle): add exponential backoff`)
- **Licence**: EUPL-1.2 CIC
## Prerequisites
| Tool | Version | Purpose |
|------|---------|---------|
| Go | 1.26+ | Go packages, CLI, MCP servers |
| PHP | 8.2+ | Laravel package, Pest tests |
| Composer | 2.x | PHP dependencies |
| `core` CLI | latest | Wraps Go/PHP toolchains (enforced by hooks) |
| `jq` | any | JSON parsing in shell hooks |
Go module is `forge.lthn.ai/core/agent`, participates in a Go workspace (`go.work`) resolving all `forge.lthn.ai/core/*` dependencies locally.

View file

@ -25,9 +25,13 @@ fi
# === HARD BLOCKS - Never allow these ===
# Block rm -rf, rm -r (except for known safe paths like node_modules, vendor, .cache)
# Allow git rm -r (safe — git tracks everything, easily reversible)
if echo "$command" | grep -qE 'rm\s+(-[a-zA-Z]*r[a-zA-Z]*|-[a-zA-Z]*f[a-zA-Z]*r|--recursive)'; then
# Allow only specific safe directories
if ! echo "$command" | grep -qE 'rm\s+(-rf|-r)\s+(node_modules|vendor|\.cache|dist|build|__pycache__|\.pytest_cache|/tmp/)'; then
# git rm -r is safe — everything is tracked and recoverable
if echo "$command" | grep -qE 'git\s+rm\s'; then
: # allow git rm through
# Allow only specific safe directories for raw rm
elif ! echo "$command" | grep -qE 'rm\s+(-rf|-r)\s+(node_modules|vendor|\.cache|dist|build|__pycache__|\.pytest_cache|/tmp/)'; then
echo '{"decision": "block", "message": "BLOCKED: Recursive delete is not allowed. Delete files individually or ask the user to run this command."}'
exit 0
fi

543
cmd/workspace/cmd_prep.go Normal file
View file

@ -0,0 +1,543 @@
// cmd_prep.go implements the `workspace prep` command.
//
// Prepares an agent workspace with wiki KB, protocol specs, a TODO from a
// Forge issue, and vector-recalled context from OpenBrain. All output goes
// to .core/ in the current directory, matching the convention used by
// KBConfig (go-scm) and build/release config.
package workspace
import (
"context"
"encoding/base64"
"encoding/json"
"fmt"
"net/http"
"net/url"
"os"
"path/filepath"
"regexp"
"strings"
"time"
"forge.lthn.ai/core/agent/pkg/lifecycle"
"forge.lthn.ai/core/cli/pkg/cli"
coreio "forge.lthn.ai/core/go-io"
"forge.lthn.ai/core/go-log"
"forge.lthn.ai/core/go-scm/forge"
)
var (
prepRepo string
prepIssue int
prepOrg string
prepOutput string
prepSpecsPath string
prepDryRun bool
)
func addPrepCommands(parent *cli.Command) {
prepCmd := &cli.Command{
Use: "prep",
Short: "Prepare agent workspace with wiki KB, specs, TODO, and vector context",
Long: `Fetches wiki pages from Forge, copies protocol specs, generates a task
file from a Forge issue, and queries OpenBrain for relevant context.
All output is written to .core/ in the current directory.`,
RunE: runPrep,
}
prepCmd.Flags().StringVar(&prepRepo, "repo", "", "Forge repo name (e.g. go-ai)")
prepCmd.Flags().IntVar(&prepIssue, "issue", 0, "Issue number to build TODO from")
prepCmd.Flags().StringVar(&prepOrg, "org", "core", "Forge organisation")
prepCmd.Flags().StringVar(&prepOutput, "output", "", "Output directory (default: ./.core)")
prepCmd.Flags().StringVar(&prepSpecsPath, "specs-path", "", "Path to specs dir")
prepCmd.Flags().BoolVar(&prepDryRun, "dry-run", false, "Preview without writing files")
_ = prepCmd.MarkFlagRequired("repo")
parent.AddCommand(prepCmd)
}
func runPrep(cmd *cli.Command, args []string) error {
ctx := context.Background()
// Resolve output directory
outputDir := prepOutput
if outputDir == "" {
cwd, err := os.Getwd()
if err != nil {
return cli.Err("failed to get working directory")
}
outputDir = filepath.Join(cwd, ".core")
}
// Resolve specs path
specsPath := prepSpecsPath
if specsPath == "" {
home, err := os.UserHomeDir()
if err == nil {
specsPath = filepath.Join(home, "Code", "host-uk", "specs")
}
}
// Resolve Forge connection
forgeURL, forgeToken, err := forge.ResolveConfig("", "")
if err != nil {
return log.E("workspace.prep", "failed to resolve Forge config", err)
}
if forgeToken == "" {
return log.E("workspace.prep", "no Forge token configured — set FORGE_TOKEN or run: core forge login", nil)
}
cli.Print("Preparing workspace for %s/%s\n", cli.ValueStyle.Render(prepOrg), cli.ValueStyle.Render(prepRepo))
cli.Print("Output: %s\n", cli.DimStyle.Render(outputDir))
if prepDryRun {
cli.Print("%s No files will be written.\n", cli.WarningStyle.Render("[DRY RUN]"))
}
fmt.Println()
// Create output directory structure
if !prepDryRun {
if err := coreio.Local.EnsureDir(filepath.Join(outputDir, "kb")); err != nil {
return log.E("workspace.prep", "failed to create kb directory", err)
}
if err := coreio.Local.EnsureDir(filepath.Join(outputDir, "specs")); err != nil {
return log.E("workspace.prep", "failed to create specs directory", err)
}
}
// Step 1: Pull wiki pages
wikiCount, err := prepPullWiki(ctx, forgeURL, forgeToken, prepOrg, prepRepo, outputDir, prepDryRun)
if err != nil {
cli.Print("%s wiki: %v\n", cli.WarningStyle.Render("warn"), err)
}
// Step 2: Copy spec files
specsCount := prepCopySpecs(specsPath, outputDir, prepDryRun)
// Step 3: Generate TODO from issue
var issueTitle, issueBody string
if prepIssue > 0 {
issueTitle, issueBody, err = prepGenerateTodo(ctx, forgeURL, forgeToken, prepOrg, prepRepo, prepIssue, outputDir, prepDryRun)
if err != nil {
cli.Print("%s todo: %v\n", cli.WarningStyle.Render("warn"), err)
prepGenerateTodoSkeleton(prepOrg, prepRepo, outputDir, prepDryRun)
}
} else {
prepGenerateTodoSkeleton(prepOrg, prepRepo, outputDir, prepDryRun)
}
// Step 4: Generate context from OpenBrain
contextCount := prepGenerateContext(ctx, prepRepo, issueTitle, issueBody, outputDir, prepDryRun)
// Summary
fmt.Println()
prefix := ""
if prepDryRun {
prefix = "[DRY RUN] "
}
cli.Print("%s%s\n", prefix, cli.SuccessStyle.Render("Workspace prep complete:"))
cli.Print(" Wiki pages: %s\n", cli.ValueStyle.Render(fmt.Sprintf("%d", wikiCount)))
cli.Print(" Spec files: %s\n", cli.ValueStyle.Render(fmt.Sprintf("%d", specsCount)))
if issueTitle != "" {
cli.Print(" TODO: %s\n", cli.ValueStyle.Render(fmt.Sprintf("from issue #%d", prepIssue)))
} else {
cli.Print(" TODO: %s\n", cli.DimStyle.Render("skeleton"))
}
cli.Print(" Context: %s\n", cli.ValueStyle.Render(fmt.Sprintf("%d memories", contextCount)))
return nil
}
// --- Step 1: Pull wiki pages from Forge API ---
type wikiPageRef struct {
Title string `json:"title"`
SubURL string `json:"sub_url"`
}
type wikiPageContent struct {
ContentBase64 string `json:"content_base64"`
}
func prepPullWiki(ctx context.Context, forgeURL, token, org, repo, outputDir string, dryRun bool) (int, error) {
cli.Print("Fetching wiki pages for %s/%s...\n", org, repo)
endpoint := fmt.Sprintf("%s/api/v1/repos/%s/%s/wiki/pages", forgeURL, org, repo)
resp, err := forgeGet(ctx, endpoint, token)
if err != nil {
return 0, log.E("workspace.prep.wiki", "API request failed", err)
}
defer resp.Body.Close()
if resp.StatusCode == http.StatusNotFound {
cli.Print(" %s No wiki found for %s\n", cli.WarningStyle.Render("warn"), repo)
if !dryRun {
content := fmt.Sprintf("# No wiki found for %s\n\nThis repo has no wiki pages on Forge.\n", repo)
_ = coreio.Local.Write(filepath.Join(outputDir, "kb", "README.md"), content)
}
return 0, nil
}
if resp.StatusCode != http.StatusOK {
return 0, log.E("workspace.prep.wiki", fmt.Sprintf("API error: %d", resp.StatusCode), nil)
}
var pages []wikiPageRef
if err := json.NewDecoder(resp.Body).Decode(&pages); err != nil {
return 0, log.E("workspace.prep.wiki", "failed to decode pages", err)
}
if len(pages) == 0 {
cli.Print(" %s Wiki exists but has no pages.\n", cli.WarningStyle.Render("warn"))
return 0, nil
}
count := 0
for _, page := range pages {
title := page.Title
if title == "" {
title = "Untitled"
}
subURL := page.SubURL
if subURL == "" {
subURL = title
}
if dryRun {
cli.Print(" [would fetch] %s\n", title)
count++
continue
}
pageEndpoint := fmt.Sprintf("%s/api/v1/repos/%s/%s/wiki/page/%s",
forgeURL, org, repo, url.PathEscape(subURL))
pageResp, err := forgeGet(ctx, pageEndpoint, token)
if err != nil || pageResp.StatusCode != http.StatusOK {
cli.Print(" %s Failed to fetch: %s\n", cli.WarningStyle.Render("warn"), title)
if pageResp != nil {
pageResp.Body.Close()
}
continue
}
var pageData wikiPageContent
if err := json.NewDecoder(pageResp.Body).Decode(&pageData); err != nil {
pageResp.Body.Close()
continue
}
pageResp.Body.Close()
if pageData.ContentBase64 == "" {
continue
}
decoded, err := base64.StdEncoding.DecodeString(pageData.ContentBase64)
if err != nil {
continue
}
filename := sanitiseFilename(title) + ".md"
_ = coreio.Local.Write(filepath.Join(outputDir, "kb", filename), string(decoded))
cli.Print(" %s\n", title)
count++
}
cli.Print(" %d wiki page(s) saved to kb/\n", count)
return count, nil
}
// --- Step 2: Copy protocol spec files ---
func prepCopySpecs(specsPath, outputDir string, dryRun bool) int {
cli.Print("Copying spec files...\n")
specFiles := []string{"AGENT_CONTEXT.md", "TASK_PROTOCOL.md"}
count := 0
for _, file := range specFiles {
source := filepath.Join(specsPath, file)
if !coreio.Local.IsFile(source) {
cli.Print(" %s Not found: %s\n", cli.WarningStyle.Render("warn"), source)
continue
}
if dryRun {
cli.Print(" [would copy] %s\n", file)
count++
continue
}
content, err := coreio.Local.Read(source)
if err != nil {
cli.Print(" %s Failed to read: %s\n", cli.WarningStyle.Render("warn"), file)
continue
}
dest := filepath.Join(outputDir, "specs", file)
if err := coreio.Local.Write(dest, content); err != nil {
cli.Print(" %s Failed to write: %s\n", cli.WarningStyle.Render("warn"), file)
continue
}
cli.Print(" %s\n", file)
count++
}
cli.Print(" %d spec file(s) copied.\n", count)
return count
}
// --- Step 3: Generate TODO from Forge issue ---
type forgeIssue struct {
Title string `json:"title"`
Body string `json:"body"`
}
func prepGenerateTodo(ctx context.Context, forgeURL, token, org, repo string, issueNum int, outputDir string, dryRun bool) (string, string, error) {
cli.Print("Generating TODO from issue #%d...\n", issueNum)
endpoint := fmt.Sprintf("%s/api/v1/repos/%s/%s/issues/%d", forgeURL, org, repo, issueNum)
resp, err := forgeGet(ctx, endpoint, token)
if err != nil {
return "", "", log.E("workspace.prep.todo", "issue API request failed", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return "", "", log.E("workspace.prep.todo", fmt.Sprintf("failed to fetch issue #%d: %d", issueNum, resp.StatusCode), nil)
}
var issue forgeIssue
if err := json.NewDecoder(resp.Body).Decode(&issue); err != nil {
return "", "", log.E("workspace.prep.todo", "failed to decode issue", err)
}
title := issue.Title
if title == "" {
title = "Untitled"
}
objective := extractObjective(issue.Body)
checklist := extractChecklist(issue.Body)
var b strings.Builder
fmt.Fprintf(&b, "# TASK: %s\n\n", title)
fmt.Fprintf(&b, "**Status:** ready\n")
fmt.Fprintf(&b, "**Source:** %s/%s/%s/issues/%d\n", forgeURL, org, repo, issueNum)
fmt.Fprintf(&b, "**Created:** %s\n", time.Now().Format("2006-01-02 15:04:05"))
fmt.Fprintf(&b, "**Repo:** %s/%s\n", org, repo)
b.WriteString("\n---\n\n")
fmt.Fprintf(&b, "## Objective\n\n%s\n", objective)
b.WriteString("\n---\n\n")
b.WriteString("## Acceptance Criteria\n\n")
if len(checklist) > 0 {
for _, item := range checklist {
fmt.Fprintf(&b, "- [ ] %s\n", item)
}
} else {
b.WriteString("_No checklist items found in issue. Agent should define acceptance criteria._\n")
}
b.WriteString("\n---\n\n")
b.WriteString("## Implementation Checklist\n\n")
b.WriteString("_To be filled by the agent during planning._\n")
b.WriteString("\n---\n\n")
b.WriteString("## Notes\n\n")
b.WriteString("Full issue body preserved below for reference.\n\n")
b.WriteString("<details>\n<summary>Original Issue</summary>\n\n")
b.WriteString(issue.Body)
b.WriteString("\n\n</details>\n")
if dryRun {
cli.Print(" [would write] todo.md from: %s\n", title)
} else {
if err := coreio.Local.Write(filepath.Join(outputDir, "todo.md"), b.String()); err != nil {
return title, issue.Body, log.E("workspace.prep.todo", "failed to write todo.md", err)
}
cli.Print(" todo.md generated from: %s\n", title)
}
return title, issue.Body, nil
}
func prepGenerateTodoSkeleton(org, repo, outputDir string, dryRun bool) {
var b strings.Builder
b.WriteString("# TASK: [Define task]\n\n")
fmt.Fprintf(&b, "**Status:** ready\n")
fmt.Fprintf(&b, "**Created:** %s\n", time.Now().Format("2006-01-02 15:04:05"))
fmt.Fprintf(&b, "**Repo:** %s/%s\n", org, repo)
b.WriteString("\n---\n\n")
b.WriteString("## Objective\n\n_Define the objective._\n")
b.WriteString("\n---\n\n")
b.WriteString("## Acceptance Criteria\n\n- [ ] _Define criteria_\n")
b.WriteString("\n---\n\n")
b.WriteString("## Implementation Checklist\n\n_To be filled by the agent._\n")
if dryRun {
cli.Print(" [would write] todo.md skeleton\n")
} else {
_ = coreio.Local.Write(filepath.Join(outputDir, "todo.md"), b.String())
cli.Print(" todo.md skeleton generated (no --issue provided)\n")
}
}
// --- Step 4: Generate context from OpenBrain ---
func prepGenerateContext(ctx context.Context, repo, issueTitle, issueBody, outputDir string, dryRun bool) int {
cli.Print("Querying vector DB for context...\n")
apiURL := os.Getenv("CORE_API_URL")
if apiURL == "" {
apiURL = "http://localhost:8000"
}
apiToken := os.Getenv("CORE_API_TOKEN")
client := lifecycle.NewClient(apiURL, apiToken)
// Query 1: Repo-specific knowledge
repoResult, err := client.Recall(ctx, lifecycle.RecallRequest{
Query: "How does " + repo + " work? Architecture and key interfaces.",
TopK: 10,
Project: repo,
})
if err != nil {
cli.Print(" %s BrainService unavailable: %v\n", cli.WarningStyle.Render("warn"), err)
writeBrainUnavailable(repo, outputDir, dryRun)
return 0
}
repoMemories := repoResult.Memories
repoScores := repoResult.Scores
// Query 2: Issue-specific context
var issueMemories []lifecycle.Memory
var issueScores map[string]float64
if issueTitle != "" {
query := issueTitle
if len(issueBody) > 500 {
query += " " + issueBody[:500]
} else if issueBody != "" {
query += " " + issueBody
}
issueResult, err := client.Recall(ctx, lifecycle.RecallRequest{
Query: query,
TopK: 5,
})
if err == nil {
issueMemories = issueResult.Memories
issueScores = issueResult.Scores
}
}
totalMemories := len(repoMemories) + len(issueMemories)
var b strings.Builder
fmt.Fprintf(&b, "# Agent Context — %s\n\n", repo)
b.WriteString("> Auto-generated by `core workspace prep`. Query the vector DB for more.\n\n")
b.WriteString("## Repo Knowledge\n\n")
if len(repoMemories) > 0 {
for i, mem := range repoMemories {
score := repoScores[mem.ID]
project := mem.Project
if project == "" {
project = "unknown"
}
memType := mem.Type
if memType == "" {
memType = "memory"
}
fmt.Fprintf(&b, "### %d. %s [%s] (score: %.3f)\n\n", i+1, project, memType, score)
fmt.Fprintf(&b, "%s\n\n", mem.Content)
}
} else {
b.WriteString("_No repo-specific memories found. The vector DB may not have been seeded for this repo._\n\n")
}
b.WriteString("## Task-Relevant Context\n\n")
if len(issueMemories) > 0 {
for i, mem := range issueMemories {
score := issueScores[mem.ID]
project := mem.Project
if project == "" {
project = "unknown"
}
memType := mem.Type
if memType == "" {
memType = "memory"
}
fmt.Fprintf(&b, "### %d. %s [%s] (score: %.3f)\n\n", i+1, project, memType, score)
fmt.Fprintf(&b, "%s\n\n", mem.Content)
}
} else if issueTitle != "" {
b.WriteString("_No task-relevant memories found._\n\n")
} else {
b.WriteString("_No issue provided — skipped task-specific recall._\n\n")
}
if dryRun {
cli.Print(" [would write] context.md with %d memories\n", totalMemories)
} else {
_ = coreio.Local.Write(filepath.Join(outputDir, "context.md"), b.String())
cli.Print(" context.md generated with %d memories\n", totalMemories)
}
return totalMemories
}
func writeBrainUnavailable(repo, outputDir string, dryRun bool) {
var b strings.Builder
fmt.Fprintf(&b, "# Agent Context — %s\n\n", repo)
b.WriteString("> Vector DB was unavailable when this workspace was prepared.\n")
b.WriteString("> Run `core workspace prep` again once Ollama/Qdrant are reachable.\n")
if !dryRun {
_ = coreio.Local.Write(filepath.Join(outputDir, "context.md"), b.String())
}
}
// --- Helpers ---
func forgeGet(ctx context.Context, endpoint, token string) (*http.Response, error) {
req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpoint, nil)
if err != nil {
return nil, err
}
req.Header.Set("Authorization", "token "+token)
client := &http.Client{Timeout: 30 * time.Second}
return client.Do(req)
}
var nonAlphanumeric = regexp.MustCompile(`[^a-zA-Z0-9_\-.]`)
func sanitiseFilename(title string) string {
return nonAlphanumeric.ReplaceAllString(title, "-")
}
func extractObjective(body string) string {
if body == "" {
return "_No description provided._"
}
parts := strings.SplitN(body, "\n\n", 2)
first := strings.TrimSpace(parts[0])
if len(first) > 500 {
return first[:497] + "..."
}
return first
}
func extractChecklist(body string) []string {
re := regexp.MustCompile(`- \[[ xX]\] (.+)`)
matches := re.FindAllStringSubmatch(body, -1)
var items []string
for _, m := range matches {
items = append(items, strings.TrimSpace(m[1]))
}
return items
}

View file

@ -21,6 +21,7 @@ func AddWorkspaceCommands(root *cli.Command) {
})
addTaskCommands(wsCmd)
addPrepCommands(wsCmd)
root.AddCommand(wsCmd)
}

View file

@ -30,7 +30,7 @@ class PrepWorkspaceCommand extends Command
{--repo= : Forge repo (e.g. go-ai)}
{--issue= : Issue number to build TODO from}
{--org=core : Forge organisation}
{--output= : Output directory (default: ./workspace)}
{--output= : Output directory (default: ./.core)}
{--specs-path= : Path to specs dir (default: ~/Code/host-uk/specs)}
{--dry-run : Preview without writing files}';
@ -51,7 +51,7 @@ class PrepWorkspaceCommand extends Command
$this->baseUrl = rtrim((string) config('upstream.gitea.url', 'https://forge.lthn.ai'), '/');
$this->token = (string) config('upstream.gitea.token', config('agentic.forge_token', ''));
$this->org = (string) $this->option('org');
$this->outputDir = (string) ($this->option('output') ?? getcwd() . '/workspace');
$this->outputDir = (string) ($this->option('output') ?? getcwd() . '/.core');
$this->dryRun = (bool) $this->option('dry-run');
$repo = $this->option('repo');
@ -298,13 +298,13 @@ class PrepWorkspaceCommand extends Command
$todoContent .= "</details>\n";
if ($this->dryRun) {
$this->line(' [would write] TODO.md from: ' . $title);
$this->line(' [would write] todo.md from: ' . $title);
if (! empty($checklistItems)) {
$this->line(' Checklist items: ' . count($checklistItems));
}
} else {
File::put($this->outputDir . '/TODO.md', $todoContent);
$this->line(' TODO.md generated from: ' . $title);
File::put($this->outputDir . '/todo.md', $todoContent);
$this->line(' todo.md generated from: ' . $title);
}
return [$title, $body];
@ -327,10 +327,10 @@ class PrepWorkspaceCommand extends Command
$content .= "## Implementation Checklist\n\n_To be filled by the agent._\n";
if ($this->dryRun) {
$this->line(' [would write] TODO.md skeleton');
$this->line(' [would write] todo.md skeleton');
} else {
File::put($this->outputDir . '/TODO.md', $content);
$this->line(' TODO.md skeleton generated (no --issue provided)');
File::put($this->outputDir . '/todo.md', $content);
$this->line(' todo.md skeleton generated (no --issue provided)');
}
}
@ -403,10 +403,10 @@ class PrepWorkspaceCommand extends Command
}
if ($this->dryRun) {
$this->line(' [would write] CONTEXT.md with ' . $totalMemories . ' memories');
$this->line(' [would write] context.md with ' . $totalMemories . ' memories');
} else {
File::put($this->outputDir . '/CONTEXT.md', $content);
$this->line(' CONTEXT.md generated with ' . $totalMemories . ' memories');
File::put($this->outputDir . '/context.md', $content);
$this->line(' context.md generated with ' . $totalMemories . ' memories');
}
return $totalMemories;
@ -418,7 +418,7 @@ class PrepWorkspaceCommand extends Command
$content .= "> Run `agentic:prep-workspace` again once Ollama/Qdrant are reachable.\n";
if (! $this->dryRun) {
File::put($this->outputDir . '/CONTEXT.md', $content);
File::put($this->outputDir . '/context.md', $content);
}
return 0;
@ -465,18 +465,6 @@ class PrepWorkspaceCommand extends Command
return $items;
}
/**
* Truncate a string to a maximum length.
*/
private function truncate(string $text, int $length): string
{
if (mb_strlen($text) <= $length) {
return $text;
}
return mb_substr($text, 0, $length - 3) . '...';
}
/**
* Expand ~ to the user's home directory.
*/