dev #15
136 changed files with 14659 additions and 1926 deletions
.agents/skills/deploy/SKILL.md (new file, 7 lines)
@@ -0,0 +1,7 @@
---
name: deploy
description: Deploy to homelab. Build Docker image, transfer, and restart container. Use for lthn.sh deployments.
---

Use the core-agent MCP tools to execute this skill.
Call the appropriate tool: See deployment skill instructions
.agents/skills/dispatch/SKILL.md (new file, 7 lines)
@@ -0,0 +1,7 @@
---
name: dispatch
description: Dispatch a subagent to work on a task in a sandboxed workspace. Use when you need to send work to Gemini, Codex, or Claude agents.
---

Use the core-agent MCP tools to execute this skill.
Call the appropriate tool: agentic_dispatch
.agents/skills/pipeline/SKILL.md (new file, 7 lines)
@@ -0,0 +1,7 @@
---
name: pipeline
description: Run the review-fix-verify pipeline on code changes. Dispatches reviewer, then fixer, then verifier.
---

Use the core-agent MCP tools to execute this skill.
Call the appropriate tool: agentic_dispatch reviewer → wait → agentic_dispatch fixer → wait → verify
.agents/skills/recall/SKILL.md (new file, 7 lines)
@@ -0,0 +1,7 @@
---
name: recall
description: Search OpenBrain for memories and context. Use when you need prior session knowledge or architecture context.
---

Use the core-agent MCP tools to execute this skill.
Call the appropriate tool: brain_recall
.agents/skills/remember/SKILL.md (new file, 7 lines)
@@ -0,0 +1,7 @@
---
name: remember
description: Save a fact or decision to OpenBrain. Use to persist knowledge across sessions.
---

Use the core-agent MCP tools to execute this skill.
Call the appropriate tool: brain_remember
.agents/skills/review/SKILL.md (new file, 7 lines)
@@ -0,0 +1,7 @@
---
name: review
description: Review completed agent workspace. Show output, git diff, and merge options. Use after an agent completes a task.
---

Use the core-agent MCP tools to execute this skill.
Call the appropriate tool: agentic_status + read agent log + git diff
.agents/skills/scan/SKILL.md (new file, 7 lines)
@@ -0,0 +1,7 @@
---
name: scan
description: Scan Forge repos for open issues with actionable labels. Use to find work to dispatch.
---

Use the core-agent MCP tools to execute this skill.
Call the appropriate tool: agentic_scan
.agents/skills/status/SKILL.md (new file, 7 lines)
@@ -0,0 +1,7 @@
---
name: status
description: Show status of all agent workspaces (running, completed, blocked, failed). Use to check pipeline progress.
---

Use the core-agent MCP tools to execute this skill.
Call the appropriate tool: agentic_status
.agents/skills/sweep/SKILL.md (new file, 7 lines)
@@ -0,0 +1,7 @@
---
name: sweep
description: Batch audit across all repos using agent dispatch. Use for ecosystem-wide convention checks.
---

Use the core-agent MCP tools to execute this skill.
Call the appropriate tool: agentic_dispatch in a loop across repos
@@ -44,6 +44,12 @@
      },
      "description": "CI/CD, deployment, issue tracking, and Coolify integration",
      "version": "0.1.0"
    },
    {
      "name": "devops",
      "source": "./claude/devops",
      "description": "Agent workflow utilities — install binaries, merge workspaces, update deps, clean queues",
      "version": "0.1.0"
    }
  ]
}
.codex/agents/fixer.toml (new file, 25 lines)
@@ -0,0 +1,25 @@
# Review Findings Fixer
# Implements fixes from reviewer findings

name = "fixer"
description = "Fix code review findings. Takes a list of findings with file:line references and implements the fixes. Creates EXCEPTIONS.md for items that cannot be fixed."
developer_instructions = """
You are the Review Findings Fixer for the Core ecosystem.

You receive a list of findings from the reviewer agent.
For each finding:
1. Read the file at the specified line
2. Implement the fix following Core conventions
3. If a fix is impossible (e.g. circular import), add to EXCEPTIONS.md with reason

After fixing:
- Run go build ./... to verify
- Run go vet ./... to verify
- Run go test ./... if tests exist

Commit message format: fix(pkg): description of fixes

Do not add features. Do not refactor beyond the finding. Minimal changes only.
"""
model = "gpt-5.4"
sandbox_mode = "workspace-write"
.codex/agents/migrator.toml (new file, 32 lines)
@@ -0,0 +1,32 @@
# Core Primitives Migrator
# Migrates packages from separate deps to Core built-ins

name = "migrator"
description = "Migrate Go packages to use Core primitives instead of separate go-io/go-log/strings/fmt packages. Use when upgrading a package to the new Core API."
developer_instructions = """
You are the Core Primitives Migrator for the Core ecosystem.

Read .core/reference/RFC-025-AGENT-EXPERIENCE.md for the AX spec.
Read .core/reference/*.go for the Core framework API.

Migration pattern:
- coreio.Local.Read(path) → fs.Read(path) returning core.Result
- coreio.Local.Write(path, s) → fs.Write(path, s) returning core.Result
- coreio.Local.List(path) → fs.List(path) returning core.Result
- coreio.Local.EnsureDir(path) → fs.EnsureDir(path) returning core.Result
- coreio.Local.IsFile(path) → fs.IsFile(path) returning bool
- coreio.Local.Delete(path) → fs.Delete(path) returning core.Result
- coreerr.E("op", "msg", err) → core.E("op", "msg", err)
- log.Error/Info/Warn → core.Error/Info/Warn
- strings.Contains → core.Contains
- strings.Split → core.Split
- strings.TrimSpace → core.Trim
- strings.HasPrefix → core.HasPrefix
- fmt.Sprintf → core.Sprintf
- embed.FS → core.Mount() + core.Embed

Add AX usage-example comments to all public types and functions.
Build must pass after migration.
"""
model = "gpt-5.4"
sandbox_mode = "workspace-write"
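The migration table above replaces `(value, error)` pairs with a single result value. As a minimal sketch of the before/after shape, assuming a hypothetical `Result{Value, OK}` type standing in for `core.Result` (the real API lives in `dappco.re/go/core` and may differ):

```go
package main

import (
	"fmt"
	"os"
)

// Result is a stand-in for core.Result — the Value/OK return shape
// the migration targets. Hypothetical; the real type may carry more.
type Result struct {
	Value string
	OK    bool
}

// read mimics the post-migration fs.Read(path): callers branch on
// r.OK instead of checking a separate error value.
func read(path string) Result {
	data, err := os.ReadFile(path)
	if err != nil {
		return Result{OK: false}
	}
	return Result{Value: string(data), OK: true}
}

func main() {
	// Before: data, err := coreio.Local.Read(path); if err != nil { ... }
	// After:  r := fs.Read(path); if !r.OK { ... }
	r := read("/definitely/missing/file")
	fmt.Println(r.OK)
}
```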
.codex/agents/reviewer.toml (new file, 28 lines)
@@ -0,0 +1,28 @@
# AX Convention Reviewer
# Audits code against RFC-025 Agent Experience spec

name = "reviewer"
description = "Audit Go code against AX conventions (RFC-025). Use for code review, convention checking, and quality assessment. Read-only — never modifies code."
developer_instructions = """
You are the AX Convention Reviewer for the Core ecosystem.

Read .core/reference/RFC-025-AGENT-EXPERIENCE.md for the full spec.
Read .core/reference/*.go for the Core framework API.

Audit all Go files against these conventions:
1. Predictable names — no abbreviations (Cfg→Config, Srv→Service)
2. Comments as usage examples — show HOW with real values
3. Result pattern — core.Result not (value, error)
4. Error handling — core.E("op", "msg", err) not fmt.Errorf
5. Core string ops — core.Contains/Split/Trim not strings.*
6. Core logging — core.Error/Info/Warn not log.*
7. Core filesystem — core.Fs{} not os.ReadFile
8. UK English — initialise not initialize
9. Import aliasing — stdlib io as goio
10. Compile-time assertions — var _ Interface = (*Impl)(nil)

Report findings with severity (critical/high/medium/low) and file:line.
Group by package. Do NOT fix — report only.
"""
model = "gpt-5.4"
sandbox_mode = "read-only"
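Convention 10 in the reviewer's checklist is standard Go. A compile-time assertion turns "does this type still implement the interface?" into a build error rather than a runtime surprise; `Store` and `FileStore` here are invented illustrative names:

```go
package main

import "fmt"

// Store is an illustrative interface; any project interface works the same way.
type Store interface {
	Get(key string) (string, bool)
}

type FileStore struct{ data map[string]string }

func (f *FileStore) Get(key string) (string, bool) {
	v, ok := f.data[key]
	return v, ok
}

// Compile-time assertion: if *FileStore ever stops satisfying Store,
// the build fails here instead of at a distant call site. The blank
// assignment of a typed nil costs nothing at runtime.
var _ Store = (*FileStore)(nil)

func main() {
	s := &FileStore{data: map[string]string{"lang": "go"}}
	v, ok := s.Get("lang")
	fmt.Println(v, ok) // go true
}
```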
.codex/config.toml (new file, 69 lines)
@@ -0,0 +1,69 @@
# Core Agent — Codex Configuration
# Shared between CLI and IDE extension

model = "gpt-5.4"
model_reasoning_effort = "high"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
personality = "pragmatic"

# Default to LEM when available
# oss_provider = "ollama"

[profiles.review]
model = "gpt-5.4"
model_reasoning_effort = "extra-high"
approval_policy = "never"
sandbox_mode = "read-only"

[profiles.quick]
model = "gpt-5.4"
model_reasoning_effort = "low"
approval_policy = "never"

[profiles.implement]
model = "gpt-5.4"
model_reasoning_effort = "high"
approval_policy = "never"
sandbox_mode = "workspace-write"

[profiles.lem]
model = "lem-4b"
model_provider = "ollama"
model_reasoning_effort = "high"
approval_policy = "never"
sandbox_mode = "workspace-write"

# Core Agent MCP Server
[mcp_servers.core-agent]
command = "core-agent"
args = ["mcp"]
required = true
startup_timeout_sec = 15
tool_timeout_sec = 120

[mcp_servers.core-agent.env]
FORGE_TOKEN = "${FORGE_TOKEN}"
CORE_BRAIN_KEY = "${CORE_BRAIN_KEY}"
MONITOR_INTERVAL = "15s"

# Local model providers
[model_providers.ollama]
name = "Ollama"
base_url = "http://127.0.0.1:11434/v1"

[model_providers.lmstudio]
name = "LM Studio"
base_url = "http://127.0.0.1:1234/v1"

# Agent configuration
[agents]
max_threads = 4
max_depth = 1
job_max_runtime_seconds = 600

# Features
[features]
multi_agent = true
shell_snapshot = true
undo = true
.codex/rules/core-agent.rules (new file, 67 lines)
@@ -0,0 +1,67 @@
# Core Agent — Codex Rules
# Controls which commands can run outside the sandbox

# Go toolchain — always safe
prefix_rule(
    pattern = ["go", ["build", "test", "vet", "fmt", "mod", "get", "work"]],
    decision = "allow",
    justification = "Go development tools are safe read/build operations",
    match = [["go", "build", "./..."], ["go", "test", "./pkg/agentic"]],
    not_match = [["go", "run", "main.go"]],
)

# Core agent binary
prefix_rule(
    pattern = ["core-agent", ["mcp", "--version"]],
    decision = "allow",
    justification = "Core agent MCP server and version check",
)

# Git read operations
prefix_rule(
    pattern = ["git", ["status", "log", "diff", "branch", "tag", "remote", "fetch", "rev-parse", "ls-remote"]],
    decision = "allow",
    justification = "Read-only git operations are safe",
)

# Git write — prompt for approval
prefix_rule(
    pattern = ["git", ["add", "commit", "merge", "rebase", "stash"]],
    decision = "prompt",
    justification = "Git write operations need human approval",
)

# Git push — forbidden (use PR workflow)
prefix_rule(
    pattern = ["git", "push"],
    decision = "forbidden",
    justification = "Never push directly — use PR workflow via agentic_create_pr",
)

# Git destructive — forbidden
prefix_rule(
    pattern = ["git", ["reset", "clean"], "--force"],
    decision = "forbidden",
    justification = "Destructive git operations are never allowed",
)

# Curl — prompt (network access)
prefix_rule(
    pattern = ["curl"],
    decision = "prompt",
    justification = "Network requests need approval",
)

# SSH — forbidden
prefix_rule(
    pattern = ["ssh"],
    decision = "forbidden",
    justification = "Direct SSH is forbidden — use Ansible via deployment skills",
)

# rm -rf — forbidden
prefix_rule(
    pattern = ["rm", "-rf"],
    decision = "forbidden",
    justification = "Recursive force delete is never allowed",
)
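Each rule above keys off the leading tokens of a command. The core matching idea can be sketched in a few lines of Go; this is an illustration only, not the Codex matcher itself, and it ignores the alternative lists like `["build", "test"]` that the real rules support:

```go
package main

import "fmt"

// matchesPrefix reports whether cmd begins with every token of rule,
// in order — the essence of a prefix_rule pattern. A real matcher
// would also expand alternatives and check match/not_match examples.
func matchesPrefix(rule, cmd []string) bool {
	if len(cmd) < len(rule) {
		return false
	}
	for i, tok := range rule {
		if cmd[i] != tok {
			return false
		}
	}
	return true
}

func main() {
	push := []string{"git", "push"}
	fmt.Println(matchesPrefix(push, []string{"git", "push", "origin", "dev"})) // true → forbidden
	fmt.Println(matchesPrefix(push, []string{"git", "status"}))                // false → other rules apply
}
```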
.gitignore (vendored, 8 lines changed)
@@ -1,8 +1,4 @@
.idea/
.vscode/
*.log
.core/
docker/.env
ui/node_modules
# Compiled binaries
core-agent
mcp
*.exe
AGENTS.md (new file, 76 lines)
@@ -0,0 +1,76 @@
# AGENTS.md — Core Agent

This file provides guidance to Codex when working with code in this repository.

## Project Overview

Core Agent (`dappco.re/go/agent`) is the agent orchestration platform for the Core ecosystem. It provides an MCP server binary (`core-agent`) with tools for dispatching subagents, workspace management, cross-agent messaging, OpenBrain integration, and monitoring.

## Architecture

```
cmd/main.go   — Binary entry point, Core CLI (no cobra)
pkg/agentic/  — Dispatch, workspace prep, status, queue, plans, PRs, epics
pkg/brain/    — OpenBrain knowledge store (direct HTTP + IDE bridge)
pkg/monitor/  — Background monitoring, harvest, sync
pkg/lib/      — Embedded prompts, tasks, flows, personas, workspace templates
pkg/setup/    — Project detection, config generation, scaffolding
```

## Conventions

This project follows the **AX (Agent Experience)** design principles from RFC-025.

### Code Style
- **UK English**: colour, organisation, initialise (never American spellings)
- **Errors**: `core.E("operation", "message", err)` — never `fmt.Errorf`
- **Logging**: `core.Error/Info/Warn/Debug` — never `log.*` or `fmt.Print*`
- **Filesystem**: `core.Fs{}` with `Result` returns — never `os.ReadFile/WriteFile`
- **Strings**: `core.Contains/Split/Trim/HasPrefix/Sprintf` — never `strings.*` or `fmt.Sprintf`
- **Returns**: `core.Result{Value, OK}` — never `(value, error)` pairs
- **Comments**: Usage examples showing HOW with real values, not descriptions
- **Names**: Predictable, unabbreviated (Config not Cfg, Service not Srv)
- **Imports**: stdlib `io` aliased as `goio`
- **Interface checks**: `var _ Interface = (*Impl)(nil)` compile-time assertions

### Build & Test
```bash
go build ./...
go test ./...
go vet ./...
```

### Branch Strategy
- Work on `dev` branch, never push to `main` directly
- PRs required for `main` — Codex review gate
- Commit format: `type(scope): description`
- Co-author: `Co-Authored-By: Virgil <virgil@lethean.io>`

### Dependencies
- Only `dappco.re/go/core` for primitives (fs, errors, logging, strings)
- Domain packages: `process`, `ws`, `mcp` for actual services
- No `go-io`, `go-log`, `cli` — Core provides these natively
- Use `go get -u ./...` for dependency updates, never manual go.mod edits

## MCP Tools

The binary exposes these MCP tools when run as `core-agent mcp`:

| Tool | Purpose |
|------|---------|
| `agentic_dispatch` | Dispatch subagent to sandboxed workspace |
| `agentic_status` | List workspace statuses |
| `agentic_resume` | Resume blocked/failed workspace |
| `agentic_prep_workspace` | Prepare workspace without dispatching |
| `agentic_create_pr` | Create PR from workspace |
| `agentic_list_prs` | List PRs across repos |
| `agentic_create_epic` | Create epic with child issues |
| `agentic_scan` | Scan Forge for actionable issues |
| `agentic_plan_*` | Plan CRUD (create, read, update, delete, list) |
| `brain_recall` | Semantic search OpenBrain |
| `brain_remember` | Store to OpenBrain |
| `brain_forget` | Remove from OpenBrain |
| `agent_send` | Send message to another agent |
| `agent_inbox` | Read inbox messages |
| `metrics_record` | Record metrics event |
| `metrics_query` | Query metrics |
Makefile (66 lines changed)
@@ -1,50 +1,36 @@
# Host UK Developer Workspace
# Run `make setup` to bootstrap your environment

CORE_REPO := github.com/host-uk/core
CORE_VERSION := latest
INSTALL_DIR := $(HOME)/.local/bin
# ── core-agent binary ──────────────────────────────────

.PHONY: all setup install-deps install-go install-core doctor clean help
BINARY_NAME=core-agent
CMD_PATH=./cmd/core-agent
MODULE_PATH=dappco.re/go/agent

all: help
# Default LDFLAGS to empty
LDFLAGS = ""

help:
	@echo "Host UK Developer Workspace"
	@echo ""
	@echo "Usage:"
	@echo "  make setup         Full setup (deps + core + clone repos)"
	@echo "  make install-deps  Install system dependencies (go, gh, etc)"
	@echo "  make install-core  Build and install core CLI"
	@echo "  make doctor        Check environment health"
	@echo "  make clone         Clone all repos into packages/"
	@echo "  make clean         Remove built artifacts"
	@echo ""
	@echo "Quick start:"
	@echo "  make setup"
# If VERSION is set, inject into binary
ifdef VERSION
LDFLAGS = -ldflags "-X '$(MODULE_PATH).version=$(VERSION)'"
endif

setup: install-deps install-core doctor clone
	@echo ""
	@echo "Setup complete! Run 'core health' to verify."
.PHONY: build install agent-dev test coverage

install-deps:
	@echo "Installing dependencies..."
	@./scripts/install-deps.sh
build:
	@echo "Building $(BINARY_NAME)..."
	@go build $(LDFLAGS) -o $(BINARY_NAME) $(CMD_PATH)

install-go:
	@echo "Installing Go..."
	@./scripts/install-go.sh
install:
	@echo "Installing $(BINARY_NAME)..."
	@go install $(LDFLAGS) $(CMD_PATH)

install-core:
	@echo "Installing core CLI..."
	@./scripts/install-core.sh
agent-dev: build
	@./$(BINARY_NAME) version

doctor:
	@core doctor || echo "Run 'make install-core' first if core is not found"
test:
	@echo "Running tests..."
	@go test ./...

clone:
	@core setup || echo "Run 'make install-core' first if core is not found"

clean:
	@rm -rf ./build
	@echo "Cleaned build artifacts"
coverage:
	@echo "Generating coverage report..."
	@go test -coverprofile=coverage.out ./...
	@echo "Coverage: coverage.out"
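The Makefile's `ifdef VERSION` branch works because the Go linker's `-X` flag can overwrite a package-level string variable at link time. A minimal sketch, assuming a variable named `version` in package `main` (the module's actual variable path may differ):

```go
package main

import "fmt"

// version defaults to "dev". Building with
//   go build -ldflags "-X 'main.version=v1.2.3'"
// replaces it at link time — exactly what the Makefile arranges
// when VERSION is set. The variable must be a plain string var.
var version = "dev"

func main() {
	fmt.Println("core-agent", version) // core-agent dev (without -ldflags)
}
```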
@@ -1,7 +1,7 @@
 {
   "name": "core",
   "description": "Core agent platform — dispatch (local + remote), verify+merge, CodeRabbit/Codex review queue, GitHub mirror, cross-agent messaging, OpenBrain integration, inbox notifications",
-  "version": "0.14.0",
+  "version": "0.15.0",
   "author": {
     "name": "Lethean Community",
     "email": "hello@lethean.io"
claude/devops/.claude-plugin/plugin.json (new file, 9 lines)
@@ -0,0 +1,9 @@
{
  "name": "devops",
  "version": "0.2.0",
  "description": "DevOps utilities for the Core ecosystem — build, install, deploy.",
  "author": {
    "name": "Lethean",
    "email": "virgil@lethean.io"
  }
}
claude/devops/agents/agent-task-clean-workspaces.md (new file, 34 lines)
@@ -0,0 +1,34 @@
---
name: agent-task-clean-workspaces
description: Removes completed/failed/blocked agent workspaces. Use when workspaces are piling up, the user asks to "clean workspaces", or before starting a fresh sweep.
tools: Bash
model: haiku
color: green
---

Clean stale agent workspaces using the core-agent CLI.

## Steps

1. List current workspaces:
```bash
core-agent workspace/list
```

2. Clean based on context:
```bash
# Remove all non-running (default)
core-agent workspace/clean all

# Or specific status
core-agent workspace/clean completed
core-agent workspace/clean failed
core-agent workspace/clean blocked
```

3. Report what was removed.

## Rules

- NEVER remove workspaces with status "running"
- Report the count and what was removed
claude/devops/agents/agent-task-health-check.md (new file, 19 lines)
@@ -0,0 +1,19 @@
---
name: agent-task-health-check
description: Runs a health check on the core-agent system. Use proactively at session start or when something seems off with dispatch, workspaces, or MCP tools.
tools: Bash
model: haiku
color: green
---

Quick health check of the core-agent system.

## Steps

```bash
core-agent check
core-agent workspace/list
core-agent version
```

Report the results concisely. Flag anything that looks wrong.
claude/devops/agents/agent-task-install-core-agent.md (new file, 34 lines)
@@ -0,0 +1,34 @@
---
name: agent-task-install-core-agent
description: Builds and installs the core-agent binary. Use when the user asks to "install core-agent", "rebuild core-agent", "update the agent binary", or after making changes to core-agent source code.
tools: Bash
model: haiku
color: green
---

Build and install the core-agent binary from source.

## Steps

1. Install from the core/agent repo directory:

```bash
cd /Users/snider/Code/core/agent && go install ./cmd/core-agent/
```

2. Verify the binary is installed:

```bash
which core-agent
```

3. Report the result. Tell the user to restart core-agent to pick up the new binary.

## Rules

- The entry point is `./cmd/core-agent/main.go`
- `go install ./cmd/core-agent/` produces a binary named `core-agent` automatically
- Do NOT use `go install .`, `go install ./cmd/`, or `go build` with manual `-o` flags
- Do NOT move, copy, or rename binaries
- Do NOT touch `~/go/bin/` or `~/.local/bin/` directly
- If the install fails, report the error — do not attempt alternatives
claude/devops/agents/agent-task-merge-workspace.md (new file, 51 lines)
@@ -0,0 +1,51 @@
---
name: agent-task-merge-workspace
description: Reviews and merges completed agent workspace changes into the source repo. Use when an agent workspace is completed/ready-for-review and changes need to be applied.
tools: Bash, Read
model: sonnet
color: blue
---

Merge a completed agent workspace into the source repo.

## Steps

1. Check workspace status:
```bash
cat /Users/snider/Code/.core/workspace/{name}/status.json
```
Only proceed if status is `completed` or `ready-for-review`.

2. Show the diff:
```bash
git -C /Users/snider/Code/.core/workspace/{name}/repo diff --stat HEAD
git -C /Users/snider/Code/.core/workspace/{name}/repo diff HEAD
```

3. Check for untracked new files (git diff misses these):
```bash
git -C /Users/snider/Code/.core/workspace/{name}/repo ls-files --others --exclude-standard
```

4. Present a summary to the user. Ask for confirmation before applying.

5. Apply changes via patch:
```bash
cd /Users/snider/Code/.core/workspace/{name}/repo && git diff HEAD > /tmp/agent-patch.diff
cd /Users/snider/Code/core/{repo}/ && git apply /tmp/agent-patch.diff
```

6. Copy any new untracked files manually.

7. Verify build:
```bash
cd /Users/snider/Code/core/{repo}/ && go build ./...
```

## Rules

- Always show the diff BEFORE applying
- Always check for untracked files (new files created by agent)
- Always verify the build AFTER applying
- Never commit — the user commits when ready
- If the patch fails, show the conflict and stop
claude/devops/agents/agent-task-repair-core-agent.md (new file, 53 lines)
@@ -0,0 +1,53 @@
---
name: agent-task-repair-core-agent
description: Diagnoses and repairs core-agent when MCP tools fail, dispatch breaks, or the binary is stale. Use when something isn't working with the agent system.
tools: Bash, Read
model: haiku
color: red
---

Diagnose and fix core-agent issues.

## Diagnosis Steps (run in order, stop at first failure)

1. Does it compile?
```bash
cd /Users/snider/Code/core/agent && go build ./cmd/core-agent/
```

2. Health check:
```bash
core-agent check
```

3. Is a stale process running?
```bash
ps aux | grep core-agent | grep -v grep
```

4. Are workspaces clean?
```bash
core-agent workspace/list
```

5. Is agents.yaml readable?
```bash
cat /Users/snider/Code/.core/agents.yaml
```

## Common Fixes

| Symptom | Fix |
|---------|-----|
| MCP tools not found | User needs to restart core-agent |
| Dispatch always queued | Check concurrency in agents.yaml |
| Workspaces not prepping | Check template: `ls pkg/lib/workspace/default/` |
| go.work missing | Rebuild — template was updated |
| Codex can't find core.Env | Core dep too old — needs update-deps |

## Rules

- Do NOT run `go install` — tell the user to do it
- Do NOT kill processes without asking
- Do NOT delete workspaces without asking
- Report what's wrong, suggest the fix, let the user decide
claude/devops/skills/build-prompt/SKILL.md (new file, 20 lines)
@@ -0,0 +1,20 @@
---
name: build-prompt
description: This skill should be used when the user asks to "build prompt", "show prompt", "preview agent prompt", "what would codex see", or needs to preview the prompt that would be sent to a dispatched agent without actually cloning or dispatching.
argument-hint: <repo> [--task="..."] [--persona=...] [--org=core]
allowed-tools: ["Bash"]
---

# Build Agent Prompt

Preview the full prompt that would be sent to a dispatched agent. Shows task, repo info, workflow, brain recall, consumers, git log, and constraints — without cloning or dispatching.

```bash
core-agent prompt <repo> --task="description" [--persona=code/go] [--org=core]
```

Example:
```bash
core-agent prompt go-io --task="AX audit"
core-agent prompt agent --task="Fix monitor package" --persona=code/go
```
claude/devops/skills/issue-comment/SKILL.md (new file, 19 lines)
@@ -0,0 +1,19 @@
---
name: issue-comment
description: This skill should be used when the user asks to "comment on issue", "add comment", "reply to issue", or needs to post a comment on a Forge issue.
argument-hint: <repo> --number=N --body="comment text" [--org=core]
allowed-tools: ["Bash"]
---

# Comment on Forge Issue

Post a comment on a Forge issue.

```bash
core-agent issue/comment <repo> --number=N --body="comment text" [--org=core]
```

Example:
```bash
core-agent issue/comment go --number=16 --body="Fixed in v0.6.0"
```
claude/devops/skills/issue-get/SKILL.md (new file, 20 lines)
@@ -0,0 +1,20 @@
---
name: issue-get
description: This skill should be used when the user asks to "get issue", "show issue", "read issue", "fetch issue", or needs to view a specific Forge issue by number.
argument-hint: <repo> --number=N [--org=core]
allowed-tools: ["Bash"]
---

# Get Forge Issue

Fetch and display a Forge issue by number.

```bash
core-agent issue/get <repo> --number=N [--org=core]
```

Example:
```bash
core-agent issue/get go --number=16
core-agent issue/get agent --number=5 --org=core
```
claude/devops/skills/issue-list/SKILL.md (new file, 20 lines)
@@ -0,0 +1,20 @@
---
name: issue-list
description: This skill should be used when the user asks to "list issues", "show issues", "what issues are open", or needs to see issues for a Forge repo.
argument-hint: <repo> [--org=core]
allowed-tools: ["Bash"]
---

# List Forge Issues

List all issues for a Forge repository.

```bash
core-agent issue/list <repo> [--org=core]
```

Example:
```bash
core-agent issue/list go
core-agent issue/list agent
```
claude/devops/skills/pr-get/SKILL.md (new file, 19 lines)
@@ -0,0 +1,19 @@
---
name: pr-get
description: This skill should be used when the user asks to "get PR", "show PR", "read pull request", "fetch PR", or needs to view a specific Forge pull request by number.
argument-hint: <repo> --number=N [--org=core]
allowed-tools: ["Bash"]
---

# Get Forge Pull Request

Fetch and display a Forge PR by number. Shows state, branch, mergeability.

```bash
core-agent pr/get <repo> --number=N [--org=core]
```

Example:
```bash
core-agent pr/get go --number=22
```
claude/devops/skills/pr-list/SKILL.md (new file, 20 lines)
@@ -0,0 +1,20 @@
---
name: pr-list
description: This skill should be used when the user asks to "list PRs", "show pull requests", "what PRs are open", "pending PRs", or needs to see pull requests for a Forge repo.
argument-hint: <repo> [--org=core]
allowed-tools: ["Bash"]
---

# List Forge Pull Requests

List all pull requests for a Forge repository. Shows state, branches, title.

```bash
core-agent pr/list <repo> [--org=core]
```

Example:
```bash
core-agent pr/list go
core-agent pr/list agent
```
26
claude/devops/skills/pr-merge/SKILL.md
Normal file
@@ -0,0 +1,26 @@
---
name: pr-merge
description: This skill should be used when the user asks to "merge PR", "merge pull request", "accept PR", or needs to merge a Forge PR. Supports merge, rebase, and squash methods.
argument-hint: <repo> --number=N [--method=merge|rebase|squash] [--org=core]
allowed-tools: ["Bash"]
---

# Merge Forge Pull Request

Merge a PR on Forge. Default method is merge.

```bash
core-agent pr/merge <repo> --number=N [--method=merge|rebase|squash] [--org=core]
```

Example:
```bash
core-agent pr/merge go --number=22
core-agent pr/merge go-forge --number=7 --method=squash
```

## Important

- Always confirm with the user before merging
- Check PR status with `pr/get` first if unsure about mergeability
- The merge happens on Forge, not locally
20
claude/devops/skills/repo-get/SKILL.md
Normal file
@@ -0,0 +1,20 @@
---
name: repo-get
description: This skill should be used when the user asks to "get repo info", "show repo", "repo details", or needs to see details about a specific Forge repository including default branch, visibility, and archive status.
argument-hint: <repo> [--org=core]
allowed-tools: ["Bash"]
---

# Get Forge Repository Info

Fetch and display repository details from Forge.

```bash
core-agent repo/get <repo> [--org=core]
```

Example:
```bash
core-agent repo/get go
core-agent repo/get agent
```
20
claude/devops/skills/repo-list/SKILL.md
Normal file
@@ -0,0 +1,20 @@
---
name: repo-list
description: This skill should be used when the user asks to "list repos", "show repos", "what repos exist", "how many repos", or needs to see all repositories in a Forge organisation.
argument-hint: [--org=core]
allowed-tools: ["Bash"]
---

# List Forge Repositories

List all repositories in a Forge organisation.

```bash
core-agent repo/list [--org=core]
```

Example:
```bash
core-agent repo/list
core-agent repo/list --org=lthn
```
55
claude/devops/skills/update-deps/SKILL.md
Normal file
@@ -0,0 +1,55 @@
---
name: update-deps
description: This skill should be used when the user asks to "update deps", "bump core", "update go.mod", "upgrade dependencies", or needs to update dappco.re/go/core or other Go module dependencies in a core ecosystem repo. Uses go get properly — never manual go.mod editing.
argument-hint: [repo-name] [module@version]
allowed-tools: ["Bash"]
---

# Update Go Module Dependencies

Properly update dependencies in a Core ecosystem Go module.

## Steps

1. Determine the repo. If an argument is given, use it. Otherwise use the current working directory.
   ```
   /Users/snider/Code/core/<repo>/
   ```

2. Check current dependency versions:
   ```bash
   grep 'dappco.re' go.mod
   ```

3. Update the dependency using `go get`. Examples:
   ```bash
   # Update core to latest
   GONOSUMDB='dappco.re/*' GONOSUMCHECK='dappco.re/*' GOPROXY=direct go get dappco.re/go/core@latest

   # Update to specific version
   GONOSUMDB='dappco.re/*' GONOSUMCHECK='dappco.re/*' GOPROXY=direct go get dappco.re/go/core@v0.6.0

   # Update all dappco.re deps
   GONOSUMDB='dappco.re/*' GONOSUMCHECK='dappco.re/*' GOPROXY=direct go get -u dappco.re/...
   ```

4. Tidy:
   ```bash
   go mod tidy
   ```

5. Verify:
   ```bash
   go build ./...
   ```

6. Report what changed in go.mod.

## Important

- ALWAYS use `go get` — NEVER manually edit go.mod
- ALWAYS set `GONOSUMDB` and `GONOSUMCHECK` for dappco.re modules
- ALWAYS set `GOPROXY=direct` to bypass proxy cache for private modules
- ALWAYS run `go mod tidy` after updating
- ALWAYS verify with `go build ./...`
- If a version doesn't resolve, check if the tag has been pushed to GitHub (dappco.re vanity imports resolve through GitHub)
24
claude/devops/skills/workspace-clean/SKILL.md
Normal file
@@ -0,0 +1,24 @@
---
name: workspace-clean
description: This skill should be used when the user asks to "clean workspaces", "clean up agents", "remove stale workspaces", "nuke completed", or needs to remove finished/failed/blocked agent workspaces.
argument-hint: [all|completed|failed|blocked]
allowed-tools: ["Bash"]
---

# Clean Agent Workspaces

Remove stale agent workspaces. Never removes running workspaces.

```bash
# Remove all non-running workspaces
core-agent workspace/clean all

# Remove only completed/merged
core-agent workspace/clean completed

# Remove only failed
core-agent workspace/clean failed

# Remove only blocked
core-agent workspace/clean blocked
```
16
claude/devops/skills/workspace-list/SKILL.md
Normal file
@@ -0,0 +1,16 @@
---
name: workspace-list
description: This skill should be used when the user asks to "list workspaces", "show agents", "what's running", "workspace status", "active agents", or wants to see the current state of all agent workspaces.
argument-hint: (no arguments needed)
allowed-tools: ["Bash"]
---

# List Agent Workspaces

Show all agent workspaces with their status, agent type, and repo.

```bash
core-agent workspace/list
```

Output shows: status, agent, repo, workspace name. Statuses: running, completed, failed, blocked, merged, queued.
329
cmd/core-agent/forge.go
Normal file
@@ -0,0 +1,329 @@
// SPDX-License-Identifier: EUPL-1.2

package main

import (
    "context"
    "strconv"

    "dappco.re/go/core"
    "dappco.re/go/core/forge"
    forge_types "dappco.re/go/core/forge/types"
)

// newForgeClient creates a Forge client from env config.
func newForgeClient() *forge.Forge {
    url := core.Env("FORGE_URL")
    if url == "" {
        url = "https://forge.lthn.ai"
    }
    token := core.Env("FORGE_TOKEN")
    if token == "" {
        token = core.Env("GITEA_TOKEN")
    }
    return forge.NewForge(url, token)
}

// parseArgs extracts org and repo from opts. First positional arg is repo, --org flag defaults to "core".
func parseArgs(opts core.Options) (org, repo string, num int64) {
    org = opts.String("org")
    if org == "" {
        org = "core"
    }
    repo = opts.String("_arg")
    if v := opts.String("number"); v != "" {
        num, _ = strconv.ParseInt(v, 10, 64)
    }
    return
}

func fmtIndex(n int64) string { return strconv.FormatInt(n, 10) }

func registerForgeCommands(c *core.Core) {
    ctx := context.Background()

    // --- Issues ---

    c.Command("issue/get", core.Command{
        Description: "Get a Forge issue",
        Action: func(opts core.Options) core.Result {
            org, repo, num := parseArgs(opts)
            if repo == "" || num == 0 {
                core.Print(nil, "usage: core-agent issue get <repo> --number=N [--org=core]")
                return core.Result{OK: false}
            }

            f := newForgeClient()
            issue, err := f.Issues.Get(ctx, forge.Params{"owner": org, "repo": repo, "index": fmtIndex(num)})
            if err != nil {
                core.Print(nil, "error: %v", err)
                return core.Result{Value: err, OK: false}
            }

            core.Print(nil, "#%d %s", issue.Index, issue.Title)
            core.Print(nil, " state: %s", issue.State)
            core.Print(nil, " url: %s", issue.HTMLURL)
            if issue.Body != "" {
                core.Print(nil, "")
                core.Print(nil, "%s", issue.Body)
            }
            return core.Result{OK: true}
        },
    })

    c.Command("issue/list", core.Command{
        Description: "List Forge issues for a repo",
        Action: func(opts core.Options) core.Result {
            org, repo, _ := parseArgs(opts)
            if repo == "" {
                core.Print(nil, "usage: core-agent issue list <repo> [--org=core]")
                return core.Result{OK: false}
            }

            f := newForgeClient()
            issues, err := f.Issues.ListAll(ctx, forge.Params{"owner": org, "repo": repo})
            if err != nil {
                core.Print(nil, "error: %v", err)
                return core.Result{Value: err, OK: false}
            }

            for _, issue := range issues {
                core.Print(nil, " #%-4d %-6s %s", issue.Index, issue.State, issue.Title)
            }
            if len(issues) == 0 {
                core.Print(nil, " no issues")
            }
            return core.Result{OK: true}
        },
    })

    c.Command("issue/comment", core.Command{
        Description: "Comment on a Forge issue",
        Action: func(opts core.Options) core.Result {
            org, repo, num := parseArgs(opts)
            body := opts.String("body")
            if repo == "" || num == 0 || body == "" {
                core.Print(nil, "usage: core-agent issue comment <repo> --number=N --body=\"text\" [--org=core]")
                return core.Result{OK: false}
            }

            f := newForgeClient()
            comment, err := f.Issues.CreateComment(ctx, org, repo, num, body)
            if err != nil {
                core.Print(nil, "error: %v", err)
                return core.Result{Value: err, OK: false}
            }

            core.Print(nil, "comment #%d created on %s/%s#%d", comment.ID, org, repo, num)
            return core.Result{OK: true}
        },
    })

    c.Command("issue/create", core.Command{
        Description: "Create a Forge issue",
        Action: func(opts core.Options) core.Result {
            org, repo, _ := parseArgs(opts)
            title := opts.String("title")
            body := opts.String("body")
            labels := opts.String("labels")
            milestone := opts.String("milestone")
            assignee := opts.String("assignee")
            ref := opts.String("ref")
            if repo == "" || title == "" {
                core.Print(nil, "usage: core-agent issue create <repo> --title=\"...\" [--body=\"...\"] [--labels=\"agentic,bug\"] [--milestone=\"v0.2.0\"] [--assignee=virgil] [--ref=dev] [--org=core]")
                return core.Result{OK: false}
            }

            createOpts := &forge_types.CreateIssueOption{
                Title: title,
                Body:  body,
                Ref:   ref,
            }

            // Resolve milestone name to ID
            if milestone != "" {
                f := newForgeClient()
                milestones, err := f.Milestones.ListAll(ctx, forge.Params{"owner": org, "repo": repo})
                if err == nil {
                    for _, m := range milestones {
                        if m.Title == milestone {
                            createOpts.Milestone = m.ID
                            break
                        }
                    }
                }
            }

            // Set assignee
            if assignee != "" {
                createOpts.Assignees = []string{assignee}
            }

            // Resolve label names to IDs if provided
            if labels != "" {
                f := newForgeClient()
                labelNames := core.Split(labels, ",")
                allLabels, err := f.Labels.ListRepoLabels(ctx, org, repo)
                if err == nil {
                    for _, name := range labelNames {
                        name = core.Trim(name)
                        for _, l := range allLabels {
                            if l.Name == name {
                                createOpts.Labels = append(createOpts.Labels, l.ID)
                                break
                            }
                        }
                    }
                }
            }

            f := newForgeClient()
            issue, err := f.Issues.Create(ctx, forge.Params{"owner": org, "repo": repo}, createOpts)
            if err != nil {
                core.Print(nil, "error: %v", err)
                return core.Result{Value: err, OK: false}
            }

            core.Print(nil, "#%d %s", issue.Index, issue.Title)
            core.Print(nil, " url: %s", issue.HTMLURL)
            return core.Result{Value: issue.Index, OK: true}
        },
    })

    // --- Pull Requests ---

    c.Command("pr/get", core.Command{
        Description: "Get a Forge PR",
        Action: func(opts core.Options) core.Result {
            org, repo, num := parseArgs(opts)
            if repo == "" || num == 0 {
                core.Print(nil, "usage: core-agent pr get <repo> --number=N [--org=core]")
                return core.Result{OK: false}
            }

            f := newForgeClient()
            pr, err := f.Pulls.Get(ctx, forge.Params{"owner": org, "repo": repo, "index": fmtIndex(num)})
            if err != nil {
                core.Print(nil, "error: %v", err)
                return core.Result{Value: err, OK: false}
            }

            core.Print(nil, "#%d %s", pr.Index, pr.Title)
            core.Print(nil, " state: %s", pr.State)
            core.Print(nil, " head: %s", pr.Head.Ref)
            core.Print(nil, " base: %s", pr.Base.Ref)
            core.Print(nil, " mergeable: %v", pr.Mergeable)
            core.Print(nil, " url: %s", pr.HTMLURL)
            if pr.Body != "" {
                core.Print(nil, "")
                core.Print(nil, "%s", pr.Body)
            }
            return core.Result{OK: true}
        },
    })

    c.Command("pr/list", core.Command{
        Description: "List Forge PRs for a repo",
        Action: func(opts core.Options) core.Result {
            org, repo, _ := parseArgs(opts)
            if repo == "" {
                core.Print(nil, "usage: core-agent pr list <repo> [--org=core]")
                return core.Result{OK: false}
            }

            f := newForgeClient()
            prs, err := f.Pulls.ListAll(ctx, forge.Params{"owner": org, "repo": repo})
            if err != nil {
                core.Print(nil, "error: %v", err)
                return core.Result{Value: err, OK: false}
            }

            for _, pr := range prs {
                core.Print(nil, " #%-4d %-6s %s → %s %s", pr.Index, pr.State, pr.Head.Ref, pr.Base.Ref, pr.Title)
            }
            if len(prs) == 0 {
                core.Print(nil, " no PRs")
            }
            return core.Result{OK: true}
        },
    })

    c.Command("pr/merge", core.Command{
        Description: "Merge a Forge PR",
        Action: func(opts core.Options) core.Result {
            org, repo, num := parseArgs(opts)
            method := opts.String("method")
            if method == "" {
                method = "merge"
            }
            if repo == "" || num == 0 {
                core.Print(nil, "usage: core-agent pr merge <repo> --number=N [--method=merge|rebase|squash] [--org=core]")
                return core.Result{OK: false}
            }

            f := newForgeClient()
            if err := f.Pulls.Merge(ctx, org, repo, num, method); err != nil {
                core.Print(nil, "error: %v", err)
                return core.Result{Value: err, OK: false}
            }

            core.Print(nil, "merged %s/%s#%d via %s", org, repo, num, method)
            return core.Result{OK: true}
        },
    })

    // --- Repositories ---

    c.Command("repo/get", core.Command{
        Description: "Get Forge repo info",
        Action: func(opts core.Options) core.Result {
            org, repo, _ := parseArgs(opts)
            if repo == "" {
                core.Print(nil, "usage: core-agent repo get <repo> [--org=core]")
                return core.Result{OK: false}
            }

            f := newForgeClient()
            r, err := f.Repos.Get(ctx, forge.Params{"owner": org, "repo": repo})
            if err != nil {
                core.Print(nil, "error: %v", err)
                return core.Result{Value: err, OK: false}
            }

            core.Print(nil, "%s/%s", r.Owner.UserName, r.Name)
            core.Print(nil, " description: %s", r.Description)
            core.Print(nil, " default: %s", r.DefaultBranch)
            core.Print(nil, " private: %v", r.Private)
            core.Print(nil, " archived: %v", r.Archived)
            core.Print(nil, " url: %s", r.HTMLURL)
            return core.Result{OK: true}
        },
    })

    c.Command("repo/list", core.Command{
        Description: "List Forge repos for an org",
        Action: func(opts core.Options) core.Result {
            org := opts.String("org")
            if org == "" {
                org = "core"
            }

            f := newForgeClient()
            repos, err := f.Repos.ListOrgRepos(ctx, org)
            if err != nil {
                core.Print(nil, "error: %v", err)
                return core.Result{Value: err, OK: false}
            }

            for _, r := range repos {
                archived := ""
                if r.Archived {
                    archived = " (archived)"
                }
                core.Print(nil, " %-30s %s%s", r.Name, r.Description, archived)
            }
            core.Print(nil, "\n %d repos", len(repos))
            return core.Result{OK: true}
        },
    })
}
501
cmd/core-agent/main.go
Normal file
@@ -0,0 +1,501 @@
package main

import (
    "context"
    "os"
    "os/signal"
    "strconv"
    "syscall"

    "dappco.re/go/core"
    "dappco.re/go/core/process"

    "dappco.re/go/agent/pkg/agentic"
    "dappco.re/go/agent/pkg/brain"
    "dappco.re/go/agent/pkg/lib"
    "dappco.re/go/agent/pkg/monitor"
    "forge.lthn.ai/core/mcp/pkg/mcp"
)

func main() {
    c := core.New(core.Options{
        {Key: "name", Value: "core-agent"},
    })
    // Version set at build time: go build -ldflags "-X main.version=0.15.0"
    if version != "" {
        c.App().Version = version
    } else {
        c.App().Version = "dev"
    }

    // version — print version and build info
    c.Command("version", core.Command{
        Description: "Print version and build info",
        Action: func(opts core.Options) core.Result {
            core.Print(nil, "core-agent %s", c.App().Version)
            core.Print(nil, " go: %s", core.Env("GO"))
            core.Print(nil, " os: %s/%s", core.Env("OS"), core.Env("ARCH"))
            core.Print(nil, " home: %s", core.Env("DIR_HOME"))
            core.Print(nil, " hostname: %s", core.Env("HOSTNAME"))
            core.Print(nil, " pid: %s", core.Env("PID"))
            core.Print(nil, " channel: %s", updateChannel())
            return core.Result{OK: true}
        },
    })

    // check — verify workspace, deps, and config are healthy
    c.Command("check", core.Command{
        Description: "Verify workspace, deps, and config",
        Action: func(opts core.Options) core.Result {
            fs := c.Fs()

            core.Print(nil, "core-agent %s health check", c.App().Version)
            core.Print(nil, "")

            // Binary location
            core.Print(nil, " binary: %s", os.Args[0])

            // Agents config
            agentsPath := core.Path("Code", ".core", "agents.yaml")
            if fs.IsFile(agentsPath) {
                core.Print(nil, " agents: %s (ok)", agentsPath)
            } else {
                core.Print(nil, " agents: %s (MISSING)", agentsPath)
            }

            // Workspace dir
            wsRoot := core.Path("Code", ".core", "workspace")
            if fs.IsDir(wsRoot) {
                r := fs.List(wsRoot)
                count := 0
                if r.OK {
                    count = len(r.Value.([]os.DirEntry))
                }
                core.Print(nil, " workspace: %s (%d entries)", wsRoot, count)
            } else {
                core.Print(nil, " workspace: %s (MISSING)", wsRoot)
            }

            // Core dep version
            core.Print(nil, " core: dappco.re/go/core@v%s", c.App().Version)

            // Env keys
            core.Print(nil, " env keys: %d loaded", len(core.EnvKeys()))

            core.Print(nil, "")
            core.Print(nil, "ok")
            return core.Result{OK: true}
        },
    })

    // extract — test workspace template extraction
    c.Command("extract", core.Command{
        Description: "Extract a workspace template to a directory",
        Action: func(opts core.Options) core.Result {
            tmpl := opts.String("_arg")
            if tmpl == "" {
                tmpl = "default"
            }
            target := opts.String("target")
            if target == "" {
                target = core.Path("Code", ".core", "workspace", "test-extract")
            }

            data := &lib.WorkspaceData{
                Repo:   "test-repo",
                Branch: "dev",
                Task:   "test extraction",
                Agent:  "codex",
            }

            core.Print(nil, "extracting template %q to %s", tmpl, target)
            if err := lib.ExtractWorkspace(tmpl, target, data); err != nil {
                return core.Result{Value: err, OK: false}
            }

            // List what was created
            fs := &core.Fs{}
            r := fs.List(target)
            if r.OK {
                for _, e := range r.Value.([]os.DirEntry) {
                    marker := " "
                    if e.IsDir() {
                        marker = "/"
                    }
                    core.Print(nil, " %s%s", e.Name(), marker)
                }
            }

            core.Print(nil, "done")
            return core.Result{OK: true}
        },
    })

    // --- Forge + Workspace CLI commands ---
    registerForgeCommands(c)
    registerWorkspaceCommands(c)
    // registerUpdateCommand(c) — parked until version moves to module root

    // --- CLI commands for feature testing ---

    prep := agentic.NewPrep()

    // prep — test workspace preparation (clone + prompt)
    c.Command("prep", core.Command{
        Description: "Prepare a workspace: clone repo, build prompt",
        Action: func(opts core.Options) core.Result {
            repo := opts.String("_arg")
            if repo == "" {
                core.Print(nil, "usage: core-agent prep <repo> --issue=N|--pr=N|--branch=X --task=\"...\"")
                return core.Result{OK: false}
            }

            input := agentic.PrepInput{
                Repo:     repo,
                Org:      opts.String("org"),
                Task:     opts.String("task"),
                Template: opts.String("template"),
                Persona:  opts.String("persona"),
                DryRun:   opts.Bool("dry-run"),
            }

            // Parse identifier from flags
            if v := opts.String("issue"); v != "" {
                n := 0
                for _, ch := range v {
                    if ch >= '0' && ch <= '9' {
                        n = n*10 + int(ch-'0')
                    }
                }
                input.Issue = n
            }
            if v := opts.String("pr"); v != "" {
                n := 0
                for _, ch := range v {
                    if ch >= '0' && ch <= '9' {
                        n = n*10 + int(ch-'0')
                    }
                }
                input.PR = n
            }
            if v := opts.String("branch"); v != "" {
                input.Branch = v
            }
            if v := opts.String("tag"); v != "" {
                input.Tag = v
            }

            // Default to branch "dev" if no identifier
            if input.Issue == 0 && input.PR == 0 && input.Branch == "" && input.Tag == "" {
                input.Branch = "dev"
            }

            _, out, err := prep.TestPrepWorkspace(context.Background(), input)
            if err != nil {
                core.Print(nil, "error: %v", err)
                return core.Result{Value: err, OK: false}
            }

            core.Print(nil, "workspace: %s", out.WorkspaceDir)
            core.Print(nil, "repo: %s", out.RepoDir)
            core.Print(nil, "branch: %s", out.Branch)
            core.Print(nil, "resumed: %v", out.Resumed)
            core.Print(nil, "memories: %d", out.Memories)
            core.Print(nil, "consumers: %d", out.Consumers)
            if out.Prompt != "" {
                core.Print(nil, "")
                core.Print(nil, "--- prompt (%d chars) ---", len(out.Prompt))
                core.Print(nil, "%s", out.Prompt)
            }
            return core.Result{OK: true}
        },
    })

    // status — list workspace statuses
    c.Command("status", core.Command{
        Description: "List agent workspace statuses",
        Action: func(opts core.Options) core.Result {
            wsRoot := agentic.WorkspaceRoot()
            fsys := c.Fs()
            r := fsys.List(wsRoot)
            if !r.OK {
                core.Print(nil, "no workspaces found at %s", wsRoot)
                return core.Result{OK: true}
            }

            entries := r.Value.([]os.DirEntry)
            if len(entries) == 0 {
                core.Print(nil, "no workspaces")
                return core.Result{OK: true}
            }

            for _, e := range entries {
                if !e.IsDir() {
                    continue
                }
                statusFile := core.JoinPath(wsRoot, e.Name(), "status.json")
                if sr := fsys.Read(statusFile); sr.OK {
                    core.Print(nil, " %s", e.Name())
                }
            }
            return core.Result{OK: true}
        },
    })

    // prompt — build and show an agent prompt without cloning
    c.Command("prompt", core.Command{
        Description: "Build and display an agent prompt for a repo",
        Action: func(opts core.Options) core.Result {
            repo := opts.String("_arg")
            if repo == "" {
                core.Print(nil, "usage: core-agent prompt <repo> --task=\"...\"")
                return core.Result{OK: false}
            }

            org := opts.String("org")
            if org == "" {
                org = "core"
            }
            task := opts.String("task")
            if task == "" {
                task = "Review and report findings"
            }

            repoPath := core.JoinPath(core.Env("DIR_HOME"), "Code", org, repo)

            input := agentic.PrepInput{
                Repo:     repo,
                Org:      org,
                Task:     task,
                Template: opts.String("template"),
                Persona:  opts.String("persona"),
            }

            prompt, memories, consumers := prep.TestBuildPrompt(context.Background(), input, "dev", repoPath)
            core.Print(nil, "memories: %d", memories)
            core.Print(nil, "consumers: %d", consumers)
            core.Print(nil, "")
            core.Print(nil, "%s", prompt)
            return core.Result{OK: true}
        },
    })

    // env — dump all Env keys
    c.Command("env", core.Command{
        Description: "Show all core.Env() keys and values",
        Action: func(opts core.Options) core.Result {
            keys := core.EnvKeys()
            for _, k := range keys {
                core.Print(nil, " %-15s %s", k, core.Env(k))
            }
            return core.Result{OK: true}
        },
    })

    // Shared setup — creates MCP service with all subsystems wired
    initServices := func() (*mcp.Service, *monitor.Subsystem, error) {
        procFactory := process.NewService(process.Options{})
        procResult, err := procFactory(c)
        if err != nil {
            return nil, nil, core.E("main", "init process service", err)
        }
        if procSvc, ok := procResult.(*process.Service); ok {
            _ = process.SetDefault(procSvc)
        }

        mon := monitor.New()
        prep := agentic.NewPrep()
        prep.SetCompletionNotifier(mon)

        mcpSvc, err := mcp.New(mcp.Options{
            Subsystems: []mcp.Subsystem{brain.NewDirect(), prep, mon},
        })
        if err != nil {
            return nil, nil, core.E("main", "create MCP service", err)
        }

        mon.SetNotifier(mcpSvc)
        prep.StartRunner()
        return mcpSvc, mon, nil
    }

    // Signal-aware context for clean shutdown
    ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
    defer cancel()

    // mcp — stdio transport (Claude Code integration)
    c.Command("mcp", core.Command{
        Description: "Start the MCP server on stdio",
        Action: func(opts core.Options) core.Result {
            mcpSvc, mon, err := initServices()
            if err != nil {
                return core.Result{Value: err, OK: false}
            }
            mon.Start(ctx)
            if err := mcpSvc.Run(ctx); err != nil {
                return core.Result{Value: err, OK: false}
            }
            return core.Result{OK: true}
        },
    })

    // serve — persistent HTTP daemon (Charon, CI, cross-agent)
    c.Command("serve", core.Command{
        Description: "Start as a persistent HTTP daemon",
        Action: func(opts core.Options) core.Result {
            mcpSvc, mon, err := initServices()
            if err != nil {
                return core.Result{Value: err, OK: false}
            }

            addr := core.Env("MCP_HTTP_ADDR")
            if addr == "" {
                addr = "0.0.0.0:9101"
            }

            healthAddr := core.Env("HEALTH_ADDR")
            if healthAddr == "" {
                healthAddr = "0.0.0.0:9102"
            }

            pidFile := core.Path(".core", "core-agent.pid")

            daemon := process.NewDaemon(process.DaemonOptions{
                PIDFile:    pidFile,
                HealthAddr: healthAddr,
                Registry:   process.DefaultRegistry(),
                RegistryEntry: process.DaemonEntry{
                    Code:    "core",
                    Daemon:  "agent",
                    Project: "core-agent",
                    Binary:  "core-agent",
                },
            })

            if err := daemon.Start(); err != nil {
                return core.Result{Value: core.E("main", "daemon start", err), OK: false}
            }

            mon.Start(ctx)
            daemon.SetReady(true)
            core.Print(os.Stderr, "core-agent serving on %s (health: %s, pid: %s)", addr, healthAddr, pidFile)

            os.Setenv("MCP_HTTP_ADDR", addr)

            if err := mcpSvc.Run(ctx); err != nil {
                return core.Result{Value: err, OK: false}
            }
            return core.Result{OK: true}
        },
    })

    // run task — single task e2e (prep → spawn → wait → done)
    c.Command("run/task", core.Command{
        Description: "Run a single task end-to-end",
        Action: func(opts core.Options) core.Result {
            repo := opts.String("repo")
            agent := opts.String("agent")
            task := opts.String("task")
            issueStr := opts.String("issue")
            org := opts.String("org")

            if repo == "" || task == "" {
                core.Print(nil, "usage: core-agent run task --repo=<repo> --task=\"...\" --agent=codex [--issue=N] [--org=core]")
                return core.Result{OK: false}
            }
            if agent == "" {
                agent = "codex"
            }
            if org == "" {
                org = "core"
            }

            issue := 0
            if issueStr != "" {
                if n, err := strconv.Atoi(issueStr); err == nil {
                    issue = n
                }
            }

            procFactory := process.NewService(process.Options{})
            procResult, err := procFactory(c)
            if err != nil {
                return core.Result{Value: err, OK: false}
            }
            if procSvc, ok := procResult.(*process.Service); ok {
                _ = process.SetDefault(procSvc)
            }

            prep := agentic.NewPrep()

            core.Print(os.Stderr, "core-agent run task")
            core.Print(os.Stderr, " repo: %s/%s", org, repo)
            core.Print(os.Stderr, " agent: %s", agent)
            if issue > 0 {
                core.Print(os.Stderr, " issue: #%d", issue)
            }
            core.Print(os.Stderr, " task: %s", task)
            core.Print(os.Stderr, "")

            // Dispatch and wait
            result := prep.DispatchSync(ctx, agentic.DispatchSyncInput{
                Org:   org,
                Repo:  repo,
                Agent: agent,
                Task:  task,
                Issue: issue,
            })

            if !result.OK {
                core.Print(os.Stderr, "FAILED: %v", result.Error)
                return core.Result{Value: result.Error, OK: false}
            }

            core.Print(os.Stderr, "DONE: %s", result.Status)
            if result.PRURL != "" {
                core.Print(os.Stderr, " PR: %s", result.PRURL)
            }
            return core.Result{OK: true}
        },
    })

    // run orchestrator — standalone queue runner without MCP stdio
    c.Command("run/orchestrator", core.Command{
        Description: "Run the queue orchestrator (standalone, no MCP)",
        Action: func(opts core.Options) core.Result {
            procFactory := process.NewService(process.Options{})
            procResult, err := procFactory(c)
            if err != nil {
                return core.Result{Value: err, OK: false}
            }
            if procSvc, ok := procResult.(*process.Service); ok {
                _ = process.SetDefault(procSvc)
            }

            mon := monitor.New()
            prep := agentic.NewPrep()
            prep.SetCompletionNotifier(mon)

            mon.Start(ctx)
            prep.StartRunner()

            core.Print(os.Stderr, "core-agent orchestrator running (pid %s)", core.Env("PID"))
            core.Print(os.Stderr, " workspace: %s", agentic.WorkspaceRoot())
            core.Print(os.Stderr, " watching queue, draining on 30s tick + completion poke")

            // Block until signal
            <-ctx.Done()
            core.Print(os.Stderr, "orchestrator shutting down")
            return core.Result{OK: true}
        },
    })

    // Run CLI — resolves os.Args to command path
    r := c.Cli().Run()
    if !r.OK {
        if err, ok := r.Value.(error); ok {
            core.Error(err.Error())
        }
        os.Exit(1)
    }
}
24	cmd/core-agent/update.go	Normal file
@@ -0,0 +1,24 @@
// SPDX-License-Identifier: EUPL-1.2

package main

// version is set at build time via ldflags:
//
//	go build -ldflags "-X 'dappco.re/go/agent.version=0.15.0'" ./cmd/core-agent/
var version string

// updateChannel returns the channel based on the version string.
func updateChannel() string {
	switch {
	case version == "" || version == "dev":
		return "dev"
	case len(version) > 0 && (version[len(version)-1] >= 'a'):
		return "prerelease"
	default:
		return "stable"
	}
}

// TODO: wire go-update UpdateService for self-update command
// Channels: stable → GitHub releases, prerelease → GitHub dev, dev → Forge main
// Parked until version var moves to module root package (dappco.re/go/agent.Version)
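A quick sketch of the channel logic in `updateChannel` above, with the version passed as a parameter so it can be exercised with sample values. Note the last-byte check only flags versions ending in a letter as prerelease; a suffix like `-beta.1` (ending in a digit) would classify as stable.

```go
package main

import "fmt"

// channel mirrors updateChannel above, taking the version as a
// parameter for illustration (the real function reads a package var).
func channel(version string) string {
	switch {
	case version == "" || version == "dev":
		return "dev"
	case len(version) > 0 && (version[len(version)-1] >= 'a'):
		return "prerelease"
	default:
		return "stable"
	}
}

func main() {
	fmt.Println(channel(""))          // dev
	fmt.Println(channel("0.15.0"))    // stable
	fmt.Println(channel("0.16.0-rc")) // prerelease: last byte 'c' >= 'a'
}
```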
163	cmd/core-agent/workspace.go	Normal file
@@ -0,0 +1,163 @@
// SPDX-License-Identifier: EUPL-1.2

package main

import (
	"os"

	"dappco.re/go/core"

	"dappco.re/go/agent/pkg/agentic"
)

func registerWorkspaceCommands(c *core.Core) {

	// workspace/list — show all workspaces with status
	c.Command("workspace/list", core.Command{
		Description: "List all agent workspaces with status",
		Action: func(opts core.Options) core.Result {
			wsRoot := agentic.WorkspaceRoot()
			fsys := c.Fs()

			r := fsys.List(wsRoot)
			if !r.OK {
				core.Print(nil, "no workspaces at %s", wsRoot)
				return core.Result{OK: true}
			}

			entries := r.Value.([]os.DirEntry)
			count := 0
			for _, e := range entries {
				if !e.IsDir() {
					continue
				}
				statusFile := core.JoinPath(wsRoot, e.Name(), "status.json")
				if sr := fsys.Read(statusFile); sr.OK {
					// Quick parse for status field
					content := sr.Value.(string)
					status := extractField(content, "status")
					repo := extractField(content, "repo")
					agent := extractField(content, "agent")
					core.Print(nil, " %-8s %-8s %-10s %s", status, agent, repo, e.Name())
					count++
				}
			}
			if count == 0 {
				core.Print(nil, " no workspaces")
			}
			return core.Result{OK: true}
		},
	})

	// workspace/clean — remove stale workspaces
	c.Command("workspace/clean", core.Command{
		Description: "Remove completed/failed/blocked workspaces",
		Action: func(opts core.Options) core.Result {
			wsRoot := agentic.WorkspaceRoot()
			fsys := c.Fs()
			filter := opts.String("_arg")
			if filter == "" {
				filter = "all"
			}

			r := fsys.List(wsRoot)
			if !r.OK {
				core.Print(nil, "no workspaces")
				return core.Result{OK: true}
			}

			entries := r.Value.([]os.DirEntry)
			var toRemove []string

			for _, e := range entries {
				if !e.IsDir() {
					continue
				}
				statusFile := core.JoinPath(wsRoot, e.Name(), "status.json")
				sr := fsys.Read(statusFile)
				if !sr.OK {
					continue
				}
				status := extractField(sr.Value.(string), "status")

				switch filter {
				case "all":
					if status == "completed" || status == "failed" || status == "blocked" || status == "merged" || status == "ready-for-review" {
						toRemove = append(toRemove, e.Name())
					}
				case "completed":
					if status == "completed" || status == "merged" || status == "ready-for-review" {
						toRemove = append(toRemove, e.Name())
					}
				case "failed":
					if status == "failed" {
						toRemove = append(toRemove, e.Name())
					}
				case "blocked":
					if status == "blocked" {
						toRemove = append(toRemove, e.Name())
					}
				}
			}

			if len(toRemove) == 0 {
				core.Print(nil, "nothing to clean")
				return core.Result{OK: true}
			}

			for _, name := range toRemove {
				path := core.JoinPath(wsRoot, name)
				fsys.DeleteAll(path)
				core.Print(nil, " removed %s", name)
			}
			core.Print(nil, "\n %d workspaces removed", len(toRemove))
			return core.Result{OK: true}
		},
	})

	// workspace/dispatch — dispatch an agent (CLI wrapper for MCP tool)
	c.Command("workspace/dispatch", core.Command{
		Description: "Dispatch an agent to work on a repo task",
		Action: func(opts core.Options) core.Result {
			repo := opts.String("_arg")
			if repo == "" {
				core.Print(nil, "usage: core-agent workspace/dispatch <repo> --task=\"...\" --issue=N|--pr=N|--branch=X [--agent=codex]")
				return core.Result{OK: false}
			}

			core.Print(nil, "dispatch via CLI not yet wired — use MCP agentic_dispatch tool")
			core.Print(nil, "repo: %s, task: %s", repo, opts.String("task"))
			return core.Result{OK: true}
		},
	})
}

// extractField does a quick JSON field extraction without full unmarshal.
// Looks for "field":"value" pattern. Good enough for status.json.
func extractField(jsonStr, field string) string {
	// Match both "field":"value" and "field": "value"
	needle := core.Concat("\"", field, "\"")
	idx := -1
	for i := 0; i <= len(jsonStr)-len(needle); i++ {
		if jsonStr[i:i+len(needle)] == needle {
			idx = i + len(needle)
			break
		}
	}
	if idx < 0 {
		return ""
	}
	// Skip : and whitespace to find opening quote
	for idx < len(jsonStr) && (jsonStr[idx] == ':' || jsonStr[idx] == ' ' || jsonStr[idx] == '\t') {
		idx++
	}
	if idx >= len(jsonStr) || jsonStr[idx] != '"' {
		return ""
	}
	idx++ // skip opening quote
	end := idx
	for end < len(jsonStr) && jsonStr[end] != '"' {
		end++
	}
	return jsonStr[idx:end]
}
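The `extractField` scanner above can be exercised standalone. This sketch substitutes plain string concatenation for `core.Concat` (assumed to behave like `+`); note there is no escape handling, which is fine for the flat status.json it targets but would break on values containing `\"`.

```go
package main

import "fmt"

// extractField mirrors the scanner above: find `"field"`, skip ':' and
// whitespace, then return the quoted value that follows.
func extractField(jsonStr, field string) string {
	needle := "\"" + field + "\""
	idx := -1
	for i := 0; i <= len(jsonStr)-len(needle); i++ {
		if jsonStr[i:i+len(needle)] == needle {
			idx = i + len(needle)
			break
		}
	}
	if idx < 0 {
		return ""
	}
	// Skip : and whitespace to find the opening quote
	for idx < len(jsonStr) && (jsonStr[idx] == ':' || jsonStr[idx] == ' ' || jsonStr[idx] == '\t') {
		idx++
	}
	if idx >= len(jsonStr) || jsonStr[idx] != '"' {
		return ""
	}
	idx++ // skip opening quote
	end := idx
	for end < len(jsonStr) && jsonStr[end] != '"' {
		end++
	}
	return jsonStr[idx:end]
}

func main() {
	doc := `{"status": "completed", "repo":"go-io", "agent": "codex"}`
	fmt.Println(extractField(doc, "status")) // completed
	fmt.Println(extractField(doc, "repo"))   // go-io
	fmt.Println(extractField(doc, "count"))  // empty — field missing
}
```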
126	cmd/main.go
@@ -1,126 +0,0 @@
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"

	"dappco.re/go/agent/pkg/agentic"
	"dappco.re/go/agent/pkg/brain"
	"dappco.re/go/agent/pkg/monitor"
	"forge.lthn.ai/core/cli/pkg/cli"
	"dappco.re/go/core/process"
	"dappco.re/go/core"
	"forge.lthn.ai/core/mcp/pkg/mcp"
)

func main() {
	if err := cli.Init(cli.Options{
		AppName: "core-agent",
		Version: "0.2.0",
	}); err != nil {
		log.Fatal(err)
	}

	// Shared setup for both mcp and serve commands
	initServices := func() (*mcp.Service, *monitor.Subsystem, error) {
		c := core.New(core.Options{
			{Key: "name", Value: "core-agent"},
		})
		procFactory := process.NewService(process.Options{})
		procResult, err := procFactory(c)
		if err != nil {
			return nil, nil, cli.Wrap(err, "init process service")
		}
		if procSvc, ok := procResult.(*process.Service); ok {
			process.SetDefault(procSvc)
		}

		mon := monitor.New()
		prep := agentic.NewPrep()
		prep.SetCompletionNotifier(mon)

		mcpSvc, err := mcp.New(mcp.Options{
			Subsystems: []mcp.Subsystem{brain.NewDirect(), prep, mon},
		})
		if err != nil {
			return nil, nil, cli.Wrap(err, "create MCP service")
		}

		// Wire channel notifications — monitor pushes events into MCP sessions
		mon.SetNotifier(mcpSvc)

		return mcpSvc, mon, nil
	}

	// mcp — stdio transport (Claude Code integration)
	mcpCmd := cli.NewCommand("mcp", "Start the MCP server on stdio", "", func(cmd *cli.Command, args []string) error {
		mcpSvc, mon, err := initServices()
		if err != nil {
			return err
		}
		mon.Start(cmd.Context())
		return mcpSvc.Run(cmd.Context())
	})

	// serve — persistent HTTP daemon (Charon, CI, cross-agent)
	serveCmd := cli.NewCommand("serve", "Start as a persistent HTTP daemon", "", func(cmd *cli.Command, args []string) error {
		mcpSvc, mon, err := initServices()
		if err != nil {
			return err
		}

		// Determine address
		addr := os.Getenv("MCP_HTTP_ADDR")
		if addr == "" {
			addr = "0.0.0.0:9101"
		}

		// Determine health address
		healthAddr := os.Getenv("HEALTH_ADDR")
		if healthAddr == "" {
			healthAddr = "0.0.0.0:9102"
		}

		// Set up daemon with PID file, health check, and registry
		home, _ := os.UserHomeDir()
		pidFile := filepath.Join(home, ".core", "core-agent.pid")

		daemon := process.NewDaemon(process.DaemonOptions{
			PIDFile:    pidFile,
			HealthAddr: healthAddr,
			Registry:   process.DefaultRegistry(),
			RegistryEntry: process.DaemonEntry{
				Code:    "core",
				Daemon:  "agent",
				Project: "core-agent",
				Binary:  "core-agent",
			},
		})

		if err := daemon.Start(); err != nil {
			return cli.Wrap(err, "daemon start")
		}

		// Start monitor
		mon.Start(cmd.Context())

		// Mark ready
		daemon.SetReady(true)
		fmt.Fprintf(os.Stderr, "core-agent serving on %s (health: %s, pid: %s)\n", addr, healthAddr, pidFile)

		// Set env so mcp.Run picks HTTP transport
		os.Setenv("MCP_HTTP_ADDR", addr)

		// Run MCP server (blocks until context cancelled)
		return mcpSvc.Run(cmd.Context())
	})

	cli.RootCmd().AddCommand(mcpCmd)
	cli.RootCmd().AddCommand(serveCmd)

	if err := cli.Execute(); err != nil {
		log.Fatal(err)
	}
}
26	go.mod
@@ -3,14 +3,13 @@ module dappco.re/go/agent
go 1.26.0

require (
	dappco.re/go/core v0.5.0
	dappco.re/go/core/io v0.2.0
	dappco.re/go/core/log v0.1.0
	dappco.re/go/core v0.6.0
	dappco.re/go/core/api v0.2.0
	dappco.re/go/core/process v0.3.0
	dappco.re/go/core/ws v0.3.0
	forge.lthn.ai/core/api v0.1.5
	forge.lthn.ai/core/api v0.1.6
	forge.lthn.ai/core/cli v0.3.7
	forge.lthn.ai/core/mcp v0.4.0
	forge.lthn.ai/core/mcp v0.4.8
	github.com/gin-gonic/gin v1.12.0
	github.com/gorilla/websocket v1.5.3
	github.com/modelcontextprotocol/go-sdk v1.4.1
@@ -18,7 +17,14 @@ require (
	gopkg.in/yaml.v3 v3.0.1
)

require dappco.re/go/core/forge v0.2.0 // indirect

require (
	dappco.re/go/core/i18n v0.2.0
	dappco.re/go/core/io v0.2.0 // indirect
	dappco.re/go/core/log v0.1.0 // indirect
	dappco.re/go/core/scm v0.4.0
	dappco.re/go/core/store v0.2.0
	forge.lthn.ai/core/go v0.3.3 // indirect
	forge.lthn.ai/core/go-ai v0.1.12 // indirect
	forge.lthn.ai/core/go-i18n v0.1.7 // indirect
@@ -27,7 +33,7 @@ require (
	forge.lthn.ai/core/go-log v0.0.4 // indirect
	forge.lthn.ai/core/go-process v0.2.9 // indirect
	forge.lthn.ai/core/go-rag v0.1.11 // indirect
	forge.lthn.ai/core/go-webview v0.1.6 // indirect
	forge.lthn.ai/core/go-webview v0.1.7 // indirect
	forge.lthn.ai/core/go-ws v0.2.5 // indirect
	github.com/99designs/gqlgen v0.17.88 // indirect
	github.com/KyleBanks/depth v1.2.1 // indirect
@@ -36,7 +42,7 @@ require (
	github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
	github.com/bahlo/generic-list-go v0.2.0 // indirect
	github.com/bmatcuk/doublestar/v4 v4.10.0 // indirect
	github.com/buger/jsonparser v1.1.1 // indirect
	github.com/buger/jsonparser v1.1.2 // indirect
	github.com/bytedance/gopkg v0.1.4 // indirect
	github.com/bytedance/sonic v1.15.0 // indirect
	github.com/bytedance/sonic/loader v0.5.0 // indirect
@@ -109,7 +115,7 @@ require (
	github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 // indirect
	github.com/muesli/cancelreader v0.2.2 // indirect
	github.com/muesli/termenv v0.16.0 // indirect
	github.com/ollama/ollama v0.18.1 // indirect
	github.com/ollama/ollama v0.18.2 // indirect
	github.com/pelletier/go-toml/v2 v2.2.4 // indirect
	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
	github.com/qdrant/go-client v1.17.1 // indirect
@@ -150,7 +156,7 @@ require (
	golang.org/x/term v0.41.0 // indirect
	golang.org/x/text v0.35.0 // indirect
	golang.org/x/tools v0.43.0 // indirect
	google.golang.org/genproto/googleapis/rpc v0.0.0-20260316180232-0b37fe3546d5 // indirect
	google.golang.org/grpc v1.79.2 // indirect
	google.golang.org/genproto/googleapis/rpc v0.0.0-20260319201613-d00831a3d3e7 // indirect
	google.golang.org/grpc v1.79.3 // indirect
	google.golang.org/protobuf v1.36.11 // indirect
)
29	go.sum
@@ -1,15 +1,26 @@
dappco.re/go/core v0.5.0 h1:P5DJoaCiK5Q+af5UiTdWqUIW4W4qYKzpgGK50thm21U=
dappco.re/go/core v0.5.0/go.mod h1:f2/tBZ3+3IqDrg2F5F598llv0nmb/4gJVCFzM5geE4A=
dappco.re/go/core v0.6.0 h1:0wmuO/UmCWXxJkxQ6XvVLnqkAuWitbd49PhxjCsplyk=
dappco.re/go/core v0.6.0/go.mod h1:f2/tBZ3+3IqDrg2F5F598llv0nmb/4gJVCFzM5geE4A=
dappco.re/go/core/api v0.2.0 h1:5OcN9nawpp18Jp6dB1OwI2CBfs0Tacb0y0zqxFB6TJ0=
dappco.re/go/core/api v0.2.0/go.mod h1:AtgNAx8lDY+qhVObFdNQOjSUQrHX1BeiDdMuA6RIfzo=
dappco.re/go/core/forge v0.2.0 h1:EBCHaUdzEAbYpDwRTXMmJoSfSrK30IJTOVBPRxxkJTg=
dappco.re/go/core/forge v0.2.0/go.mod h1:XMz9ZNVl9xane9Rg3AEBuVV5UNNBGWbPY9rSKbqYgnM=
dappco.re/go/core/i18n v0.2.0/go.mod h1:9eSVJXr3OpIGWQvDynfhqcp27xnLMwlYLgsByU+p7ok=
dappco.re/go/core/io v0.2.0 h1:zuudgIiTsQQ5ipVt97saWdGLROovbEB/zdVyy9/l+I4=
dappco.re/go/core/io v0.2.0/go.mod h1:1QnQV6X9LNgFKfm8SkOtR9LLaj3bDcsOIeJOOyjbL5E=
dappco.re/go/core/log v0.1.0 h1:pa71Vq2TD2aoEUQWFKwNcaJ3GBY8HbaNGqtE688Unyc=
dappco.re/go/core/log v0.1.0/go.mod h1:Nkqb8gsXhZAO8VLpx7B8i1iAmohhzqA20b9Zr8VUcJs=
dappco.re/go/core/process v0.3.0 h1:BPF9R79+8ZWe34qCIy/sZy+P4HwbaO95js2oPJL7IqM=
dappco.re/go/core/process v0.3.0/go.mod h1:qwx8kt6x+J9gn7fu8lavuess72Ye9jPBODqDZQ9K0as=
dappco.re/go/core/scm v0.4.0/go.mod h1:ufb7si6HBkaT6zC8L67kLm8zzBaD1aQoTn4OsVAM1aI=
dappco.re/go/core/store v0.2.0/go.mod h1:QQGJiruayjna3nywbf0N2gcO502q/oEkPoSpBpSKbLM=
dappco.re/go/core/ws v0.3.0 h1:ZxR8y5pfrWvnCHVN7qExXz7fdP5a063uNqyqE0Ab8pQ=
dappco.re/go/core/ws v0.3.0/go.mod h1:aLyXrJnbCOGL0SW9rC1EHAAIS83w3djO374gHIz4Nic=
forge.lthn.ai/core/api v0.1.5 h1:NwZrcOyBjaiz5/cn0n0tnlMUodi8Or6FHMx59C7Kv2o=
forge.lthn.ai/core/api v0.1.5/go.mod h1:PBnaWyOVXSOGy+0x2XAPUFMYJxQ2CNhppia/D06ZPII=
forge.lthn.ai/core/api v0.1.6 h1:DwJ9s/B5yEAVx497oB6Ja9wlj4qZ6HLvsyZOcN7RivA=
forge.lthn.ai/core/api v0.1.6/go.mod h1:l7EeqKgu3New2kAeg65We8KJoVlzkO0P3bK7tQNniXg=
forge.lthn.ai/core/cli v0.3.7 h1:1GrbaGg0wDGHr6+klSbbGyN/9sSbHvFbdySJznymhwg=
forge.lthn.ai/core/cli v0.3.7/go.mod h1:DBUppJkA9P45ZFGgI2B8VXw1rAZxamHoI/KG7fRvTNs=
forge.lthn.ai/core/go v0.3.3 h1:kYYZ2nRYy0/Be3cyuLJspRjLqTMxpckVyhb/7Sw2gd0=
@@ -30,10 +41,20 @@ forge.lthn.ai/core/go-rag v0.1.11 h1:KXTOtnOdrx8YKmvnj0EOi2EI/+cKjE8w2PpJCQIrSd8
forge.lthn.ai/core/go-rag v0.1.11/go.mod h1:vIlOKVD1SdqqjkJ2XQyXPuKPtiajz/STPLCaDpqOzk8=
forge.lthn.ai/core/go-webview v0.1.6 h1:szXQxRJf2bOZJKh3v1P01B1Vf9mgXaBCXzh0EZu9aoc=
forge.lthn.ai/core/go-webview v0.1.6/go.mod h1:5n1tECD1wBV/uFZRY9ZjfPFO5TYZrlaR3mQFwvO2nek=
forge.lthn.ai/core/go-webview v0.1.7 h1:9+aEHeAvNcPX8Zwr+UGu0/T+menRm5T1YOmqZ9dawDc=
forge.lthn.ai/core/go-webview v0.1.7/go.mod h1:5n1tECD1wBV/uFZRY9ZjfPFO5TYZrlaR3mQFwvO2nek=
forge.lthn.ai/core/go-ws v0.2.5 h1:ZIV7Yrv01R/xpJUogA5vrfP9yB9li1w7EV3eZFMt8h0=
forge.lthn.ai/core/go-ws v0.2.5/go.mod h1:C3riJyLLcV6QhLvYlq3P/XkGTsN598qQeGBoLdoHBU4=
forge.lthn.ai/core/mcp v0.4.0 h1:t4HMTI6CpoGB/VmE1aTklSEM8EI4Z/uKWyjGHxa1f4M=
forge.lthn.ai/core/mcp v0.4.0/go.mod h1:eU35WT/8Mc0oJDVWdKaXEtNp27+Hc8KvnTKPf4DAqXE=
forge.lthn.ai/core/mcp v0.4.4 h1:VTCOA1Dj/L7S8JCRg9BfYw7KfowW/Vvrp39bxc0dYyw=
forge.lthn.ai/core/mcp v0.4.4/go.mod h1:eU35WT/8Mc0oJDVWdKaXEtNp27+Hc8KvnTKPf4DAqXE=
forge.lthn.ai/core/mcp v0.4.6 h1:jZY72sfPiCppKU4YyX7Gwy7ynbgVzUto+3S6oAj5Qs4=
forge.lthn.ai/core/mcp v0.4.6/go.mod h1:eU35WT/8Mc0oJDVWdKaXEtNp27+Hc8KvnTKPf4DAqXE=
forge.lthn.ai/core/mcp v0.4.7 h1:Iy/83laUpkaH8W2EoDlVMJbyv60xJ4aMgQe6sOcwL7k=
forge.lthn.ai/core/mcp v0.4.7/go.mod h1:eU35WT/8Mc0oJDVWdKaXEtNp27+Hc8KvnTKPf4DAqXE=
forge.lthn.ai/core/mcp v0.4.8 h1:nd1x3AL8AkUfl0kziltoJUX96Nx1BeFWEbgHmfrkKz8=
forge.lthn.ai/core/mcp v0.4.8/go.mod h1:eU35WT/8Mc0oJDVWdKaXEtNp27+Hc8KvnTKPf4DAqXE=
github.com/99designs/gqlgen v0.17.88 h1:neMQDgehMwT1vYIOx/w5ZYPUU/iMNAJzRO44I5Intoc=
github.com/99designs/gqlgen v0.17.88/go.mod h1:qeqYFEgOeSKqWedOjogPizimp2iu4E23bdPvl4jTYic=
github.com/KyleBanks/depth v1.2.1 h1:5h8fQADFrWtarTdtDudMmGsC7GPbOAu6RVB3ffsVFHc=
@@ -63,6 +84,8 @@ github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
github.com/buger/jsonparser v1.1.1 h1:2PnMjfWD7wBILjqQbt530v576A/cAbQvEW9gGIpYMUs=
github.com/buger/jsonparser v1.1.1/go.mod h1:6RYKKt7H4d4+iWqouImQ9R2FZql3VbhNgx27UK13J/0=
github.com/buger/jsonparser v1.1.2 h1:frqHqw7otoVbk5M8LlE/L7HTnIq2v9RX6EJ48i9AxJk=
github.com/buger/jsonparser v1.1.2/go.mod h1:6RYKKt7H4d4+iWqouImQ9R2FZql3VbhNgx27UK13J/0=
github.com/bytedance/gopkg v0.1.4 h1:oZnQwnX82KAIWb7033bEwtxvTqXcYMxDBaQxo5JJHWM=
github.com/bytedance/gopkg v0.1.4/go.mod h1:v1zWfPm21Fb+OsyXN2VAHdL6TBb2L88anLQgdyje6R4=
github.com/bytedance/sonic v1.15.0 h1:/PXeWFaR5ElNcVE84U0dOHjiMHQOwNIx3K4ymzh/uSE=
@@ -247,6 +270,8 @@ github.com/muesli/termenv v0.16.0 h1:S5AlUN9dENB57rsbnkPyfdGuWIlkmzJjbFf0Tf5FWUc
github.com/muesli/termenv v0.16.0/go.mod h1:ZRfOIKPFDYQoDFF4Olj7/QJbW60Ol/kL1pU3VfY/Cnk=
github.com/ollama/ollama v0.18.1 h1:7K6anW64C2keASpToYfuOa00LuP8aCmofLKcT2c1mlY=
github.com/ollama/ollama v0.18.1/go.mod h1:tCX4IMV8DHjl3zY0THxuEkpWDZSOchJpzTuLACpMwFw=
github.com/ollama/ollama v0.18.2 h1:RsOY8oZ6TufRiPgsSlKJp4/V/X+oBREscUlEHZfd554=
github.com/ollama/ollama v0.18.2/go.mod h1:tCX4IMV8DHjl3zY0THxuEkpWDZSOchJpzTuLACpMwFw=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
@@ -396,8 +421,12 @@ gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260316180232-0b37fe3546d5 h1:aJmi6DVGGIStN9Mobk/tZOOQUBbj0BPjZjjnOdoZKts=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260316180232-0b37fe3546d5/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260319201613-d00831a3d3e7 h1:ndE4FoJqsIceKP2oYSnUZqhTdYufCYYkqwtFzfrhI7w=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260319201613-d00831a3d3e7/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
google.golang.org/grpc v1.79.2 h1:fRMD94s2tITpyJGtBBn7MkMseNpOZU8ZxgC3MMBaXRU=
google.golang.org/grpc v1.79.2/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=
google.golang.org/grpc v1.79.3 h1:sybAEdRIEtvcD68Gx7dmnwjZKlyfuc61Dyo9pGXXkKE=
google.golang.org/grpc v1.79.3/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
BIN	main	Binary file not shown.
@@ -4,11 +4,10 @@ package agentic

import (
	"context"
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
	"time"

	core "dappco.re/go/core"
)

// autoCreatePR pushes the agent's branch and creates a PR on Forge
@@ -19,21 +18,19 @@ func (s *PrepSubsystem) autoCreatePR(wsDir string) {
		return
	}

	srcDir := filepath.Join(wsDir, "src")
	repoDir := core.JoinPath(wsDir, "repo")

	// Detect default branch for this repo
	base := gitDefaultBranch(srcDir)
	// PRs target dev — agents never merge directly to main
	base := "dev"

	// Check if there are commits on the branch beyond the default branch
	diffCmd := exec.Command("git", "log", "--oneline", "origin/"+base+"..HEAD")
	diffCmd.Dir = srcDir
	diffCmd.Dir = repoDir
	out, err := diffCmd.Output()
	if err != nil || len(strings.TrimSpace(string(out))) == 0 {
		// No commits — nothing to PR
	if err != nil || len(core.Trim(string(out))) == 0 {
		return
	}

	commitCount := len(strings.Split(strings.TrimSpace(string(out)), "\n"))
	commitCount := len(core.Split(core.Trim(string(out)), "\n"))

	// Get the repo's forge remote URL to extract org/repo
	org := st.Org
@@ -42,20 +39,20 @@ func (s *PrepSubsystem) autoCreatePR(wsDir string) {
	}

	// Push the branch to forge
	forgeRemote := fmt.Sprintf("ssh://git@forge.lthn.ai:2223/%s/%s.git", org, st.Repo)
	forgeRemote := core.Sprintf("ssh://git@forge.lthn.ai:2223/%s/%s.git", org, st.Repo)
	pushCmd := exec.Command("git", "push", forgeRemote, st.Branch)
	pushCmd.Dir = srcDir
	pushCmd.Dir = repoDir
	if pushErr := pushCmd.Run(); pushErr != nil {
		// Push failed — update status with error but don't block
		if st2, err := readStatus(wsDir); err == nil {
			st2.Question = fmt.Sprintf("PR push failed: %v", pushErr)
			st2.Question = core.Sprintf("PR push failed: %v", pushErr)
			writeStatus(wsDir, st2)
		}
		return
	}

	// Create PR via Forge API
	title := fmt.Sprintf("[agent/%s] %s", st.Agent, truncate(st.Task, 60))
	title := core.Sprintf("[agent/%s] %s", st.Agent, truncate(st.Task, 60))
	body := s.buildAutoPRBody(st, commitCount)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
@@ -64,7 +61,7 @@ func (s *PrepSubsystem) autoCreatePR(wsDir string) {
	prURL, _, err := s.forgeCreatePR(ctx, org, st.Repo, st.Branch, base, title, body)
	if err != nil {
		if st2, err := readStatus(wsDir); err == nil {
			st2.Question = fmt.Sprintf("PR creation failed: %v", err)
			st2.Question = core.Sprintf("PR creation failed: %v", err)
			writeStatus(wsDir, st2)
		}
		return
@@ -78,13 +75,16 @@ func (s *PrepSubsystem) autoCreatePR(wsDir string) {
}

func (s *PrepSubsystem) buildAutoPRBody(st *WorkspaceStatus, commits int) string {
	var b strings.Builder
	b := core.NewBuilder()
	b.WriteString("## Task\n\n")
	b.WriteString(st.Task)
	b.WriteString("\n\n")
	b.WriteString(fmt.Sprintf("**Agent:** %s\n", st.Agent))
	b.WriteString(fmt.Sprintf("**Commits:** %d\n", commits))
	b.WriteString(fmt.Sprintf("**Branch:** `%s`\n", st.Branch))
	if st.Issue > 0 {
		b.WriteString(core.Sprintf("Closes #%d\n\n", st.Issue))
	}
	b.WriteString(core.Sprintf("**Agent:** %s\n", st.Agent))
	b.WriteString(core.Sprintf("**Commits:** %d\n", commits))
	b.WriteString(core.Sprintf("**Branch:** `%s`\n", st.Branch))
	b.WriteString("\n---\n")
	b.WriteString("Auto-created by core-agent dispatch system.\n")
	b.WriteString("Co-Authored-By: Virgil <virgil@lethean.io>\n")
|
|
@ -4,34 +4,37 @@ package agentic
|
|||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"os/exec"
|
||||
"syscall"
|
||||
"time"
|
||||
|
||||
coreio "dappco.re/go/core/io"
|
||||
coreerr "dappco.re/go/core/log"
|
||||
core "dappco.re/go/core"
|
||||
"dappco.re/go/core/process"
|
||||
"github.com/modelcontextprotocol/go-sdk/mcp"
|
||||
)
|
||||
|
||||
// DispatchInput is the input for agentic_dispatch.
|
||||
//
|
||||
// input := agentic.DispatchInput{Repo: "go-io", Task: "Fix the failing tests", Agent: "codex", Issue: 15}
|
||||
type DispatchInput struct {
|
||||
Repo string `json:"repo"` // Target repo (e.g. "go-io")
|
||||
Org string `json:"org,omitempty"` // Forge org (default "core")
|
||||
Task string `json:"task"` // What the agent should do
|
||||
Agent string `json:"agent,omitempty"` // "gemini" (default), "codex", "claude"
|
||||
Agent string `json:"agent,omitempty"` // "codex" (default), "claude", "gemini"
|
||||
Template string `json:"template,omitempty"` // "conventions", "security", "coding" (default)
|
||||
PlanTemplate string `json:"plan_template,omitempty"` // Plan template: bug-fix, code-review, new-feature, refactor, feature-port
|
||||
PlanTemplate string `json:"plan_template,omitempty"` // Plan template slug
|
||||
Variables map[string]string `json:"variables,omitempty"` // Template variable substitution
|
||||
Persona string `json:"persona,omitempty"` // Persona: engineering/backend-architect, testing/api-tester, etc.
|
||||
Issue int `json:"issue,omitempty"` // Forge issue to work from
|
||||
Persona string `json:"persona,omitempty"` // Persona slug
|
||||
Issue int `json:"issue,omitempty"` // Forge issue number → workspace: task-{num}/
|
||||
PR int `json:"pr,omitempty"` // PR number → workspace: pr-{num}/
|
||||
Branch string `json:"branch,omitempty"` // Branch → workspace: {branch}/
|
||||
Tag string `json:"tag,omitempty"` // Tag → workspace: {tag}/ (immutable)
|
||||
DryRun bool `json:"dry_run,omitempty"` // Preview without executing
|
||||
}
|
||||
|
||||
// DispatchOutput is the output for agentic_dispatch.
|
||||
//
|
||||
// out := agentic.DispatchOutput{Success: true, Agent: "codex", Repo: "go-io", WorkspaceDir: ".core/workspace/core/go-io/task-15"}
|
||||
type DispatchOutput struct {
|
||||
Success bool `json:"success"`
|
||||
Agent string `json:"agent"`
|
||||
|
|
@ -50,9 +53,9 @@ func (s *PrepSubsystem) registerDispatchTool(server *mcp.Server) {
|
|||
}
|
||||
|
||||
// agentCommand returns the command and args for a given agent type.
|
||||
// Supports model variants: "gemini", "gemini:flash", "gemini:pro", "claude", "claude:haiku".
|
||||
// Supports model variants: "gemini", "gemini:flash", "codex", "claude", "claude:haiku".
|
||||
func agentCommand(agent, prompt string) (string, []string, error) {
|
||||
parts := strings.SplitN(agent, ":", 2)
|
||||
parts := core.SplitN(agent, ":", 2)
|
||||
base := parts[0]
|
||||
model := ""
|
||||
if len(parts) > 1 {
|
||||
|
|
@ -68,21 +71,33 @@ func agentCommand(agent, prompt string) (string, []string, error) {
|
|||
return "gemini", args, nil
|
||||
case "codex":
|
||||
if model == "review" {
|
||||
// Codex review mode — non-interactive code review
|
||||
// Note: --base and prompt are mutually exclusive in codex CLI
|
||||
return "codex", []string{"review", "--base", "HEAD~1"}, nil
|
||||
// Use exec with bypass — codex review subcommand has its own sandbox that blocks shell
|
||||
// No -o flag — stdout captured by process output, ../.meta path unreliable in sandbox
|
||||
return "codex", []string{
|
||||
"exec",
|
||||
"--dangerously-bypass-approvals-and-sandbox",
|
||||
"Review the last 2 commits via git diff HEAD~2. Check for bugs, security issues, missing tests, naming issues. Report pass/fail with specifics. Do NOT make changes.",
|
||||
}, nil
|
||||
}
|
||||
// Codex agent mode — autonomous coding
|
||||
return "codex", []string{"exec", "--full-auto", prompt}, nil
|
||||
// Container IS the sandbox — let codex run unrestricted inside it
|
||||
args := []string{
|
||||
"exec",
|
||||
"--dangerously-bypass-approvals-and-sandbox",
|
||||
"-o", "../.meta/agent-codex.log",
|
||||
}
|
||||
if model != "" {
|
||||
args = append(args, "--model", model)
|
||||
}
|
||||
args = append(args, prompt)
|
||||
return "codex", args, nil
|
||||
case "claude":
|
||||
args := []string{
|
||||
"-p", prompt,
|
||||
"--output-format", "text",
|
||||
"--dangerously-skip-permissions",
|
||||
"--no-session-persistence",
|
||||
"--append-system-prompt", "SANDBOX: You are restricted to the current directory (src/) only. " +
|
||||
"Do NOT use absolute paths starting with /. Do NOT cd .. or navigate outside. " +
|
||||
"Do NOT edit files outside this repository. Reject any request that would escape the sandbox.",
|
||||
"--append-system-prompt", "SANDBOX: You are restricted to the current directory only. " +
|
||||
"Do NOT use absolute paths. Do NOT navigate outside this repository.",
|
||||
}
|
||||
if model != "" {
|
||||
args = append(args, "--model", model)
|
||||
|
|
@ -91,61 +106,147 @@ func agentCommand(agent, prompt string) (string, []string, error) {
|
|||
    case "coderabbit":
        args := []string{"review", "--plain", "--base", "HEAD~1"}
        if model != "" {
            // model variant can specify review type: all, committed, uncommitted
            args = append(args, "--type", model)
        }
        if prompt != "" {
            // Pass CLAUDE.md or other config as additional instructions
            args = append(args, "--config", "CLAUDE.md")
        }
        return "coderabbit", args, nil
    case "local":
        home, _ := os.UserHomeDir()
        script := filepath.Join(home, "Code", "core", "agent", "scripts", "local-agent.sh")
        return "bash", []string{script, prompt}, nil
        // Local model via codex --oss → Ollama. Default model: devstral-24b
        // socat proxies localhost:11434 → host.docker.internal:11434
        // because codex hardcodes localhost check for Ollama.
        localModel := model
        if localModel == "" {
            localModel = "devstral-24b"
        }
        script := core.Sprintf(
            `socat TCP-LISTEN:11434,fork,reuseaddr TCP:host.docker.internal:11434 & sleep 0.5 && codex exec --dangerously-bypass-approvals-and-sandbox --oss --local-provider ollama -m %s -o ../.meta/agent-codex.log %q`,
            localModel, prompt,
        )
        return "sh", []string{"-c", script}, nil
    default:
        return "", nil, coreerr.E("agentCommand", "unknown agent: "+agent, nil)
        return "", nil, core.E("agentCommand", "unknown agent: "+agent, nil)
    }
}

// spawnAgent launches an agent process via go-process and returns the PID.
// Output is captured via pipes and written to the log file on completion.
// The background goroutine handles status updates, findings ingestion, and queue drain.
// defaultDockerImage is the container image for agent dispatch.
// Override via AGENT_DOCKER_IMAGE env var.
const defaultDockerImage = "core-dev"

// containerCommand wraps an agent command to run inside a Docker container.
// All agents run containerised — no bare metal execution.
// agentType is the base agent name (e.g. "local", "codex", "claude").
//
// For CodeRabbit agents, no process is spawned — instead the code is pushed
// to GitHub and a PR is created/marked ready for review.
func (s *PrepSubsystem) spawnAgent(agent, prompt, wsDir, srcDir string) (int, string, error) {
// cmd, args := containerCommand("local", "codex", []string{"exec", "..."}, repoDir, metaDir)
func containerCommand(agentType, command string, args []string, repoDir, metaDir string) (string, []string) {
    image := core.Env("AGENT_DOCKER_IMAGE")
    if image == "" {
        image = defaultDockerImage
    }

    home := core.Env("DIR_HOME")

    dockerArgs := []string{
        "run", "--rm",
        // Host access for Ollama (local models)
        "--add-host=host.docker.internal:host-gateway",
        // Workspace: repo + meta
        "-v", repoDir + ":/workspace",
        "-v", metaDir + ":/workspace/.meta",
        "-w", "/workspace",
        // Auth: agent configs only — NO SSH keys, git push runs on host
        "-v", core.JoinPath(home, ".codex") + ":/root/.codex:ro",
        // API keys — passed by name, Docker resolves from host env
        "-e", "OPENAI_API_KEY",
        "-e", "ANTHROPIC_API_KEY",
        "-e", "GEMINI_API_KEY",
        "-e", "GOOGLE_API_KEY",
        // Agent environment
        "-e", "TERM=dumb",
        "-e", "NO_COLOR=1",
        "-e", "CI=true",
        "-e", "GIT_USER_NAME=Virgil",
        "-e", "GIT_USER_EMAIL=virgil@lethean.io",
        // Local model access — Ollama on host
        "-e", "OLLAMA_HOST=http://host.docker.internal:11434",
    }

    // Mount Claude config if dispatching claude agent
    if command == "claude" {
        dockerArgs = append(dockerArgs,
            "-v", core.JoinPath(home, ".claude")+":/root/.claude:ro",
        )
    }

    // Mount Gemini config if dispatching gemini agent
    if command == "gemini" {
        dockerArgs = append(dockerArgs,
            "-v", core.JoinPath(home, ".gemini")+":/root/.gemini:ro",
        )
    }

    dockerArgs = append(dockerArgs, image, command)
    dockerArgs = append(dockerArgs, args...)

    return "docker", dockerArgs
}

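The function above ultimately composes a `docker run` argv: flags first, then the image, then the agent command and its arguments. A minimal standalone sketch of just that ordering (the `dockerArgv` helper is hypothetical; the real function also mounts agent configs, forwards API keys, and wires up Ollama):

```go
package main

import "fmt"

// dockerArgv sketches the containerCommand composition: mount the repo at
// /workspace, make it the working directory, then append image, command,
// and the agent's own arguments, in that order.
func dockerArgv(image, repoDir, command string, args []string) []string {
	argv := []string{
		"run", "--rm",
		"-v", repoDir + ":/workspace",
		"-w", "/workspace",
	}
	argv = append(argv, image, command)
	return append(argv, args...)
}

func main() {
	fmt.Println(dockerArgv("core-dev", "/tmp/ws/repo", "codex", []string{"exec", "fix the build"}))
}
```

Order matters here: everything after the image name is interpreted inside the container, so the image must be appended before the command and its args.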
// spawnAgent launches an agent inside a Docker container.
// The repo/ directory is mounted at /workspace, agent runs sandboxed.
// Output is captured and written to .meta/agent-{agent}.log on completion.
func (s *PrepSubsystem) spawnAgent(agent, prompt, wsDir string) (int, string, error) {
    command, args, err := agentCommand(agent, prompt)
    if err != nil {
        return 0, "", err
    }

    outputFile := filepath.Join(wsDir, fmt.Sprintf("agent-%s.log", agent))
    repoDir := core.JoinPath(wsDir, "repo")
    metaDir := core.JoinPath(wsDir, ".meta")
    // Use base agent name for log file — colon in variants breaks paths
    agentBase := core.SplitN(agent, ":", 2)[0]
    outputFile := core.JoinPath(metaDir, core.Sprintf("agent-%s.log", agentBase))

    // Clean up stale BLOCKED.md from previous runs so it doesn't
    // prevent this run from completing
    os.Remove(filepath.Join(srcDir, "BLOCKED.md"))
    // Clean up stale BLOCKED.md from previous runs
    fs.Delete(core.JoinPath(repoDir, "BLOCKED.md"))

    // All agents run containerised
    command, args = containerCommand(agentBase, command, args, repoDir, metaDir)

    proc, err := process.StartWithOptions(context.Background(), process.RunOptions{
        Command: command,
        Args:    args,
        Dir:     srcDir,
        Env:     []string{"TERM=dumb", "NO_COLOR=1", "CI=true", "GOWORK=off"},
        Dir:     repoDir,
        Detach:  true,
    })
    if err != nil {
        return 0, "", coreerr.E("dispatch.spawnAgent", "failed to spawn "+agent, err)
        return 0, "", core.E("dispatch.spawnAgent", "failed to spawn "+agent, err)
    }

    // Close stdin immediately — agents use -p mode, not interactive stdin.
    // Without this, Claude CLI blocks waiting on the open pipe.
    proc.CloseStdin()

    pid := proc.Info().PID

    // Notify monitor directly — no filesystem polling
    if s.onComplete != nil {
        st, _ := readStatus(wsDir)
        repo := ""
        if st != nil {
            repo = st.Repo
        }
        s.onComplete.AgentStarted(agent, repo, core.PathBase(wsDir))
    }
    emitStartEvent(agent, core.PathBase(wsDir)) // audit log

    // Start Forge stopwatch on the issue (time tracking)
    if st, _ := readStatus(wsDir); st != nil && st.Issue > 0 {
        org := st.Org
        if org == "" {
            org = "core"
        }
        s.forge.Issues.StartStopwatch(context.Background(), org, st.Repo, int64(st.Issue))
    }

    go func() {
        // Wait for process exit. go-process handles timeout and kill group.
        // PID polling fallback in case pipes hang from inherited child processes.
        ticker := time.NewTicker(5 * time.Second)
        defer ticker.Stop()
        for {

@@ -160,82 +261,174 @@ func (s *PrepSubsystem) spawnAgent(agent, prompt, wsDir, srcDir string) (int, st
        }
    done:

        // Write captured output to log file
        if output := proc.Output(); output != "" {
            coreio.Local.Write(outputFile, output)
            fs.Write(outputFile, output)
        }

        // Determine final status: check exit code, BLOCKED.md, and output
        finalStatus := "completed"
        exitCode := proc.Info().ExitCode
        procStatus := proc.Info().Status
        question := ""

        blockedPath := filepath.Join(wsDir, "src", "BLOCKED.md")
        if blockedContent, err := coreio.Local.Read(blockedPath); err == nil && strings.TrimSpace(blockedContent) != "" {
        blockedPath := core.JoinPath(repoDir, "BLOCKED.md")
        if r := fs.Read(blockedPath); r.OK && core.Trim(r.Value.(string)) != "" {
            finalStatus = "blocked"
            question = strings.TrimSpace(blockedContent)
            question = core.Trim(r.Value.(string))
        } else if exitCode != 0 || procStatus == "failed" || procStatus == "killed" {
            finalStatus = "failed"
            if exitCode != 0 {
                question = fmt.Sprintf("Agent exited with code %d", exitCode)
                question = core.Sprintf("Agent exited with code %d", exitCode)
            }
        }

        if st, err := readStatus(wsDir); err == nil {
        if st, stErr := readStatus(wsDir); stErr == nil {
            st.Status = finalStatus
            st.PID = 0
            st.Question = question
            writeStatus(wsDir, st)
        }

        // Emit completion event with actual status
        emitCompletionEvent(agent, filepath.Base(wsDir), finalStatus)
        emitCompletionEvent(agent, core.PathBase(wsDir), finalStatus) // audit log

        // Notify monitor immediately (push to connected clients)
        // Rate-limit detection: if agent failed fast (<60s), track consecutive failures
        pool := baseAgent(agent)
        if finalStatus == "failed" {
            if st, _ := readStatus(wsDir); st != nil {
                elapsed := time.Since(st.StartedAt)
                if elapsed < 60*time.Second {
                    s.failCount[pool]++
                    if s.failCount[pool] >= 3 {
                        s.backoff[pool] = time.Now().Add(30 * time.Minute)
                        core.Print(nil, "rate-limit detected for %s — pausing pool for 30 minutes", pool)
                    }
                } else {
                    s.failCount[pool] = 0 // slow failure = real failure, reset count
                }
            }
        } else {
            s.failCount[pool] = 0 // success resets count
        }

        // Stop Forge stopwatch on the issue (time tracking)
        if st, _ := readStatus(wsDir); st != nil && st.Issue > 0 {
            org := st.Org
            if org == "" {
                org = "core"
            }
            s.forge.Issues.StopStopwatch(context.Background(), org, st.Repo, int64(st.Issue))
        }

        // Push notification directly — no filesystem polling
        if s.onComplete != nil {
            s.onComplete.Poke()
            stNow, _ := readStatus(wsDir)
            repoName := ""
            if stNow != nil {
                repoName = stNow.Repo
            }
            s.onComplete.AgentCompleted(agent, repoName, core.PathBase(wsDir), finalStatus)
        }

        // Auto-create PR if agent completed successfully, then verify and merge
        if finalStatus == "completed" {
            s.autoCreatePR(wsDir)
            s.autoVerifyAndMerge(wsDir)
            // Run QA before PR — if QA fails, mark as failed, don't PR
            if !s.runQA(wsDir) {
                finalStatus = "failed"
                question = "QA check failed — build or tests did not pass"
                if st, stErr := readStatus(wsDir); stErr == nil {
                    st.Status = finalStatus
                    st.Question = question
                    writeStatus(wsDir, st)
                }
            } else {
                s.autoCreatePR(wsDir)
                s.autoVerifyAndMerge(wsDir)
            }
        }

        // Ingest scan findings as issues
        s.ingestFindings(wsDir)

        // Drain queue
        s.drainQueue()
        s.Poke()
    }()

    return pid, outputFile, nil
}
// runQA runs build + test checks on the repo after agent completion.
// Returns true if QA passes, false if build or tests fail.
func (s *PrepSubsystem) runQA(wsDir string) bool {
    repoDir := core.JoinPath(wsDir, "repo")

    // Detect language and run appropriate checks
    if fs.IsFile(core.JoinPath(repoDir, "go.mod")) {
        // Go: build + vet + test
        for _, args := range [][]string{
            {"go", "build", "./..."},
            {"go", "vet", "./..."},
            {"go", "test", "./...", "-count=1", "-timeout", "120s"},
        } {
            cmd := exec.Command(args[0], args[1:]...)
            cmd.Dir = repoDir
            if err := cmd.Run(); err != nil {
                core.Warn("QA failed", "cmd", core.Join(" ", args...), "err", err)
                return false
            }
        }
        return true
    }

    if fs.IsFile(core.JoinPath(repoDir, "composer.json")) {
        // PHP: composer install + test
        install := exec.Command("composer", "install", "--no-interaction")
        install.Dir = repoDir
        if err := install.Run(); err != nil {
            return false
        }
        test := exec.Command("composer", "test")
        test.Dir = repoDir
        return test.Run() == nil
    }

    if fs.IsFile(core.JoinPath(repoDir, "package.json")) {
        // Node: npm install + test
        install := exec.Command("npm", "install")
        install.Dir = repoDir
        if err := install.Run(); err != nil {
            return false
        }
        test := exec.Command("npm", "test")
        test.Dir = repoDir
        return test.Run() == nil
    }

    // Unknown language — pass QA (no checks to run)
    return true
}

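The marker-file detection in `runQA` amounts to a table from manifest file to check commands, with unknown stacks passing vacuously. A sketch of just that mapping (the `qaCommands` helper is hypothetical; the command lists match the ones above):

```go
package main

import "fmt"

// qaCommands maps a detected marker file to the check commands the QA
// step would run; unknown stacks get no checks, so QA passes vacuously.
func qaCommands(marker string) [][]string {
	switch marker {
	case "go.mod":
		return [][]string{
			{"go", "build", "./..."},
			{"go", "vet", "./..."},
			{"go", "test", "./...", "-count=1", "-timeout", "120s"},
		}
	case "composer.json":
		return [][]string{
			{"composer", "install", "--no-interaction"},
			{"composer", "test"},
		}
	case "package.json":
		return [][]string{
			{"npm", "install"},
			{"npm", "test"},
		}
	}
	return nil
}

func main() {
	fmt.Println(len(qaCommands("go.mod")))
}
```

The runner then executes each command in order inside the repo directory and fails the workspace on the first non-zero exit.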
func (s *PrepSubsystem) dispatch(ctx context.Context, req *mcp.CallToolRequest, input DispatchInput) (*mcp.CallToolResult, DispatchOutput, error) {
    if input.Repo == "" {
        return nil, DispatchOutput{}, coreerr.E("dispatch", "repo is required", nil)
        return nil, DispatchOutput{}, core.E("dispatch", "repo is required", nil)
    }
    if input.Task == "" {
        return nil, DispatchOutput{}, coreerr.E("dispatch", "task is required", nil)
        return nil, DispatchOutput{}, core.E("dispatch", "task is required", nil)
    }
    if input.Org == "" {
        input.Org = "core"
    }
    if input.Agent == "" {
        input.Agent = "gemini"
        input.Agent = "codex"
    }
    if input.Template == "" {
        input.Template = "coding"
    }

    // Step 1: Prep the sandboxed workspace
    // Step 1: Prep workspace — clone + build prompt
    prepInput := PrepInput{
        Repo:         input.Repo,
        Org:          input.Org,
        Issue:        input.Issue,
        PR:           input.PR,
        Branch:       input.Branch,
        Tag:          input.Tag,
        Task:         input.Task,
        Agent:        input.Agent,
        Template:     input.Template,
        PlanTemplate: input.PlanTemplate,
        Variables:    input.Variables,

@@ -243,30 +436,24 @@ func (s *PrepSubsystem) dispatch(ctx context.Context, req *mcp.CallToolRequest,
    }
    _, prepOut, err := s.prepWorkspace(ctx, req, prepInput)
    if err != nil {
        return nil, DispatchOutput{}, coreerr.E("dispatch", "prep workspace failed", err)
        return nil, DispatchOutput{}, core.E("dispatch", "prep workspace failed", err)
    }

    wsDir := prepOut.WorkspaceDir
    srcDir := filepath.Join(wsDir, "src")

    // The prompt is just: read PROMPT.md and do the work
    prompt := "Read PROMPT.md for instructions. All context files (CLAUDE.md, TODO.md, CONTEXT.md, CONSUMERS.md, RECENT.md) are in the current directory. Work in this directory."
    prompt := prepOut.Prompt

    if input.DryRun {
        // Read PROMPT.md for the dry run output
        promptContent, _ := coreio.Local.Read(filepath.Join(srcDir, "PROMPT.md"))
        return nil, DispatchOutput{
            Success:      true,
            Agent:        input.Agent,
            Repo:         input.Repo,
            WorkspaceDir: wsDir,
            Prompt:       promptContent,
            Prompt:       prompt,
        }, nil
    }

    // Step 2: Check per-agent concurrency limit
    if !s.canDispatchAgent(input.Agent) {
        // Queue the workspace — write status as "queued" and return
        writeStatus(wsDir, &WorkspaceStatus{
            Status: "queued",
            Agent:  input.Agent,

@@ -286,8 +473,8 @@ func (s *PrepSubsystem) dispatch(ctx context.Context, req *mcp.CallToolRequest,
        }, nil
    }

    // Step 3: Spawn agent via go-process (pipes for output capture)
    pid, outputFile, err := s.spawnAgent(input.Agent, prompt, wsDir, srcDir)
    // Step 3: Spawn agent in repo/ directory
    pid, outputFile, err := s.spawnAgent(input.Agent, prompt, wsDir)
    if err != nil {
        return nil, DispatchOutput{}, err
    }

97
pkg/agentic/dispatch_sync.go
Normal file
@@ -0,0 +1,97 @@
// SPDX-License-Identifier: EUPL-1.2

package agentic

import (
    "context"
    "syscall"
    "time"

    core "dappco.re/go/core"
)

// DispatchSyncInput is the input for a synchronous (blocking) task run.
//
// input := agentic.DispatchSyncInput{Repo: "go-crypt", Agent: "codex:gpt-5.3-codex-spark", Task: "fix it", Issue: 7}
type DispatchSyncInput struct {
    Org   string
    Repo  string
    Agent string
    Task  string
    Issue int
}

// DispatchSyncResult is the output of a synchronous task run.
//
// if result.OK { fmt.Println("done:", result.Status) }
type DispatchSyncResult struct {
    OK     bool
    Status string
    Error  string
    PRURL  string
}

// DispatchSync preps a workspace, spawns the agent directly (no queue, no concurrency check),
// and blocks until the agent completes.
//
// result := prep.DispatchSync(ctx, input)
func (s *PrepSubsystem) DispatchSync(ctx context.Context, input DispatchSyncInput) DispatchSyncResult {
    // Prep workspace
    prepInput := PrepInput{
        Org:   input.Org,
        Repo:  input.Repo,
        Task:  input.Task,
        Agent: input.Agent,
        Issue: input.Issue,
    }

    prepCtx, cancel := context.WithTimeout(ctx, 5*time.Minute)
    defer cancel()

    _, prepOut, err := s.prepWorkspace(prepCtx, nil, prepInput)
    if err != nil {
        return DispatchSyncResult{Error: err.Error()}
    }
    if !prepOut.Success {
        return DispatchSyncResult{Error: "prep failed"}
    }

    wsDir := prepOut.WorkspaceDir
    prompt := prepOut.Prompt

    core.Print(nil, "  workspace: %s", wsDir)
    core.Print(nil, "  branch: %s", prepOut.Branch)

    // Spawn agent directly — no queue, no concurrency check
    pid, _, err := s.spawnAgent(input.Agent, prompt, wsDir)
    if err != nil {
        return DispatchSyncResult{Error: err.Error()}
    }

    core.Print(nil, "  pid: %d", pid)
    core.Print(nil, "  waiting for completion...")

    // Poll for process exit
    ticker := time.NewTicker(3 * time.Second)
    defer ticker.Stop()

    for {
        select {
        case <-ctx.Done():
            return DispatchSyncResult{Error: "cancelled"}
        case <-ticker.C:
            if pid > 0 && syscall.Kill(pid, 0) != nil {
                // Process exited — read final status
                st, err := readStatus(wsDir)
                if err != nil {
                    return DispatchSyncResult{Error: "can't read final status"}
                }
                return DispatchSyncResult{
                    OK:     st.Status == "completed",
                    Status: st.Status,
                    PRURL:  st.PRURL,
                }
            }
        }
    }
}
@@ -6,39 +6,43 @@ import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "net/http"
    "strings"

    coreerr "dappco.re/go/core/log"
    core "dappco.re/go/core"
    "github.com/modelcontextprotocol/go-sdk/mcp"
)

// --- agentic_create_epic ---

// EpicInput is the input for agentic_create_epic.
//
// input := agentic.EpicInput{Repo: "go-scm", Title: "Port agentic plans", Tasks: []string{"Read PHP flow", "Implement Go MCP tools"}}
type EpicInput struct {
    Repo string `json:"repo"` // Target repo (e.g. "go-scm")
    Org string `json:"org,omitempty"` // Forge org (default "core")
    Title string `json:"title"` // Epic title
    Body string `json:"body,omitempty"` // Epic description (above checklist)
    Tasks []string `json:"tasks"` // Sub-task titles (become child issues)
    Labels []string `json:"labels,omitempty"` // Labels for epic + children (e.g. ["agentic"])
    Dispatch bool `json:"dispatch,omitempty"` // Auto-dispatch agents to each child
    Agent string `json:"agent,omitempty"` // Agent type for dispatch (default "claude")
    Template string `json:"template,omitempty"` // Prompt template for dispatch (default "coding")
    Org      string   `json:"org,omitempty"`      // Forge org (default "core")
    Title    string   `json:"title"`              // Epic title
    Body     string   `json:"body,omitempty"`     // Epic description (above checklist)
    Tasks    []string `json:"tasks"`              // Sub-task titles (become child issues)
    Labels   []string `json:"labels,omitempty"`   // Labels for epic + children (e.g. ["agentic"])
    Dispatch bool     `json:"dispatch,omitempty"` // Auto-dispatch agents to each child
    Agent    string   `json:"agent,omitempty"`    // Agent type for dispatch (default "claude")
    Template string   `json:"template,omitempty"` // Prompt template for dispatch (default "coding")
}

// EpicOutput is the output for agentic_create_epic.
//
// out := agentic.EpicOutput{Success: true, EpicNumber: 42, EpicURL: "https://forge.example/core/go-scm/issues/42"}
type EpicOutput struct {
    Success bool `json:"success"`
    EpicNumber int `json:"epic_number"`
    EpicURL string `json:"epic_url"`
    Children []ChildRef `json:"children"`
    Dispatched int `json:"dispatched,omitempty"`
    Success    bool       `json:"success"`
    EpicNumber int        `json:"epic_number"`
    EpicURL    string     `json:"epic_url"`
    Children   []ChildRef `json:"children"`
    Dispatched int        `json:"dispatched,omitempty"`
}

// ChildRef references a child issue.
//
// child := agentic.ChildRef{Number: 43, Title: "Implement plan list", URL: "https://forge.example/core/go-scm/issues/43"}
type ChildRef struct {
    Number int `json:"number"`
    Title string `json:"title"`

@@ -54,13 +58,13 @@ func (s *PrepSubsystem) registerEpicTool(server *mcp.Server) {

func (s *PrepSubsystem) createEpic(ctx context.Context, req *mcp.CallToolRequest, input EpicInput) (*mcp.CallToolResult, EpicOutput, error) {
    if input.Title == "" {
        return nil, EpicOutput{}, coreerr.E("createEpic", "title is required", nil)
        return nil, EpicOutput{}, core.E("createEpic", "title is required", nil)
    }
    if len(input.Tasks) == 0 {
        return nil, EpicOutput{}, coreerr.E("createEpic", "at least one task is required", nil)
        return nil, EpicOutput{}, core.E("createEpic", "at least one task is required", nil)
    }
    if s.forgeToken == "" {
        return nil, EpicOutput{}, coreerr.E("createEpic", "no Forge token configured", nil)
        return nil, EpicOutput{}, core.E("createEpic", "no Forge token configured", nil)
    }
    if input.Org == "" {
        input.Org = "core"
@@ -99,21 +103,21 @@ func (s *PrepSubsystem) createEpic(ctx context.Context, req *mcp.CallToolRequest
    }

    // Step 2: Build epic body with checklist
    var body strings.Builder
    body := core.NewBuilder()
    if input.Body != "" {
        body.WriteString(input.Body)
        body.WriteString("\n\n")
    }
    body.WriteString("## Tasks\n\n")
    for _, child := range children {
        body.WriteString(fmt.Sprintf("- [ ] #%d %s\n", child.Number, child.Title))
        body.WriteString(core.Sprintf("- [ ] #%d %s\n", child.Number, child.Title))
    }

    // Step 3: Create epic issue
    epicLabels := append(labelIDs, s.resolveLabelIDs(ctx, input.Org, input.Repo, []string{"epic"})...)
    epic, err := s.createIssue(ctx, input.Org, input.Repo, input.Title, body.String(), epicLabels)
    if err != nil {
        return nil, EpicOutput{}, coreerr.E("createEpic", "failed to create epic", err)
        return nil, EpicOutput{}, core.E("createEpic", "failed to create epic", err)
    }

    out := EpicOutput{

@@ -156,19 +160,19 @@ func (s *PrepSubsystem) createIssue(ctx context.Context, org, repo, title, body
    }

    data, _ := json.Marshal(payload)
    url := fmt.Sprintf("%s/api/v1/repos/%s/%s/issues", s.forgeURL, org, repo)
    url := core.Sprintf("%s/api/v1/repos/%s/%s/issues", s.forgeURL, org, repo)
    req, _ := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(data))
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization", "token "+s.forgeToken)

    resp, err := s.client.Do(req)
    if err != nil {
        return ChildRef{}, coreerr.E("createIssue", "create issue request failed", err)
        return ChildRef{}, core.E("createIssue", "create issue request failed", err)
    }
    defer resp.Body.Close()

    if resp.StatusCode != 201 {
        return ChildRef{}, coreerr.E("createIssue", fmt.Sprintf("create issue returned %d", resp.StatusCode), nil)
        return ChildRef{}, core.E("createIssue", core.Sprintf("create issue returned %d", resp.StatusCode), nil)
    }

    var result struct {

@@ -191,7 +195,7 @@ func (s *PrepSubsystem) resolveLabelIDs(ctx context.Context, org, repo string, n
    }

    // Fetch existing labels
    url := fmt.Sprintf("%s/api/v1/repos/%s/%s/labels?limit=50", s.forgeURL, org, repo)
    url := core.Sprintf("%s/api/v1/repos/%s/%s/labels?limit=50", s.forgeURL, org, repo)
    req, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
    req.Header.Set("Authorization", "token "+s.forgeToken)

@@ -250,7 +254,7 @@ func (s *PrepSubsystem) createLabel(ctx context.Context, org, repo, name string)
        "color": colour,
    })

    url := fmt.Sprintf("%s/api/v1/repos/%s/%s/labels", s.forgeURL, org, repo)
    url := core.Sprintf("%s/api/v1/repos/%s/%s/labels", s.forgeURL, org, repo)
    req, _ := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(payload))
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization", "token "+s.forgeToken)

@@ -4,13 +4,16 @@ package agentic

import (
    "encoding/json"
    "os"
    "path/filepath"
    "io"
    "time"

    core "dappco.re/go/core"
)

// CompletionEvent is emitted when a dispatched agent finishes.
// Written to ~/.core/workspace/events.jsonl as append-only log.
//
// event := agentic.CompletionEvent{Type: "agent_completed", Agent: "codex", Workspace: "go-io-123", Status: "completed"}
type CompletionEvent struct {
    Type  string `json:"type"`
    Agent string `json:"agent"`

@@ -19,14 +22,12 @@ type CompletionEvent struct {
    Timestamp string `json:"timestamp"`
}

// emitCompletionEvent appends a completion event to the events log.
// The plugin's hook watches this file to notify the orchestrating agent.
// Status should be the actual terminal state: completed, failed, or blocked.
func emitCompletionEvent(agent, workspace, status string) {
    eventsFile := filepath.Join(WorkspaceRoot(), "events.jsonl")
// emitEvent appends an event to the events log.
func emitEvent(eventType, agent, workspace, status string) {
    eventsFile := core.JoinPath(WorkspaceRoot(), "events.jsonl")

    event := CompletionEvent{
        Type:      "agent_completed",
        Type:      eventType,
        Agent:     agent,
        Workspace: workspace,
        Status:    status,

@@ -39,10 +40,21 @@ func emitCompletionEvent(agent, workspace, status string) {
    }

    // Append to events log
    f, err := os.OpenFile(eventsFile, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
    r := fs.Append(eventsFile)
    if !r.OK {
        return
    }
    defer f.Close()
    f.Write(append(data, '\n'))
    wc := r.Value.(io.WriteCloser)
    defer wc.Close()
    wc.Write(append(data, '\n'))
}

// emitStartEvent logs that an agent has been spawned.
func emitStartEvent(agent, workspace string) {
    emitEvent("agent_started", agent, workspace, "running")
}

// emitCompletionEvent logs that an agent has finished.
func emitCompletionEvent(agent, workspace, status string) {
    emitEvent("agent_completed", agent, workspace, status)
}

@@ -5,13 +5,9 @@ package agentic
import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
    "os"
    "path/filepath"
    "strings"

    coreio "dappco.re/go/core/io"
    core "dappco.re/go/core"
)

// ingestFindings reads the agent output log and creates issues via the API

@@ -23,20 +19,20 @@ func (s *PrepSubsystem) ingestFindings(wsDir string) {
    }

    // Read the log file
    logFiles, _ := filepath.Glob(filepath.Join(wsDir, "agent-*.log"))
    logFiles := core.PathGlob(core.JoinPath(wsDir, "agent-*.log"))
    if len(logFiles) == 0 {
        return
    }

    contentStr, err := coreio.Local.Read(logFiles[0])
    if err != nil || len(contentStr) < 100 {
    r := fs.Read(logFiles[0])
    if !r.OK || len(r.Value.(string)) < 100 {
        return
    }

    body := contentStr
    body := r.Value.(string)

    // Skip quota errors
    if strings.Contains(body, "QUOTA_EXHAUSTED") || strings.Contains(body, "QuotaError") {
    if core.Contains(body, "QUOTA_EXHAUSTED") || core.Contains(body, "QuotaError") {
        return
    }

@@ -49,13 +45,13 @@ func (s *PrepSubsystem) ingestFindings(wsDir string) {
    // Determine issue type from the template used
    issueType := "task"
    priority := "normal"
    if strings.Contains(body, "security") || strings.Contains(body, "Security") {
    if core.Contains(body, "security") || core.Contains(body, "Security") {
        issueType = "bug"
        priority = "high"
    }

    // Create a single issue per repo with all findings in the body
    title := fmt.Sprintf("Scan findings for %s (%d items)", st.Repo, findings)
    title := core.Sprintf("Scan findings for %s (%d items)", st.Repo, findings)

    // Truncate body to reasonable size for issue description
    description := body

@@ -78,7 +74,7 @@ func countFileRefs(body string) int {
    }
    if j < len(body) && body[j] == '`' {
        ref := body[i+1 : j]
        if strings.Contains(ref, ".go:") || strings.Contains(ref, ".php:") {
        if core.Contains(ref, ".go:") || core.Contains(ref, ".php:") {
            count++
        }
    }

@@ -94,12 +90,11 @@ func (s *PrepSubsystem) createIssueViaAPI(repo, title, description, issueType, p
    }

    // Read the agent API key from file
    home, _ := os.UserHomeDir()
    apiKeyStr, err := coreio.Local.Read(filepath.Join(home, ".claude", "agent-api.key"))
    if err != nil {
    r := fs.Read(core.JoinPath(core.Env("DIR_HOME"), ".claude", "agent-api.key"))
    if !r.OK {
        return
    }
    apiKey := strings.TrimSpace(apiKeyStr)
    apiKey := core.Trim(r.Value.(string))

    payload, _ := json.Marshal(map[string]string{
        "title": title,

@@ -4,26 +4,28 @@ package agentic

import (
    "context"
    "fmt"
    "encoding/json"
    "os"
    "os/exec"
    "path/filepath"
    "strings"

    coreerr "dappco.re/go/core/log"
    core "dappco.re/go/core"
    "github.com/modelcontextprotocol/go-sdk/mcp"
)

// --- agentic_mirror tool ---

// MirrorInput is the input for agentic_mirror.
//
// input := agentic.MirrorInput{Repo: "go-io", DryRun: true, MaxFiles: 50}
type MirrorInput struct {
    Repo string `json:"repo,omitempty"` // Specific repo, or empty for all
    DryRun bool `json:"dry_run,omitempty"` // Preview without pushing
    Repo     string `json:"repo,omitempty"`      // Specific repo, or empty for all
    DryRun   bool   `json:"dry_run,omitempty"`   // Preview without pushing
    MaxFiles int    `json:"max_files,omitempty"` // Max files per PR (default 50, CodeRabbit limit)
}

// MirrorOutput is the output for agentic_mirror.
//
// out := agentic.MirrorOutput{Success: true, Count: 1, Synced: []agentic.MirrorSync{{Repo: "go-io"}}}
type MirrorOutput struct {
    Success bool         `json:"success"`
    Synced  []MirrorSync `json:"synced"`

@@ -32,6 +34,8 @@ type MirrorOutput struct {
}

// MirrorSync records one repo sync.
//
// sync := agentic.MirrorSync{Repo: "go-io", CommitsAhead: 3, FilesChanged: 12}
type MirrorSync struct {
    Repo         string `json:"repo"`
    CommitsAhead int    `json:"commits_ahead"`

@@ -56,10 +60,9 @@ func (s *PrepSubsystem) mirror(ctx context.Context, _ *mcp.CallToolRequest, inpu

    basePath := s.codePath
    if basePath == "" {
        home, _ := os.UserHomeDir()
        basePath = filepath.Join(home, "Code", "core")
        basePath = core.JoinPath(core.Env("DIR_HOME"), "Code", "core")
    } else {
        basePath = filepath.Join(basePath, "core")
        basePath = core.JoinPath(basePath, "core")
    }

    // Build list of repos to sync

@ -74,7 +77,7 @@ func (s *PrepSubsystem) mirror(ctx context.Context, _ *mcp.CallToolRequest, inpu
|
|||
var skipped []string
|
||||
|
||||
for _, repo := range repos {
|
||||
repoDir := filepath.Join(basePath, repo)
|
||||
repoDir := core.JoinPath(basePath, repo)
|
||||
|
||||
// Check if github remote exists
|
||||
if !hasRemote(repoDir, "github") {
|
||||
|
|
@ -88,7 +91,7 @@ func (s *PrepSubsystem) mirror(ctx context.Context, _ *mcp.CallToolRequest, inpu
|
|||
fetchCmd.Run()
|
||||
|
||||
// Check how far ahead local default branch is vs github
|
||||
localBase := gitDefaultBranch(repoDir)
|
||||
localBase := DefaultBranch(repoDir)
|
||||
ahead := commitsAhead(repoDir, "github/main", localBase)
|
||||
if ahead == 0 {
|
||||
continue // Already in sync
|
||||
|
|
@ -105,7 +108,7 @@ func (s *PrepSubsystem) mirror(ctx context.Context, _ *mcp.CallToolRequest, inpu
|
|||
|
||||
// Skip if too many files for one PR
|
||||
if files > maxFiles {
|
||||
sync.Skipped = fmt.Sprintf("%d files exceeds limit of %d", files, maxFiles)
|
||||
sync.Skipped = core.Sprintf("%d files exceeds limit of %d", files, maxFiles)
|
||||
synced = append(synced, sync)
|
||||
continue
|
||||
}
|
||||
|
|
@ -120,11 +123,11 @@ func (s *PrepSubsystem) mirror(ctx context.Context, _ *mcp.CallToolRequest, inpu
|
|||
ensureDevBranch(repoDir)
|
||||
|
||||
// Push local main to github dev (explicit main, not HEAD)
|
||||
base := gitDefaultBranch(repoDir)
|
||||
base := DefaultBranch(repoDir)
|
||||
pushCmd := exec.CommandContext(ctx, "git", "push", "github", base+":refs/heads/dev", "--force")
|
||||
pushCmd.Dir = repoDir
|
||||
if err := pushCmd.Run(); err != nil {
|
||||
sync.Skipped = fmt.Sprintf("push failed: %v", err)
|
||||
sync.Skipped = core.Sprintf("push failed: %v", err)
|
||||
synced = append(synced, sync)
|
||||
continue
|
||||
}
|
||||
|
|
@ -133,7 +136,7 @@ func (s *PrepSubsystem) mirror(ctx context.Context, _ *mcp.CallToolRequest, inpu
|
|||
// Create PR: dev → main on GitHub
|
||||
prURL, err := s.createGitHubPR(ctx, repoDir, repo, ahead, files)
|
||||
if err != nil {
|
||||
sync.Skipped = fmt.Sprintf("PR creation failed: %v", err)
|
||||
sync.Skipped = core.Sprintf("PR creation failed: %v", err)
|
||||
} else {
|
||||
sync.PRURL = prURL
|
||||
}
|
||||
|
|
@ -152,11 +155,11 @@ func (s *PrepSubsystem) mirror(ctx context.Context, _ *mcp.CallToolRequest, inpu
|
|||
// createGitHubPR creates a PR from dev → main using the gh CLI.
|
||||
func (s *PrepSubsystem) createGitHubPR(ctx context.Context, repoDir, repo string, commits, files int) (string, error) {
|
||||
// Check if there's already an open PR from dev
|
||||
ghRepo := fmt.Sprintf("%s/%s", GitHubOrg(), repo)
|
||||
ghRepo := core.Sprintf("%s/%s", GitHubOrg(), repo)
|
||||
checkCmd := exec.CommandContext(ctx, "gh", "pr", "list", "--repo", ghRepo, "--head", "dev", "--state", "open", "--json", "url", "--limit", "1")
|
||||
checkCmd.Dir = repoDir
|
||||
out, err := checkCmd.Output()
|
||||
if err == nil && strings.Contains(string(out), "url") {
|
||||
if err == nil && core.Contains(string(out), "url") {
|
||||
// PR already exists — extract URL
|
||||
// Format: [{"url":"https://..."}]
|
||||
url := extractJSONField(string(out), "url")
|
||||
|
|
@ -166,7 +169,7 @@ func (s *PrepSubsystem) createGitHubPR(ctx context.Context, repoDir, repo string
|
|||
}
|
||||
|
||||
// Build PR body
|
||||
body := fmt.Sprintf("## Forge → GitHub Sync\n\n"+
|
||||
body := core.Sprintf("## Forge → GitHub Sync\n\n"+
|
||||
"**Commits:** %d\n"+
|
||||
"**Files changed:** %d\n\n"+
|
||||
"Automated sync from Forge (forge.lthn.ai) to GitHub mirror.\n"+
|
||||
|
|
@ -175,7 +178,7 @@ func (s *PrepSubsystem) createGitHubPR(ctx context.Context, repoDir, repo string
|
|||
"Co-Authored-By: Virgil <virgil@lethean.io>",
|
||||
commits, files)
|
||||
|
||||
title := fmt.Sprintf("[sync] %s: %d commits, %d files", repo, commits, files)
|
||||
title := core.Sprintf("[sync] %s: %d commits, %d files", repo, commits, files)
|
||||
|
||||
prCmd := exec.CommandContext(ctx, "gh", "pr", "create",
|
||||
"--repo", ghRepo,
|
||||
|
|
@ -187,11 +190,11 @@ func (s *PrepSubsystem) createGitHubPR(ctx context.Context, repoDir, repo string
|
|||
prCmd.Dir = repoDir
|
||||
prOut, err := prCmd.CombinedOutput()
|
||||
if err != nil {
|
||||
return "", coreerr.E("createGitHubPR", string(prOut), err)
|
||||
return "", core.E("createGitHubPR", string(prOut), err)
|
||||
}
|
||||
|
||||
// gh pr create outputs the PR URL on the last line
|
||||
lines := strings.Split(strings.TrimSpace(string(prOut)), "\n")
|
||||
lines := core.Split(core.Trim(string(prOut)), "\n")
|
||||
if len(lines) > 0 {
|
||||
return lines[len(lines)-1], nil
|
||||
}
|
||||
|
|
@ -222,9 +225,7 @@ func commitsAhead(repoDir, base, head string) int {
|
|||
if err != nil {
|
||||
return 0
|
||||
}
|
||||
var n int
|
||||
fmt.Sscanf(strings.TrimSpace(string(out)), "%d", &n)
|
||||
return n
|
||||
return parseInt(string(out))
|
||||
}

 // filesChanged returns the number of files changed between two refs.
@@ -235,7 +236,7 @@ func filesChanged(repoDir, base, head string) int {
 	if err != nil {
 		return 0
 	}
-	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
+	lines := core.Split(core.Trim(string(out)), "\n")
 	if len(lines) == 1 && lines[0] == "" {
 		return 0
 	}
@@ -244,17 +245,18 @@ func filesChanged(repoDir, base, head string) int {

 // listLocalRepos returns repo names that exist as directories in basePath.
 func (s *PrepSubsystem) listLocalRepos(basePath string) []string {
-	entries, err := os.ReadDir(basePath)
-	if err != nil {
+	r := fs.List(basePath)
+	if !r.OK {
 		return nil
 	}
+	entries := r.Value.([]os.DirEntry)
 	var repos []string
 	for _, e := range entries {
 		if !e.IsDir() {
 			continue
 		}
 		// Must have a .git directory
-		if _, err := os.Stat(filepath.Join(basePath, e.Name(), ".git")); err == nil {
+		if fs.IsDir(core.JoinPath(basePath, e.Name(), ".git")) {
 			repos = append(repos, e.Name())
 		}
 	}
@@ -263,17 +265,24 @@ func (s *PrepSubsystem) listLocalRepos(basePath string) []string {

 // extractJSONField extracts a simple string field from JSON array output.
 func extractJSONField(jsonStr, field string) string {
-	// Quick and dirty — works for gh CLI output like [{"url":"https://..."}]
-	key := fmt.Sprintf(`"%s":"`, field)
-	idx := strings.Index(jsonStr, key)
-	if idx < 0 {
+	if jsonStr == "" || field == "" {
 		return ""
 	}
-	start := idx + len(key)
-	end := strings.Index(jsonStr[start:], `"`)
-	if end < 0 {
-		return ""
-	}
-	return jsonStr[start : start+end]
-}
+
+	var list []map[string]any
+	if err := json.Unmarshal([]byte(jsonStr), &list); err == nil {
+		for _, item := range list {
+			if value, ok := item[field].(string); ok {
+				return value
+			}
+		}
+	}
+
+	var item map[string]any
+	if err := json.Unmarshal([]byte(jsonStr), &item); err != nil {
+		return ""
+	}
+
+	value, _ := item[field].(string)
+	return value
+}
|
||||
|
|
|
|||
|
|
@@ -3,55 +3,85 @@
 package agentic

 import (
-	"os"
 	"os/exec"
-	"path/filepath"
-	"strings"
+	"strconv"
+	"unsafe"
+
+	core "dappco.re/go/core"
 )

+// fs provides unrestricted filesystem access (root "/" = no sandbox).
+//
+// r := fs.Read("/etc/hostname")
+// if r.OK { core.Print(nil, "%s", r.Value.(string)) }
+var fs = newFs("/")
+
+// newFs creates a core.Fs with the given root directory.
+// Root "/" means unrestricted access (same as coreio.Local).
+func newFs(root string) *core.Fs {
+	type fsRoot struct{ root string }
+	f := &core.Fs{}
+	(*fsRoot)(unsafe.Pointer(f)).root = root
+	return f
+}
+
+// LocalFs returns an unrestricted filesystem instance for use by other packages.
+//
+// r := agentic.LocalFs().Read("/tmp/agent-status.json")
+// if r.OK { core.Print(nil, "%s", r.Value.(string)) }
+func LocalFs() *core.Fs { return fs }

 // WorkspaceRoot returns the root directory for agent workspaces.
 // Checks CORE_WORKSPACE env var first, falls back to ~/Code/.core/workspace.
+//
+// wsDir := core.JoinPath(agentic.WorkspaceRoot(), "go-io-1774149757")
 func WorkspaceRoot() string {
-	return filepath.Join(CoreRoot(), "workspace")
+	return core.JoinPath(CoreRoot(), "workspace")
 }

 // CoreRoot returns the root directory for core ecosystem files.
 // Checks CORE_WORKSPACE env var first, falls back to ~/Code/.core.
+//
+// root := agentic.CoreRoot()
 func CoreRoot() string {
-	if root := os.Getenv("CORE_WORKSPACE"); root != "" {
+	if root := core.Env("CORE_WORKSPACE"); root != "" {
 		return root
 	}
-	home, _ := os.UserHomeDir()
-	return filepath.Join(home, "Code", ".core")
+	return core.JoinPath(core.Env("DIR_HOME"), "Code", ".core")
 }

 // PlansRoot returns the root directory for agent plans.
+//
+// plansDir := agentic.PlansRoot()
 func PlansRoot() string {
-	return filepath.Join(CoreRoot(), "plans")
+	return core.JoinPath(CoreRoot(), "plans")
 }

 // AgentName returns the name of this agent based on hostname.
 // Checks AGENT_NAME env var first.
+//
+// name := agentic.AgentName() // "cladius" on Snider's Mac, "charon" elsewhere
 func AgentName() string {
-	if name := os.Getenv("AGENT_NAME"); name != "" {
+	if name := core.Env("AGENT_NAME"); name != "" {
 		return name
 	}
-	hostname, _ := os.Hostname()
-	h := strings.ToLower(hostname)
-	if strings.Contains(h, "snider") || strings.Contains(h, "studio") || strings.Contains(h, "mac") {
+	h := core.Lower(core.Env("HOSTNAME"))
+	if core.Contains(h, "snider") || core.Contains(h, "studio") || core.Contains(h, "mac") {
 		return "cladius"
 	}
 	return "charon"
 }

-// gitDefaultBranch detects the default branch of a repo (main, master, etc.).
-func gitDefaultBranch(repoDir string) string {
+// DefaultBranch detects the default branch of a repo (main, master, etc.).
+//
+// base := agentic.DefaultBranch("./src")
+func DefaultBranch(repoDir string) string {
 	cmd := exec.Command("git", "symbolic-ref", "refs/remotes/origin/HEAD", "--short")
 	cmd.Dir = repoDir
 	if out, err := cmd.Output(); err == nil {
-		ref := strings.TrimSpace(string(out))
-		if strings.HasPrefix(ref, "origin/") {
-			return strings.TrimPrefix(ref, "origin/")
+		ref := core.Trim(string(out))
+		if core.HasPrefix(ref, "origin/") {
+			return core.TrimPrefix(ref, "origin/")
 		}
 		return ref
 	}
@@ -66,9 +96,19 @@ func gitDefaultBranch(repoDir string) string {
 }

 // GitHubOrg returns the GitHub org for mirror operations.
+//
+// org := agentic.GitHubOrg() // "dAppCore"
 func GitHubOrg() string {
-	if org := os.Getenv("GITHUB_ORG"); org != "" {
+	if org := core.Env("GITHUB_ORG"); org != "" {
 		return org
 	}
 	return "dAppCore"
 }
+
+func parseInt(value string) int {
+	n, err := strconv.Atoi(core.Trim(value))
+	if err != nil {
+		return 0
+	}
+	return n
+}
@@ -111,6 +111,16 @@ func TestExtractJSONField_Good(t *testing.T) {
 	assert.Equal(t, "https://github.com/dAppCore/go-io/pull/1", extractJSONField(json, "url"))
 }

+func TestExtractJSONField_Good_Object(t *testing.T) {
+	json := `{"url":"https://github.com/dAppCore/go-io/pull/2"}`
+	assert.Equal(t, "https://github.com/dAppCore/go-io/pull/2", extractJSONField(json, "url"))
+}
+
+func TestExtractJSONField_Good_PrettyPrinted(t *testing.T) {
+	json := "[\n  {\n    \"url\": \"https://github.com/dAppCore/go-io/pull/3\"\n  }\n]"
+	assert.Equal(t, "https://github.com/dAppCore/go-io/pull/3", extractJSONField(json, "url"))
+}

 func TestExtractJSONField_Bad_Missing(t *testing.T) {
 	assert.Equal(t, "", extractJSONField(`{"name":"test"}`, "url"))
 	assert.Equal(t, "", extractJSONField("", "url"))
@@ -8,20 +8,20 @@ import (
 	"encoding/hex"
 	"encoding/json"
 	"os"
-	"path/filepath"
-	"strings"
 	"time"

-	coreio "dappco.re/go/core/io"
-	coreerr "dappco.re/go/core/log"
+	core "dappco.re/go/core"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )

+// Plan represents an implementation plan for agent work.
+//
+// plan := &Plan{ID: "migrate-core-abc", Title: "Migrate Core", Status: "draft", Objective: "..."}
+// writePlan(PlansRoot(), plan)
 type Plan struct {
 	ID     string `json:"id"`
 	Title  string `json:"title"`
-	Status string `json:"status"` // draft, ready, in_progress, needs_verification, verified, approved
+	Status string `json:"status"` // draft, ready, in_progress, needs_verification, verified, approved
 	Repo   string `json:"repo,omitempty"`
 	Org    string `json:"org,omitempty"`
 	Objective string `json:"objective"`
@@ -33,10 +33,12 @@ type Plan struct {
 }

+// Phase represents a phase within an implementation plan.
+//
+// phase := agentic.Phase{Number: 1, Name: "Migrate strings", Status: "in_progress"}
 type Phase struct {
 	Number int    `json:"number"`
 	Name   string `json:"name"`
-	Status string `json:"status"` // pending, in_progress, done
+	Status string `json:"status"` // pending, in_progress, done
 	Criteria []string `json:"criteria,omitempty"`
 	Tests    int      `json:"tests,omitempty"`
 	Notes    string   `json:"notes,omitempty"`
@@ -45,6 +47,8 @@ type Phase struct {
 // --- Input/Output types ---

+// PlanCreateInput is the input for agentic_plan_create.
+//
+// input := agentic.PlanCreateInput{Title: "Migrate pkg/agentic", Objective: "Use Core primitives everywhere"}
 type PlanCreateInput struct {
 	Title     string `json:"title"`
 	Objective string `json:"objective"`
@@ -55,6 +59,8 @@ type PlanCreateInput struct {
 }

+// PlanCreateOutput is the output for agentic_plan_create.
+//
+// out := agentic.PlanCreateOutput{Success: true, ID: "migrate-pkg-agentic-abc123"}
 type PlanCreateOutput struct {
 	Success bool   `json:"success"`
 	ID      string `json:"id"`
@@ -62,17 +68,23 @@ type PlanCreateOutput struct {
 }

+// PlanReadInput is the input for agentic_plan_read.
+//
+// input := agentic.PlanReadInput{ID: "migrate-pkg-agentic-abc123"}
 type PlanReadInput struct {
 	ID string `json:"id"`
 }

+// PlanReadOutput is the output for agentic_plan_read.
+//
+// out := agentic.PlanReadOutput{Success: true, Plan: agentic.Plan{ID: "migrate-pkg-agentic-abc123"}}
 type PlanReadOutput struct {
 	Success bool `json:"success"`
 	Plan    Plan `json:"plan"`
 }

+// PlanUpdateInput is the input for agentic_plan_update.
+//
+// input := agentic.PlanUpdateInput{ID: "migrate-pkg-agentic-abc123", Status: "verified"}
 type PlanUpdateInput struct {
 	ID     string `json:"id"`
 	Status string `json:"status,omitempty"`
@@ -84,29 +96,39 @@ type PlanUpdateInput struct {
 }

+// PlanUpdateOutput is the output for agentic_plan_update.
+//
+// out := agentic.PlanUpdateOutput{Success: true, Plan: agentic.Plan{Status: "verified"}}
 type PlanUpdateOutput struct {
 	Success bool `json:"success"`
 	Plan    Plan `json:"plan"`
 }

+// PlanDeleteInput is the input for agentic_plan_delete.
+//
+// input := agentic.PlanDeleteInput{ID: "migrate-pkg-agentic-abc123"}
 type PlanDeleteInput struct {
 	ID string `json:"id"`
 }

+// PlanDeleteOutput is the output for agentic_plan_delete.
+//
+// out := agentic.PlanDeleteOutput{Success: true, Deleted: "migrate-pkg-agentic-abc123"}
 type PlanDeleteOutput struct {
 	Success bool   `json:"success"`
 	Deleted string `json:"deleted"`
 }

+// PlanListInput is the input for agentic_plan_list.
+//
+// input := agentic.PlanListInput{Repo: "go-io", Status: "ready"}
 type PlanListInput struct {
 	Status string `json:"status,omitempty"`
 	Repo   string `json:"repo,omitempty"`
 }

+// PlanListOutput is the output for agentic_plan_list.
+//
+// out := agentic.PlanListOutput{Success: true, Count: 2, Plans: []agentic.Plan{{ID: "migrate-pkg-agentic-abc123"}}}
 type PlanListOutput struct {
 	Success bool `json:"success"`
 	Count   int  `json:"count"`
@@ -146,10 +168,10 @@ func (s *PrepSubsystem) registerPlanTools(server *mcp.Server) {

 func (s *PrepSubsystem) planCreate(_ context.Context, _ *mcp.CallToolRequest, input PlanCreateInput) (*mcp.CallToolResult, PlanCreateOutput, error) {
 	if input.Title == "" {
-		return nil, PlanCreateOutput{}, coreerr.E("planCreate", "title is required", nil)
+		return nil, PlanCreateOutput{}, core.E("planCreate", "title is required", nil)
 	}
 	if input.Objective == "" {
-		return nil, PlanCreateOutput{}, coreerr.E("planCreate", "objective is required", nil)
+		return nil, PlanCreateOutput{}, core.E("planCreate", "objective is required", nil)
 	}

 	id := generatePlanID(input.Title)
@@ -178,7 +200,7 @@ func (s *PrepSubsystem) planCreate(_ context.Context, _ *mcp.CallToolRequest, in
 	path, err := writePlan(PlansRoot(), &plan)
 	if err != nil {
-		return nil, PlanCreateOutput{}, coreerr.E("planCreate", "failed to write plan", err)
+		return nil, PlanCreateOutput{}, core.E("planCreate", "failed to write plan", err)
 	}

 	return nil, PlanCreateOutput{
@@ -190,7 +212,7 @@ func (s *PrepSubsystem) planCreate(_ context.Context, _ *mcp.CallToolRequest, in

 func (s *PrepSubsystem) planRead(_ context.Context, _ *mcp.CallToolRequest, input PlanReadInput) (*mcp.CallToolResult, PlanReadOutput, error) {
 	if input.ID == "" {
-		return nil, PlanReadOutput{}, coreerr.E("planRead", "id is required", nil)
+		return nil, PlanReadOutput{}, core.E("planRead", "id is required", nil)
 	}

 	plan, err := readPlan(PlansRoot(), input.ID)
@@ -206,7 +228,7 @@ func (s *PrepSubsystem) planRead(_ context.Context, _ *mcp.CallToolRequest, inpu

 func (s *PrepSubsystem) planUpdate(_ context.Context, _ *mcp.CallToolRequest, input PlanUpdateInput) (*mcp.CallToolResult, PlanUpdateOutput, error) {
 	if input.ID == "" {
-		return nil, PlanUpdateOutput{}, coreerr.E("planUpdate", "id is required", nil)
+		return nil, PlanUpdateOutput{}, core.E("planUpdate", "id is required", nil)
 	}

 	plan, err := readPlan(PlansRoot(), input.ID)
@@ -217,7 +239,7 @@ func (s *PrepSubsystem) planUpdate(_ context.Context, _ *mcp.CallToolRequest, in
 	// Apply partial updates
 	if input.Status != "" {
 		if !validPlanStatus(input.Status) {
-			return nil, PlanUpdateOutput{}, coreerr.E("planUpdate", "invalid status: "+input.Status+" (valid: draft, ready, in_progress, needs_verification, verified, approved)", nil)
+			return nil, PlanUpdateOutput{}, core.E("planUpdate", "invalid status: "+input.Status+" (valid: draft, ready, in_progress, needs_verification, verified, approved)", nil)
 		}
 		plan.Status = input.Status
 	}
@@ -240,7 +262,7 @@ func (s *PrepSubsystem) planUpdate(_ context.Context, _ *mcp.CallToolRequest, in
 	plan.UpdatedAt = time.Now()

 	if _, err := writePlan(PlansRoot(), plan); err != nil {
-		return nil, PlanUpdateOutput{}, coreerr.E("planUpdate", "failed to write plan", err)
+		return nil, PlanUpdateOutput{}, core.E("planUpdate", "failed to write plan", err)
 	}

 	return nil, PlanUpdateOutput{
@@ -251,16 +273,17 @@ func (s *PrepSubsystem) planUpdate(_ context.Context, _ *mcp.CallToolRequest, in

 func (s *PrepSubsystem) planDelete(_ context.Context, _ *mcp.CallToolRequest, input PlanDeleteInput) (*mcp.CallToolResult, PlanDeleteOutput, error) {
 	if input.ID == "" {
-		return nil, PlanDeleteOutput{}, coreerr.E("planDelete", "id is required", nil)
+		return nil, PlanDeleteOutput{}, core.E("planDelete", "id is required", nil)
 	}

 	path := planPath(PlansRoot(), input.ID)
-	if _, err := os.Stat(path); err != nil {
-		return nil, PlanDeleteOutput{}, coreerr.E("planDelete", "plan not found: "+input.ID, nil)
+	if !fs.Exists(path) {
+		return nil, PlanDeleteOutput{}, core.E("planDelete", "plan not found: "+input.ID, nil)
 	}

-	if err := coreio.Local.Delete(path); err != nil {
-		return nil, PlanDeleteOutput{}, coreerr.E("planDelete", "failed to delete plan", err)
+	if r := fs.Delete(path); !r.OK {
+		err, _ := r.Value.(error)
+		return nil, PlanDeleteOutput{}, core.E("planDelete", "failed to delete plan", err)
 	}

 	return nil, PlanDeleteOutput{
@@ -271,22 +294,24 @@ func (s *PrepSubsystem) planDelete(_ context.Context, _ *mcp.CallToolRequest, in

 func (s *PrepSubsystem) planList(_ context.Context, _ *mcp.CallToolRequest, input PlanListInput) (*mcp.CallToolResult, PlanListOutput, error) {
 	dir := PlansRoot()
-	if err := coreio.Local.EnsureDir(dir); err != nil {
-		return nil, PlanListOutput{}, coreerr.E("planList", "failed to access plans directory", err)
+	if r := fs.EnsureDir(dir); !r.OK {
+		err, _ := r.Value.(error)
+		return nil, PlanListOutput{}, core.E("planList", "failed to access plans directory", err)
 	}

-	entries, err := os.ReadDir(dir)
-	if err != nil {
-		return nil, PlanListOutput{}, coreerr.E("planList", "failed to read plans directory", err)
+	r := fs.List(dir)
+	if !r.OK {
+		return nil, PlanListOutput{}, nil
 	}
+	entries := r.Value.([]os.DirEntry)

 	var plans []Plan
 	for _, entry := range entries {
-		if entry.IsDir() || !strings.HasSuffix(entry.Name(), ".json") {
+		if entry.IsDir() || !core.HasSuffix(entry.Name(), ".json") {
 			continue
 		}

-		id := strings.TrimSuffix(entry.Name(), ".json")
+		id := core.TrimSuffix(entry.Name(), ".json")
 		plan, err := readPlan(dir, id)
 		if err != nil {
 			continue
@@ -314,36 +339,15 @@ func (s *PrepSubsystem) planList(_ context.Context, _ *mcp.CallToolRequest, inpu

 func planPath(dir, id string) string {
 	// Sanitise ID to prevent path traversal
-	safe := filepath.Base(id)
+	safe := core.PathBase(id)
 	if safe == "." || safe == ".." || safe == "" {
 		safe = "invalid"
 	}
-	return filepath.Join(dir, safe+".json")
+	return core.JoinPath(dir, safe+".json")
 }

 func generatePlanID(title string) string {
-	slug := strings.Map(func(r rune) rune {
-		if r >= 'a' && r <= 'z' || r >= '0' && r <= '9' || r == '-' {
-			return r
-		}
-		if r >= 'A' && r <= 'Z' {
-			return r + 32
-		}
-		if r == ' ' {
-			return '-'
-		}
-		return -1
-	}, title)
-
-	// Trim consecutive dashes and cap length
-	for strings.Contains(slug, "--") {
-		slug = strings.ReplaceAll(slug, "--", "-")
-	}
-	slug = strings.Trim(slug, "-")
-	if len(slug) > 30 {
-		slug = slug[:30]
-	}
-	slug = strings.TrimRight(slug, "-")
+	slug := sanitisePlanSlug(title)

 	// Append short random suffix for uniqueness
 	b := make([]byte, 3)
@@ -352,21 +356,22 @@ func generatePlanID(title string) string {
 }

 func readPlan(dir, id string) (*Plan, error) {
-	data, err := coreio.Local.Read(planPath(dir, id))
-	if err != nil {
-		return nil, coreerr.E("readPlan", "plan not found: "+id, nil)
+	r := fs.Read(planPath(dir, id))
+	if !r.OK {
+		return nil, core.E("readPlan", "plan not found: "+id, nil)
 	}

 	var plan Plan
-	if err := json.Unmarshal([]byte(data), &plan); err != nil {
-		return nil, coreerr.E("readPlan", "failed to parse plan "+id, err)
+	if err := json.Unmarshal([]byte(r.Value.(string)), &plan); err != nil {
+		return nil, core.E("readPlan", "failed to parse plan "+id, err)
 	}
 	return &plan, nil
 }

 func writePlan(dir string, plan *Plan) (string, error) {
-	if err := coreio.Local.EnsureDir(dir); err != nil {
-		return "", coreerr.E("writePlan", "failed to create plans directory", err)
+	if r := fs.EnsureDir(dir); !r.OK {
+		err, _ := r.Value.(error)
+		return "", core.E("writePlan", "failed to create plans directory", err)
 	}

 	path := planPath(dir, plan.ID)
@@ -375,7 +380,11 @@ func writePlan(dir string, plan *Plan) (string, error) {
 		return "", err
 	}

-	return path, coreio.Local.Write(path, string(data))
+	if r := fs.Write(path, string(data)); !r.OK {
+		err, _ := r.Value.(error)
+		return "", core.E("writePlan", "failed to write plan", err)
+	}
+	return path, nil
 }

 func validPlanStatus(status string) bool {
@@ -7,7 +7,6 @@ import (
 	"strings"
 	"testing"

-	coreio "dappco.re/go/core/io"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
@@ -31,7 +30,7 @@ func TestWritePlan_Good(t *testing.T) {
 	assert.Equal(t, filepath.Join(dir, "test-plan-abc123.json"), path)

 	// Verify file exists
-	assert.True(t, coreio.Local.IsFile(path))
+	assert.True(t, fs.IsFile(path))
 }

 func TestWritePlan_Good_CreatesDirectory(t *testing.T) {
@@ -96,7 +95,7 @@ func TestReadPlan_Bad_NotFound(t *testing.T) {

 func TestReadPlan_Bad_InvalidJSON(t *testing.T) {
 	dir := t.TempDir()
-	require.NoError(t, coreio.Local.Write(filepath.Join(dir, "bad-json.json"), "{broken"))
+	require.True(t, fs.Write(filepath.Join(dir, "bad-json.json"), "{broken").OK)

 	_, err := readPlan(dir, "bad-json")
 	assert.Error(t, err)
@@ -205,7 +204,7 @@ func TestWritePlan_Good_OverwriteExisting(t *testing.T) {

 func TestReadPlan_Ugly_EmptyFile(t *testing.T) {
 	dir := t.TempDir()
-	require.NoError(t, coreio.Local.Write(filepath.Join(dir, "empty.json"), ""))
+	require.True(t, fs.Write(filepath.Join(dir, "empty.json"), "").OK)

 	_, err := readPlan(dir, "empty")
 	assert.Error(t, err)
@ -3,32 +3,31 @@
|
|||
package agentic
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
coreerr "dappco.re/go/core/log"
|
||||
core "dappco.re/go/core"
|
||||
"dappco.re/go/core/forge"
|
||||
forge_types "dappco.re/go/core/forge/types"
|
||||
"github.com/modelcontextprotocol/go-sdk/mcp"
|
||||
)
|
||||
|
||||
// --- agentic_create_pr ---
|
||||
|
||||
// CreatePRInput is the input for agentic_create_pr.
|
||||
//
|
||||
// input := agentic.CreatePRInput{Workspace: "go-io-1773581873", Title: "Fix watcher panic"}
|
||||
type CreatePRInput struct {
|
||||
Workspace string `json:"workspace"` // workspace name (e.g. "mcp-1773581873")
|
||||
Title string `json:"title,omitempty"` // PR title (default: task description)
|
||||
Body string `json:"body,omitempty"` // PR body (default: auto-generated)
|
||||
Base string `json:"base,omitempty"` // base branch (default: "main")
|
||||
DryRun bool `json:"dry_run,omitempty"` // preview without creating
|
||||
Workspace string `json:"workspace"` // workspace name (e.g. "mcp-1773581873")
|
||||
Title string `json:"title,omitempty"` // PR title (default: task description)
|
||||
Body string `json:"body,omitempty"` // PR body (default: auto-generated)
|
||||
Base string `json:"base,omitempty"` // base branch (default: "main")
|
||||
DryRun bool `json:"dry_run,omitempty"` // preview without creating
|
||||
}
|
||||
|
||||
// CreatePROutput is the output for agentic_create_pr.
|
||||
//
|
||||
// out := agentic.CreatePROutput{Success: true, PRURL: "https://forge.example/core/go-io/pulls/12", PRNum: 12}
|
||||
type CreatePROutput struct {
|
||||
Success bool `json:"success"`
|
||||
PRURL string `json:"pr_url,omitempty"`
|
||||
|
|
@ -48,34 +47,34 @@ func (s *PrepSubsystem) registerCreatePRTool(server *mcp.Server) {
|
|||
|
||||
func (s *PrepSubsystem) createPR(ctx context.Context, _ *mcp.CallToolRequest, input CreatePRInput) (*mcp.CallToolResult, CreatePROutput, error) {
|
||||
if input.Workspace == "" {
|
||||
return nil, CreatePROutput{}, coreerr.E("createPR", "workspace is required", nil)
|
||||
return nil, CreatePROutput{}, core.E("createPR", "workspace is required", nil)
|
||||
}
|
||||
if s.forgeToken == "" {
|
||||
return nil, CreatePROutput{}, coreerr.E("createPR", "no Forge token configured", nil)
|
||||
return nil, CreatePROutput{}, core.E("createPR", "no Forge token configured", nil)
|
||||
}
|
||||
|
||||
wsDir := filepath.Join(WorkspaceRoot(), input.Workspace)
|
||||
srcDir := filepath.Join(wsDir, "src")
|
||||
wsDir := core.JoinPath(WorkspaceRoot(), input.Workspace)
|
||||
repoDir := core.JoinPath(wsDir, "repo")
|
||||
|
||||
if _, err := os.Stat(srcDir); err != nil {
|
||||
return nil, CreatePROutput{}, coreerr.E("createPR", "workspace not found: "+input.Workspace, nil)
|
||||
if !fs.IsDir(core.JoinPath(repoDir, ".git")) {
|
||||
return nil, CreatePROutput{}, core.E("createPR", "workspace not found: "+input.Workspace, nil)
|
||||
}
|
||||
|
||||
// Read workspace status for repo, branch, issue context
|
||||
st, err := readStatus(wsDir)
|
||||
if err != nil {
|
||||
return nil, CreatePROutput{}, coreerr.E("createPR", "no status.json", err)
|
||||
return nil, CreatePROutput{}, core.E("createPR", "no status.json", err)
|
||||
}
|
||||
|
||||
if st.Branch == "" {
|
||||
// Detect branch from git
|
||||
branchCmd := exec.CommandContext(ctx, "git", "rev-parse", "--abbrev-ref", "HEAD")
|
||||
branchCmd.Dir = srcDir
|
||||
branchCmd.Dir = repoDir
|
||||
out, err := branchCmd.Output()
|
||||
if err != nil {
|
||||
return nil, CreatePROutput{}, coreerr.E("createPR", "failed to detect branch", err)
|
||||
return nil, CreatePROutput{}, core.E("createPR", "failed to detect branch", err)
|
||||
}
|
||||
st.Branch = strings.TrimSpace(string(out))
|
||||
st.Branch = core.Trim(string(out))
|
||||
}
|
||||
|
||||
org := st.Org
|
||||
|
|
@ -84,7 +83,7 @@ func (s *PrepSubsystem) createPR(ctx context.Context, _ *mcp.CallToolRequest, in
|
|||
}
|
||||
base := input.Base
|
||||
if base == "" {
|
||||
base = "main"
|
||||
base = "dev"
|
||||
}
|
||||
|
||||
// Build PR title
|
||||
|
|
@@ -93,7 +92,7 @@ func (s *PrepSubsystem) createPR(ctx context.Context, _ *mcp.CallToolRequest, in
 		title = st.Task
 	}
 	if title == "" {
-		title = fmt.Sprintf("Agent work on %s", st.Branch)
+		title = core.Sprintf("Agent work on %s", st.Branch)
 	}
 
 	// Build PR body
@@ -112,18 +111,18 @@ func (s *PrepSubsystem) createPR(ctx context.Context, _ *mcp.CallToolRequest, in
 	}
 
 	// Push branch to Forge (origin is the local clone, not Forge)
-	forgeRemote := fmt.Sprintf("ssh://git@forge.lthn.ai:2223/%s/%s.git", org, st.Repo)
+	forgeRemote := core.Sprintf("ssh://git@forge.lthn.ai:2223/%s/%s.git", org, st.Repo)
 	pushCmd := exec.CommandContext(ctx, "git", "push", forgeRemote, st.Branch)
-	pushCmd.Dir = srcDir
+	pushCmd.Dir = repoDir
 	pushOut, err := pushCmd.CombinedOutput()
 	if err != nil {
-		return nil, CreatePROutput{}, coreerr.E("createPR", "git push failed: "+string(pushOut), err)
+		return nil, CreatePROutput{}, core.E("createPR", "git push failed: "+string(pushOut), err)
 	}
 
 	// Create PR via Forge API
 	prURL, prNum, err := s.forgeCreatePR(ctx, org, st.Repo, st.Branch, base, title, body)
 	if err != nil {
-		return nil, CreatePROutput{}, coreerr.E("createPR", "failed to create PR", err)
+		return nil, CreatePROutput{}, core.E("createPR", "failed to create PR", err)
 	}
 
 	// Update status with PR URL
@@ -132,7 +131,7 @@ func (s *PrepSubsystem) createPR(ctx context.Context, _ *mcp.CallToolRequest, in
 
 	// Comment on issue if tracked
 	if st.Issue > 0 {
-		comment := fmt.Sprintf("Pull request created: %s", prURL)
+		comment := core.Sprintf("Pull request created: %s", prURL)
 		s.commentOnIssue(ctx, org, st.Repo, st.Issue, comment)
 	}
 
@@ -148,82 +147,53 @@ func (s *PrepSubsystem) createPR(ctx context.Context, _ *mcp.CallToolRequest, in
 }
 
 func (s *PrepSubsystem) buildPRBody(st *WorkspaceStatus) string {
-	var b strings.Builder
+	b := core.NewBuilder()
 	b.WriteString("## Summary\n\n")
 	if st.Task != "" {
 		b.WriteString(st.Task)
 		b.WriteString("\n\n")
 	}
 	if st.Issue > 0 {
-		b.WriteString(fmt.Sprintf("Closes #%d\n\n", st.Issue))
+		b.WriteString(core.Sprintf("Closes #%d\n\n", st.Issue))
 	}
-	b.WriteString(fmt.Sprintf("**Agent:** %s\n", st.Agent))
-	b.WriteString(fmt.Sprintf("**Runs:** %d\n", st.Runs))
+	b.WriteString(core.Sprintf("**Agent:** %s\n", st.Agent))
+	b.WriteString(core.Sprintf("**Runs:** %d\n", st.Runs))
 	b.WriteString("\n---\n*Created by agentic dispatch*\n")
 	return b.String()
 }
 
 func (s *PrepSubsystem) forgeCreatePR(ctx context.Context, org, repo, head, base, title, body string) (string, int, error) {
-	payload, _ := json.Marshal(map[string]any{
-		"title": title,
-		"body":  body,
-		"head":  head,
-		"base":  base,
+	pr, err := s.forge.Pulls.Create(ctx, forge.Params{"owner": org, "repo": repo}, &forge_types.CreatePullRequestOption{
+		Title: title,
+		Body:  body,
+		Head:  head,
+		Base:  base,
 	})
-
-	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/pulls", s.forgeURL, org, repo)
-	req, _ := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(payload))
-	req.Header.Set("Content-Type", "application/json")
-	req.Header.Set("Authorization", "token "+s.forgeToken)
-
-	resp, err := s.client.Do(req)
 	if err != nil {
-		return "", 0, coreerr.E("forgeCreatePR", "request failed", err)
+		return "", 0, core.E("forgeCreatePR", "create PR failed", err)
 	}
-	defer resp.Body.Close()
-
-	if resp.StatusCode != 201 {
-		var errBody map[string]any
-		json.NewDecoder(resp.Body).Decode(&errBody)
-		msg, _ := errBody["message"].(string)
-		return "", 0, coreerr.E("forgeCreatePR", fmt.Sprintf("HTTP %d: %s", resp.StatusCode, msg), nil)
-	}
-
-	var pr struct {
-		Number  int    `json:"number"`
-		HTMLURL string `json:"html_url"`
-	}
-	json.NewDecoder(resp.Body).Decode(&pr)
-
-	return pr.HTMLURL, pr.Number, nil
+	return pr.HTMLURL, int(pr.Index), nil
 }
 
 func (s *PrepSubsystem) commentOnIssue(ctx context.Context, org, repo string, issue int, comment string) {
-	payload, _ := json.Marshal(map[string]string{"body": comment})
-
-	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/issues/%d/comments", s.forgeURL, org, repo, issue)
-	req, _ := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(payload))
-	req.Header.Set("Content-Type", "application/json")
-	req.Header.Set("Authorization", "token "+s.forgeToken)
-
-	resp, err := s.client.Do(req)
-	if err != nil {
-		return
-	}
-	resp.Body.Close()
+	s.forge.Issues.CreateComment(ctx, org, repo, int64(issue), comment)
 }
 
 // --- agentic_list_prs ---
 
 // ListPRsInput is the input for agentic_list_prs.
 //
 // input := agentic.ListPRsInput{Org: "core", Repo: "go-io", State: "open", Limit: 10}
 type ListPRsInput struct {
 	Org   string `json:"org,omitempty"`   // forge org (default "core")
 	Repo  string `json:"repo,omitempty"`  // specific repo, or empty for all
 	State string `json:"state,omitempty"` // "open" (default), "closed", "all"
 	Limit int    `json:"limit,omitempty"` // max results (default 20)
 }
 
 // ListPRsOutput is the output for agentic_list_prs.
 //
 // out := agentic.ListPRsOutput{Success: true, Count: 2, PRs: []agentic.PRInfo{{Repo: "go-io", Number: 12}}}
 type ListPRsOutput struct {
 	Success bool `json:"success"`
 	Count   int  `json:"count"`
@@ -231,6 +201,8 @@ type ListPRsOutput struct {
 }
 
 // PRInfo represents a pull request.
+//
+// pr := agentic.PRInfo{Repo: "go-io", Number: 12, Title: "Migrate pkg/fs", Branch: "agent/migrate-fs"}
 type PRInfo struct {
 	Repo   string `json:"repo"`
 	Number int    `json:"number"`
@@ -253,7 +225,7 @@ func (s *PrepSubsystem) registerListPRsTool(server *mcp.Server) {
 
 func (s *PrepSubsystem) listPRs(ctx context.Context, _ *mcp.CallToolRequest, input ListPRsInput) (*mcp.CallToolResult, ListPRsOutput, error) {
 	if s.forgeToken == "" {
-		return nil, ListPRsOutput{}, coreerr.E("listPRs", "no Forge token configured", nil)
+		return nil, ListPRsOutput{}, core.E("listPRs", "no Forge token configured", nil)
 	}
 
 	if input.Org == "" {
@@ -303,54 +275,30 @@ func (s *PrepSubsystem) listPRs(ctx context.Context, _ *mcp.CallToolRequest, inp
 }
 
 func (s *PrepSubsystem) listRepoPRs(ctx context.Context, org, repo, state string) ([]PRInfo, error) {
-	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/pulls?state=%s&limit=10",
-		s.forgeURL, org, repo, state)
-	req, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
-	req.Header.Set("Authorization", "token "+s.forgeToken)
-
-	resp, err := s.client.Do(req)
+	prs, err := s.forge.Pulls.ListAll(ctx, forge.Params{"owner": org, "repo": repo})
 	if err != nil {
-		return nil, coreerr.E("listRepoPRs", "failed to list PRs for "+repo, err)
+		return nil, core.E("listRepoPRs", "failed to list PRs for "+repo, err)
 	}
-	defer resp.Body.Close()
-
-	if resp.StatusCode != 200 {
-		return nil, coreerr.E("listRepoPRs", fmt.Sprintf("HTTP %d listing PRs for %s", resp.StatusCode, repo), nil)
-	}
-
-	var prs []struct {
-		Number    int    `json:"number"`
-		Title     string `json:"title"`
-		State     string `json:"state"`
-		Mergeable bool   `json:"mergeable"`
-		HTMLURL   string `json:"html_url"`
-		Head      struct {
-			Ref string `json:"ref"`
-		} `json:"head"`
-		Base struct {
-			Ref string `json:"ref"`
-		} `json:"base"`
-		User struct {
-			Login string `json:"login"`
-		} `json:"user"`
-		Labels []struct {
-			Name string `json:"name"`
-		} `json:"labels"`
-	}
-	json.NewDecoder(resp.Body).Decode(&prs)
 
 	var result []PRInfo
 	for _, pr := range prs {
+		if state != "" && state != "all" && string(pr.State) != state {
+			continue
+		}
 		var labels []string
 		for _, l := range pr.Labels {
 			labels = append(labels, l.Name)
 		}
+		author := ""
+		if pr.User != nil {
+			author = pr.User.UserName
+		}
 		result = append(result, PRInfo{
 			Repo:   repo,
-			Number: pr.Number,
+			Number: int(pr.Index),
 			Title:  pr.Title,
-			State:  pr.State,
-			Author: pr.User.Login,
+			State:  string(pr.State),
+			Author: author,
 			Branch: pr.Head.Ref,
 			Base:   pr.Base.Ref,
 			Labels: labels,
@@ -1,83 +1,101 @@
 // SPDX-License-Identifier: EUPL-1.2
 
 // Package agentic provides MCP tools for agent orchestration.
-// Prepares sandboxed workspaces and dispatches subagents.
+// Prepares workspaces and dispatches subagents.
 package agentic
 
 import (
 	"context"
 	"encoding/base64"
 	"encoding/json"
 	"fmt"
-	"io"
+	goio "io"
 	"net/http"
 	"os"
 	"os/exec"
 	"path/filepath"
 	"strings"
 	"sync"
 	"time"
 
 	"dappco.re/go/agent/pkg/lib"
-	coreio "dappco.re/go/core/io"
-	coreerr "dappco.re/go/core/log"
+	core "dappco.re/go/core"
+	"dappco.re/go/core/forge"
 	coremcp "forge.lthn.ai/core/mcp/pkg/mcp"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 	"gopkg.in/yaml.v3"
 )
 
-// CompletionNotifier is called when an agent completes, to trigger
-// immediate notifications to connected clients.
+// CompletionNotifier receives agent lifecycle events directly from dispatch.
+// No filesystem polling — events flow in-memory.
 //
 // prep.SetCompletionNotifier(monitor)
 type CompletionNotifier interface {
-	Poke()
+	AgentStarted(agent, repo, workspace string)
+	AgentCompleted(agent, repo, workspace, status string)
 }
 
-// PrepSubsystem provides agentic MCP tools.
+// PrepSubsystem provides agentic MCP tools for workspace orchestration.
 //
 // sub := agentic.NewPrep()
 // sub.RegisterTools(server)
 type PrepSubsystem struct {
-	forgeURL   string
-	forgeToken string
-	brainURL   string
-	brainKey   string
-	specsPath  string
-	codePath   string
-	client     *http.Client
-	onComplete CompletionNotifier
-	drainMu    sync.Mutex // protects drainQueue from concurrent execution
+	forge      *forge.Forge
+	forgeURL   string
+	forgeToken string
+	brainURL   string
+	brainKey   string
+	codePath   string
+	client     *http.Client
+	onComplete CompletionNotifier
+	drainMu    sync.Mutex
+	pokeCh     chan struct{}
+	frozen     bool
+	backoff    map[string]time.Time // pool → paused until
+	failCount  map[string]int       // pool → consecutive fast failures
 }
 
 var _ coremcp.Subsystem = (*PrepSubsystem)(nil)
 
 // NewPrep creates an agentic subsystem.
 //
 // sub := agentic.NewPrep()
 // sub.SetCompletionNotifier(monitor)
 func NewPrep() *PrepSubsystem {
-	home, _ := os.UserHomeDir()
+	home := core.Env("DIR_HOME")
 
-	forgeToken := os.Getenv("FORGE_TOKEN")
+	forgeToken := core.Env("FORGE_TOKEN")
 	if forgeToken == "" {
-		forgeToken = os.Getenv("GITEA_TOKEN")
+		forgeToken = core.Env("GITEA_TOKEN")
 	}
 
-	brainKey := os.Getenv("CORE_BRAIN_KEY")
+	brainKey := core.Env("CORE_BRAIN_KEY")
 	if brainKey == "" {
-		if data, err := coreio.Local.Read(filepath.Join(home, ".claude", "brain.key")); err == nil {
-			brainKey = strings.TrimSpace(data)
+		if r := fs.Read(core.JoinPath(home, ".claude", "brain.key")); r.OK {
+			brainKey = core.Trim(r.Value.(string))
 		}
 	}
 
+	forgeURL := envOr("FORGE_URL", "https://forge.lthn.ai")
+
 	return &PrepSubsystem{
-		forgeURL:   envOr("FORGE_URL", "https://forge.lthn.ai"),
+		forge:      forge.NewForge(forgeURL, forgeToken),
+		forgeURL:   forgeURL,
 		forgeToken: forgeToken,
 		brainURL:   envOr("CORE_BRAIN_URL", "https://api.lthn.sh"),
 		brainKey:   brainKey,
-		specsPath:  envOr("SPECS_PATH", filepath.Join(home, "Code", "specs")),
-		codePath:   envOr("CODE_PATH", filepath.Join(home, "Code")),
+		codePath:   envOr("CODE_PATH", core.JoinPath(home, "Code")),
 		client:     &http.Client{Timeout: 30 * time.Second},
+		backoff:    make(map[string]time.Time),
+		failCount:  make(map[string]int),
 	}
 }
 
 // SetCompletionNotifier wires up the monitor for immediate push on agent completion.
 //
 // prep.SetCompletionNotifier(monitor)
 func (s *PrepSubsystem) SetCompletionNotifier(n CompletionNotifier) {
 	s.onComplete = n
 }
 
 func envOr(key, fallback string) string {
-	if v := os.Getenv(key); v != "" {
+	if v := core.Env(key); v != "" {
 		return v
 	}
 	return fallback
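The hunk above replaces the single `Poke()` callback with explicit lifecycle events. A minimal sketch of a notifier implementing the new interface, assuming nothing beyond the two methods shown in the diff (the `logNotifier` type and its message format are hypothetical):

```go
package main

import "fmt"

// CompletionNotifier matches the interface shape from the diff.
type CompletionNotifier interface {
	AgentStarted(agent, repo, workspace string)
	AgentCompleted(agent, repo, workspace, status string)
}

// logNotifier is a hypothetical implementation that records events in order.
type logNotifier struct {
	events []string
}

func (n *logNotifier) AgentStarted(agent, repo, workspace string) {
	n.events = append(n.events, fmt.Sprintf("started %s on %s (%s)", agent, repo, workspace))
}

func (n *logNotifier) AgentCompleted(agent, repo, workspace, status string) {
	n.events = append(n.events, fmt.Sprintf("completed %s on %s (%s): %s", agent, repo, workspace, status))
}

func main() {
	n := &logNotifier{}
	var notifier CompletionNotifier = n // compile-time interface check
	notifier.AgentStarted("claude", "go-io", "task-15")
	notifier.AgentCompleted("claude", "go-io", "task-15", "success")
	for _, e := range n.events {
		fmt.Println(e)
	}
}
```

Carrying the agent, repo, and workspace in each call lets the monitor route events without polling the filesystem, which is the stated motivation for the change.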
@@ -90,7 +108,7 @@ func (s *PrepSubsystem) Name() string { return "agentic" }
 func (s *PrepSubsystem) RegisterTools(server *mcp.Server) {
 	mcp.AddTool(server, &mcp.Tool{
 		Name:        "agentic_prep_workspace",
-		Description: "Prepare a sandboxed agent workspace with TODO.md, CLAUDE.md, CONTEXT.md, CONSUMERS.md, RECENT.md, and a git clone of the target repo in src/.",
+		Description: "Prepare an agent workspace: clone repo, create branch, build prompt with context.",
 	}, s.prepWorkspace)
 
 	s.registerDispatchTool(server)
@@ -103,6 +121,7 @@ func (s *PrepSubsystem) RegisterTools(server *mcp.Server) {
 	s.registerRemoteDispatchTool(server)
 	s.registerRemoteStatusTool(server)
+	s.registerReviewQueueTool(server)
 	s.registerShutdownTools(server)
 
 	mcp.AddTool(server, &mcp.Tool{
 		Name: "agentic_scan",
|
@ -119,33 +138,62 @@ func (s *PrepSubsystem) Shutdown(_ context.Context) error { return nil }
|
|||
// --- Input/Output types ---
|
||||
|
||||
// PrepInput is the input for agentic_prep_workspace.
|
||||
// One of Issue, PR, Branch, or Tag is required.
|
||||
//
|
||||
// input := agentic.PrepInput{Repo: "go-io", Issue: 15, Task: "Migrate to Core primitives"}
|
||||
type PrepInput struct {
|
||||
Repo string `json:"repo"` // e.g. "go-io"
|
||||
Repo string `json:"repo"` // required: e.g. "go-io"
|
||||
Org string `json:"org,omitempty"` // default "core"
|
||||
Issue int `json:"issue,omitempty"` // Forge issue number
|
||||
Task string `json:"task,omitempty"` // Task description (if no issue)
|
||||
Template string `json:"template,omitempty"` // Prompt template: conventions, security, coding (default: coding)
|
||||
PlanTemplate string `json:"plan_template,omitempty"` // Plan template slug: bug-fix, code-review, new-feature, refactor, feature-port
|
||||
Variables map[string]string `json:"variables,omitempty"` // Template variable substitution
|
||||
Persona string `json:"persona,omitempty"` // Persona slug: engineering/backend-architect, testing/api-tester, etc.
|
||||
Task string `json:"task,omitempty"` // task description
|
||||
Agent string `json:"agent,omitempty"` // agent type
|
||||
Issue int `json:"issue,omitempty"` // Forge issue → workspace: task-{num}/
|
||||
PR int `json:"pr,omitempty"` // PR number → workspace: pr-{num}/
|
||||
Branch string `json:"branch,omitempty"` // branch → workspace: {branch}/
|
||||
Tag string `json:"tag,omitempty"` // tag → workspace: {tag}/ (immutable)
|
||||
Template string `json:"template,omitempty"` // prompt template slug
|
||||
PlanTemplate string `json:"plan_template,omitempty"` // plan template slug
|
||||
Variables map[string]string `json:"variables,omitempty"` // template variable substitution
|
||||
Persona string `json:"persona,omitempty"` // persona slug
|
||||
DryRun bool `json:"dry_run,omitempty"` // preview without executing
|
||||
}
|
||||
|
||||
// PrepOutput is the output for agentic_prep_workspace.
|
||||
//
|
||||
// out := agentic.PrepOutput{Success: true, WorkspaceDir: ".core/workspace/core/go-io/task-15"}
|
||||
type PrepOutput struct {
|
||||
Success bool `json:"success"`
|
||||
WorkspaceDir string `json:"workspace_dir"`
|
||||
Branch string `json:"branch"`
|
||||
WikiPages int `json:"wiki_pages"`
|
||||
SpecFiles int `json:"spec_files"`
|
||||
Memories int `json:"memories"`
|
||||
Consumers int `json:"consumers"`
|
||||
ClaudeMd bool `json:"claude_md"`
|
||||
GitLog int `json:"git_log_entries"`
|
||||
Success bool `json:"success"`
|
||||
WorkspaceDir string `json:"workspace_dir"`
|
||||
RepoDir string `json:"repo_dir"`
|
||||
Branch string `json:"branch"`
|
||||
Prompt string `json:"prompt,omitempty"`
|
||||
Memories int `json:"memories"`
|
||||
Consumers int `json:"consumers"`
|
||||
Resumed bool `json:"resumed"`
|
||||
}
|
||||
|
||||
// workspaceDir resolves the workspace path from the input identifier.
|
||||
//
|
||||
// dir := workspaceDir("core", "go-io", PrepInput{Issue: 15})
|
||||
// // → ".core/workspace/core/go-io/task-15"
|
||||
func workspaceDir(org, repo string, input PrepInput) (string, error) {
|
||||
base := core.JoinPath(WorkspaceRoot(), org, repo)
|
||||
switch {
|
||||
case input.PR > 0:
|
||||
return core.JoinPath(base, core.Sprintf("pr-%d", input.PR)), nil
|
||||
case input.Issue > 0:
|
||||
return core.JoinPath(base, core.Sprintf("task-%d", input.Issue)), nil
|
||||
case input.Branch != "":
|
||||
return core.JoinPath(base, input.Branch), nil
|
||||
case input.Tag != "":
|
||||
return core.JoinPath(base, input.Tag), nil
|
||||
default:
|
||||
return "", core.E("workspaceDir", "one of issue, pr, branch, or tag is required", nil)
|
||||
}
|
||||
}
|
||||
|
||||
func (s *PrepSubsystem) prepWorkspace(ctx context.Context, _ *mcp.CallToolRequest, input PrepInput) (*mcp.CallToolResult, PrepOutput, error) {
|
||||
if input.Repo == "" {
|
||||
return nil, PrepOutput{}, coreerr.E("prepWorkspace", "repo is required", nil)
|
||||
return nil, PrepOutput{}, core.E("prepWorkspace", "repo is required", nil)
|
||||
}
|
||||
if input.Org == "" {
|
||||
input.Org = "core"
|
||||
|
|
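The new `workspaceDir` helper in the hunk above picks the directory name from whichever identifier is set, with PR taking precedence over issue, then branch, then tag. A standalone sketch of that precedence using the stdlib `path/filepath` in place of the project-internal `core.JoinPath`; the explicit `root` parameter and the trimmed `prepInput` struct are assumptions for illustration:

```go
package main

import (
	"errors"
	"fmt"
	"path/filepath"
)

// prepInput carries only the identifier fields workspaceDir inspects.
type prepInput struct {
	Issue  int
	PR     int
	Branch string
	Tag    string
}

// workspaceDir mirrors the precedence in the diff: PR, then issue,
// then branch, then tag; anything else is an error.
func workspaceDir(root, org, repo string, in prepInput) (string, error) {
	base := filepath.Join(root, org, repo)
	switch {
	case in.PR > 0:
		return filepath.Join(base, fmt.Sprintf("pr-%d", in.PR)), nil
	case in.Issue > 0:
		return filepath.Join(base, fmt.Sprintf("task-%d", in.Issue)), nil
	case in.Branch != "":
		return filepath.Join(base, in.Branch), nil
	case in.Tag != "":
		return filepath.Join(base, in.Tag), nil
	default:
		return "", errors.New("one of issue, pr, branch, or tag is required")
	}
}

func main() {
	dir, _ := workspaceDir(".core/workspace", "core", "go-io", prepInput{Issue: 15})
	fmt.Println(dir) // .core/workspace/core/go-io/task-15
}
```

Deriving the path from a stable identifier instead of a timestamp is what makes the resume check later in the diff possible: the same issue always maps to the same workspace directory.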
@ -154,199 +202,348 @@ func (s *PrepSubsystem) prepWorkspace(ctx context.Context, _ *mcp.CallToolReques
|
|||
input.Template = "coding"
|
||||
}
|
||||
|
||||
// Workspace root: .core/workspace/{repo}-{timestamp}/
|
||||
wsRoot := WorkspaceRoot()
|
||||
wsName := fmt.Sprintf("%s-%d", input.Repo, time.Now().UnixNano())
|
||||
wsDir := filepath.Join(wsRoot, wsName)
|
||||
|
||||
// Create workspace structure
|
||||
// kb/ and specs/ will be created inside src/ after clone
|
||||
|
||||
// Ensure workspace directory exists
|
||||
if err := os.MkdirAll(wsDir, 0755); err != nil {
|
||||
return nil, PrepOutput{}, coreerr.E("prep", "failed to create workspace dir", err)
|
||||
// Resolve workspace directory from identifier
|
||||
wsDir, err := workspaceDir(input.Org, input.Repo, input)
|
||||
if err != nil {
|
||||
return nil, PrepOutput{}, err
|
||||
}
|
||||
|
||||
out := PrepOutput{WorkspaceDir: wsDir}
|
||||
repoDir := core.JoinPath(wsDir, "repo")
|
||||
metaDir := core.JoinPath(wsDir, ".meta")
|
||||
out := PrepOutput{WorkspaceDir: wsDir, RepoDir: repoDir}
|
||||
|
||||
// Source repo path — sanitise to prevent path traversal
|
||||
repoName := filepath.Base(input.Repo) // strips ../ and absolute paths
|
||||
repoName := core.PathBase(input.Repo)
|
||||
if repoName == "." || repoName == ".." || repoName == "" {
|
||||
return nil, PrepOutput{}, coreerr.E("prep", "invalid repo name: "+input.Repo, nil)
|
||||
return nil, PrepOutput{}, core.E("prep", "invalid repo name: "+input.Repo, nil)
|
||||
}
|
||||
repoPath := filepath.Join(s.codePath, "core", repoName)
|
||||
repoPath := core.JoinPath(s.codePath, input.Org, repoName)
|
||||
|
||||
// 1. Clone repo into src/ and create feature branch
|
||||
srcDir := filepath.Join(wsDir, "src")
|
||||
cloneCmd := exec.CommandContext(ctx, "git", "clone", repoPath, srcDir)
|
||||
if err := cloneCmd.Run(); err != nil {
|
||||
return nil, PrepOutput{}, coreerr.E("prep", "git clone failed for "+input.Repo, err)
|
||||
// Ensure meta directory exists
|
||||
if r := fs.EnsureDir(metaDir); !r.OK {
|
||||
return nil, PrepOutput{}, core.E("prep", "failed to create meta dir", nil)
|
||||
}
|
||||
|
||||
// Create feature branch
|
||||
taskSlug := strings.Map(func(r rune) rune {
|
||||
if r >= 'a' && r <= 'z' || r >= '0' && r <= '9' || r == '-' {
|
||||
return r
|
||||
// Check for resume: if repo/ already has .git, skip clone
|
||||
resumed := fs.IsDir(core.JoinPath(repoDir, ".git"))
|
||||
out.Resumed = resumed
|
||||
|
||||
// Extract default workspace template (go.work etc.)
|
||||
lib.ExtractWorkspace("default", wsDir, &lib.WorkspaceData{
|
||||
Repo: input.Repo,
|
||||
Branch: "",
|
||||
Task: input.Task,
|
||||
Agent: input.Agent,
|
||||
})
|
||||
|
||||
if !resumed {
|
||||
// Clone repo into repo/
|
||||
cloneCmd := exec.CommandContext(ctx, "git", "clone", repoPath, repoDir)
|
||||
if cloneErr := cloneCmd.Run(); cloneErr != nil {
|
||||
return nil, PrepOutput{}, core.E("prep", "git clone failed for "+input.Repo, cloneErr)
|
||||
}
|
||||
if r >= 'A' && r <= 'Z' {
|
||||
return r + 32 // lowercase
|
||||
|
||||
// Create feature branch
|
||||
taskSlug := sanitiseBranchSlug(input.Task, 40)
|
||||
if taskSlug == "" {
|
||||
if input.Issue > 0 {
|
||||
taskSlug = core.Sprintf("issue-%d", input.Issue)
|
||||
} else if input.PR > 0 {
|
||||
taskSlug = core.Sprintf("pr-%d", input.PR)
|
||||
} else {
|
||||
taskSlug = core.Sprintf("work-%d", time.Now().Unix())
|
||||
}
|
||||
}
|
||||
return '-'
|
||||
}, input.Task)
|
||||
if len(taskSlug) > 40 {
|
||||
taskSlug = taskSlug[:40]
|
||||
}
|
||||
taskSlug = strings.Trim(taskSlug, "-")
|
||||
if taskSlug == "" {
|
||||
// Fallback for issue-only dispatches with no task text
|
||||
taskSlug = fmt.Sprintf("issue-%d", input.Issue)
|
||||
if input.Issue == 0 {
|
||||
taskSlug = fmt.Sprintf("work-%d", time.Now().Unix())
|
||||
branchName := core.Sprintf("agent/%s", taskSlug)
|
||||
|
||||
branchCmd := exec.CommandContext(ctx, "git", "checkout", "-b", branchName)
|
||||
branchCmd.Dir = repoDir
|
||||
if branchErr := branchCmd.Run(); branchErr != nil {
|
||||
return nil, PrepOutput{}, core.E("prep.branch", core.Sprintf("failed to create branch %q", branchName), branchErr)
|
||||
}
|
||||
out.Branch = branchName
|
||||
} else {
|
||||
// Resume: read branch from existing checkout
|
||||
branchCmd := exec.CommandContext(ctx, "git", "rev-parse", "--abbrev-ref", "HEAD")
|
||||
branchCmd.Dir = repoDir
|
||||
if branchOut, branchErr := branchCmd.Output(); branchErr == nil {
|
||||
out.Branch = core.Trim(string(branchOut))
|
||||
}
|
||||
}
|
||||
branchName := fmt.Sprintf("agent/%s", taskSlug)
|
||||
|
||||
branchCmd := exec.CommandContext(ctx, "git", "checkout", "-b", branchName)
|
||||
branchCmd.Dir = srcDir
|
||||
if err := branchCmd.Run(); err != nil {
|
||||
return nil, PrepOutput{}, coreerr.E("prep.branch", fmt.Sprintf("failed to create branch %q", branchName), err)
|
||||
}
|
||||
out.Branch = branchName
|
||||
|
||||
// Create context dirs inside src/
|
||||
coreio.Local.EnsureDir(filepath.Join(srcDir, "kb"))
|
||||
coreio.Local.EnsureDir(filepath.Join(srcDir, "specs"))
|
||||
|
||||
// Remote stays as local clone origin — agent cannot push to forge.
|
||||
// Reviewer pulls changes from workspace and pushes after verification.
|
||||
|
||||
// 2. Extract workspace template
|
||||
wsTmpl := "default"
|
||||
if input.Template == "security" {
|
||||
wsTmpl = "security"
|
||||
} else if input.Template == "verify" || input.Template == "conventions" {
|
||||
wsTmpl = "review"
|
||||
}
|
||||
|
||||
promptContent, _ := lib.Prompt(input.Template)
|
||||
personaContent := ""
|
||||
if input.Persona != "" {
|
||||
personaContent, _ = lib.Persona(input.Persona)
|
||||
}
|
||||
flowContent, _ := lib.Flow(detectLanguage(repoPath))
|
||||
|
||||
wsData := &lib.WorkspaceData{
|
||||
Repo: input.Repo,
|
||||
Branch: branchName,
|
||||
Task: input.Task,
|
||||
Agent: "agent",
|
||||
Language: detectLanguage(repoPath),
|
||||
Prompt: promptContent,
|
||||
Persona: personaContent,
|
||||
Flow: flowContent,
|
||||
BuildCmd: detectBuildCmd(repoPath),
|
||||
TestCmd: detectTestCmd(repoPath),
|
||||
}
|
||||
|
||||
lib.ExtractWorkspace(wsTmpl, srcDir, wsData)
|
||||
out.ClaudeMd = true
|
||||
|
||||
// Copy repo's own CLAUDE.md over template if it exists
|
||||
claudeMdPath := filepath.Join(repoPath, "CLAUDE.md")
|
||||
if data, err := coreio.Local.Read(claudeMdPath); err == nil {
|
||||
coreio.Local.Write(filepath.Join(srcDir, "CLAUDE.md"), data)
|
||||
}
|
||||
// Copy GEMINI.md from core/agent (ethics framework for all agents)
|
||||
agentGeminiMd := filepath.Join(s.codePath, "core", "agent", "GEMINI.md")
|
||||
if data, err := coreio.Local.Read(agentGeminiMd); err == nil {
|
||||
coreio.Local.Write(filepath.Join(srcDir, "GEMINI.md"), data)
|
||||
}
|
||||
|
||||
// 3. Generate TODO.md from issue (overrides template)
|
||||
if input.Issue > 0 {
|
||||
s.generateTodo(ctx, input.Org, input.Repo, input.Issue, wsDir)
|
||||
}
|
||||
|
||||
// 4. Generate CONTEXT.md from OpenBrain
|
||||
out.Memories = s.generateContext(ctx, input.Repo, wsDir)
|
||||
|
||||
// 5. Generate CONSUMERS.md
|
||||
out.Consumers = s.findConsumers(input.Repo, wsDir)
|
||||
|
||||
// 6. Generate RECENT.md
|
||||
out.GitLog = s.gitLog(repoPath, wsDir)
|
||||
|
||||
// 7. Pull wiki pages into kb/
|
||||
out.WikiPages = s.pullWiki(ctx, input.Org, input.Repo, wsDir)
|
||||
|
||||
// 8. Copy spec files into specs/
|
||||
out.SpecFiles = s.copySpecs(wsDir)
|
||||
|
||||
// 9. Write PLAN.md from template (if specified)
|
||||
if input.PlanTemplate != "" {
|
||||
s.writePlanFromTemplate(input.PlanTemplate, input.Variables, input.Task, wsDir)
|
||||
}
|
||||
|
||||
// 10. Write prompt template
|
||||
s.writePromptTemplate(input.Template, wsDir)
|
||||
// Build the rich prompt with all context
|
||||
out.Prompt, out.Memories, out.Consumers = s.buildPrompt(ctx, input, out.Branch, repoPath)
|
||||
|
||||
out.Success = true
|
||||
return nil, out, nil
|
||||
}
|
||||
|
||||
// --- Prompt templates ---
|
||||
// --- Public API for CLI testing ---
|
||||
|
||||
func (s *PrepSubsystem) writePromptTemplate(template, wsDir string) {
|
||||
prompt, err := lib.Template(template)
|
||||
if err != nil {
|
||||
// Fallback to default template
|
||||
prompt, _ = lib.Template("default")
|
||||
if prompt == "" {
|
||||
prompt = "Read TODO.md and complete the task. Work in src/.\n"
|
||||
// TestPrepWorkspace exposes prepWorkspace for CLI testing.
|
||||
//
|
||||
// _, out, err := prep.TestPrepWorkspace(ctx, input)
|
||||
func (s *PrepSubsystem) TestPrepWorkspace(ctx context.Context, input PrepInput) (*mcp.CallToolResult, PrepOutput, error) {
|
||||
return s.prepWorkspace(ctx, nil, input)
|
||||
}
|
||||
|
||||
// TestBuildPrompt exposes buildPrompt for CLI testing.
|
||||
//
|
||||
// prompt, memories, consumers := prep.TestBuildPrompt(ctx, input, "dev", repoPath)
|
||||
func (s *PrepSubsystem) TestBuildPrompt(ctx context.Context, input PrepInput, branch, repoPath string) (string, int, int) {
|
||||
return s.buildPrompt(ctx, input, branch, repoPath)
|
||||
}
|
||||
|
||||
// --- Prompt Building ---
|
||||
|
||||
// buildPrompt assembles all context into a single prompt string.
|
||||
// Context is gathered from: persona, flow, issue, brain, consumers, git log, wiki, plan.
|
||||
func (s *PrepSubsystem) buildPrompt(ctx context.Context, input PrepInput, branch, repoPath string) (string, int, int) {
|
||||
b := core.NewBuilder()
|
||||
memories := 0
|
||||
consumers := 0
|
||||
|
||||
// Task
|
||||
b.WriteString("TASK: ")
|
||||
b.WriteString(input.Task)
|
||||
b.WriteString("\n\n")
|
||||
|
||||
// Repo info
|
||||
b.WriteString(core.Sprintf("REPO: %s/%s on branch %s\n", input.Org, input.Repo, branch))
|
||||
b.WriteString(core.Sprintf("LANGUAGE: %s\n", detectLanguage(repoPath)))
|
||||
b.WriteString(core.Sprintf("BUILD: %s\n", detectBuildCmd(repoPath)))
|
||||
b.WriteString(core.Sprintf("TEST: %s\n\n", detectTestCmd(repoPath)))
|
||||
|
||||
// Persona
|
||||
if input.Persona != "" {
|
||||
if r := lib.Persona(input.Persona); r.OK {
|
||||
b.WriteString("PERSONA:\n")
|
||||
b.WriteString(r.Value.(string))
|
||||
b.WriteString("\n\n")
|
||||
}
|
||||
}
|
||||
|
||||
coreio.Local.Write(filepath.Join(wsDir, "src", "PROMPT.md"), prompt)
|
||||
// Flow
|
||||
if r := lib.Flow(detectLanguage(repoPath)); r.OK {
|
||||
b.WriteString("WORKFLOW:\n")
|
||||
b.WriteString(r.Value.(string))
|
||||
b.WriteString("\n\n")
|
||||
}
|
||||
|
||||
// Issue body
|
||||
if input.Issue > 0 {
|
||||
if body := s.getIssueBody(ctx, input.Org, input.Repo, input.Issue); body != "" {
|
||||
b.WriteString("ISSUE:\n")
|
||||
b.WriteString(body)
|
||||
b.WriteString("\n\n")
|
||||
}
|
||||
}
|
||||
|
||||
// Brain recall
|
||||
if recall, count := s.brainRecall(ctx, input.Repo); recall != "" {
|
||||
b.WriteString("CONTEXT (from OpenBrain):\n")
|
||||
b.WriteString(recall)
|
||||
b.WriteString("\n\n")
|
||||
memories = count
|
||||
}
|
||||
|
||||
// Consumers
|
||||
if list, count := s.findConsumersList(input.Repo); list != "" {
|
||||
b.WriteString("CONSUMERS (modules that import this repo):\n")
|
||||
b.WriteString(list)
|
||||
b.WriteString("\n\n")
|
||||
consumers = count
|
||||
}
|
||||
|
||||
// Recent git log
|
||||
if log := s.getGitLog(repoPath); log != "" {
|
||||
b.WriteString("RECENT CHANGES:\n```\n")
|
||||
b.WriteString(log)
|
||||
b.WriteString("```\n\n")
|
||||
}
|
||||
|
||||
// Plan template
|
||||
if input.PlanTemplate != "" {
|
||||
if plan := s.renderPlan(input.PlanTemplate, input.Variables, input.Task); plan != "" {
|
||||
b.WriteString("PLAN:\n")
|
||||
b.WriteString(plan)
|
||||
b.WriteString("\n\n")
|
||||
}
|
||||
}
|
||||
|
||||
// Constraints
|
||||
b.WriteString("CONSTRAINTS:\n")
|
||||
b.WriteString("- Read CODEX.md for coding conventions (if it exists)\n")
|
||||
b.WriteString("- Read CLAUDE.md for project-specific instructions (if it exists)\n")
|
||||
b.WriteString("- Commit with conventional commit format: type(scope): description\n")
|
||||
b.WriteString("- Co-Authored-By: Virgil <virgil@lethean.io>\n")
|
||||
b.WriteString("- Run build and tests before committing\n")
|
||||
|
||||
return b.String(), memories, consumers
|
||||
}
|
||||
|
||||
// --- Plan template rendering ---
|
||||
// --- Context Helpers (return strings, not write files) ---
|
||||
|
||||
// writePlanFromTemplate loads a YAML plan template, substitutes variables,
|
||||
// and writes PLAN.md into the workspace src/ directory.
|
||||
func (s *PrepSubsystem) writePlanFromTemplate(templateSlug string, variables map[string]string, task string, wsDir string) {
|
||||
// Load template from embedded prompts package
|
||||
data, err := lib.Template(templateSlug)
|
||||
func (s *PrepSubsystem) getIssueBody(ctx context.Context, org, repo string, issue int) string {
|
||||
idx := core.Sprintf("%d", issue)
|
||||
iss, err := s.forge.Issues.Get(ctx, forge.Params{"owner": org, "repo": repo, "index": idx})
|
||||
if err != nil {
|
||||
return // Template not found, skip silently
|
||||
return ""
|
||||
}
|
||||
return core.Sprintf("# %s\n\n%s", iss.Title, iss.Body)
|
||||
}
|
||||
|
||||
func (s *PrepSubsystem) brainRecall(ctx context.Context, repo string) (string, int) {
	if s.brainKey == "" {
		return "", 0
	}

	body, _ := json.Marshal(map[string]any{
		"query":    "architecture conventions key interfaces for " + repo,
		"top_k":    10,
		"project":  repo,
		"agent_id": "cladius",
	})

	req, _ := http.NewRequestWithContext(ctx, "POST", s.brainURL+"/v1/brain/recall", core.NewReader(string(body)))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Accept", "application/json")
	req.Header.Set("Authorization", "Bearer "+s.brainKey)

	resp, err := s.client.Do(req)
	if err != nil || resp.StatusCode != 200 {
		if resp != nil {
			resp.Body.Close()
		}
		return "", 0
	}
	defer resp.Body.Close()

	respData, _ := goio.ReadAll(resp.Body)
	var result struct {
		Memories []map[string]any `json:"memories"`
	}
	json.Unmarshal(respData, &result)

	if len(result.Memories) == 0 {
		return "", 0
	}

	b := core.NewBuilder()
	for i, mem := range result.Memories {
		memType, _ := mem["type"].(string)
		memContent, _ := mem["content"].(string)
		memProject, _ := mem["project"].(string)
		b.WriteString(core.Sprintf("%d. [%s] %s: %s\n", i+1, memType, memProject, memContent))
	}

	return b.String(), len(result.Memories)
}

func (s *PrepSubsystem) findConsumersList(repo string) (string, int) {
	goWorkPath := core.JoinPath(s.codePath, "go.work")
	modulePath := "forge.lthn.ai/core/" + repo

	r := fs.Read(goWorkPath)
	if !r.OK {
		return "", 0
	}
	workData := r.Value.(string)

	var consumers []string
	for _, line := range core.Split(workData, "\n") {
		line = core.Trim(line)
		if !core.HasPrefix(line, "./") {
			continue
		}
		dir := core.JoinPath(s.codePath, core.TrimPrefix(line, "./"))
		goMod := core.JoinPath(dir, "go.mod")
		mr := fs.Read(goMod)
		if !mr.OK {
			continue
		}
		modData := mr.Value.(string)
		if core.Contains(modData, modulePath) && !core.HasPrefix(modData, "module "+modulePath) {
			consumers = append(consumers, core.PathBase(dir))
		}
	}

	if len(consumers) == 0 {
		return "", 0
	}

	b := core.NewBuilder()
	for _, c := range consumers {
		b.WriteString("- " + c + "\n")
	}
	b.WriteString(core.Sprintf("Breaking change risk: %d consumers.\n", len(consumers)))

	return b.String(), len(consumers)
}

func (s *PrepSubsystem) getGitLog(repoPath string) string {
	cmd := exec.Command("git", "log", "--oneline", "-20")
	cmd.Dir = repoPath
	output, err := cmd.Output()
	if err != nil {
		return ""
	}
	return core.Trim(string(output))
}

func (s *PrepSubsystem) pullWikiContent(ctx context.Context, org, repo string) string {
	pages, err := s.forge.Wiki.ListPages(ctx, org, repo)
	if err != nil || len(pages) == 0 {
		return ""
	}

	b := core.NewBuilder()
	for _, meta := range pages {
		name := meta.SubURL
		if name == "" {
			name = meta.Title
		}
		page, pErr := s.forge.Wiki.GetPage(ctx, org, repo, name)
		if pErr != nil || page.ContentBase64 == "" {
			continue
		}
		content, _ := base64.StdEncoding.DecodeString(page.ContentBase64)
		b.WriteString("### " + meta.Title + "\n\n")
		b.WriteString(string(content))
		b.WriteString("\n\n")
	}
	return b.String()
}

func (s *PrepSubsystem) renderPlan(templateSlug string, variables map[string]string, task string) string {
	r := lib.Template(templateSlug)
	if !r.OK {
		return ""
	}

	content := r.Value.(string)
	for key, value := range variables {
		content = core.Replace(content, "{{"+key+"}}", value)
		content = core.Replace(content, "{{ "+key+" }}", value)
	}

	// Parse the YAML to render as markdown
	var tmpl struct {
		Name        string   `yaml:"name"`
		Description string   `yaml:"description"`
		Guidelines  []string `yaml:"guidelines"`
		Phases      []struct {
			Name        string `yaml:"name"`
			Description string `yaml:"description"`
			Tasks       []any  `yaml:"tasks"`
		} `yaml:"phases"`
	}

	if err := yaml.Unmarshal([]byte(content), &tmpl); err != nil {
		return ""
	}

	// Render as PLAN.md
	plan := core.NewBuilder()
	plan.WriteString("# " + tmpl.Name + "\n\n")
	if task != "" {
		plan.WriteString("**Task:** " + task + "\n\n")
	}

@@ -363,254 +560,28 @@ func (s *PrepSubsystem) writePlanFromTemplate(templateSlug string, variables map
	}

	for i, phase := range tmpl.Phases {
		plan.WriteString(core.Sprintf("## Phase %d: %s\n\n", i+1, phase.Name))
		if phase.Description != "" {
			plan.WriteString(phase.Description + "\n\n")
		}
		for _, t := range phase.Tasks {
			switch v := t.(type) {
			case string:
				plan.WriteString("- [ ] " + v + "\n")
			case map[string]any:
				if name, ok := v["name"].(string); ok {
					plan.WriteString("- [ ] " + name + "\n")
				}
			}
		}
		plan.WriteString("\n")
	}

	return plan.String()
}

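The task rendering above switches on the YAML node type, since a task may be a bare string or a mapping with a `name` key. A stdlib-only sketch of that type switch (the surrounding `core.NewBuilder` and template loading are project-specific and omitted here):

```go
package main

import (
	"fmt"
	"strings"
)

// renderTasks mirrors renderPlan's task loop: plain strings and
// {name: ...} mappings both become markdown checkboxes; other shapes
// are silently skipped.
func renderTasks(tasks []any) string {
	var b strings.Builder
	for _, t := range tasks {
		switch v := t.(type) {
		case string:
			b.WriteString("- [ ] " + v + "\n")
		case map[string]any:
			if name, ok := v["name"].(string); ok {
				b.WriteString("- [ ] " + name + "\n")
			}
		}
	}
	return b.String()
}

func main() {
	tasks := []any{"write code", map[string]any{"name": "run tests"}}
	fmt.Print(renderTasks(tasks))
	// - [ ] write code
	// - [ ] run tests
}
```

This works with `gopkg.in/yaml.v3` because v3 decodes mappings into `map[string]any` when the target type is `any`.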
// --- Detection helpers (unchanged) ---

func (s *PrepSubsystem) pullWiki(ctx context.Context, org, repo, wsDir string) int {
	if s.forgeToken == "" {
		return 0
	}

	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/wiki/pages", s.forgeURL, org, repo)
	req, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
	req.Header.Set("Authorization", "token "+s.forgeToken)

	resp, err := s.client.Do(req)
	if err != nil {
		return 0
	}
	defer resp.Body.Close()

	if resp.StatusCode != 200 {
		return 0
	}

	var pages []struct {
		Title  string `json:"title"`
		SubURL string `json:"sub_url"`
	}
	json.NewDecoder(resp.Body).Decode(&pages)

	count := 0
	for _, page := range pages {
		subURL := page.SubURL
		if subURL == "" {
			subURL = page.Title
		}

		pageURL := fmt.Sprintf("%s/api/v1/repos/%s/%s/wiki/page/%s", s.forgeURL, org, repo, subURL)
		pageReq, _ := http.NewRequestWithContext(ctx, "GET", pageURL, nil)
		pageReq.Header.Set("Authorization", "token "+s.forgeToken)

		pageResp, err := s.client.Do(pageReq)
		if err != nil {
			continue
		}
		if pageResp.StatusCode != 200 {
			pageResp.Body.Close()
			continue
		}

		var pageData struct {
			ContentBase64 string `json:"content_base64"`
		}
		json.NewDecoder(pageResp.Body).Decode(&pageData)
		pageResp.Body.Close()

		if pageData.ContentBase64 == "" {
			continue
		}

		content, _ := base64.StdEncoding.DecodeString(pageData.ContentBase64)
		filename := strings.Map(func(r rune) rune {
			if r >= 'a' && r <= 'z' || r >= 'A' && r <= 'Z' || r >= '0' && r <= '9' || r == '-' || r == '_' || r == '.' {
				return r
			}
			return '-'
		}, page.Title) + ".md"

		coreio.Local.Write(filepath.Join(wsDir, "src", "kb", filename), string(content))
		count++
	}

	return count
}

func (s *PrepSubsystem) copySpecs(wsDir string) int {
	specFiles := []string{"AGENT_CONTEXT.md", "TASK_PROTOCOL.md"}
	count := 0

	for _, file := range specFiles {
		src := filepath.Join(s.specsPath, file)
		if data, err := coreio.Local.Read(src); err == nil {
			coreio.Local.Write(filepath.Join(wsDir, "src", "specs", file), data)
			count++
		}
	}

	return count
}

func (s *PrepSubsystem) generateContext(ctx context.Context, repo, wsDir string) int {
	if s.brainKey == "" {
		return 0
	}

	body, _ := json.Marshal(map[string]any{
		"query":    "architecture conventions key interfaces for " + repo,
		"top_k":    10,
		"project":  repo,
		"agent_id": "cladius",
	})

	req, _ := http.NewRequestWithContext(ctx, "POST", s.brainURL+"/v1/brain/recall", strings.NewReader(string(body)))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Accept", "application/json")
	req.Header.Set("Authorization", "Bearer "+s.brainKey)

	resp, err := s.client.Do(req)
	if err != nil {
		return 0
	}
	defer resp.Body.Close()

	if resp.StatusCode != 200 {
		return 0
	}

	respData, _ := io.ReadAll(resp.Body)
	var result struct {
		Memories []map[string]any `json:"memories"`
	}
	json.Unmarshal(respData, &result)

	var content strings.Builder
	content.WriteString("# Context — " + repo + "\n\n")
	content.WriteString("> Relevant knowledge from OpenBrain.\n\n")

	for i, mem := range result.Memories {
		memType, _ := mem["type"].(string)
		memContent, _ := mem["content"].(string)
		memProject, _ := mem["project"].(string)
		score, _ := mem["score"].(float64)
		content.WriteString(fmt.Sprintf("### %d. %s [%s] (score: %.3f)\n\n%s\n\n", i+1, memProject, memType, score, memContent))
	}

	coreio.Local.Write(filepath.Join(wsDir, "src", "CONTEXT.md"), content.String())
	return len(result.Memories)
}

func (s *PrepSubsystem) findConsumers(repo, wsDir string) int {
	goWorkPath := filepath.Join(s.codePath, "go.work")
	modulePath := "forge.lthn.ai/core/" + repo

	workData, err := coreio.Local.Read(goWorkPath)
	if err != nil {
		return 0
	}

	var consumers []string
	for _, line := range strings.Split(workData, "\n") {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, "./") {
			continue
		}
		dir := filepath.Join(s.codePath, strings.TrimPrefix(line, "./"))
		goMod := filepath.Join(dir, "go.mod")
		modData, err := coreio.Local.Read(goMod)
		if err != nil {
			continue
		}
		if strings.Contains(modData, modulePath) && !strings.HasPrefix(modData, "module "+modulePath) {
			consumers = append(consumers, filepath.Base(dir))
		}
	}

	if len(consumers) > 0 {
		content := "# Consumers of " + repo + "\n\n"
		content += "These modules import `" + modulePath + "`:\n\n"
		for _, c := range consumers {
			content += "- " + c + "\n"
		}
		content += fmt.Sprintf("\n**Breaking change risk: %d consumers.**\n", len(consumers))
		coreio.Local.Write(filepath.Join(wsDir, "src", "CONSUMERS.md"), content)
	}

	return len(consumers)
}

func (s *PrepSubsystem) gitLog(repoPath, wsDir string) int {
	cmd := exec.Command("git", "log", "--oneline", "-20")
	cmd.Dir = repoPath
	output, err := cmd.Output()
	if err != nil {
		return 0
	}

	lines := strings.Split(strings.TrimSpace(string(output)), "\n")
	if len(lines) > 0 && lines[0] != "" {
		content := "# Recent Changes\n\n```\n" + string(output) + "```\n"
		coreio.Local.Write(filepath.Join(wsDir, "src", "RECENT.md"), content)
	}

	return len(lines)
}

func (s *PrepSubsystem) generateTodo(ctx context.Context, org, repo string, issue int, wsDir string) {
	if s.forgeToken == "" {
		return
	}

	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/issues/%d", s.forgeURL, org, repo, issue)
	req, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
	req.Header.Set("Authorization", "token "+s.forgeToken)

	resp, err := s.client.Do(req)
	if err != nil {
		return
	}
	defer resp.Body.Close()

	if resp.StatusCode != 200 {
		return
	}

	var issueData struct {
		Title string `json:"title"`
		Body  string `json:"body"`
	}
	json.NewDecoder(resp.Body).Decode(&issueData)

	content := fmt.Sprintf("# TASK: %s\n\n", issueData.Title)
	content += "**Status:** ready\n"
	content += fmt.Sprintf("**Source:** %s/%s/%s/issues/%d\n", s.forgeURL, org, repo, issue)
	content += fmt.Sprintf("**Repo:** %s/%s\n\n---\n\n", org, repo)
	content += "## Objective\n\n" + issueData.Body + "\n"

	coreio.Local.Write(filepath.Join(wsDir, "src", "TODO.md"), content)
}

// detectLanguage guesses the primary language from repo contents.
// Checks in priority order (Go first) to avoid nondeterministic results.
func detectLanguage(repoPath string) string {
	checks := []struct {
		file string

@@ -625,7 +596,7 @@ func detectLanguage(repoPath string) string {
		{"Dockerfile", "docker"},
	}
	for _, c := range checks {
		if fs.IsFile(core.JoinPath(repoPath, c.file)) {
			return c.lang
		}
	}

@@ -3,12 +3,10 @@
package agentic

import (
	"path/filepath"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestEnvOr_Good_EnvSet(t *testing.T) {

@@ -28,43 +26,43 @@ func TestEnvOr_Good_UnsetUsesFallback(t *testing.T) {

func TestDetectLanguage_Good_Go(t *testing.T) {
	dir := t.TempDir()
	require.True(t, fs.Write(filepath.Join(dir, "go.mod"), "module test").OK)
	assert.Equal(t, "go", detectLanguage(dir))
}

func TestDetectLanguage_Good_PHP(t *testing.T) {
	dir := t.TempDir()
	require.True(t, fs.Write(filepath.Join(dir, "composer.json"), "{}").OK)
	assert.Equal(t, "php", detectLanguage(dir))
}

func TestDetectLanguage_Good_TypeScript(t *testing.T) {
	dir := t.TempDir()
	require.True(t, fs.Write(filepath.Join(dir, "package.json"), "{}").OK)
	assert.Equal(t, "ts", detectLanguage(dir))
}

func TestDetectLanguage_Good_Rust(t *testing.T) {
	dir := t.TempDir()
	require.True(t, fs.Write(filepath.Join(dir, "Cargo.toml"), "[package]").OK)
	assert.Equal(t, "rust", detectLanguage(dir))
}

func TestDetectLanguage_Good_Python(t *testing.T) {
	dir := t.TempDir()
	require.True(t, fs.Write(filepath.Join(dir, "requirements.txt"), "flask").OK)
	assert.Equal(t, "py", detectLanguage(dir))
}

func TestDetectLanguage_Good_Cpp(t *testing.T) {
	dir := t.TempDir()
	require.True(t, fs.Write(filepath.Join(dir, "CMakeLists.txt"), "cmake_minimum_required").OK)
	assert.Equal(t, "cpp", detectLanguage(dir))
}

func TestDetectLanguage_Good_Docker(t *testing.T) {
	dir := t.TempDir()
	require.True(t, fs.Write(filepath.Join(dir, "Dockerfile"), "FROM alpine").OK)
	assert.Equal(t, "docker", detectLanguage(dir))
}

@@ -90,7 +88,7 @@ func TestDetectBuildCmd_Good(t *testing.T) {
	for _, tt := range tests {
		t.Run(tt.file, func(t *testing.T) {
			dir := t.TempDir()
			require.True(t, fs.Write(filepath.Join(dir, tt.file), tt.content).OK)
			assert.Equal(t, tt.expected, detectBuildCmd(dir))
		})
	}
@@ -118,7 +116,7 @@ func TestDetectTestCmd_Good(t *testing.T) {
	for _, tt := range tests {
		t.Run(tt.file, func(t *testing.T) {
			dir := t.TempDir()
			require.True(t, fs.Write(filepath.Join(dir, tt.file), tt.content).OK)
			assert.Equal(t, tt.expected, detectTestCmd(dir))
		})
	}

@@ -129,6 +127,19 @@ func TestDetectTestCmd_Good_DefaultsToGo(t *testing.T) {
	assert.Equal(t, "go test ./...", detectTestCmd(dir))
}

func TestSanitiseBranchSlug_Good(t *testing.T) {
	assert.Equal(t, "fix-login-bug", sanitiseBranchSlug("Fix login bug!", 40))
	assert.Equal(t, "trim-me", sanitiseBranchSlug("---Trim Me---", 40))
}

func TestSanitiseBranchSlug_Good_Truncates(t *testing.T) {
	assert.Equal(t, "feature", sanitiseBranchSlug("feature--extra", 7))
}

func TestSanitiseFilename_Good(t *testing.T) {
	assert.Equal(t, "Core---Agent-Notes", sanitiseFilename("Core / Agent:Notes"))
}

func TestNewPrep_Good_Defaults(t *testing.T) {
	t.Setenv("FORGE_TOKEN", "")
	t.Setenv("GITEA_TOKEN", "")
@@ -157,7 +168,6 @@ func TestNewPrep_Good_EnvOverrides(t *testing.T) {
	assert.Equal(t, "test-token", s.forgeToken)
	assert.Equal(t, "https://custom-brain.example.com", s.brainURL)
	assert.Equal(t, "brain-key-123", s.brainKey)
	assert.Equal(t, "/custom/specs", s.specsPath)
	assert.Equal(t, "/custom/code", s.codePath)
}

@@ -184,9 +194,14 @@ func TestSetCompletionNotifier_Good(t *testing.T) {
}

type mockNotifier struct {
	started   bool
	completed bool
}

func (m *mockNotifier) AgentStarted(agent, repo, workspace string) {
	m.started = true
}

func (m *mockNotifier) AgentCompleted(agent, repo, workspace, status string) {
	m.completed = true
}

@@ -3,18 +3,17 @@
package agentic

import (
	"strconv"
	"syscall"
	"time"

	core "dappco.re/go/core"
	"gopkg.in/yaml.v3"
)

// DispatchConfig controls agent dispatch behaviour.
//
// cfg := agentic.DispatchConfig{DefaultAgent: "claude", DefaultTemplate: "coding"}
type DispatchConfig struct {
	DefaultAgent    string `yaml:"default_agent"`
	DefaultTemplate string `yaml:"default_template"`

@@ -22,37 +21,41 @@ type DispatchConfig struct {
}

// RateConfig controls pacing between task dispatches.
//
// rate := agentic.RateConfig{ResetUTC: "06:00", SustainedDelay: 120, BurstWindow: 2, BurstDelay: 15}
type RateConfig struct {
	ResetUTC       string `yaml:"reset_utc"`       // Daily quota reset time (UTC), e.g. "06:00"
	DailyLimit     int    `yaml:"daily_limit"`     // Max requests per day (0 = unknown)
	MinDelay       int    `yaml:"min_delay"`       // Minimum seconds between task starts
	SustainedDelay int    `yaml:"sustained_delay"` // Delay when pacing for full-day use
	BurstWindow    int    `yaml:"burst_window"`    // Hours before reset where burst kicks in
	BurstDelay     int    `yaml:"burst_delay"`     // Delay during burst window
}

// AgentsConfig is the root of config/agents.yaml.
//
// cfg := agentic.AgentsConfig{Version: 1, Dispatch: agentic.DispatchConfig{DefaultAgent: "claude"}}
type AgentsConfig struct {
	Version     int                   `yaml:"version"`
	Dispatch    DispatchConfig        `yaml:"dispatch"`
	Concurrency map[string]int        `yaml:"concurrency"`
	Rates       map[string]RateConfig `yaml:"rates"`
}

// loadAgentsConfig reads config/agents.yaml from the code path.
func (s *PrepSubsystem) loadAgentsConfig() *AgentsConfig {
	paths := []string{
		core.JoinPath(CoreRoot(), "agents.yaml"),
		core.JoinPath(s.codePath, "core", "agent", "config", "agents.yaml"),
	}

	for _, path := range paths {
		r := fs.Read(path)
		if !r.OK {
			continue
		}
		var cfg AgentsConfig
		if err := yaml.Unmarshal([]byte(r.Value.(string)), &cfg); err != nil {
			continue
		}
		return &cfg

@@ -75,10 +78,7 @@ func (s *PrepSubsystem) loadAgentsConfig() *AgentsConfig {
func (s *PrepSubsystem) delayForAgent(agent string) time.Duration {
	cfg := s.loadAgentsConfig()
	// Strip variant suffix (claude:opus → claude) for config lookup
	base := baseAgent(agent)
	rate, ok := cfg.Rates[base]
	if !ok || rate.SustainedDelay == 0 {
		return 0
@@ -86,7 +86,15 @@ func (s *PrepSubsystem) delayForAgent(agent string) time.Duration {

	// Parse reset time
	resetHour, resetMin := 6, 0
	parts := core.Split(rate.ResetUTC, ":")
	if len(parts) >= 2 {
		if hour, err := strconv.Atoi(core.Trim(parts[0])); err == nil {
			resetHour = hour
		}
		if min, err := strconv.Atoi(core.Trim(parts[1])); err == nil {
			resetMin = min
		}
	}

	now := time.Now().UTC()
	resetToday := time.Date(now.Year(), now.Month(), now.Day(), resetHour, resetMin, 0, 0, time.UTC)
@@ -107,21 +115,18 @@ func (s *PrepSubsystem) delayForAgent(agent string) time.Duration {
}

// countRunningByAgent counts running workspaces for a specific agent type.
// Scans both old (*/status.json) and new (*/*/*/status.json) workspace layouts.
func (s *PrepSubsystem) countRunningByAgent(agent string) int {
	wsRoot := WorkspaceRoot()

	// Scan both old and new workspace layouts
	old := core.PathGlob(core.JoinPath(wsRoot, "*", "status.json"))
	deep := core.PathGlob(core.JoinPath(wsRoot, "*", "*", "*", "status.json"))
	paths := append(old, deep...)

	count := 0
	for _, statusPath := range paths {
		st, err := readStatus(core.PathDir(statusPath))
		if err != nil || st.Status != "running" {
			continue
		}
@@ -139,7 +144,11 @@ func (s *PrepSubsystem) countRunningByAgent(agent string) int {

// baseAgent strips the model variant (gemini:flash → gemini).
func baseAgent(agent string) string {
	// codex:gpt-5.3-codex-spark → codex-spark (separate pool)
	if core.Contains(agent, "codex-spark") {
		return "codex-spark"
	}
	return core.SplitN(agent, ":", 2)[0]
}

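The pool-resolution rule has one special case layered on the generic variant strip. A stdlib-only sketch (assuming `core.Contains`/`core.SplitN` match `strings.Contains`/`strings.SplitN`):

```go
package main

import (
	"fmt"
	"strings"
)

// baseAgentSketch mirrors baseAgent: codex-spark variants get their own
// rate-limit pool; everything else drops the model suffix after ":".
func baseAgentSketch(agent string) string {
	if strings.Contains(agent, "codex-spark") {
		return "codex-spark"
	}
	return strings.SplitN(agent, ":", 2)[0]
}

func main() {
	fmt.Println(baseAgentSketch("gemini:flash"))              // gemini
	fmt.Println(baseAgentSketch("codex:gpt-5.3-codex-spark")) // codex-spark
	fmt.Println(baseAgentSketch("claude"))                    // claude
}
```

Note the special case is checked first: `codex:gpt-5.3-codex-spark` would otherwise collapse into the plain `codex` pool and share its quota.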
// canDispatchAgent checks if we're under the concurrency limit for a specific agent type.

@@ -153,26 +162,32 @@ func (s *PrepSubsystem) canDispatchAgent(agent string) bool {
	return s.countRunningByAgent(base) < limit
}

// drainQueue fills all available concurrency slots from queued workspaces.
// Loops until no slots remain or no queued tasks match. Serialised via drainMu.
func (s *PrepSubsystem) drainQueue() {
	if s.frozen {
		return
	}
	s.drainMu.Lock()
	defer s.drainMu.Unlock()

	for s.drainOne() {
		// keep filling slots
	}
}

// drainOne finds the oldest queued workspace and spawns it if a slot is available.
// Returns true if a task was spawned, false if nothing to do.
func (s *PrepSubsystem) drainOne() bool {
	wsRoot := WorkspaceRoot()

	// Scan both old and new workspace layouts
	old := core.PathGlob(core.JoinPath(wsRoot, "*", "status.json"))
	deep := core.PathGlob(core.JoinPath(wsRoot, "*", "*", "*", "status.json"))
	statusFiles := append(old, deep...)

	for _, statusPath := range statusFiles {
		wsDir := core.PathDir(statusPath)
		st, err := readStatus(wsDir)
		if err != nil || st.Status != "queued" {
			continue
@@ -182,6 +197,12 @@ func (s *PrepSubsystem) drainOne() bool {
			continue
		}

		// Skip if agent pool is in rate-limit backoff
		pool := baseAgent(st.Agent)
		if until, ok := s.backoff[pool]; ok && time.Now().Before(until) {
			continue
		}

		// Apply rate delay before spawning
		delay := s.delayForAgent(st.Agent)
		if delay > 0 {
@@ -193,10 +214,9 @@ func (s *PrepSubsystem) drainOne() bool {
			continue
		}

		prompt := "TASK: " + st.Task + "\n\nResume from where you left off. Read CODEX.md for conventions. Commit when done."

		pid, _, err := s.spawnAgent(st.Agent, prompt, wsDir)
		if err != nil {
			continue
		}
@@ -206,6 +226,8 @@ func (s *PrepSubsystem) drainOne() bool {
		st.Runs++
		writeStatus(wsDir, st)

		return true
	}

	return false
}

@@ -3,12 +3,10 @@
package agentic

import (
	"path/filepath"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestBaseAgent_Ugly_Empty(t *testing.T) {

@@ -36,7 +34,7 @@ func TestCanDispatchAgent_Good_NoConfig(t *testing.T) {
	// With no running workspaces and default config, should be able to dispatch
	root := t.TempDir()
	t.Setenv("CORE_WORKSPACE", root)
	require.True(t, fs.EnsureDir(filepath.Join(root, "workspace")).OK)

	s := &PrepSubsystem{codePath: t.TempDir()}
	assert.True(t, s.canDispatchAgent("gemini"))
@@ -46,7 +44,7 @@ func TestCanDispatchAgent_Good_UnknownAgent(t *testing.T) {
	// Unknown agent has no limit, so always allowed
	root := t.TempDir()
	t.Setenv("CORE_WORKSPACE", root)
	require.True(t, fs.EnsureDir(filepath.Join(root, "workspace")).OK)

	s := &PrepSubsystem{codePath: t.TempDir()}
	assert.True(t, s.canDispatchAgent("unknown-agent"))
@@ -55,7 +53,7 @@ func TestCountRunningByAgent_Good_EmptyWorkspace(t *testing.T) {
	root := t.TempDir()
	t.Setenv("CORE_WORKSPACE", root)
	require.True(t, fs.EnsureDir(filepath.Join(root, "workspace")).OK)

	s := &PrepSubsystem{}
	assert.Equal(t, 0, s.countRunningByAgent("gemini"))
@@ -68,7 +66,7 @@ func TestCountRunningByAgent_Good_NoRunning(t *testing.T) {

	// Create a workspace with completed status under workspace/
	ws := filepath.Join(root, "workspace", "test-ws")
	require.True(t, fs.EnsureDir(ws).OK)
	require.NoError(t, writeStatus(ws, &WorkspaceStatus{
		Status: "completed",
		Agent:  "gemini",

@@ -5,32 +5,32 @@ package agentic
import (
	"context"
	"encoding/json"
	"net/http"
	"os"
	"strings"
	"time"

	core "dappco.re/go/core"
	"github.com/modelcontextprotocol/go-sdk/mcp"
)

// --- agentic_dispatch_remote tool ---

// RemoteDispatchInput dispatches a task to a remote core-agent over HTTP.
//
// input := agentic.RemoteDispatchInput{Host: "charon", Repo: "go-io", Task: "Run the review queue"}
type RemoteDispatchInput struct {
	Host      string            `json:"host"`                // Remote agent host (e.g. "charon", "10.69.69.165:9101")
	Repo      string            `json:"repo"`                // Target repo
	Task      string            `json:"task"`                // What the agent should do
	Agent     string            `json:"agent,omitempty"`     // Agent type (default: claude:opus)
	Template  string            `json:"template,omitempty"`  // Prompt template
	Persona   string            `json:"persona,omitempty"`   // Persona slug
	Org       string            `json:"org,omitempty"`       // Forge org (default: core)
	Variables map[string]string `json:"variables,omitempty"` // Template variables
}

// RemoteDispatchOutput is the response from a remote dispatch.
//
// out := agentic.RemoteDispatchOutput{Success: true, Host: "charon", Repo: "go-io", Agent: "claude:opus"}
type RemoteDispatchOutput struct {
	Success bool   `json:"success"`
	Host    string `json:"host"`

@ -50,13 +50,13 @@ func (s *PrepSubsystem) registerRemoteDispatchTool(server *mcp.Server) {
|
|||
|
||||
func (s *PrepSubsystem) dispatchRemote(ctx context.Context, _ *mcp.CallToolRequest, input RemoteDispatchInput) (*mcp.CallToolResult, RemoteDispatchOutput, error) {
|
||||
if input.Host == "" {
|
||||
return nil, RemoteDispatchOutput{}, coreerr.E("dispatchRemote", "host is required", nil)
|
||||
return nil, RemoteDispatchOutput{}, core.E("dispatchRemote", "host is required", nil)
|
||||
}
|
||||
if input.Repo == "" {
|
||||
return nil, RemoteDispatchOutput{}, coreerr.E("dispatchRemote", "repo is required", nil)
|
||||
return nil, RemoteDispatchOutput{}, core.E("dispatchRemote", "repo is required", nil)
|
||||
}
|
||||
if input.Task == "" {
|
||||
return nil, RemoteDispatchOutput{}, coreerr.E("dispatchRemote", "task is required", nil)
|
||||
return nil, RemoteDispatchOutput{}, core.E("dispatchRemote", "task is required", nil)
|
||||
}
|
||||
|
||||
// Resolve host aliases
|
||||
|
|
@ -96,7 +96,7 @@ func (s *PrepSubsystem) dispatchRemote(ctx context.Context, _ *mcp.CallToolReque
|
|||
},
|
||||
}
|
||||
|
||||
url := fmt.Sprintf("http://%s/mcp", addr)
|
||||
url := core.Sprintf("http://%s/mcp", addr)
|
||||
client := &http.Client{Timeout: 30 * time.Second}
|
||||
|
||||
// Step 1: Initialize session
|
||||
|
|
@ -104,8 +104,8 @@ func (s *PrepSubsystem) dispatchRemote(ctx context.Context, _ *mcp.CallToolReque
|
|||
if err != nil {
|
||||
return nil, RemoteDispatchOutput{
|
||||
Host: input.Host,
|
||||
Error: fmt.Sprintf("init failed: %v", err),
|
||||
}, coreerr.E("dispatchRemote", "MCP initialize failed", err)
|
||||
Error: core.Sprintf("init failed: %v", err),
|
||||
}, core.E("dispatchRemote", "MCP initialize failed", err)
|
||||
}
|
||||
|
||||
// Step 2: Call the tool
|
||||
|
|
@ -114,8 +114,8 @@ func (s *PrepSubsystem) dispatchRemote(ctx context.Context, _ *mcp.CallToolReque
|
|||
if err != nil {
|
||||
return nil, RemoteDispatchOutput{
|
||||
Host: input.Host,
|
||||
Error: fmt.Sprintf("call failed: %v", err),
|
||||
}, coreerr.E("dispatchRemote", "tool call failed", err)
|
||||
Error: core.Sprintf("call failed: %v", err),
|
||||
}, core.E("dispatchRemote", "tool call failed", err)
|
||||
}
|
||||
|
||||
// Parse result
|
||||
|
|
@ -163,12 +163,12 @@ func resolveHost(host string) string {
|
|||
"local": "127.0.0.1:9101",
|
||||
}
|
||||
|
||||
if addr, ok := aliases[strings.ToLower(host)]; ok {
|
||||
if addr, ok := aliases[core.Lower(host)]; ok {
|
||||
return addr
|
||||
}
|
||||
|
||||
// If no port specified, add default
|
||||
if !strings.Contains(host, ":") {
|
||||
if !core.Contains(host, ":") {
|
||||
return host + ":9101"
|
||||
}
|
||||
|
||||
|
|
@ -178,25 +178,25 @@ func resolveHost(host string) string {
|
|||
// remoteToken gets the auth token for a remote agent.
|
||||
func remoteToken(host string) string {
|
||||
// Check environment first
|
||||
envKey := fmt.Sprintf("AGENT_TOKEN_%s", strings.ToUpper(host))
|
||||
if token := os.Getenv(envKey); token != "" {
|
||||
envKey := core.Sprintf("AGENT_TOKEN_%s", core.Upper(host))
|
||||
if token := core.Env(envKey); token != "" {
|
||||
return token
|
||||
}
|
||||
|
||||
// Fallback to shared agent token
|
||||
if token := os.Getenv("MCP_AUTH_TOKEN"); token != "" {
|
||||
if token := core.Env("MCP_AUTH_TOKEN"); token != "" {
|
||||
return token
|
||||
}
|
||||
|
||||
// Try reading from file
|
||||
home, _ := os.UserHomeDir()
|
||||
home := core.Env("DIR_HOME")
|
||||
tokenFiles := []string{
|
||||
fmt.Sprintf("%s/.core/tokens/%s.token", home, strings.ToLower(host)),
|
||||
fmt.Sprintf("%s/.core/agent-token", home),
|
||||
core.Sprintf("%s/.core/tokens/%s.token", home, core.Lower(host)),
|
||||
core.Sprintf("%s/.core/agent-token", home),
|
||||
}
|
||||
for _, f := range tokenFiles {
|
||||
if data, err := coreio.Local.Read(f); err == nil {
|
||||
return strings.TrimSpace(data)
|
||||
if r := fs.Read(f); r.OK {
|
||||
return core.Trim(r.Value.(string))
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
|||
|
|
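The alias-then-default-port logic in `resolveHost` can be exercised standalone. This is a minimal sketch: the `"local"` alias and default port 9101 mirror the diff, but `resolveHostSketch` and the use of the standard `strings` package (in place of the repo's `core.*` helpers) are illustrative assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// resolveHostSketch mirrors the resolveHost flow from the diff:
// a known alias wins outright, a bare hostname gets the default
// agent port 9101 appended, and an explicit host:port passes through.
func resolveHostSketch(host string) string {
	aliases := map[string]string{
		"local": "127.0.0.1:9101",
	}
	if addr, ok := aliases[strings.ToLower(host)]; ok {
		return addr
	}
	if !strings.Contains(host, ":") {
		return host + ":9101"
	}
	return host
}

func main() {
	fmt.Println(resolveHostSketch("LOCAL"))         // alias lookup is case-insensitive
	fmt.Println(resolveHostSketch("charon"))        // default port appended
	fmt.Println(resolveHostSketch("10.0.0.5:9200")) // explicit port kept
}
```

Lowercasing before the alias lookup is what makes `"LOCAL"` and `"local"` resolve identically; the `:` check is a cheap way to detect an already-qualified address.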
@@ -7,14 +7,12 @@ import (
 	"bytes"
 	"context"
 	"encoding/json"
-	"fmt"
 	"net/http"
-	"strings"

-	coreerr "dappco.re/go/core/log"
+	core "dappco.re/go/core"
 )

-// mcpInitialize performs the MCP initialize handshake over Streamable HTTP.
+// mcpInitialize performs the MCP initialise handshake over Streamable HTTP.
 // Returns the session ID from the Mcp-Session-Id header.
 func mcpInitialize(ctx context.Context, client *http.Client, url, token string) (string, error) {
 	initReq := map[string]any{

@@ -35,26 +33,26 @@ func mcpInitialize(ctx context.Context, client *http.Client, url, token string)

 	req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(body))
 	if err != nil {
-		return "", coreerr.E("mcpInitialize", "create request", err)
+		return "", core.E("mcpInitialize", "create request", err)
 	}
 	setHeaders(req, token, "")

 	resp, err := client.Do(req)
 	if err != nil {
-		return "", coreerr.E("mcpInitialize", "request failed", err)
+		return "", core.E("mcpInitialize", "request failed", err)
 	}
 	defer resp.Body.Close()

 	if resp.StatusCode != 200 {
-		return "", coreerr.E("mcpInitialize", fmt.Sprintf("HTTP %d", resp.StatusCode), nil)
+		return "", core.E("mcpInitialize", core.Sprintf("HTTP %d", resp.StatusCode), nil)
 	}

 	sessionID := resp.Header.Get("Mcp-Session-Id")

-	// Drain the SSE response (we don't need the initialize result)
+	// Drain the SSE response (we don't need the initialise result)
 	drainSSE(resp)

-	// Send initialized notification
+	// Send initialised notification
 	notif := map[string]any{
 		"jsonrpc": "2.0",
 		"method":  "notifications/initialized",

@@ -77,18 +75,18 @@ func mcpInitialize(ctx context.Context, client *http.Client, url, token string)
 func mcpCall(ctx context.Context, client *http.Client, url, token, sessionID string, body []byte) ([]byte, error) {
 	req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(body))
 	if err != nil {
-		return nil, coreerr.E("mcpCall", "create request", err)
+		return nil, core.E("mcpCall", "create request", err)
 	}
 	setHeaders(req, token, sessionID)

 	resp, err := client.Do(req)
 	if err != nil {
-		return nil, coreerr.E("mcpCall", "request failed", err)
+		return nil, core.E("mcpCall", "request failed", err)
 	}
 	defer resp.Body.Close()

 	if resp.StatusCode != 200 {
-		return nil, coreerr.E("mcpCall", fmt.Sprintf("HTTP %d", resp.StatusCode), nil)
+		return nil, core.E("mcpCall", core.Sprintf("HTTP %d", resp.StatusCode), nil)
 	}

 	// Parse SSE response — extract data: lines

@@ -100,11 +98,11 @@ func readSSEData(resp *http.Response) ([]byte, error) {
 	scanner := bufio.NewScanner(resp.Body)
 	for scanner.Scan() {
 		line := scanner.Text()
-		if strings.HasPrefix(line, "data: ") {
-			return []byte(strings.TrimPrefix(line, "data: ")), nil
+		if core.HasPrefix(line, "data: ") {
+			return []byte(core.TrimPrefix(line, "data: ")), nil
 		}
 	}
-	return nil, coreerr.E("readSSEData", "no data in SSE response", nil)
+	return nil, core.E("readSSEData", "no data in SSE response", nil)
 }

 // setHeaders applies standard MCP HTTP headers.

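The `readSSEData` helper above scans the response body line by line and returns the payload of the first `data: ` line. The same extraction can be sketched against an in-memory stream; `readSSEDataSketch` is a hypothetical standalone version using stdlib `strings`/`fmt` in place of the repo's `core.*` helpers.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// readSSEDataSketch returns the payload of the first "data: " line in
// an SSE stream, as readSSEData does in the diff. Note bufio.Scanner's
// default 64 KiB line limit applies to very large payloads.
func readSSEDataSketch(body string) ([]byte, error) {
	scanner := bufio.NewScanner(strings.NewReader(body))
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "data: ") {
			return []byte(strings.TrimPrefix(line, "data: ")), nil
		}
	}
	return nil, fmt.Errorf("no data in SSE response")
}

func main() {
	sse := "event: message\ndata: {\"jsonrpc\":\"2.0\",\"id\":1}\n\n"
	payload, err := readSSEDataSketch(sse)
	fmt.Println(string(payload), err)
}
```

Returning on the first `data:` line is enough here because each MCP response carries a single JSON-RPC payload per event.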
@@ -8,24 +8,27 @@ import (
 	"net/http"
 	"time"

-	coreerr "dappco.re/go/core/log"
+	core "dappco.re/go/core"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )

 // --- agentic_status_remote tool ---

 // RemoteStatusInput queries a remote core-agent for workspace status.
 //
 //	input := agentic.RemoteStatusInput{Host: "charon"}
 type RemoteStatusInput struct {
 	Host string `json:"host"` // Remote agent host (e.g. "charon")
 }

 // RemoteStatusOutput is the response from a remote status check.
 //
 //	out := agentic.RemoteStatusOutput{Success: true, Host: "charon"}
 type RemoteStatusOutput struct {
-	Success    bool            `json:"success"`
-	Host       string          `json:"host"`
-	Workspaces []WorkspaceInfo `json:"workspaces"`
-	Count      int             `json:"count"`
-	Error      string          `json:"error,omitempty"`
+	Success bool         `json:"success"`
+	Host    string       `json:"host"`
+	Stats   StatusOutput `json:"stats"`
+	Error   string       `json:"error,omitempty"`
 }

 func (s *PrepSubsystem) registerRemoteStatusTool(server *mcp.Server) {

@@ -37,7 +40,7 @@ func (s *PrepSubsystem) registerRemoteStatusTool(server *mcp.Server) {

 func (s *PrepSubsystem) statusRemote(ctx context.Context, _ *mcp.CallToolRequest, input RemoteStatusInput) (*mcp.CallToolResult, RemoteStatusOutput, error) {
 	if input.Host == "" {
-		return nil, RemoteStatusOutput{}, coreerr.E("statusRemote", "host is required", nil)
+		return nil, RemoteStatusOutput{}, core.E("statusRemote", "host is required", nil)
 	}

 	addr := resolveHost(input.Host)

@@ -102,8 +105,7 @@ func (s *PrepSubsystem) statusRemote(ctx context.Context, _ *mcp.CallToolRequest
 	if len(rpcResp.Result.Content) > 0 {
 		var statusOut StatusOutput
 		if json.Unmarshal([]byte(rpcResp.Result.Content[0].Text), &statusOut) == nil {
-			output.Workspaces = statusOut.Workspaces
-			output.Count = statusOut.Count
+			output.Stats = statusOut
 		}
 	}

@@ -4,31 +4,31 @@ package agentic

 import (
 	"context"
-	"fmt"
-	"os"
-	"path/filepath"

-	coreio "dappco.re/go/core/io"
-	coreerr "dappco.re/go/core/log"
+	core "dappco.re/go/core"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )

 // ResumeInput is the input for agentic_resume.
 //
 //	input := agentic.ResumeInput{Workspace: "go-scm-1773581173", Answer: "Use the existing queue config"}
 type ResumeInput struct {
-	Workspace string `json:"workspace"`         // workspace name (e.g. "go-scm-1773581173")
-	Answer    string `json:"answer,omitempty"`  // answer to the blocked question (written to ANSWER.md)
-	Agent     string `json:"agent,omitempty"`   // override agent type (default: same as original)
-	DryRun    bool   `json:"dry_run,omitempty"` // preview without executing
+	Workspace string `json:"workspace"`         // workspace name (e.g. "go-scm-1773581173")
+	Answer    string `json:"answer,omitempty"`  // answer to the blocked question (written to ANSWER.md)
+	Agent     string `json:"agent,omitempty"`   // override agent type (default: same as original)
+	DryRun    bool   `json:"dry_run,omitempty"` // preview without executing
 }

 // ResumeOutput is the output for agentic_resume.
 //
 //	out := agentic.ResumeOutput{Success: true, Workspace: "go-scm-1773581173", Agent: "codex"}
 type ResumeOutput struct {
-	Success    bool   `json:"success"`
-	Workspace  string `json:"workspace"`
-	Agent      string `json:"agent"`
-	PID        int    `json:"pid,omitempty"`
-	OutputFile string `json:"output_file,omitempty"`
-	Prompt     string `json:"prompt,omitempty"`
+	Success    bool   `json:"success"`
+	Workspace  string `json:"workspace"`
+	Agent      string `json:"agent"`
+	PID        int    `json:"pid,omitempty"`
+	OutputFile string `json:"output_file,omitempty"`
+	Prompt     string `json:"prompt,omitempty"`
 }

 func (s *PrepSubsystem) registerResumeTool(server *mcp.Server) {

@@ -40,25 +40,25 @@ func (s *PrepSubsystem) registerResumeTool(server *mcp.Server) {

 func (s *PrepSubsystem) resume(ctx context.Context, _ *mcp.CallToolRequest, input ResumeInput) (*mcp.CallToolResult, ResumeOutput, error) {
 	if input.Workspace == "" {
-		return nil, ResumeOutput{}, coreerr.E("resume", "workspace is required", nil)
+		return nil, ResumeOutput{}, core.E("resume", "workspace is required", nil)
 	}

-	wsDir := filepath.Join(WorkspaceRoot(), input.Workspace)
-	srcDir := filepath.Join(wsDir, "src")
+	wsDir := core.JoinPath(WorkspaceRoot(), input.Workspace)
+	repoDir := core.JoinPath(wsDir, "repo")

 	// Verify workspace exists
-	if _, err := os.Stat(srcDir); err != nil {
-		return nil, ResumeOutput{}, coreerr.E("resume", "workspace not found: "+input.Workspace, nil)
+	if !fs.IsDir(core.JoinPath(repoDir, ".git")) {
+		return nil, ResumeOutput{}, core.E("resume", "workspace not found: "+input.Workspace, nil)
 	}

 	// Read current status
 	st, err := readStatus(wsDir)
 	if err != nil {
-		return nil, ResumeOutput{}, coreerr.E("resume", "no status.json in workspace", err)
+		return nil, ResumeOutput{}, core.E("resume", "no status.json in workspace", err)
 	}

 	if st.Status != "blocked" && st.Status != "failed" && st.Status != "completed" {
-		return nil, ResumeOutput{}, coreerr.E("resume", "workspace is "+st.Status+", not resumable (must be blocked, failed, or completed)", nil)
+		return nil, ResumeOutput{}, core.E("resume", "workspace is "+st.Status+", not resumable (must be blocked, failed, or completed)", nil)
 	}

 	// Determine agent

@@ -69,19 +69,20 @@ func (s *PrepSubsystem) resume(ctx context.Context, _ *mcp.CallToolRequest, inpu

 	// Write ANSWER.md if answer provided
 	if input.Answer != "" {
-		answerPath := filepath.Join(srcDir, "ANSWER.md")
-		content := fmt.Sprintf("# Answer\n\n%s\n", input.Answer)
-		if err := coreio.Local.Write(answerPath, content); err != nil {
-			return nil, ResumeOutput{}, coreerr.E("resume", "failed to write ANSWER.md", err)
+		answerPath := core.JoinPath(repoDir, "ANSWER.md")
+		content := core.Sprintf("# Answer\n\n%s\n", input.Answer)
+		if r := fs.Write(answerPath, content); !r.OK {
+			err, _ := r.Value.(error)
+			return nil, ResumeOutput{}, core.E("resume", "failed to write ANSWER.md", err)
 		}
 	}

-	// Build resume prompt
-	prompt := "You are resuming previous work in this workspace. "
+	// Build resume prompt — inline the task and answer, no file references
+	prompt := "You are resuming previous work.\n\nORIGINAL TASK:\n" + st.Task
 	if input.Answer != "" {
-		prompt += "Read ANSWER.md for the response to your question. "
+		prompt += "\n\nANSWER TO YOUR QUESTION:\n" + input.Answer
 	}
-	prompt += "Read PROMPT.md for the original task. Read BLOCKED.md to see what you were stuck on. Continue working."
+	prompt += "\n\nContinue working. Read BLOCKED.md to see what you were stuck on. Commit when done."

 	if input.DryRun {
 		return nil, ResumeOutput{

@@ -93,7 +94,7 @@ func (s *PrepSubsystem) resume(ctx context.Context, _ *mcp.CallToolRequest, inpu
 	}

 	// Spawn agent via go-process
-	pid, _, err := s.spawnAgent(agent, prompt, wsDir, srcDir)
+	pid, _, err := s.spawnAgent(agent, prompt, wsDir)
 	if err != nil {
 		return nil, ResumeOutput{}, err
 	}

@@ -110,6 +111,6 @@ func (s *PrepSubsystem) resume(ctx context.Context, _ *mcp.CallToolRequest, inpu
 		Workspace: input.Workspace,
 		Agent:     agent,
 		PID:       pid,
-		OutputFile: filepath.Join(wsDir, fmt.Sprintf("agent-%s.log", agent)),
+		OutputFile: core.JoinPath(wsDir, core.Sprintf("agent-%s.log", agent)),
 	}, nil
 }

@@ -5,31 +5,31 @@ package agentic

 import (
 	"context"
 	"encoding/json"
-	"fmt"
 	"io"
 	"os"
 	"os/exec"
-	"path/filepath"
 	"regexp"
-	"strconv"
-	"strings"
 	"time"

-	coreio "dappco.re/go/core/io"
-	coreerr "dappco.re/go/core/log"
+	core "dappco.re/go/core"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )

 // --- agentic_review_queue tool ---

 // ReviewQueueInput controls the review queue runner.
 //
 //	input := agentic.ReviewQueueInput{Reviewer: "coderabbit", Limit: 4, DryRun: true}
 type ReviewQueueInput struct {
-	Limit     int    `json:"limit,omitempty"`      // Max PRs to process this run (default: 4)
-	Reviewer  string `json:"reviewer,omitempty"`   // "coderabbit" (default), "codex", or "both"
-	DryRun    bool   `json:"dry_run,omitempty"`    // Preview without acting
-	LocalOnly bool   `json:"local_only,omitempty"` // Run review locally, don't touch GitHub
+	Limit     int    `json:"limit,omitempty"`      // Max PRs to process this run (default: 4)
+	Reviewer  string `json:"reviewer,omitempty"`   // "coderabbit" (default), "codex", or "both"
+	DryRun    bool   `json:"dry_run,omitempty"`    // Preview without acting
+	LocalOnly bool   `json:"local_only,omitempty"` // Run review locally, don't touch GitHub
 }

 // ReviewQueueOutput reports what happened.
 //
 //	out := agentic.ReviewQueueOutput{Success: true, Processed: []agentic.ReviewResult{{Repo: "go-io", Verdict: "clean"}}}
 type ReviewQueueOutput struct {
 	Success   bool           `json:"success"`
 	Processed []ReviewResult `json:"processed"`

@@ -38,6 +38,8 @@ type ReviewQueueOutput struct {
 }

 // ReviewResult is the outcome of reviewing one repo.
 //
 //	result := agentic.ReviewResult{Repo: "go-io", Verdict: "findings", Findings: 3, Action: "fix_dispatched"}
 type ReviewResult struct {
 	Repo    string `json:"repo"`
 	Verdict string `json:"verdict"` // clean, findings, rate_limited, error

@@ -47,10 +49,12 @@ type ReviewResult struct {
 }

 // RateLimitInfo tracks CodeRabbit rate limit state.
 //
 //	limit := agentic.RateLimitInfo{Limited: true, Message: "retry after 2026-03-22T06:00:00Z"}
 type RateLimitInfo struct {
-	Limited bool      `json:"limited"`
-	RetryAt time.Time `json:"retry_at,omitempty"`
-	Message string    `json:"message,omitempty"`
+	Limited bool      `json:"limited"`
+	RetryAt time.Time `json:"retry_at,omitempty"`
+	Message string    `json:"message,omitempty"`
 }

 func (s *PrepSubsystem) registerReviewQueueTool(server *mcp.Server) {

@@ -66,7 +70,7 @@ func (s *PrepSubsystem) reviewQueue(ctx context.Context, _ *mcp.CallToolRequest,
 		limit = 4
 	}

-	basePath := filepath.Join(s.codePath, "core")
+	basePath := core.JoinPath(s.codePath, "core")

 	// Find repos with draft PRs (ahead of GitHub)
 	candidates := s.findReviewCandidates(basePath)

@@ -93,7 +97,7 @@ func (s *PrepSubsystem) reviewQueue(ctx context.Context, _ *mcp.CallToolRequest,
 			continue
 		}

-		repoDir := filepath.Join(basePath, repo)
+		repoDir := core.JoinPath(basePath, repo)
 		reviewer := input.Reviewer
 		if reviewer == "" {
 			reviewer = "coderabbit"

@@ -131,17 +135,18 @@ func (s *PrepSubsystem) reviewQueue(ctx context.Context, _ *mcp.CallToolRequest,

 // findReviewCandidates returns repos that are ahead of GitHub main.
 func (s *PrepSubsystem) findReviewCandidates(basePath string) []string {
-	entries, err := os.ReadDir(basePath)
-	if err != nil {
+	r := fs.List(basePath)
+	if !r.OK {
 		return nil
 	}
+	entries := r.Value.([]os.DirEntry)

 	var candidates []string
 	for _, e := range entries {
 		if !e.IsDir() {
 			continue
 		}
-		repoDir := filepath.Join(basePath, e.Name())
+		repoDir := core.JoinPath(basePath, e.Name())
 		if !hasRemote(repoDir, "github") {
 			continue
 		}

@@ -160,7 +165,7 @@ func (s *PrepSubsystem) reviewRepo(ctx context.Context, repoDir, repo, reviewer
 	// Check saved rate limit
 	if rl := s.loadRateLimitState(); rl != nil && rl.Limited && time.Now().Before(rl.RetryAt) {
 		result.Verdict = "rate_limited"
-		result.Detail = fmt.Sprintf("retry after %s", rl.RetryAt.Format(time.RFC3339))
+		result.Detail = core.Sprintf("retry after %s", rl.RetryAt.Format(time.RFC3339))
 		return result
 	}

@@ -173,14 +178,14 @@ func (s *PrepSubsystem) reviewRepo(ctx context.Context, repoDir, repo, reviewer
 	output := string(out)

 	// Parse rate limit (both reviewers use similar patterns)
-	if strings.Contains(output, "Rate limit exceeded") || strings.Contains(output, "rate limit") {
+	if core.Contains(output, "Rate limit exceeded") || core.Contains(output, "rate limit") {
 		result.Verdict = "rate_limited"
 		result.Detail = output
 		return result
 	}

 	// Parse error
-	if err != nil && !strings.Contains(output, "No findings") && !strings.Contains(output, "no issues") {
+	if err != nil && !core.Contains(output, "No findings") && !core.Contains(output, "no issues") {
 		result.Verdict = "error"
 		result.Detail = output
 		return result

@@ -190,7 +195,7 @@ func (s *PrepSubsystem) reviewRepo(ctx context.Context, repoDir, repo, reviewer
 	s.storeReviewOutput(repoDir, repo, reviewer, output)

 	// Parse verdict
-	if strings.Contains(output, "No findings") || strings.Contains(output, "no issues") || strings.Contains(output, "LGTM") {
+	if core.Contains(output, "No findings") || core.Contains(output, "no issues") || core.Contains(output, "LGTM") {
 		result.Verdict = "clean"
 		result.Findings = 0

@@ -222,11 +227,11 @@ func (s *PrepSubsystem) reviewRepo(ctx context.Context, repoDir, repo, reviewer
 	}

 	// Save findings for agent dispatch
-	findingsFile := filepath.Join(repoDir, ".core", "coderabbit-findings.txt")
-	coreio.Local.Write(findingsFile, output)
+	findingsFile := core.JoinPath(repoDir, ".core", "coderabbit-findings.txt")
+	fs.Write(findingsFile, output)

 	// Dispatch fix agent with the findings
-	task := fmt.Sprintf("Fix CodeRabbit findings. The review output is in .core/coderabbit-findings.txt. "+
+	task := core.Sprintf("Fix CodeRabbit findings. The review output is in .core/coderabbit-findings.txt. "+
 		"Read it, verify each finding against the code, fix what's valid. Run tests. "+
 		"Commit: fix(coderabbit): address review findings\n\nFindings summary (%d issues):\n%s",
 		result.Findings, truncate(output, 1500))

@@ -248,7 +253,7 @@ func (s *PrepSubsystem) pushAndMerge(ctx context.Context, repoDir, repo string)
 	pushCmd := exec.CommandContext(ctx, "git", "push", "github", "HEAD:refs/heads/dev", "--force")
 	pushCmd.Dir = repoDir
 	if out, err := pushCmd.CombinedOutput(); err != nil {
-		return coreerr.E("pushAndMerge", "push failed: "+string(out), err)
+		return core.E("pushAndMerge", "push failed: "+string(out), err)
 	}

 	// Mark PR ready if draft

@@ -260,7 +265,7 @@ func (s *PrepSubsystem) pushAndMerge(ctx context.Context, repoDir, repo string)
 	mergeCmd := exec.CommandContext(ctx, "gh", "pr", "merge", "--merge", "--delete-branch")
 	mergeCmd.Dir = repoDir
 	if out, err := mergeCmd.CombinedOutput(); err != nil {
-		return coreerr.E("pushAndMerge", "merge failed: "+string(out), err)
+		return core.E("pushAndMerge", "merge failed: "+string(out), err)
 	}

 	return nil

@@ -279,7 +284,7 @@ func (s *PrepSubsystem) dispatchFixFromQueue(ctx context.Context, repo, task str
 		return err
 	}
 	if !out.Success {
-		return coreerr.E("dispatchFixFromQueue", "dispatch failed for "+repo, nil)
+		return core.E("dispatchFixFromQueue", "dispatch failed for "+repo, nil)
 	}
 	return nil
 }

@@ -288,15 +293,15 @@ func (s *PrepSubsystem) dispatchFixFromQueue(ctx context.Context, repo, task str
 func countFindings(output string) int {
 	// Count lines that look like findings
 	count := 0
-	for _, line := range strings.Split(output, "\n") {
-		trimmed := strings.TrimSpace(line)
-		if strings.HasPrefix(trimmed, "- ") || strings.HasPrefix(trimmed, "* ") ||
-			strings.Contains(trimmed, "Issue:") || strings.Contains(trimmed, "Finding:") ||
-			strings.Contains(trimmed, "⚠") || strings.Contains(trimmed, "❌") {
+	for _, line := range core.Split(output, "\n") {
+		trimmed := core.Trim(line)
+		if core.HasPrefix(trimmed, "- ") || core.HasPrefix(trimmed, "* ") ||
+			core.Contains(trimmed, "Issue:") || core.Contains(trimmed, "Finding:") ||
+			core.Contains(trimmed, "⚠") || core.Contains(trimmed, "❌") {
 			count++
 		}
 	}
-	if count == 0 && !strings.Contains(output, "No findings") {
+	if count == 0 && !core.Contains(output, "No findings") {
 		count = 1 // At least one finding if not clean
 	}
 	return count

@@ -308,10 +313,10 @@ func parseRetryAfter(message string) time.Duration {
 	re := regexp.MustCompile(`(\d+)\s*minutes?\s*(?:and\s*)?(\d+)?\s*seconds?`)
 	matches := re.FindStringSubmatch(message)
 	if len(matches) >= 2 {
-		mins, _ := strconv.Atoi(matches[1])
+		mins := parseInt(matches[1])
 		secs := 0
 		if len(matches) >= 3 && matches[2] != "" {
-			secs, _ = strconv.Atoi(matches[2])
+			secs = parseInt(matches[2])
 		}
 		return time.Duration(mins)*time.Minute + time.Duration(secs)*time.Second
 	}

@@ -334,15 +339,14 @@ func (s *PrepSubsystem) buildReviewCommand(ctx context.Context, repoDir, reviewe

 // storeReviewOutput saves raw review output for training data collection.
 func (s *PrepSubsystem) storeReviewOutput(repoDir, repo, reviewer, output string) {
-	home, _ := os.UserHomeDir()
-	dataDir := filepath.Join(home, ".core", "training", "reviews")
-	coreio.Local.EnsureDir(dataDir)
+	dataDir := core.JoinPath(core.Env("DIR_HOME"), ".core", "training", "reviews")
+	fs.EnsureDir(dataDir)

 	timestamp := time.Now().Format("2006-01-02T15-04-05")
-	filename := fmt.Sprintf("%s_%s_%s.txt", repo, reviewer, timestamp)
+	filename := core.Sprintf("%s_%s_%s.txt", repo, reviewer, timestamp)

 	// Write raw output
-	coreio.Local.Write(filepath.Join(dataDir, filename), output)
+	fs.Write(core.JoinPath(dataDir, filename), output)

 	// Append to JSONL for structured training
 	entry := map[string]string{

@@ -352,37 +356,37 @@ func (s *PrepSubsystem) storeReviewOutput(repoDir, repo, reviewer, output string
 		"output":  output,
 		"verdict": "clean",
 	}
-	if !strings.Contains(output, "No findings") && !strings.Contains(output, "no issues") {
+	if !core.Contains(output, "No findings") && !core.Contains(output, "no issues") {
 		entry["verdict"] = "findings"
 	}
 	jsonLine, _ := json.Marshal(entry)

-	jsonlPath := filepath.Join(dataDir, "reviews.jsonl")
-	f, err := os.OpenFile(jsonlPath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
-	if err == nil {
-		defer f.Close()
-		f.Write(append(jsonLine, '\n'))
+	jsonlPath := core.JoinPath(dataDir, "reviews.jsonl")
+	r := fs.Append(jsonlPath)
+	if !r.OK {
+		return
 	}
+	wc := r.Value.(io.WriteCloser)
+	defer wc.Close()
+	wc.Write(append(jsonLine, '\n'))
 }

 // saveRateLimitState persists rate limit info for cross-run awareness.
 func (s *PrepSubsystem) saveRateLimitState(info *RateLimitInfo) {
-	home, _ := os.UserHomeDir()
-	path := filepath.Join(home, ".core", "coderabbit-ratelimit.json")
+	path := core.JoinPath(core.Env("DIR_HOME"), ".core", "coderabbit-ratelimit.json")
 	data, _ := json.Marshal(info)
-	coreio.Local.Write(path, string(data))
+	fs.Write(path, string(data))
 }

 // loadRateLimitState reads persisted rate limit info.
 func (s *PrepSubsystem) loadRateLimitState() *RateLimitInfo {
-	home, _ := os.UserHomeDir()
-	path := filepath.Join(home, ".core", "coderabbit-ratelimit.json")
-	data, err := coreio.Local.Read(path)
-	if err != nil {
+	path := core.JoinPath(core.Env("DIR_HOME"), ".core", "coderabbit-ratelimit.json")
+	r := fs.Read(path)
+	if !r.OK {
 		return nil
 	}
 	var info RateLimitInfo
-	if json.Unmarshal([]byte(data), &info) != nil {
+	if json.Unmarshal([]byte(r.Value.(string)), &info) != nil {
 		return nil
 	}
 	return &info

pkg/agentic/runner.go (new file, 55 lines)

@@ -0,0 +1,55 @@
+// SPDX-License-Identifier: EUPL-1.2
+
+package agentic
+
+import (
+	"time"
+
+	core "dappco.re/go/core"
+)
+
+// StartRunner begins the background queue runner.
+// Queue is frozen by default — use agentic_dispatch_start to unfreeze,
+// or set CORE_AGENT_DISPATCH=1 to auto-start.
+//
+//	prep.StartRunner()
+func (s *PrepSubsystem) StartRunner() {
+	s.pokeCh = make(chan struct{}, 1)
+
+	// Frozen by default — explicit start required
+	if core.Env("CORE_AGENT_DISPATCH") == "1" {
+		s.frozen = false
+		core.Print(nil, "dispatch: auto-start enabled (CORE_AGENT_DISPATCH=1)")
+	} else {
+		s.frozen = true
+	}
+
+	go s.runLoop()
+}
+
+func (s *PrepSubsystem) runLoop() {
+	ticker := time.NewTicker(30 * time.Second)
+	defer ticker.Stop()
+	for {
+		select {
+		case <-ticker.C:
+			s.drainQueue()
+		case <-s.pokeCh:
+			s.drainQueue()
+		}
+	}
+}
+
+// Poke signals the runner to check the queue immediately.
+// Non-blocking — if a poke is already pending, this is a no-op.
+//
+//	s.Poke() // after agent completion
+func (s *PrepSubsystem) Poke() {
+	if s.pokeCh == nil {
+		return
+	}
+	select {
+	case s.pokeCh <- struct{}{}:
+	default:
+	}
+}
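The runner's poke channel is a standard Go idiom: a buffered channel of capacity one coalesces wake-up signals, so `Poke` never blocks and repeated pokes while one is pending collapse into a single signal. A minimal standalone sketch (the `runner` type here is illustrative, not the repo's `PrepSubsystem`):

```go
package main

import "fmt"

// runner demonstrates the capacity-1 signal channel used by runner.go.
type runner struct {
	pokeCh chan struct{}
}

// Poke is non-blocking: the select's default arm drops the signal
// when one is already queued, and a nil channel is a safe no-op.
func (r *runner) Poke() {
	if r.pokeCh == nil {
		return
	}
	select {
	case r.pokeCh <- struct{}{}:
	default: // a poke is already pending; drop this one
	}
}

func main() {
	r := &runner{pokeCh: make(chan struct{}, 1)}
	r.Poke()
	r.Poke() // coalesced — does not block
	r.Poke()
	fmt.Println("pending signals:", len(r.pokeCh)) // prints "pending signals: 1"
}
```

The consumer side (the 30-second ticker loop) then drains the queue once per signal, whether that signal came from the ticker or a poke.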
pkg/agentic/sanitise.go (new file, 96 lines)

@@ -0,0 +1,96 @@
// SPDX-License-Identifier: EUPL-1.2
|
||||
|
||||
package agentic
|
||||
|
||||
func sanitiseBranchSlug(text string, max int) string {
|
||||
out := make([]rune, 0, len(text))
|
||||
for _, r := range text {
|
||||
switch {
|
||||
case r >= 'a' && r <= 'z' || r >= '0' && r <= '9' || r == '-':
|
||||
out = append(out, r)
|
||||
case r >= 'A' && r <= 'Z':
|
||||
out = append(out, r+32)
|
||||
default:
|
||||
out = append(out, '-')
|
||||
}
|
||||
|
||||
if max > 0 && len(out) >= max {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
return trimRuneEdges(string(out), '-')
|
||||
}
|
||||
|
||||
func sanitisePlanSlug(text string) string {
|
||||
out := make([]rune, 0, len(text))
|
||||
for _, r := range text {
|
||||
switch {
|
||||
case r >= 'a' && r <= 'z' || r >= '0' && r <= '9' || r == '-':
|
||||
out = append(out, r)
|
||||
case r >= 'A' && r <= 'Z':
|
||||
out = append(out, r+32)
|
||||
case r == ' ':
|
||||
out = append(out, '-')
|
||||
}
|
||||
}
|
||||
|
||||
slug := collapseRepeatedRune(string(out), '-')
|
||||
slug = trimRuneEdges(slug, '-')
|
||||
if len(slug) > 30 {
|
||||
slug = slug[:30]
|
||||
}
|
||||
|
||||
return trimRuneEdges(slug, '-')
|
||||
}
|
||||
|
||||
func sanitiseFilename(text string) string {
|
||||
out := make([]rune, 0, len(text))
|
||||
for _, r := range text {
|
||||
switch {
|
||||
case r >= 'a' && r <= 'z' || r >= 'A' && r <= 'Z' || r >= '0' && r <= '9' || r == '-' || r == '_' || r == '.':
|
||||
out = append(out, r)
|
||||
default:
|
||||
out = append(out, '-')
|
||||
}
|
||||
}
|
||||
|
||||
return string(out)
|
||||
}
|
||||
|
||||
func collapseRepeatedRune(text string, target rune) string {
|
||||
runes := []rune(text)
|
||||
out := make([]rune, 0, len(runes))
|
||||
lastWasTarget := false
|
||||
|
||||
for _, r := range runes {
|
||||
if r == target {
|
||||
if lastWasTarget {
|
||||
continue
|
||||
}
|
||||
lastWasTarget = true
|
||||
} else {
|
||||
lastWasTarget = false
|
||||
}
|
||||
|
||||
out = append(out, r)
|
||||
}
|
||||
|
||||
return string(out)
|
||||
}
|
||||
|
||||
func trimRuneEdges(text string, target rune) string {
|
||||
runes := []rune(text)
|
||||
start := 0
|
||||
end := len(runes)
|
||||
|
||||
for start < end && runes[start] == target {
|
||||
start++
|
||||
}
|
||||
|
||||
for end > start && runes[end-1] == target {
|
||||
end--
|
||||
}
|
||||
|
||||
return string(runes[start:end])
|
||||
}
|
||||
@@ -5,15 +5,15 @@ package agentic
 import (
 	"context"
 	"encoding/json"
 	"fmt"
 	"net/http"
 	"strings"

-	coreerr "dappco.re/go/core/log"
+	core "dappco.re/go/core"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )

 // ScanInput is the input for agentic_scan.
 //
 //	input := agentic.ScanInput{Org: "core", Labels: []string{"agentic", "bug"}, Limit: 20}
 type ScanInput struct {
 	Org    string   `json:"org,omitempty"`    // default "core"
 	Labels []string `json:"labels,omitempty"` // filter by labels (default: agentic, help-wanted, bug)
@@ -21,6 +21,8 @@ type ScanInput struct {
 }

 // ScanOutput is the output for agentic_scan.
+//
+//	out := agentic.ScanOutput{Success: true, Count: 1, Issues: []agentic.ScanIssue{{Repo: "go-io", Number: 12}}}
 type ScanOutput struct {
 	Success bool `json:"success"`
 	Count   int  `json:"count"`
@@ -28,6 +30,8 @@ type ScanOutput struct {
 }

 // ScanIssue is a single actionable issue.
+//
+//	issue := agentic.ScanIssue{Repo: "go-io", Number: 12, Title: "Replace fmt.Errorf"}
 type ScanIssue struct {
 	Repo   string `json:"repo"`
 	Number int    `json:"number"`
@@ -39,7 +43,7 @@ type ScanIssue struct {

 func (s *PrepSubsystem) scan(ctx context.Context, _ *mcp.CallToolRequest, input ScanInput) (*mcp.CallToolResult, ScanOutput, error) {
 	if s.forgeToken == "" {
-		return nil, ScanOutput{}, coreerr.E("scan", "no Forge token configured", nil)
+		return nil, ScanOutput{}, core.E("scan", "no Forge token configured", nil)
 	}

 	if input.Org == "" {
@@ -81,7 +85,7 @@ func (s *PrepSubsystem) scan(ctx context.Context, _ *mcp.CallToolRequest, input
 	seen := make(map[string]bool)
 	var unique []ScanIssue
 	for _, issue := range allIssues {
-		key := fmt.Sprintf("%s#%d", issue.Repo, issue.Number)
+		key := core.Sprintf("%s#%d", issue.Repo, issue.Number)
 		if !seen[key] {
 			seen[key] = true
 			unique = append(unique, issue)
@@ -100,66 +104,38 @@ func (s *PrepSubsystem) scan(ctx context.Context, _ *mcp.CallToolRequest, input
 }

 func (s *PrepSubsystem) listOrgRepos(ctx context.Context, org string) ([]string, error) {
-	var allNames []string
-	page := 1
-
-	for {
-		u := fmt.Sprintf("%s/api/v1/orgs/%s/repos?limit=50&page=%d", s.forgeURL, org, page)
-		req, err := http.NewRequestWithContext(ctx, "GET", u, nil)
-		if err != nil {
-			return nil, coreerr.E("scan.listOrgRepos", "failed to create request", err)
-		}
-		req.Header.Set("Authorization", "token "+s.forgeToken)
-
-		resp, err := s.client.Do(req)
-		if err != nil {
-			return nil, coreerr.E("scan.listOrgRepos", "failed to list repos", err)
-		}
-
-		if resp.StatusCode != 200 {
-			resp.Body.Close()
-			return nil, coreerr.E("scan.listOrgRepos", fmt.Sprintf("HTTP %d listing repos", resp.StatusCode), nil)
-		}
-
-		var repos []struct {
-			Name string `json:"name"`
-		}
-		json.NewDecoder(resp.Body).Decode(&repos)
-		resp.Body.Close()
-
-		for _, r := range repos {
-			allNames = append(allNames, r.Name)
-		}
-
-		// If we got fewer than the limit, we've reached the last page
-		if len(repos) < 50 {
-			break
-		}
-		page++
+	repos, err := s.forge.Repos.ListOrgRepos(ctx, org)
+	if err != nil {
+		return nil, core.E("scan.listOrgRepos", "failed to list repos", err)
+	}
+
+	var allNames []string
+	for _, r := range repos {
+		allNames = append(allNames, r.Name)
 	}
 	return allNames, nil
 }

 func (s *PrepSubsystem) listRepoIssues(ctx context.Context, org, repo, label string) ([]ScanIssue, error) {
-	u := fmt.Sprintf("%s/api/v1/repos/%s/%s/issues?state=open&limit=10&type=issues",
+	u := core.Sprintf("%s/api/v1/repos/%s/%s/issues?state=open&limit=10&type=issues",
 		s.forgeURL, org, repo)
 	if label != "" {
-		u += "&labels=" + strings.ReplaceAll(strings.ReplaceAll(label, " ", "%20"), "&", "%26")
+		u += "&labels=" + core.Replace(core.Replace(label, " ", "%20"), "&", "%26")
 	}
 	req, err := http.NewRequestWithContext(ctx, "GET", u, nil)
 	if err != nil {
-		return nil, coreerr.E("scan.listRepoIssues", "failed to create request", err)
+		return nil, core.E("scan.listRepoIssues", "failed to create request", err)
 	}
 	req.Header.Set("Authorization", "token "+s.forgeToken)

 	resp, err := s.client.Do(req)
 	if err != nil {
-		return nil, coreerr.E("scan.listRepoIssues", "failed to list issues for "+repo, err)
+		return nil, core.E("scan.listRepoIssues", "failed to list issues for "+repo, err)
 	}
 	defer resp.Body.Close()

 	if resp.StatusCode != 200 {
-		return nil, coreerr.E("scan.listRepoIssues", fmt.Sprintf("HTTP %d listing issues for %s", resp.StatusCode, repo), nil)
+		return nil, core.E("scan.listRepoIssues", core.Sprintf("HTTP %d listing issues for %s", resp.StatusCode, repo), nil)
 	}

 	var issues []struct {
@@ -192,7 +168,7 @@ func (s *PrepSubsystem) listRepoIssues(ctx context.Context, org, repo, label str
 			Title:    issue.Title,
 			Labels:   labels,
 			Assignee: assignee,
-			URL:      strings.Replace(issue.HTMLURL, "https://forge.lthn.ai", s.forgeURL, 1),
+			URL:      core.Replace(issue.HTMLURL, "https://forge.lthn.ai", s.forgeURL),
 		})
 	}
115
pkg/agentic/shutdown.go
Normal file

@@ -0,0 +1,115 @@
// SPDX-License-Identifier: EUPL-1.2

package agentic

import (
	"context"
	"syscall"

	core "dappco.re/go/core"
	"github.com/modelcontextprotocol/go-sdk/mcp"
)

// ShutdownInput is the input for agentic_dispatch_shutdown.
//
//	input := agentic.ShutdownInput{}
type ShutdownInput struct{}

// ShutdownOutput is the output for agentic_dispatch_shutdown.
//
//	out := agentic.ShutdownOutput{Success: true, Running: 3, Message: "draining"}
type ShutdownOutput struct {
	Success bool   `json:"success"`
	Running int    `json:"running"`
	Queued  int    `json:"queued"`
	Message string `json:"message"`
}

func (s *PrepSubsystem) registerShutdownTools(server *mcp.Server) {
	mcp.AddTool(server, &mcp.Tool{
		Name:        "agentic_dispatch_start",
		Description: "Start the dispatch queue runner. Unfreezes the queue and begins draining.",
	}, s.dispatchStart)

	mcp.AddTool(server, &mcp.Tool{
		Name:        "agentic_dispatch_shutdown",
		Description: "Graceful shutdown: stop accepting new jobs, let running agents finish. Queue is frozen.",
	}, s.shutdownGraceful)

	mcp.AddTool(server, &mcp.Tool{
		Name:        "agentic_dispatch_shutdown_now",
		Description: "Hard shutdown: kill all running agents immediately. Queue is cleared.",
	}, s.shutdownNow)
}

// dispatchStart unfreezes the queue and starts draining.
func (s *PrepSubsystem) dispatchStart(ctx context.Context, _ *mcp.CallToolRequest, input ShutdownInput) (*mcp.CallToolResult, ShutdownOutput, error) {
	s.frozen = false
	s.Poke() // trigger immediate drain

	return nil, ShutdownOutput{
		Success: true,
		Message: "dispatch started — queue unfrozen, draining",
	}, nil
}

// shutdownGraceful freezes the queue — running agents finish, no new dispatches.
func (s *PrepSubsystem) shutdownGraceful(ctx context.Context, _ *mcp.CallToolRequest, input ShutdownInput) (*mcp.CallToolResult, ShutdownOutput, error) {
	s.frozen = true

	running := s.countRunningByAgent("codex") + s.countRunningByAgent("claude") +
		s.countRunningByAgent("gemini") + s.countRunningByAgent("codex-spark")

	return nil, ShutdownOutput{
		Success: true,
		Running: running,
		Message: "queue frozen — running agents will finish, no new dispatches",
	}, nil
}

// shutdownNow kills all running agents and clears the queue.
func (s *PrepSubsystem) shutdownNow(ctx context.Context, _ *mcp.CallToolRequest, input ShutdownInput) (*mcp.CallToolResult, ShutdownOutput, error) {
	s.frozen = true

	wsRoot := WorkspaceRoot()
	old := core.PathGlob(core.JoinPath(wsRoot, "*", "status.json"))
	deep := core.PathGlob(core.JoinPath(wsRoot, "*", "*", "*", "status.json"))
	statusFiles := append(old, deep...)

	killed := 0
	cleared := 0

	for _, statusPath := range statusFiles {
		wsDir := core.PathDir(statusPath)
		st, err := readStatus(wsDir)
		if err != nil {
			continue
		}

		// Kill running agents
		if st.Status == "running" && st.PID > 0 {
			if syscall.Kill(st.PID, syscall.SIGTERM) == nil {
				killed++
			}
			st.Status = "failed"
			st.Question = "killed by shutdown_now"
			st.PID = 0
			writeStatus(wsDir, st)
		}

		// Clear queued tasks
		if st.Status == "queued" {
			st.Status = "failed"
			st.Question = "cleared by shutdown_now"
			writeStatus(wsDir, st)
			cleared++
		}
	}

	return nil, ShutdownOutput{
		Success: true,
		Running: 0,
		Queued:  0,
		Message: core.Sprintf("killed %d agents, cleared %d queued tasks", killed, cleared),
	}, nil
}
@@ -5,15 +5,10 @@ package agentic
 import (
 	"context"
 	"encoding/json"
-	"fmt"
-	"os"
-	"path/filepath"
-	"strings"
 	"syscall"
 	"time"

-	coreio "dappco.re/go/core/io"
-	coreerr "dappco.re/go/core/log"
+	core "dappco.re/go/core"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )

@@ -31,20 +26,23 @@
 // running → failed (agent crashed / non-zero exit)

 // WorkspaceStatus represents the current state of an agent workspace.
+//
+//	st, err := readStatus(wsDir)
+//	if err == nil && st.Status == "completed" { autoCreatePR(wsDir) }
 type WorkspaceStatus struct {
 	Status    string    `json:"status"`             // running, completed, blocked, failed
 	Agent     string    `json:"agent"`              // gemini, claude, codex
 	Repo      string    `json:"repo"`               // target repo
 	Org       string    `json:"org,omitempty"`      // forge org (e.g. "core")
 	Task      string    `json:"task"`               // task description
 	Branch    string    `json:"branch,omitempty"`   // git branch name
 	Issue     int       `json:"issue,omitempty"`    // forge issue number
 	PID       int       `json:"pid,omitempty"`      // process ID (if running)
 	StartedAt time.Time `json:"started_at"`         // when dispatch started
 	UpdatedAt time.Time `json:"updated_at"`         // last status change
 	Question  string    `json:"question,omitempty"` // from BLOCKED.md
 	Runs      int       `json:"runs"`               // how many times dispatched/resumed
 	PRURL     string    `json:"pr_url,omitempty"`   // pull request URL (after PR created)
 }

 func writeStatus(wsDir string, status *WorkspaceStatus) error {
@@ -53,16 +51,20 @@ func writeStatus(wsDir string, status *WorkspaceStatus) error {
 	if err != nil {
 		return err
 	}
-	return coreio.Local.Write(filepath.Join(wsDir, "status.json"), string(data))
+	if r := fs.Write(core.JoinPath(wsDir, "status.json"), string(data)); !r.OK {
+		err, _ := r.Value.(error)
+		return core.E("writeStatus", "failed to write status", err)
+	}
+	return nil
 }

 func readStatus(wsDir string) (*WorkspaceStatus, error) {
-	data, err := coreio.Local.Read(filepath.Join(wsDir, "status.json"))
-	if err != nil {
-		return nil, err
+	r := fs.Read(core.JoinPath(wsDir, "status.json"))
+	if !r.OK {
+		return nil, core.E("readStatus", "status not found", nil)
 	}
 	var s WorkspaceStatus
-	if err := json.Unmarshal([]byte(data), &s); err != nil {
+	if err := json.Unmarshal([]byte(r.Value.(string)), &s); err != nil {
 		return nil, err
 	}
 	return &s, nil
@@ -70,24 +72,36 @@ func readStatus(wsDir string) (*WorkspaceStatus, error) {

 // --- agentic_status tool ---

 // StatusInput is the input for agentic_status.
+//
+//	input := agentic.StatusInput{Workspace: "go-io-123", Limit: 50}
 type StatusInput struct {
 	Workspace string `json:"workspace,omitempty"` // specific workspace name, or empty for all
 	Limit     int    `json:"limit,omitempty"`     // max results (default 100)
 	Status    string `json:"status,omitempty"`    // filter: running, completed, failed, blocked
 }

 // StatusOutput is the output for agentic_status.
+// Returns stats by default. Only blocked workspaces are listed (they need attention).
+//
+//	out := agentic.StatusOutput{Total: 42, Running: 3, Queued: 10, Completed: 25}
 type StatusOutput struct {
-	Workspaces []WorkspaceInfo `json:"workspaces"`
-	Count      int             `json:"count"`
+	Total     int           `json:"total"`
+	Running   int           `json:"running"`
+	Queued    int           `json:"queued"`
+	Completed int           `json:"completed"`
+	Failed    int           `json:"failed"`
+	Blocked   []BlockedInfo `json:"blocked,omitempty"`
 }

-type WorkspaceInfo struct {
-	Name     string `json:"name"`
-	Status   string `json:"status"`
-	Agent    string `json:"agent"`
-	Repo     string `json:"repo"`
-	Task     string `json:"task"`
-	Age      string `json:"age"`
-	Question string `json:"question,omitempty"`
-	Runs     int    `json:"runs"`
+// BlockedInfo shows a workspace that needs human input.
+//
+//	info := agentic.BlockedInfo{Name: "go-io/task-4", Repo: "go-io", Question: "Which API version?"}
+type BlockedInfo struct {
+	Name     string `json:"name"`
+	Repo     string `json:"repo"`
+	Agent    string `json:"agent"`
+	Question string `json:"question"`
 }

 func (s *PrepSubsystem) registerStatusTool(server *mcp.Server) {
@@ -100,73 +114,37 @@ func (s *PrepSubsystem) registerStatusTool(server *mcp.Server) {
 func (s *PrepSubsystem) status(ctx context.Context, _ *mcp.CallToolRequest, input StatusInput) (*mcp.CallToolResult, StatusOutput, error) {
 	wsRoot := WorkspaceRoot()

-	entries, err := os.ReadDir(wsRoot)
-	if err != nil {
-		return nil, StatusOutput{}, coreerr.E("status", "no workspaces found", err)
-	}
+	// Scan both old (*/status.json) and new (*/*/*/status.json) layouts
+	old := core.PathGlob(core.JoinPath(wsRoot, "*", "status.json"))
+	deep := core.PathGlob(core.JoinPath(wsRoot, "*", "*", "*", "status.json"))
+	statusFiles := append(old, deep...)

-	var workspaces []WorkspaceInfo
+	var out StatusOutput

-	for _, entry := range entries {
-		if !entry.IsDir() {
-			continue
-		}
+	for _, statusPath := range statusFiles {
+		wsDir := core.PathDir(statusPath)
+		name := wsDir[len(wsRoot)+1:]

-		name := entry.Name()
-
 		// Filter by specific workspace if requested
 		if input.Workspace != "" && name != input.Workspace {
 			continue
 		}

-		wsDir := filepath.Join(wsRoot, name)
-		info := WorkspaceInfo{Name: name}
-
 		// Try reading status.json
 		st, err := readStatus(wsDir)
 		if err != nil {
-			// Legacy workspace (no status.json) — check for log file
-			logFiles, _ := filepath.Glob(filepath.Join(wsDir, "agent-*.log"))
-			if len(logFiles) > 0 {
-				info.Status = "completed"
-			} else {
-				info.Status = "unknown"
-			}
-			fi, _ := entry.Info()
-			if fi != nil {
-				info.Age = time.Since(fi.ModTime()).Truncate(time.Minute).String()
-			}
-			workspaces = append(workspaces, info)
+			out.Total++
+			out.Failed++
 			continue
 		}

-		info.Status = st.Status
-		info.Agent = st.Agent
-		info.Repo = st.Repo
-		info.Task = st.Task
-		info.Runs = st.Runs
-		info.Age = time.Since(st.StartedAt).Truncate(time.Minute).String()
-
 		// If status is "running", check if PID is still alive
 		if st.Status == "running" && st.PID > 0 {
 			if err := syscall.Kill(st.PID, 0); err != nil {
 				// Process died — check for BLOCKED.md
-				blockedPath := filepath.Join(wsDir, "src", "BLOCKED.md")
-				if data, err := coreio.Local.Read(blockedPath); err == nil {
-					info.Status = "blocked"
-					info.Question = strings.TrimSpace(data)
+				blockedPath := core.JoinPath(wsDir, "repo", "BLOCKED.md")
+				if r := fs.Read(blockedPath); r.OK {
 					st.Status = "blocked"
-					st.Question = info.Question
+					st.Question = core.Trim(r.Value.(string))
 				} else {
 					// Dead PID without BLOCKED.md — check exit code from log
 					// If no evidence of success, mark as failed (not completed)
-					logFile := filepath.Join(wsDir, fmt.Sprintf("agent-%s.log", st.Agent))
-					if _, err := coreio.Local.Read(logFile); err != nil {
-						info.Status = "failed"
+					logFile := core.JoinPath(wsDir, core.Sprintf("agent-%s.log", st.Agent))
+					if r := fs.Read(logFile); !r.OK {
+						st.Status = "failed"
+						st.Question = "Agent process died (no output log)"
 					} else {
-						info.Status = "completed"
+						st.Status = "completed"
 					}
 				}
@@ -174,15 +152,25 @@ func (s *PrepSubsystem) status(ctx context.Context, _ *mcp.CallToolRequest, inpu
 			}
 		}

-		if st.Status == "blocked" {
-			info.Question = st.Question
+		out.Total++
+		switch st.Status {
+		case "running":
+			out.Running++
+		case "queued":
+			out.Queued++
+		case "completed":
+			out.Completed++
+		case "failed":
+			out.Failed++
+		case "blocked":
+			out.Blocked = append(out.Blocked, BlockedInfo{
+				Name:     name,
+				Repo:     st.Repo,
+				Agent:    st.Agent,
+				Question: st.Question,
+			})
 		}
-
-		workspaces = append(workspaces, info)
 	}

-	return nil, StatusOutput{
-		Workspaces: workspaces,
-		Count:      len(workspaces),
-	}, nil
+	return nil, out, nil
 }
@@ -4,13 +4,11 @@ package agentic

 import (
 	"encoding/json"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
 	"path/filepath"
 	"testing"
 	"time"
-
-	coreio "dappco.re/go/core/io"
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
 )

 func TestWriteStatus_Good(t *testing.T) {
@@ -28,12 +26,12 @@ func TestWriteStatus_Good(t *testing.T) {
 	err := writeStatus(dir, status)
 	require.NoError(t, err)

-	// Verify file was written via coreio
-	data, err := coreio.Local.Read(filepath.Join(dir, "status.json"))
-	require.NoError(t, err)
+	// Verify file was written via core.Fs
+	r := fs.Read(filepath.Join(dir, "status.json"))
+	require.True(t, r.OK)

 	var read WorkspaceStatus
-	err = json.Unmarshal([]byte(data), &read)
+	err = json.Unmarshal([]byte(r.Value.(string)), &read)
 	require.NoError(t, err)

 	assert.Equal(t, "running", read.Status)
@@ -77,7 +75,7 @@ func TestReadStatus_Good(t *testing.T) {

 	data, err := json.MarshalIndent(status, "", "  ")
 	require.NoError(t, err)
-	require.NoError(t, coreio.Local.Write(filepath.Join(dir, "status.json"), string(data)))
+	require.True(t, fs.Write(filepath.Join(dir, "status.json"), string(data)).OK)

 	read, err := readStatus(dir)
 	require.NoError(t, err)
@@ -99,7 +97,7 @@ func TestReadStatus_Bad_NoFile(t *testing.T) {

 func TestReadStatus_Bad_InvalidJSON(t *testing.T) {
 	dir := t.TempDir()
-	require.NoError(t, coreio.Local.Write(filepath.Join(dir, "status.json"), "not json{"))
+	require.True(t, fs.Write(filepath.Join(dir, "status.json"), "not json{").OK)

 	_, err := readStatus(dir)
 	assert.Error(t, err)
@@ -117,7 +115,7 @@ func TestReadStatus_Good_BlockedWithQuestion(t *testing.T) {

 	data, err := json.MarshalIndent(status, "", "  ")
 	require.NoError(t, err)
-	require.NoError(t, coreio.Local.Write(filepath.Join(dir, "status.json"), string(data)))
+	require.True(t, fs.Write(filepath.Join(dir, "status.json"), string(data)).OK)

 	read, err := readStatus(dir)
 	require.NoError(t, err)
@@ -177,7 +175,7 @@ func TestWriteStatus_Good_OverwriteExisting(t *testing.T) {

 func TestReadStatus_Ugly_EmptyFile(t *testing.T) {
 	dir := t.TempDir()
-	require.NoError(t, coreio.Local.Write(filepath.Join(dir, "status.json"), ""))
+	require.True(t, fs.Write(filepath.Join(dir, "status.json"), "").OK)

 	_, err := readStatus(dir)
 	assert.Error(t, err)
@@ -6,16 +6,12 @@ import (
 	"bytes"
 	"context"
 	"encoding/json"
-	"fmt"
 	"net/http"
 	"os"
 	"os/exec"
-	"path/filepath"
-	"strings"
 	"time"

-	coreio "dappco.re/go/core/io"
-	coreerr "dappco.re/go/core/log"
+	core "dappco.re/go/core"
 )

 // autoVerifyAndMerge runs inline tests (fast gate) and merges if they pass.
@@ -31,7 +27,7 @@ func (s *PrepSubsystem) autoVerifyAndMerge(wsDir string) {
 		return
 	}

-	srcDir := filepath.Join(wsDir, "src")
+	repoDir := core.JoinPath(wsDir, "repo")
 	org := st.Org
 	if org == "" {
 		org = "core"
@@ -51,7 +47,7 @@ func (s *PrepSubsystem) autoVerifyAndMerge(wsDir string) {
 	}

 	// Attempt 1: run tests and try to merge
-	result := s.attemptVerifyAndMerge(srcDir, org, st.Repo, st.Branch, prNum)
+	result := s.attemptVerifyAndMerge(repoDir, org, st.Repo, st.Branch, prNum)
 	if result == mergeSuccess {
 		markMerged()
 		return
@@ -59,8 +55,8 @@ func (s *PrepSubsystem) autoVerifyAndMerge(wsDir string) {

 	// Attempt 2: rebase onto main and retry
 	if result == mergeConflict || result == testFailed {
-		if s.rebaseBranch(srcDir, st.Branch) {
-			if s.attemptVerifyAndMerge(srcDir, org, st.Repo, st.Branch, prNum) == mergeSuccess {
+		if s.rebaseBranch(repoDir, st.Branch) {
+			if s.attemptVerifyAndMerge(repoDir, org, st.Repo, st.Branch, prNum) == mergeSuccess {
 				markMerged()
 				return
 			}
@@ -85,11 +81,11 @@ const (
 )

 // attemptVerifyAndMerge runs tests and tries to merge. Returns the outcome.
-func (s *PrepSubsystem) attemptVerifyAndMerge(srcDir, org, repo, branch string, prNum int) mergeResult {
-	testResult := s.runVerification(srcDir)
+func (s *PrepSubsystem) attemptVerifyAndMerge(repoDir, org, repo, branch string, prNum int) mergeResult {
+	testResult := s.runVerification(repoDir)

 	if !testResult.passed {
-		comment := fmt.Sprintf("## Verification Failed\n\n**Command:** `%s`\n\n```\n%s\n```\n\n**Exit code:** %d",
+		comment := core.Sprintf("## Verification Failed\n\n**Command:** `%s`\n\n```\n%s\n```\n\n**Exit code:** %d",
 			testResult.testCmd, truncate(testResult.output, 2000), testResult.exitCode)
 		s.commentOnIssue(context.Background(), org, repo, prNum, comment)
 		return testFailed
@@ -100,40 +96,40 @@ func (s *PrepSubsystem) attemptVerifyAndMerge(srcDir, org, repo, branch string,
 	defer cancel()

 	if err := s.forgeMergePR(ctx, org, repo, prNum); err != nil {
-		comment := fmt.Sprintf("## Tests Passed — Merge Failed\n\n`%s` passed but merge failed: %v", testResult.testCmd, err)
+		comment := core.Sprintf("## Tests Passed — Merge Failed\n\n`%s` passed but merge failed: %v", testResult.testCmd, err)
 		s.commentOnIssue(context.Background(), org, repo, prNum, comment)
 		return mergeConflict
 	}

-	comment := fmt.Sprintf("## Auto-Verified & Merged\n\n**Tests:** `%s` — PASS\n\nAuto-merged by core-agent dispatch system.", testResult.testCmd)
+	comment := core.Sprintf("## Auto-Verified & Merged\n\n**Tests:** `%s` — PASS\n\nAuto-merged by core-agent dispatch system.", testResult.testCmd)
 	s.commentOnIssue(context.Background(), org, repo, prNum, comment)
 	return mergeSuccess
 }

 // rebaseBranch rebases the current branch onto the default branch and force-pushes.
-func (s *PrepSubsystem) rebaseBranch(srcDir, branch string) bool {
-	base := gitDefaultBranch(srcDir)
+func (s *PrepSubsystem) rebaseBranch(repoDir, branch string) bool {
+	base := DefaultBranch(repoDir)

 	// Fetch latest default branch
 	fetch := exec.Command("git", "fetch", "origin", base)
-	fetch.Dir = srcDir
+	fetch.Dir = repoDir
 	if err := fetch.Run(); err != nil {
 		return false
 	}

 	// Rebase onto default branch
 	rebase := exec.Command("git", "rebase", "origin/"+base)
-	rebase.Dir = srcDir
+	rebase.Dir = repoDir
 	if err := rebase.Run(); err != nil {
 		// Rebase failed — abort and give up
 		abort := exec.Command("git", "rebase", "--abort")
-		abort.Dir = srcDir
+		abort.Dir = repoDir
 		abort.Run()
 		return false
 	}

 	// Force-push the rebased branch to Forge (origin is local clone)
-	st, _ := readStatus(filepath.Dir(srcDir))
+	st, _ := readStatus(core.PathDir(repoDir))
 	org := "core"
 	repo := ""
 	if st != nil {
@@ -142,9 +138,9 @@ func (s *PrepSubsystem) rebaseBranch(srcDir, branch string) bool {
 		}
 		repo = st.Repo
 	}
-	forgeRemote := fmt.Sprintf("ssh://git@forge.lthn.ai:2223/%s/%s.git", org, repo)
+	forgeRemote := core.Sprintf("ssh://git@forge.lthn.ai:2223/%s/%s.git", org, repo)
 	push := exec.Command("git", "push", "--force-with-lease", forgeRemote, branch)
-	push.Dir = srcDir
+	push.Dir = repoDir
 	return push.Run() == nil
 }

@@ -160,7 +156,7 @@ func (s *PrepSubsystem) flagForReview(org, repo string, prNum int, result mergeR
 	payload, _ := json.Marshal(map[string]any{
 		"labels": []int{s.getLabelID(ctx, org, repo, "needs-review")},
 	})
-	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/issues/%d/labels", s.forgeURL, org, repo, prNum)
+	url := core.Sprintf("%s/api/v1/repos/%s/%s/issues/%d/labels", s.forgeURL, org, repo, prNum)
 	req, _ := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(payload))
 	req.Header.Set("Content-Type", "application/json")
 	req.Header.Set("Authorization", "token "+s.forgeToken)
@@ -174,7 +170,7 @@ func (s *PrepSubsystem) flagForReview(org, repo string, prNum int, result mergeR
 	if result == mergeConflict {
 		reason = "Merge conflict persists after rebase"
 	}
-	comment := fmt.Sprintf("## Needs Review\n\n%s. Auto-merge gave up after retry.\n\nLabelled `needs-review` for human attention.", reason)
+	comment := core.Sprintf("## Needs Review\n\n%s. Auto-merge gave up after retry.\n\nLabelled `needs-review` for human attention.", reason)
 	s.commentOnIssue(ctx, org, repo, prNum, comment)
 }

@@ -184,7 +180,7 @@ func (s *PrepSubsystem) ensureLabel(ctx context.Context, org, repo, name, colour
 		"name":  name,
 		"color": "#" + colour,
 	})
-	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/labels", s.forgeURL, org, repo)
+	url := core.Sprintf("%s/api/v1/repos/%s/%s/labels", s.forgeURL, org, repo)
 	req, _ := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(payload))
 	req.Header.Set("Content-Type", "application/json")
 	req.Header.Set("Authorization", "token "+s.forgeToken)
@@ -196,7 +192,7 @@ func (s *PrepSubsystem) ensureLabel(ctx context.Context, org, repo, name, colour

 // getLabelID fetches the ID of a label by name.
 func (s *PrepSubsystem) getLabelID(ctx context.Context, org, repo, name string) int {
-	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/labels", s.forgeURL, org, repo)
+	url := core.Sprintf("%s/api/v1/repos/%s/%s/labels", s.forgeURL, org, repo)
 	req, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
 	req.Header.Set("Authorization", "token "+s.forgeToken)
 	resp, err := s.client.Do(req)
@@ -227,22 +223,22 @@ type verifyResult struct {
 }

 // runVerification detects the project type and runs the appropriate test suite.
-func (s *PrepSubsystem) runVerification(srcDir string) verifyResult {
-	if fileExists(filepath.Join(srcDir, "go.mod")) {
-		return s.runGoTests(srcDir)
+func (s *PrepSubsystem) runVerification(repoDir string) verifyResult {
+	if fileExists(core.JoinPath(repoDir, "go.mod")) {
+		return s.runGoTests(repoDir)
 	}
-	if fileExists(filepath.Join(srcDir, "composer.json")) {
-		return s.runPHPTests(srcDir)
+	if fileExists(core.JoinPath(repoDir, "composer.json")) {
+		return s.runPHPTests(repoDir)
 	}
-	if fileExists(filepath.Join(srcDir, "package.json")) {
-		return s.runNodeTests(srcDir)
+	if fileExists(core.JoinPath(repoDir, "package.json")) {
+		return s.runNodeTests(repoDir)
 	}
 	return verifyResult{passed: true, testCmd: "none", output: "No test runner detected"}
 }

-func (s *PrepSubsystem) runGoTests(srcDir string) verifyResult {
+func (s *PrepSubsystem) runGoTests(repoDir string) verifyResult {
 	cmd := exec.Command("go", "test", "./...", "-count=1", "-timeout", "120s")
-	cmd.Dir = srcDir
+	cmd.Dir = repoDir
 	cmd.Env = append(os.Environ(), "GOWORK=off")
 	out, err := cmd.CombinedOutput()

@@ -258,9 +254,9 @@ func (s *PrepSubsystem) runGoTests(srcDir string) verifyResult {
 	return verifyResult{passed: exitCode == 0, output: string(out), exitCode: exitCode, testCmd: "go test ./..."}
 }

-func (s *PrepSubsystem) runPHPTests(srcDir string) verifyResult {
+func (s *PrepSubsystem) runPHPTests(repoDir string) verifyResult {
 	cmd := exec.Command("composer", "test", "--no-interaction")
-	cmd.Dir = srcDir
+	cmd.Dir = repoDir
 	out, err := cmd.CombinedOutput()

 	exitCode := 0
@@ -269,7 +265,7 @@ func (s *PrepSubsystem) runPHPTests(srcDir string) verifyResult {
 		exitCode = exitErr.ExitCode()
 	} else {
 		cmd2 := exec.Command("./vendor/bin/pest", "--no-interaction")
-		cmd2.Dir = srcDir
+		cmd2.Dir = repoDir
 		out2, err2 := cmd2.CombinedOutput()
 		if err2 != nil {
 			return verifyResult{passed: false, testCmd: "none", output: "No PHP test runner found (composer test and vendor/bin/pest both unavailable)", exitCode: 1}
|
||||
|
|
@ -281,21 +277,21 @@ func (s *PrepSubsystem) runPHPTests(srcDir string) verifyResult {
|
|||
return verifyResult{passed: exitCode == 0, output: string(out), exitCode: exitCode, testCmd: "composer test"}
|
||||
}
|
||||
|
||||
func (s *PrepSubsystem) runNodeTests(srcDir string) verifyResult {
|
||||
data, err := coreio.Local.Read(filepath.Join(srcDir, "package.json"))
|
||||
if err != nil {
|
||||
func (s *PrepSubsystem) runNodeTests(repoDir string) verifyResult {
|
||||
r := fs.Read(core.JoinPath(repoDir, "package.json"))
|
||||
if !r.OK {
|
||||
return verifyResult{passed: true, testCmd: "none", output: "Could not read package.json"}
|
||||
}
|
||||
|
||||
var pkg struct {
|
||||
Scripts map[string]string `json:"scripts"`
|
||||
}
|
||||
if json.Unmarshal([]byte(data), &pkg) != nil || pkg.Scripts["test"] == "" {
|
||||
if json.Unmarshal([]byte(r.Value.(string)), &pkg) != nil || pkg.Scripts["test"] == "" {
|
||||
return verifyResult{passed: true, testCmd: "none", output: "No test script in package.json"}
|
||||
}
|
||||
|
||||
cmd := exec.Command("npm", "test")
|
||||
cmd.Dir = srcDir
|
||||
cmd.Dir = repoDir
|
||||
out, err := cmd.CombinedOutput()
|
||||
|
||||
exitCode := 0
|
||||
|
|
@ -318,14 +314,14 @@ func (s *PrepSubsystem) forgeMergePR(ctx context.Context, org, repo string, prNu
|
|||
"delete_branch_after_merge": true,
|
||||
})
|
||||
|
||||
url := fmt.Sprintf("%s/api/v1/repos/%s/%s/pulls/%d/merge", s.forgeURL, org, repo, prNum)
|
||||
url := core.Sprintf("%s/api/v1/repos/%s/%s/pulls/%d/merge", s.forgeURL, org, repo, prNum)
|
||||
req, _ := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(payload))
|
||||
req.Header.Set("Content-Type", "application/json")
|
||||
req.Header.Set("Authorization", "token "+s.forgeToken)
|
||||
|
||||
resp, err := s.client.Do(req)
|
||||
if err != nil {
|
||||
return coreerr.E("forgeMergePR", "request failed", err)
|
||||
return core.E("forgeMergePR", "request failed", err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
|
|
@ -333,7 +329,7 @@ func (s *PrepSubsystem) forgeMergePR(ctx context.Context, org, repo string, prNu
|
|||
var errBody map[string]any
|
||||
json.NewDecoder(resp.Body).Decode(&errBody)
|
||||
msg, _ := errBody["message"].(string)
|
||||
return coreerr.E("forgeMergePR", fmt.Sprintf("HTTP %d: %s", resp.StatusCode, msg), nil)
|
||||
return core.E("forgeMergePR", core.Sprintf("HTTP %d: %s", resp.StatusCode, msg), nil)
|
||||
}
|
||||
|
||||
return nil
|
||||
|
|
@ -341,16 +337,14 @@ func (s *PrepSubsystem) forgeMergePR(ctx context.Context, org, repo string, prNu
|
|||
|
||||
// extractPRNumber gets the PR number from a Forge PR URL.
|
||||
func extractPRNumber(prURL string) int {
|
||||
parts := strings.Split(prURL, "/")
|
||||
parts := core.Split(prURL, "/")
|
||||
if len(parts) == 0 {
|
||||
return 0
|
||||
}
|
||||
var num int
|
||||
fmt.Sscanf(parts[len(parts)-1], "%d", &num)
|
||||
return num
|
||||
return parseInt(parts[len(parts)-1])
|
||||
}
|
||||
|
||||
// fileExists checks if a file exists.
|
||||
func fileExists(path string) bool {
|
||||
return coreio.Local.IsFile(path)
|
||||
return fs.IsFile(path)
|
||||
}
|
||||
|
|
|
|||
|
|
@@ -4,15 +4,15 @@ package agentic

 import (
 	"context"
-	"fmt"
-	"path/filepath"
 	"time"

-	coreerr "dappco.re/go/core/log"
+	core "dappco.re/go/core"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )

 // WatchInput is the input for agentic_watch.
+//
+// input := agentic.WatchInput{Workspaces: []string{"go-io-123"}, PollInterval: 5, Timeout: 600}
 type WatchInput struct {
 	// Workspaces to watch. If empty, watches all running/queued workspaces.
 	Workspaces []string `json:"workspaces,omitempty"`

@@ -23,6 +23,8 @@ type WatchInput struct {
 }

 // WatchOutput is the result when all watched workspaces complete.
+//
+// out := agentic.WatchOutput{Success: true, Completed: []agentic.WatchResult{{Workspace: "go-io-123", Status: "completed"}}}
 type WatchOutput struct {
 	Success   bool          `json:"success"`
 	Completed []WatchResult `json:"completed"`

@@ -31,6 +33,8 @@ type WatchOutput struct {
 }

 // WatchResult describes one completed workspace.
+//
+// result := agentic.WatchResult{Workspace: "go-io-123", Agent: "codex", Repo: "go-io", Status: "completed"}
 type WatchResult struct {
 	Workspace string `json:"workspace"`
 	Agent     string `json:"agent"`

@@ -99,7 +103,7 @@ func (s *PrepSubsystem) watch(ctx context.Context, req *mcp.CallToolRequest, inp

 		select {
 		case <-ctx.Done():
-			return nil, WatchOutput{}, coreerr.E("watch", "cancelled", ctx.Err())
+			return nil, WatchOutput{}, core.E("watch", "cancelled", ctx.Err())
 		case <-time.After(pollInterval):
 		}

@@ -128,7 +132,7 @@ func (s *PrepSubsystem) watch(ctx context.Context, req *mcp.CallToolRequest, inp
 				ProgressToken: progressToken,
 				Progress:      progressCount,
 				Total:         total,
-				Message:       fmt.Sprintf("%s completed (%s)", st.Repo, st.Agent),
+				Message:       core.Sprintf("%s completed (%s)", st.Repo, st.Agent),
 			})
 		}

@@ -149,7 +153,7 @@ func (s *PrepSubsystem) watch(ctx context.Context, req *mcp.CallToolRequest, inp
 				ProgressToken: progressToken,
 				Progress:      progressCount,
 				Total:         total,
-				Message:       fmt.Sprintf("%s %s (%s)", st.Repo, st.Status, st.Agent),
+				Message:       core.Sprintf("%s %s (%s)", st.Repo, st.Status, st.Agent),
 			})
 		}

@@ -169,7 +173,7 @@ func (s *PrepSubsystem) watch(ctx context.Context, req *mcp.CallToolRequest, inp
 				ProgressToken: progressToken,
 				Progress:      progressCount,
 				Total:         total,
-				Message:       fmt.Sprintf("%s %s (%s)", st.Repo, st.Status, st.Agent),
+				Message:       core.Sprintf("%s %s (%s)", st.Repo, st.Status, st.Agent),
 			})
 		}
 	}

@@ -187,20 +191,17 @@ func (s *PrepSubsystem) watch(ctx context.Context, req *mcp.CallToolRequest, inp
 // findActiveWorkspaces returns workspace names that are running or queued.
 func (s *PrepSubsystem) findActiveWorkspaces() []string {
 	wsRoot := WorkspaceRoot()
-	entries, err := filepath.Glob(filepath.Join(wsRoot, "*/status.json"))
-	if err != nil {
-		return nil
-	}
+	entries := core.PathGlob(core.JoinPath(wsRoot, "*/status.json"))

 	var active []string
 	for _, entry := range entries {
-		wsDir := filepath.Dir(entry)
+		wsDir := core.PathDir(entry)
 		st, err := readStatus(wsDir)
 		if err != nil {
 			continue
 		}
 		if st.Status == "running" || st.Status == "queued" {
-			active = append(active, filepath.Base(wsDir))
+			active = append(active, core.PathBase(wsDir))
 		}
 	}
 	return active

@@ -208,8 +209,8 @@ func (s *PrepSubsystem) findActiveWorkspaces() []string {

 // resolveWorkspaceDir converts a workspace name to full path.
 func (s *PrepSubsystem) resolveWorkspaceDir(name string) string {
-	if filepath.IsAbs(name) {
+	if core.PathIsAbs(name) {
 		return name
 	}
-	return filepath.Join(WorkspaceRoot(), name)
+	return core.JoinPath(WorkspaceRoot(), name)
 }
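The `resolveWorkspaceDir` hunk above keeps absolute workspace names as-is and joins bare names onto the workspace root. A stdlib sketch of that rule (`filepath` standing in for the PR's `core.PathIsAbs`/`core.JoinPath`; the root path is made up for the example):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// resolveWorkspaceDir mirrors the diff's helper: absolute names pass
// through unchanged, bare names are joined onto the workspace root.
func resolveWorkspaceDir(root, name string) string {
	if filepath.IsAbs(name) {
		return name
	}
	return filepath.Join(root, name)
}

func main() {
	fmt.Println(resolveWorkspaceDir("/srv/workspaces", "go-io-123")) // /srv/workspaces/go-io-123
	fmt.Println(resolveWorkspaceDir("/srv/workspaces", "/tmp/override")) // /tmp/override
}
```

Accepting absolute paths lets callers point the watcher at an ad-hoc sandbox without registering it under the root.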
@@ -7,36 +7,60 @@ package brain
 import (
 	"context"

-	coreerr "dappco.re/go/core/log"
+	"dappco.re/go/agent/pkg/agentic"
+	core "dappco.re/go/core"
 	"forge.lthn.ai/core/mcp/pkg/mcp/ide"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )

+// fs provides unrestricted filesystem access for shared brain credentials.
+//
+// keyPath := core.Concat(home, "/.claude/brain.key")
+// if r := fs.Read(keyPath); r.OK {
+//	apiKey = core.Trim(r.Value.(string))
+// }
+var fs = agentic.LocalFs()
+
+func fieldString(values map[string]any, key string) string {
+	return core.Sprint(values[key])
+}
+
 // errBridgeNotAvailable is returned when a tool requires the Laravel bridge
 // but it has not been initialised (headless mode).
-var errBridgeNotAvailable = coreerr.E("brain", "bridge not available", nil)
+var errBridgeNotAvailable = core.E("brain", "bridge not available", nil)

-// Subsystem implements mcp.Subsystem for OpenBrain knowledge store operations.
-// It proxies brain_* tool calls to the Laravel backend via the shared IDE bridge.
+// Subsystem proxies brain_* MCP tools through the shared IDE bridge.
+//
+// sub := brain.New(bridge)
+// sub.RegisterTools(server)
 type Subsystem struct {
 	bridge *ide.Bridge
 }

-// New creates a brain subsystem that uses the given IDE bridge for Laravel communication.
-// Pass nil if headless (tools will return errBridgeNotAvailable).
+// New creates a bridge-backed brain subsystem.
+//
+// sub := brain.New(bridge)
+// _ = sub.Shutdown(context.Background())
 func New(bridge *ide.Bridge) *Subsystem {
 	return &Subsystem{bridge: bridge}
 }

-// Name implements mcp.Subsystem.
+// Name returns the MCP subsystem name.
+//
+// name := sub.Name() // "brain"
 func (s *Subsystem) Name() string { return "brain" }

-// RegisterTools implements mcp.Subsystem.
+// RegisterTools adds the bridge-backed brain tools to an MCP server.
+//
+// sub := brain.New(bridge)
+// sub.RegisterTools(server)
 func (s *Subsystem) RegisterTools(server *mcp.Server) {
 	s.registerBrainTools(server)
 }

-// Shutdown implements mcp.SubsystemWithShutdown.
+// Shutdown closes the subsystem without additional cleanup.
+//
+// _ = sub.Shutdown(context.Background())
 func (s *Subsystem) Shutdown(_ context.Context) error {
 	return nil
 }
@@ -11,7 +11,8 @@ import (
 	"testing"
 	"time"

-	ws "dappco.re/go/core/ws"
+	providerws "dappco.re/go/core/ws"
+	bridgews "forge.lthn.ai/core/go-ws"
 	"forge.lthn.ai/core/mcp/pkg/mcp/ide"
 	"github.com/gorilla/websocket"
 	mcpsdk "github.com/modelcontextprotocol/go-sdk/mcp"

@@ -45,7 +46,7 @@ func testBridge(t *testing.T) *ide.Bridge {
 	srv := testWSServer(t)

 	wsURL := "ws" + strings.TrimPrefix(srv.URL, "http")
-	hub := ws.NewHub()
+	hub := bridgews.NewHub()
 	bridge := ide.NewBridge(hub, ide.Config{
 		LaravelWSURL:      wsURL,
 		ReconnectInterval: 100 * time.Millisecond,

@@ -193,7 +194,7 @@ func TestStatusHandler_Good_WithBridge(t *testing.T) {
 // --- emitEvent with hub ---

 func TestEmitEvent_Good_WithHub(t *testing.T) {
-	hub := ws.NewHub()
+	hub := providerws.NewHub()
 	p := NewProvider(nil, hub)
 	p.emitEvent("brain.test", map[string]any{"key": "value"})
 }
@@ -6,50 +6,53 @@ import (
 	"bytes"
 	"context"
 	"encoding/json"
-	"fmt"
 	"io"
 	"net/http"
-	"os"
-	"path/filepath"
-	"strings"
 	"time"

 	"dappco.re/go/agent/pkg/agentic"
-	coreio "dappco.re/go/core/io"
-	coreerr "dappco.re/go/core/log"
+	core "dappco.re/go/core"
 	coremcp "forge.lthn.ai/core/mcp/pkg/mcp"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )

-// agentName returns the identity of this agent.
-func agentName() string {
-	return agentic.AgentName()
-}
-
-// DirectSubsystem implements mcp.Subsystem for OpenBrain via direct HTTP calls.
-// Unlike Subsystem (which uses the IDE WebSocket bridge), this calls the
-// Laravel API directly — suitable for standalone core-mcp usage.
+// DirectSubsystem calls the OpenBrain HTTP API without the IDE bridge.
+//
+// sub := brain.NewDirect()
+// sub.RegisterTools(server)
 type DirectSubsystem struct {
 	apiURL string
 	apiKey string
 	client *http.Client
 }

-// NewDirect creates a brain subsystem that calls the OpenBrain API directly.
-// Reads CORE_BRAIN_URL and CORE_BRAIN_KEY from environment, or falls back
-// to ~/.claude/brain.key for the API key.
+var _ coremcp.Subsystem = (*DirectSubsystem)(nil)
+
+// NewDirect creates a direct HTTP brain subsystem.
+//
+// sub := brain.NewDirect()
+// sub.RegisterTools(server)
 func NewDirect() *DirectSubsystem {
-	apiURL := os.Getenv("CORE_BRAIN_URL")
+	apiURL := core.Env("CORE_BRAIN_URL")
 	if apiURL == "" {
 		apiURL = "https://api.lthn.sh"
 	}

-	apiKey := os.Getenv("CORE_BRAIN_KEY")
+	apiKey := core.Env("CORE_BRAIN_KEY")
+	keyPath := ""
 	if apiKey == "" {
-		home, _ := os.UserHomeDir()
-		if data, err := coreio.Local.Read(filepath.Join(home, ".claude", "brain.key")); err == nil {
-			apiKey = strings.TrimSpace(data)
-		}
+		keyPath = brainKeyPath(brainHomeDir())
+		if keyPath != "" {
+			if r := fs.Read(keyPath); r.OK {
+				apiKey = core.Trim(r.Value.(string))
+				if apiKey != "" {
+					core.Info("brain direct subsystem loaded API key from file", "path", keyPath)
+				}
+			}
+		}
 	}
+	if apiKey == "" {
+		core.Warn("brain direct subsystem has no API key configured", "path", keyPath)
+	}

 	return &DirectSubsystem{
 		apiURL: apiURL,

@@ -58,10 +61,15 @@ func NewDirect() *DirectSubsystem {
 	}
 }

-// Name implements mcp.Subsystem.
+// Name returns the MCP subsystem name.
+//
+// name := sub.Name() // "brain"
 func (s *DirectSubsystem) Name() string { return "brain" }

-// RegisterTools implements mcp.Subsystem.
+// RegisterTools adds the direct OpenBrain tools to an MCP server.
+//
+// sub := brain.NewDirect()
+// sub.RegisterTools(server)
 func (s *DirectSubsystem) RegisterTools(server *mcp.Server) {
 	mcp.AddTool(server, &mcp.Tool{
 		Name: "brain_remember",

@@ -82,49 +90,76 @@ func (s *DirectSubsystem) RegisterTools(server *mcp.Server) {
 	s.RegisterMessagingTools(server)
 }

-// Shutdown implements mcp.SubsystemWithShutdown.
+// Shutdown closes the direct subsystem without additional cleanup.
+//
+// _ = sub.Shutdown(context.Background())
 func (s *DirectSubsystem) Shutdown(_ context.Context) error { return nil }

+func brainKeyPath(home string) string {
+	if home == "" {
+		return ""
+	}
+	return core.JoinPath(core.TrimSuffix(home, "/"), ".claude", "brain.key")
+}
+
+func brainHomeDir() string {
+	if home := core.Env("CORE_HOME"); home != "" {
+		return home
+	}
+	return core.Env("DIR_HOME")
+}
+
 func (s *DirectSubsystem) apiCall(ctx context.Context, method, path string, body any) (map[string]any, error) {
 	if s.apiKey == "" {
-		return nil, coreerr.E("brain.apiCall", "no API key (set CORE_BRAIN_KEY or create ~/.claude/brain.key)", nil)
+		return nil, core.E("brain.apiCall", "no API key (set CORE_BRAIN_KEY or create ~/.claude/brain.key)", nil)
 	}

-	var reqBody io.Reader
+	var reqBody *bytes.Reader
 	if body != nil {
 		data, err := json.Marshal(body)
 		if err != nil {
-			return nil, coreerr.E("brain.apiCall", "marshal request", err)
+			core.Error("brain API request marshal failed", "method", method, "path", path, "err", err)
+			return nil, core.E("brain.apiCall", "marshal request", err)
 		}
 		reqBody = bytes.NewReader(data)
 	}

-	req, err := http.NewRequestWithContext(ctx, method, s.apiURL+path, reqBody)
+	requestURL := core.Concat(s.apiURL, path)
+	req, err := http.NewRequestWithContext(ctx, method, requestURL, nil)
+	if reqBody != nil {
+		req, err = http.NewRequestWithContext(ctx, method, requestURL, reqBody)
+	}
 	if err != nil {
-		return nil, coreerr.E("brain.apiCall", "create request", err)
+		core.Error("brain API request creation failed", "method", method, "path", path, "err", err)
+		return nil, core.E("brain.apiCall", "create request", err)
 	}
 	req.Header.Set("Content-Type", "application/json")
 	req.Header.Set("Accept", "application/json")
-	req.Header.Set("Authorization", "Bearer "+s.apiKey)
+	req.Header.Set("Authorization", core.Concat("Bearer ", s.apiKey))

 	resp, err := s.client.Do(req)
 	if err != nil {
-		return nil, coreerr.E("brain.apiCall", "API call failed", err)
+		core.Error("brain API call failed", "method", method, "path", path, "err", err)
+		return nil, core.E("brain.apiCall", "API call failed", err)
 	}
 	defer resp.Body.Close()

-	respData, err := io.ReadAll(resp.Body)
-	if err != nil {
-		return nil, coreerr.E("brain.apiCall", "read response", err)
+	respBuffer := bytes.NewBuffer(nil)
+	if _, err := respBuffer.ReadFrom(resp.Body); err != nil {
+		core.Error("brain API response read failed", "method", method, "path", path, "err", err)
+		return nil, core.E("brain.apiCall", "read response", err)
 	}
+	respData := respBuffer.Bytes()

 	if resp.StatusCode >= 400 {
-		return nil, coreerr.E("brain.apiCall", fmt.Sprintf("API returned %d: %s", resp.StatusCode, string(respData)), nil)
+		core.Warn("brain API returned error status", "method", method, "path", path, "status", resp.StatusCode)
+		return nil, core.E("brain.apiCall", core.Sprintf("API returned %d: %s", resp.StatusCode, string(respData)), nil)
 	}

 	var result map[string]any
 	if err := json.Unmarshal(respData, &result); err != nil {
-		return nil, coreerr.E("brain.apiCall", "parse response", err)
+		core.Error("brain API response parse failed", "method", method, "path", path, "err", err)
+		return nil, core.E("brain.apiCall", "parse response", err)
 	}

 	return result, nil

@@ -132,11 +167,14 @@ func (s *DirectSubsystem) apiCall(ctx context.Context, method, path string, body

 func (s *DirectSubsystem) remember(ctx context.Context, _ *mcp.CallToolRequest, input RememberInput) (*mcp.CallToolResult, RememberOutput, error) {
 	result, err := s.apiCall(ctx, "POST", "/v1/brain/remember", map[string]any{
-		"content":  input.Content,
-		"type":     input.Type,
-		"tags":     input.Tags,
-		"project":  input.Project,
-		"agent_id": agentName(),
+		"content":    input.Content,
+		"type":       input.Type,
+		"tags":       input.Tags,
+		"project":    input.Project,
+		"confidence": input.Confidence,
+		"supersedes": input.Supersedes,
+		"expires_in": input.ExpiresIn,
+		"agent_id":   agentic.AgentName(),
 	})
 	if err != nil {
 		return nil, RememberOutput{}, err

@@ -165,6 +203,9 @@ func (s *DirectSubsystem) recall(ctx context.Context, _ *mcp.CallToolRequest, in
 	if input.Filter.Type != nil {
 		body["type"] = input.Filter.Type
 	}
+	if input.Filter.MinConfidence != 0 {
+		body["min_confidence"] = input.Filter.MinConfidence
+	}
 	if input.TopK == 0 {
 		body["top_k"] = 10
 	}

@@ -179,11 +220,11 @@ func (s *DirectSubsystem) recall(ctx context.Context, _ *mcp.CallToolRequest, in
 	for _, m := range mems {
 		if mm, ok := m.(map[string]any); ok {
 			mem := Memory{
-				Content:   fmt.Sprintf("%v", mm["content"]),
-				Type:      fmt.Sprintf("%v", mm["type"]),
-				Project:   fmt.Sprintf("%v", mm["project"]),
-				AgentID:   fmt.Sprintf("%v", mm["agent_id"]),
-				CreatedAt: fmt.Sprintf("%v", mm["created_at"]),
+				Content:   fieldString(mm, "content"),
+				Type:      fieldString(mm, "type"),
+				Project:   fieldString(mm, "project"),
+				AgentID:   fieldString(mm, "agent_id"),
+				CreatedAt: fieldString(mm, "created_at"),
 			}
 			if id, ok := mm["id"].(string); ok {
 				mem.ID = id

@@ -191,8 +232,13 @@ func (s *DirectSubsystem) recall(ctx context.Context, _ *mcp.CallToolRequest, in
 			if score, ok := mm["score"].(float64); ok {
 				mem.Confidence = score
 			}
+			if tags, ok := mm["tags"].([]any); ok {
+				for _, tag := range tags {
+					mem.Tags = append(mem.Tags, core.Sprint(tag))
+				}
+			}
 			if source, ok := mm["source"].(string); ok {
-				mem.Tags = append(mem.Tags, "source:"+source)
+				mem.Tags = append(mem.Tags, core.Concat("source:", source))
 			}
 			memories = append(memories, mem)
 		}

@@ -207,7 +253,7 @@ func (s *DirectSubsystem) recall(ctx context.Context, _ *mcp.CallToolRequest, in
 }

 func (s *DirectSubsystem) forget(ctx context.Context, _ *mcp.CallToolRequest, input ForgetInput) (*mcp.CallToolResult, ForgetOutput, error) {
-	_, err := s.apiCall(ctx, "DELETE", "/v1/brain/forget/"+input.ID, nil)
+	_, err := s.apiCall(ctx, "DELETE", core.Concat("/v1/brain/forget/", input.ID), nil)
 	if err != nil {
 		return nil, ForgetOutput{}, err
 	}
@@ -5,14 +5,12 @@ package brain
 import (
 	"context"
 	"encoding/json"
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
 	"net/http"
 	"net/http/httptest"
 	"path/filepath"
 	"testing"

-	coreio "dappco.re/go/core/io"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
 )

 // newTestDirect returns a DirectSubsystem wired to the given test server.

@@ -61,10 +59,10 @@ func TestNewDirect_Good_KeyFromFile(t *testing.T) {
 	t.Setenv("CORE_BRAIN_KEY", "")

 	tmpHome := t.TempDir()
-	t.Setenv("HOME", tmpHome)
+	t.Setenv("CORE_HOME", tmpHome)
 	keyDir := filepath.Join(tmpHome, ".claude")
-	require.NoError(t, coreio.Local.EnsureDir(keyDir))
-	require.NoError(t, coreio.Local.Write(filepath.Join(keyDir, "brain.key"), " file-key-456 \n"))
+	require.True(t, fs.EnsureDir(keyDir).OK)
+	require.True(t, fs.Write(filepath.Join(keyDir, "brain.key"), " file-key-456 \n").OK)

 	sub := NewDirect()
 	assert.Equal(t, "file-key-456", sub.apiKey)
@@ -4,14 +4,17 @@ package brain

 import (
 	"context"
-	"fmt"
 	"net/url"

-	coreerr "dappco.re/go/core/log"
+	"dappco.re/go/agent/pkg/agentic"
+	core "dappco.re/go/core"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )

-// RegisterMessagingTools adds agent messaging tools to the MCP server.
+// RegisterMessagingTools adds direct agent messaging tools to an MCP server.
+//
+// sub := brain.NewDirect()
+// sub.RegisterMessagingTools(server)
 func (s *DirectSubsystem) RegisterMessagingTools(server *mcp.Server) {
 	mcp.AddTool(server, &mcp.Tool{
 		Name: "agent_send",

@@ -31,22 +34,34 @@ func (s *DirectSubsystem) RegisterMessagingTools(server *mcp.Server) {

 // Input/Output types

+// SendInput sends a direct message to another agent.
+//
+// brain.SendInput{To: "charon", Subject: "status update", Content: "deploy complete"}
 type SendInput struct {
 	To      string `json:"to"`
 	Content string `json:"content"`
 	Subject string `json:"subject,omitempty"`
 }

+// SendOutput reports the created direct message.
+//
+// brain.SendOutput{Success: true, ID: 42, To: "charon"}
 type SendOutput struct {
 	Success bool   `json:"success"`
 	ID      int    `json:"id"`
 	To      string `json:"to"`
 }

+// InboxInput selects which agent inbox to read.
+//
+// brain.InboxInput{Agent: "cladius"}
 type InboxInput struct {
 	Agent string `json:"agent,omitempty"`
 }

+// MessageItem is one inbox or conversation message.
+//
+// brain.MessageItem{ID: 7, From: "cladius", To: "charon", Content: "all green"}
 type MessageItem struct {
 	ID   int    `json:"id"`
 	From string `json:"from"`

@@ -57,15 +72,24 @@ type MessageItem struct {
 	CreatedAt string `json:"created_at"`
 }

+// InboxOutput returns the latest direct messages for an agent.
+//
+// brain.InboxOutput{Success: true, Messages: []brain.MessageItem{{ID: 1, From: "charon", To: "cladius"}}}
 type InboxOutput struct {
 	Success  bool          `json:"success"`
 	Messages []MessageItem `json:"messages"`
 }

+// ConversationInput selects the agent thread to load.
+//
+// brain.ConversationInput{Agent: "charon"}
 type ConversationInput struct {
 	Agent string `json:"agent"`
 }

+// ConversationOutput returns a direct message thread with another agent.
+//
+// brain.ConversationOutput{Success: true, Messages: []brain.MessageItem{{ID: 10, From: "cladius", To: "charon"}}}
 type ConversationOutput struct {
 	Success  bool          `json:"success"`
 	Messages []MessageItem `json:"messages"`

@@ -75,12 +99,12 @@ type ConversationOutput struct {

 func (s *DirectSubsystem) sendMessage(ctx context.Context, _ *mcp.CallToolRequest, input SendInput) (*mcp.CallToolResult, SendOutput, error) {
 	if input.To == "" || input.Content == "" {
-		return nil, SendOutput{}, coreerr.E("brain.sendMessage", "to and content are required", nil)
+		return nil, SendOutput{}, core.E("brain.sendMessage", "to and content are required", nil)
 	}

 	result, err := s.apiCall(ctx, "POST", "/v1/messages/send", map[string]any{
 		"to":      input.To,
-		"from":    agentName(),
+		"from":    agentic.AgentName(),
 		"content": input.Content,
 		"subject": input.Subject,
 	})

@@ -101,7 +125,7 @@ func (s *DirectSubsystem) sendMessage(ctx context.Context, _ *mcp.CallToolReques
 func (s *DirectSubsystem) inbox(ctx context.Context, _ *mcp.CallToolRequest, input InboxInput) (*mcp.CallToolResult, InboxOutput, error) {
 	agent := input.Agent
 	if agent == "" {
-		agent = agentName()
+		agent = agentic.AgentName()
 	}
 	result, err := s.apiCall(ctx, "GET", "/v1/messages/inbox?agent="+url.QueryEscape(agent), nil)
 	if err != nil {

@@ -116,10 +140,10 @@ func (s *DirectSubsystem) inbox(ctx context.Context, _ *mcp.CallToolRequest, inp

 func (s *DirectSubsystem) conversation(ctx context.Context, _ *mcp.CallToolRequest, input ConversationInput) (*mcp.CallToolResult, ConversationOutput, error) {
 	if input.Agent == "" {
-		return nil, ConversationOutput{}, coreerr.E("brain.conversation", "agent is required", nil)
+		return nil, ConversationOutput{}, core.E("brain.conversation", "agent is required", nil)
 	}

-	result, err := s.apiCall(ctx, "GET", "/v1/messages/conversation/"+url.PathEscape(input.Agent)+"?me="+url.QueryEscape(agentName()), nil)
+	result, err := s.apiCall(ctx, "GET", "/v1/messages/conversation/"+url.PathEscape(input.Agent)+"?me="+url.QueryEscape(agentic.AgentName()), nil)
 	if err != nil {
 		return nil, ConversationOutput{}, err
 	}

@@ -137,12 +161,12 @@ func parseMessages(result map[string]any) []MessageItem {
 		mm, _ := m.(map[string]any)
 		messages = append(messages, MessageItem{
 			ID:        toInt(mm["id"]),
-			From:      fmt.Sprintf("%v", mm["from"]),
-			To:        fmt.Sprintf("%v", mm["to"]),
-			Subject:   fmt.Sprintf("%v", mm["subject"]),
-			Content:   fmt.Sprintf("%v", mm["content"]),
+			From:      fieldString(mm, "from"),
+			To:        fieldString(mm, "to"),
+			Subject:   fieldString(mm, "subject"),
+			Content:   fieldString(mm, "content"),
 			Read:      mm["read"] == true,
-			CreatedAt: fmt.Sprintf("%v", mm["created_at"]),
+			CreatedAt: fieldString(mm, "created_at"),
 		})
 	}
 	return messages
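The `parseMessages` hunk above swaps repeated `fmt.Sprintf("%v", mm[key])` call sites for a shared `fieldString` helper. A stdlib sketch of that helper (using `fmt.Sprint` where the PR uses its `core.Sprint` wrapper):

```go
package main

import "fmt"

// fieldString formats an arbitrary map value as a string, collapsing
// the repeated fmt.Sprintf("%v", ...) pattern into one helper.
func fieldString(values map[string]any, key string) string {
	return fmt.Sprint(values[key])
}

func main() {
	msg := map[string]any{"from": "cladius", "id": 7}
	fmt.Println(fieldString(msg, "from")) // cladius
	fmt.Println(fieldString(msg, "id"))   // 7
}
```

Note that a missing key yields `"<nil>"` rather than an empty string, which matches the `%v` behaviour the helper replaces.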
@@ -4,9 +4,10 @@ package brain

 import (
 	"net/http"
+	"strconv"

-	"forge.lthn.ai/core/api"
-	"forge.lthn.ai/core/api/pkg/provider"
+	"dappco.re/go/core/api"
+	"dappco.re/go/core/api/pkg/provider"
 	"dappco.re/go/core/ws"
 	"forge.lthn.ai/core/mcp/pkg/mcp/ide"
 	"github.com/gin-gonic/gin"

@@ -14,6 +15,11 @@ import (

 // BrainProvider wraps the brain Subsystem as a service provider with REST
 // endpoints. It delegates to the same IDE bridge that the MCP tools use.
+//
+// Usage example:
+//
+//	provider := brain.NewProvider(bridge, hub)
+//	provider.RegisterRoutes(router.Group("/api/brain"))
 type BrainProvider struct {
 	bridge *ide.Bridge
 	hub    *ws.Hub

@@ -294,13 +300,23 @@ func (p *BrainProvider) list(c *gin.Context) {
 		return
 	}

+	limit := 0
+	if rawLimit := c.Query("limit"); rawLimit != "" {
+		parsedLimit, err := strconv.Atoi(rawLimit)
+		if err != nil {
+			c.JSON(http.StatusBadRequest, api.Fail("invalid_limit", "limit must be an integer"))
+			return
+		}
+		limit = parsedLimit
+	}
+
 	err := p.bridge.Send(ide.BridgeMessage{
 		Type: "brain_list",
 		Data: map[string]any{
 			"project":  c.Query("project"),
 			"type":     c.Query("type"),
 			"agent_id": c.Query("agent_id"),
-			"limit":    c.Query("limit"),
+			"limit":    limit,
 		},
 	})
 	if err != nil {
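The provider hunk above stops forwarding the raw `limit` query string and instead validates it with `strconv.Atoi`, rejecting non-numeric values with a 400. That validation step can be sketched on its own, outside the Gin handler (`parseLimit` is an illustrative name, not part of the PR):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseLimit mirrors the handler's rule: an empty query parameter means
// no limit (0), and anything non-numeric is rejected instead of being
// passed through to the bridge as a string.
func parseLimit(raw string) (int, error) {
	if raw == "" {
		return 0, nil
	}
	return strconv.Atoi(raw)
}

func main() {
	n, err := parseLimit("25")
	fmt.Println(n, err) // 25 <nil>
	_, err = parseLimit("abc")
	fmt.Println(err != nil) // true
}
```

Parsing at the HTTP boundary keeps the bridge payload typed (`int`, not `string`), so the downstream consumer never has to re-validate.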
@@ -6,7 +6,7 @@ import (
 	"context"
 	"time"
 
-	coreerr "dappco.re/go/core/log"
+	core "dappco.re/go/core"
 	"forge.lthn.ai/core/mcp/pkg/mcp/ide"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
 )

@@ -14,6 +14,13 @@ import (
 // -- Input/Output types -------------------------------------------------------
 
 // RememberInput is the input for brain_remember.
+//
+// Usage example:
+//
+//	input := brain.RememberInput{
+//		Content: "Use core.Env for system paths.",
+//		Type:    "convention",
+//	}
 type RememberInput struct {
 	Content string `json:"content"`
 	Type    string `json:"type"`

@@ -25,6 +32,13 @@ type RememberInput struct {
 }
 
 // RememberOutput is the output for brain_remember.
+//
+// Usage example:
+//
+//	output := brain.RememberOutput{
+//		Success:  true,
+//		MemoryID: "mem_123",
+//	}
 type RememberOutput struct {
 	Success  bool   `json:"success"`
 	MemoryID string `json:"memoryId,omitempty"`

@@ -32,6 +46,13 @@ type RememberOutput struct {
 }
 
 // RecallInput is the input for brain_recall.
+//
+// Usage example:
+//
+//	input := brain.RecallInput{
+//		Query: "core.Env conventions",
+//		TopK:  5,
+//	}
 type RecallInput struct {
 	Query string `json:"query"`
 	TopK  int    `json:"top_k,omitempty"`

@@ -39,6 +60,13 @@ type RecallInput struct {
 }
 
 // RecallFilter holds optional filter criteria for brain_recall.
+//
+// Usage example:
+//
+//	filter := brain.RecallFilter{
+//		Project: "agent",
+//		Type:    "convention",
+//	}
 type RecallFilter struct {
 	Project string `json:"project,omitempty"`
 	Type    any    `json:"type,omitempty"`

@@ -47,6 +75,13 @@ type RecallFilter struct {
 }
 
 // RecallOutput is the output for brain_recall.
+//
+// Usage example:
+//
+//	output := brain.RecallOutput{
+//		Success: true,
+//		Count:   1,
+//	}
 type RecallOutput struct {
 	Success bool `json:"success"`
 	Count   int  `json:"count"`

@@ -54,6 +89,14 @@ type RecallOutput struct {
 }
 
 // Memory is a single memory entry returned by recall or list.
+//
+// Usage example:
+//
+//	memory := brain.Memory{
+//		ID:      "mem_123",
+//		Type:    "convention",
+//		Content: "Use core.Env for system paths.",
+//	}
 type Memory struct {
 	ID      string `json:"id"`
 	AgentID string `json:"agent_id"`

@@ -69,12 +112,26 @@ type Memory struct {
 }
 
 // ForgetInput is the input for brain_forget.
+//
+// Usage example:
+//
+//	input := brain.ForgetInput{
+//		ID:     "mem_123",
+//		Reason: "superseded",
+//	}
 type ForgetInput struct {
 	ID     string `json:"id"`
 	Reason string `json:"reason,omitempty"`
 }
 
 // ForgetOutput is the output for brain_forget.
+//
+// Usage example:
+//
+//	output := brain.ForgetOutput{
+//		Success:   true,
+//		Forgotten: "mem_123",
+//	}
 type ForgetOutput struct {
 	Success   bool   `json:"success"`
 	Forgotten string `json:"forgotten"`

@@ -82,6 +139,13 @@ type ForgetOutput struct {
 }
 
 // ListInput is the input for brain_list.
+//
+// Usage example:
+//
+//	input := brain.ListInput{
+//		Project: "agent",
+//		Limit:   20,
+//	}
 type ListInput struct {
 	Project string `json:"project,omitempty"`
 	Type    string `json:"type,omitempty"`

@@ -90,6 +154,13 @@ type ListInput struct {
 }
 
 // ListOutput is the output for brain_list.
+//
+// Usage example:
+//
+//	output := brain.ListOutput{
+//		Success: true,
+//		Count:   2,
+//	}
 type ListOutput struct {
 	Success bool `json:"success"`
 	Count   int  `json:"count"`
@@ -140,7 +211,7 @@ func (s *Subsystem) brainRemember(_ context.Context, _ *mcp.CallToolRequest, inp
 		},
 	})
 	if err != nil {
-		return nil, RememberOutput{}, coreerr.E("brain.remember", "failed to send brain_remember", err)
+		return nil, RememberOutput{}, core.E("brain.remember", "failed to send brain_remember", err)
 	}
 
 	return nil, RememberOutput{

@@ -163,7 +234,7 @@ func (s *Subsystem) brainRecall(_ context.Context, _ *mcp.CallToolRequest, input
 		},
 	})
 	if err != nil {
-		return nil, RecallOutput{}, coreerr.E("brain.recall", "failed to send brain_recall", err)
+		return nil, RecallOutput{}, core.E("brain.recall", "failed to send brain_recall", err)
 	}
 
 	return nil, RecallOutput{

@@ -185,7 +256,7 @@ func (s *Subsystem) brainForget(_ context.Context, _ *mcp.CallToolRequest, input
 		},
 	})
 	if err != nil {
-		return nil, ForgetOutput{}, coreerr.E("brain.forget", "failed to send brain_forget", err)
+		return nil, ForgetOutput{}, core.E("brain.forget", "failed to send brain_forget", err)
 	}
 
 	return nil, ForgetOutput{

@@ -210,7 +281,7 @@ func (s *Subsystem) brainList(_ context.Context, _ *mcp.CallToolRequest, input L
 		},
 	})
 	if err != nil {
-		return nil, ListOutput{}, coreerr.E("brain.list", "failed to send brain_list", err)
+		return nil, ListOutput{}, core.E("brain.list", "failed to send brain_list", err)
 	}
 
 	return nil, ListOutput{
223	pkg/lib/lib.go
@@ -14,103 +14,134 @@
 //
 // Usage:
 //
-//	prompt, _ := lib.Prompt("coding")
-//	task, _ := lib.Task("code/review")
-//	persona, _ := lib.Persona("secops/developer")
-//	flow, _ := lib.Flow("go")
+//	r := lib.Prompt("coding")      // r.Value.(string)
+//	r := lib.Task("code/review")   // r.Value.(string)
+//	r := lib.Persona("secops/dev") // r.Value.(string)
+//	r := lib.Flow("go")            // r.Value.(string)
 //	lib.ExtractWorkspace("default", "/tmp/ws", data)
 package lib
 
 import (
-	"bytes"
 	"embed"
 	"io/fs"
-	"os"
 	"path/filepath"
-	"strings"
-	"text/template"
 
-	core "dappco.re/go/core"
+	core "dappco.re/go/core"
 )
 
-//go:embed prompt/*.md
-var promptFS embed.FS
+//go:embed all:prompt
+var promptFiles embed.FS
 
 //go:embed all:task
-var taskFS embed.FS
+var taskFiles embed.FS
 
-//go:embed flow/*.md
-var flowFS embed.FS
+//go:embed all:flow
+var flowFiles embed.FS
 
-//go:embed persona
-var personaFS embed.FS
+//go:embed all:persona
+var personaFiles embed.FS
 
 //go:embed all:workspace
-var workspaceFS embed.FS
+var workspaceFiles embed.FS
+
+var (
+	promptFS    = mustMount(promptFiles, "prompt")
+	taskFS      = mustMount(taskFiles, "task")
+	flowFS      = mustMount(flowFiles, "flow")
+	personaFS   = mustMount(personaFiles, "persona")
+	workspaceFS = mustMount(workspaceFiles, "workspace")
+)
+
+func mustMount(fsys embed.FS, basedir string) *core.Embed {
+	r := core.Mount(fsys, basedir)
+	if !r.OK {
+		panic(r.Value)
+	}
+	return r.Value.(*core.Embed)
+}
 
 // --- Prompts ---
 
 // Template tries Prompt then Task (backwards compat).
-func Template(slug string) (string, error) {
-	if content, err := Prompt(slug); err == nil {
-		return content, nil
+//
+//	r := lib.Template("coding")
+//	if r.OK { content := r.Value.(string) }
+func Template(slug string) core.Result {
+	if r := Prompt(slug); r.OK {
+		return r
 	}
 	return Task(slug)
 }
 
-func Prompt(slug string) (string, error) {
-	data, err := promptFS.ReadFile("prompt/" + slug + ".md")
-	if err != nil {
-		return "", err
-	}
-	return string(data), nil
+// Prompt reads a system prompt by slug.
+//
+//	r := lib.Prompt("coding")
+//	if r.OK { content := r.Value.(string) }
+func Prompt(slug string) core.Result {
+	return promptFS.ReadString(slug + ".md")
 }
 
-func Task(slug string) (string, error) {
+// Task reads a structured task plan by slug. Tries .md, .yaml, .yml.
+//
+//	r := lib.Task("code/review")
+//	if r.OK { content := r.Value.(string) }
+func Task(slug string) core.Result {
 	for _, ext := range []string{".md", ".yaml", ".yml"} {
-		data, err := taskFS.ReadFile("task/" + slug + ext)
-		if err == nil {
-			return string(data), nil
+		if r := taskFS.ReadString(slug + ext); r.OK {
+			return r
 		}
 	}
-	return "", fs.ErrNotExist
+	return core.Result{Value: fs.ErrNotExist}
 }
 
-func TaskBundle(slug string) (string, map[string]string, error) {
-	main, err := Task(slug)
-	if err != nil {
-		return "", nil, err
+// Bundle holds a task's main content plus companion files.
+//
+//	r := lib.TaskBundle("code/review")
+//	if r.OK { b := r.Value.(lib.Bundle) }
+type Bundle struct {
+	Main  string
+	Files map[string]string
+}
+
+// TaskBundle reads a task and its companion files.
+//
+//	r := lib.TaskBundle("code/review")
+//	if r.OK { b := r.Value.(lib.Bundle) }
+func TaskBundle(slug string) core.Result {
+	main := Task(slug)
+	if !main.OK {
+		return main
 	}
-	bundleDir := "task/" + slug
-	entries, err := fs.ReadDir(taskFS, bundleDir)
-	if err != nil {
-		return main, nil, nil
+	b := Bundle{Main: main.Value.(string), Files: make(map[string]string)}
+	r := taskFS.ReadDir(slug)
+	if !r.OK {
+		return core.Result{Value: b, OK: true}
 	}
-	bundle := make(map[string]string)
-	for _, e := range entries {
+	for _, e := range r.Value.([]fs.DirEntry) {
 		if e.IsDir() {
 			continue
 		}
-		data, err := taskFS.ReadFile(bundleDir + "/" + e.Name())
-		if err == nil {
-			bundle[e.Name()] = string(data)
+		if fr := taskFS.ReadString(slug + "/" + e.Name()); fr.OK {
+			b.Files[e.Name()] = fr.Value.(string)
 		}
 	}
-	return main, bundle, nil
+	return core.Result{Value: b, OK: true}
 }
 
-func Flow(slug string) (string, error) {
-	data, err := flowFS.ReadFile("flow/" + slug + ".md")
-	if err != nil {
-		return "", err
-	}
-	return string(data), nil
+// Flow reads a build/release workflow by slug.
+//
+//	r := lib.Flow("go")
+//	if r.OK { content := r.Value.(string) }
+func Flow(slug string) core.Result {
+	return flowFS.ReadString(slug + ".md")
 }
 
-func Persona(path string) (string, error) {
-	data, err := personaFS.ReadFile("persona/" + path + ".md")
-	if err != nil {
-		return "", err
-	}
-	return string(data), nil
+// Persona reads a domain/role persona by path.
+//
+//	r := lib.Persona("secops/developer")
+//	if r.OK { content := r.Value.(string) }
+func Persona(path string) core.Result {
+	return personaFS.ReadString(path + ".md")
 }
 
 // --- Workspace Templates ---
@@ -137,65 +168,38 @@ type WorkspaceData struct {
 // ExtractWorkspace creates an agent workspace from a template.
 // Template names: "default", "security", "review".
 func ExtractWorkspace(tmplName, targetDir string, data *WorkspaceData) error {
-	wsDir := "workspace/" + tmplName
-	entries, err := fs.ReadDir(workspaceFS, wsDir)
-	if err != nil {
-		return err
-	}
-
-	if err := os.MkdirAll(targetDir, 0755); err != nil {
-		return err
-	}
-
-	for _, entry := range entries {
-		if entry.IsDir() {
-			continue
-		}
-
-		name := entry.Name()
-		content, err := fs.ReadFile(workspaceFS, wsDir+"/"+name)
-		if err != nil {
+	r := workspaceFS.Sub(tmplName)
+	if !r.OK {
+		if err, ok := r.Value.(error); ok {
 			return err
 		}
-
-		// Process .tmpl files through text/template
-		outputName := name
-		if strings.HasSuffix(name, ".tmpl") {
-			outputName = strings.TrimSuffix(name, ".tmpl")
-			tmpl, err := template.New(name).Parse(string(content))
-			if err != nil {
-				return err
-			}
-			var buf bytes.Buffer
-			if err := tmpl.Execute(&buf, data); err != nil {
-				return err
-			}
-			content = buf.Bytes()
-		}
-
-		if err := os.WriteFile(filepath.Join(targetDir, outputName), content, 0644); err != nil {
+		return core.E("ExtractWorkspace", "template not found: "+tmplName, nil)
+	}
+	result := core.Extract(r.Value.(*core.Embed).FS(), targetDir, data)
+	if !result.OK {
+		if err, ok := result.Value.(error); ok {
 			return err
 		}
 	}
-
 	return nil
 }
 
 // --- List Functions ---
 
-func ListPrompts() []string    { return listDir(promptFS, "prompt") }
-func ListFlows() []string      { return listDir(flowFS, "flow") }
-func ListWorkspaces() []string { return listDir(workspaceFS, "workspace") }
+func ListPrompts() []string    { return listDir(promptFS) }
+func ListFlows() []string      { return listDir(flowFS) }
+func ListWorkspaces() []string { return listDir(workspaceFS) }
 
 func ListTasks() []string {
 	var slugs []string
-	fs.WalkDir(taskFS, "task", func(path string, d fs.DirEntry, err error) error {
+	base := taskFS.BaseDirectory()
+	fs.WalkDir(taskFS.FS(), base, func(path string, d fs.DirEntry, err error) error {
 		if err != nil || d.IsDir() {
 			return nil
 		}
-		rel := strings.TrimPrefix(path, "task/")
+		rel := core.TrimPrefix(path, base+"/")
 		ext := filepath.Ext(rel)
-		slugs = append(slugs, strings.TrimSuffix(rel, ext))
+		slugs = append(slugs, core.TrimSuffix(rel, ext))
 		return nil
 	})
 	return slugs

@@ -203,13 +207,14 @@ func ListTasks() []string {
 
 func ListPersonas() []string {
 	var paths []string
-	fs.WalkDir(personaFS, "persona", func(path string, d fs.DirEntry, err error) error {
+	base := personaFS.BaseDirectory()
+	fs.WalkDir(personaFS.FS(), base, func(path string, d fs.DirEntry, err error) error {
 		if err != nil || d.IsDir() {
 			return nil
 		}
-		if strings.HasSuffix(path, ".md") {
-			rel := strings.TrimPrefix(path, "persona/")
-			rel = strings.TrimSuffix(rel, ".md")
+		if core.HasSuffix(path, ".md") {
+			rel := core.TrimPrefix(path, base+"/")
+			rel = core.TrimSuffix(rel, ".md")
 			paths = append(paths, rel)
 		}
 		return nil

@@ -217,21 +222,19 @@ func ListPersonas() []string {
 	return paths
 }
 
-func listDir(fsys embed.FS, dir string) []string {
-	entries, err := fsys.ReadDir(dir)
-	if err != nil {
+func listDir(emb *core.Embed) []string {
+	r := emb.ReadDir(".")
+	if !r.OK {
 		return nil
 	}
 	var slugs []string
-	for _, e := range entries {
+	for _, e := range r.Value.([]fs.DirEntry) {
+		name := e.Name()
 		if e.IsDir() {
-			name := e.Name()
 			slugs = append(slugs, name)
 			continue
 		}
-		name := e.Name()
-		ext := filepath.Ext(name)
-		slugs = append(slugs, strings.TrimSuffix(name, ext))
+		slugs = append(slugs, core.TrimSuffix(name, filepath.Ext(name)))
 	}
 	return slugs
 }
|||
273
pkg/lib/lib_test.go
Normal file
273
pkg/lib/lib_test.go
Normal file
|
|
@@ -0,0 +1,273 @@
package lib

import (
	"io/fs"
	"os"
	"path/filepath"
	"testing"
)

// --- Prompt ---

func TestPrompt_Good(t *testing.T) {
	r := Prompt("coding")
	if !r.OK {
		t.Fatal("Prompt('coding') returned !OK")
	}
	if r.Value.(string) == "" {
		t.Error("Prompt('coding') returned empty string")
	}
}

func TestPrompt_Bad(t *testing.T) {
	r := Prompt("nonexistent-slug")
	if r.OK {
		t.Error("Prompt('nonexistent-slug') should return !OK")
	}
}

// --- Task ---

func TestTask_Good_Yaml(t *testing.T) {
	r := Task("bug-fix")
	if !r.OK {
		t.Fatal("Task('bug-fix') returned !OK")
	}
	if r.Value.(string) == "" {
		t.Error("Task('bug-fix') returned empty string")
	}
}

func TestTask_Good_Md(t *testing.T) {
	r := Task("code/review")
	if !r.OK {
		t.Fatal("Task('code/review') returned !OK")
	}
	if r.Value.(string) == "" {
		t.Error("Task('code/review') returned empty string")
	}
}

func TestTask_Bad(t *testing.T) {
	r := Task("nonexistent-slug")
	if r.OK {
		t.Error("Task('nonexistent-slug') should return !OK")
	}
	if r.Value != fs.ErrNotExist {
		t.Error("Task('nonexistent-slug') should return fs.ErrNotExist")
	}
}

// --- TaskBundle ---

func TestTaskBundle_Good(t *testing.T) {
	r := TaskBundle("code/review")
	if !r.OK {
		t.Fatal("TaskBundle('code/review') returned !OK")
	}
	b := r.Value.(Bundle)
	if b.Main == "" {
		t.Error("Bundle.Main is empty")
	}
	if len(b.Files) == 0 {
		t.Error("Bundle.Files is empty — expected companion files")
	}
}

func TestTaskBundle_Bad(t *testing.T) {
	r := TaskBundle("nonexistent")
	if r.OK {
		t.Error("TaskBundle('nonexistent') should return !OK")
	}
}

// --- Flow ---

func TestFlow_Good(t *testing.T) {
	r := Flow("go")
	if !r.OK {
		t.Fatal("Flow('go') returned !OK")
	}
	if r.Value.(string) == "" {
		t.Error("Flow('go') returned empty string")
	}
}

// --- Persona ---

func TestPersona_Good(t *testing.T) {
	// Use first persona from list to avoid hardcoding
	personas := ListPersonas()
	if len(personas) == 0 {
		t.Skip("no personas found")
	}
	r := Persona(personas[0])
	if !r.OK {
		t.Fatalf("Persona(%q) returned !OK", personas[0])
	}
	if r.Value.(string) == "" {
		t.Errorf("Persona(%q) returned empty string", personas[0])
	}
}

// --- Template ---

func TestTemplate_Good_Prompt(t *testing.T) {
	r := Template("coding")
	if !r.OK {
		t.Fatal("Template('coding') returned !OK")
	}
	if r.Value.(string) == "" {
		t.Error("Template('coding') returned empty string")
	}
}

func TestTemplate_Good_TaskFallback(t *testing.T) {
	r := Template("bug-fix")
	if !r.OK {
		t.Fatal("Template('bug-fix') returned !OK — should fall through to Task")
	}
}

func TestTemplate_Bad(t *testing.T) {
	r := Template("nonexistent-slug")
	if r.OK {
		t.Error("Template('nonexistent-slug') should return !OK")
	}
}

// --- List Functions ---

func TestListPrompts(t *testing.T) {
	prompts := ListPrompts()
	if len(prompts) == 0 {
		t.Error("ListPrompts() returned empty")
	}
}

func TestListTasks(t *testing.T) {
	tasks := ListTasks()
	if len(tasks) == 0 {
		t.Fatal("ListTasks() returned empty")
	}
	// Verify nested paths are included (e.g., "code/review")
	found := false
	for _, s := range tasks {
		if s == "code/review" {
			found = true
			break
		}
	}
	if !found {
		t.Error("ListTasks() missing nested path 'code/review'")
	}
}

func TestListPersonas(t *testing.T) {
	personas := ListPersonas()
	if len(personas) == 0 {
		t.Error("ListPersonas() returned empty")
	}
	// Should have nested paths like "code/go"
	hasNested := false
	for _, p := range personas {
		if len(p) > 0 && filepath.Dir(p) != "." {
			hasNested = true
			break
		}
	}
	if !hasNested {
		t.Error("ListPersonas() has no nested paths")
	}
}

func TestListFlows(t *testing.T) {
	flows := ListFlows()
	if len(flows) == 0 {
		t.Error("ListFlows() returned empty")
	}
}

func TestListWorkspaces(t *testing.T) {
	workspaces := ListWorkspaces()
	if len(workspaces) == 0 {
		t.Error("ListWorkspaces() returned empty")
	}
}

// --- ExtractWorkspace ---

func TestExtractWorkspace_CreatesFiles(t *testing.T) {
	dir := t.TempDir()
	data := &WorkspaceData{Repo: "test-repo", Task: "test task"}

	err := ExtractWorkspace("default", dir, data)
	if err != nil {
		t.Fatalf("ExtractWorkspace failed: %v", err)
	}

	for _, name := range []string{"CODEX.md", "CLAUDE.md", "PROMPT.md", "TODO.md", "CONTEXT.md", "go.work"} {
		path := filepath.Join(dir, name)
		if _, err := os.Stat(path); os.IsNotExist(err) {
			t.Errorf("expected %s to exist", name)
		}
	}
}

func TestExtractWorkspace_CreatesSubdirectories(t *testing.T) {
	dir := t.TempDir()
	data := &WorkspaceData{Repo: "test-repo", Task: "test task"}

	err := ExtractWorkspace("default", dir, data)
	if err != nil {
		t.Fatalf("ExtractWorkspace failed: %v", err)
	}

	refDir := filepath.Join(dir, ".core", "reference")
	if _, err := os.Stat(refDir); os.IsNotExist(err) {
		t.Fatalf(".core/reference/ directory not created")
	}

	axSpec := filepath.Join(refDir, "RFC-025-AGENT-EXPERIENCE.md")
	if _, err := os.Stat(axSpec); os.IsNotExist(err) {
		t.Errorf("AX spec not extracted: %s", axSpec)
	}

	entries, err := os.ReadDir(refDir)
	if err != nil {
		t.Fatalf("failed to read reference dir: %v", err)
	}

	goFiles := 0
	for _, e := range entries {
		if filepath.Ext(e.Name()) == ".go" {
			goFiles++
		}
	}
	if goFiles == 0 {
		t.Error("no .go files in .core/reference/")
	}

	docsDir := filepath.Join(refDir, "docs")
	if _, err := os.Stat(docsDir); os.IsNotExist(err) {
		t.Errorf(".core/reference/docs/ not created")
	}
}

func TestExtractWorkspace_TemplateSubstitution(t *testing.T) {
	dir := t.TempDir()
	data := &WorkspaceData{Repo: "my-repo", Task: "fix the bug"}

	err := ExtractWorkspace("default", dir, data)
	if err != nil {
		t.Fatalf("ExtractWorkspace failed: %v", err)
	}

	content, err := os.ReadFile(filepath.Join(dir, "TODO.md"))
	if err != nil {
		t.Fatalf("failed to read TODO.md: %v", err)
	}
	if len(content) == 0 {
		t.Error("TODO.md is empty")
	}
}
@ -0,0 +1,303 @@
|
|||
# RFC-025: Agent Experience (AX) Design Principles
|
||||
|
||||
- **Status:** Draft
|
||||
- **Authors:** Snider, Cladius
|
||||
- **Date:** 2026-03-19
|
||||
- **Applies to:** All Core ecosystem packages (CoreGO, CorePHP, CoreTS, core-agent)
|
||||
|
||||
## Abstract
|
||||
|
||||
Agent Experience (AX) is a design paradigm for software systems where the primary code consumer is an AI agent, not a human developer. AX sits alongside User Experience (UX) and Developer Experience (DX) as the third era of interface design.
|
||||
|
||||
This RFC establishes AX as a formal design principle for the Core ecosystem and defines the conventions that follow from it.
|
||||
|
||||
## Motivation
|
||||
|
||||
As of early 2026, AI agents write, review, and maintain the majority of code in the Core ecosystem. The original author has not manually edited code (outside of Core struct design) since October 2025. Code is processed semantically — agents reason about intent, not characters.
|
||||
|
||||
Design patterns inherited from the human-developer era optimise for the wrong consumer:
|
||||
|
||||
- **Short names** save keystrokes but increase semantic ambiguity
|
||||
- **Functional option chains** are fluent for humans but opaque for agents tracing configuration
|
||||
- **Error-at-every-call-site** produces 50% boilerplate that obscures intent
|
||||
- **Generic type parameters** force agents to carry type context that the runtime already has
|
||||
- **Panic-hiding conventions** (`Must*`) create implicit control flow that agents must special-case
|
||||
|
||||
AX acknowledges this shift and provides principles for designing code, APIs, file structures, and conventions that serve AI agents as first-class consumers.
|
||||
|
||||
## The Three Eras
|
||||
|
||||
| Era | Primary Consumer | Optimises For | Key Metric |
|
||||
|-----|-----------------|---------------|------------|
|
||||
| UX | End users | Discoverability, forgiveness, visual clarity | Task completion time |
|
||||
| DX | Developers | Typing speed, IDE support, convention familiarity | Time to first commit |
|
||||
| AX | AI agents | Predictability, composability, semantic navigation | Correct-on-first-pass rate |
|
||||
|
||||
AX does not replace UX or DX. End users still need good UX. Developers still need good DX. But when the primary code author and maintainer is an AI agent, the codebase should be designed for that consumer first.
|
||||
|
||||
## Principles
|
||||
|
||||
### 1. Predictable Names Over Short Names
|
||||
|
||||
Names are tokens that agents pattern-match across languages and contexts. Abbreviations introduce mapping overhead.
|
||||
|
||||
```
|
||||
Config not Cfg
|
||||
Service not Srv
|
||||
Embed not Emb
|
||||
Error not Err (as a subsystem name; err for local variables is fine)
|
||||
Options not Opts
|
||||
```
|
||||
|
||||
**Rule:** If a name would require a comment to explain, it is too short.
|
||||
|
||||
**Exception:** Industry-standard abbreviations that are universally understood (`HTTP`, `URL`, `ID`, `IPC`, `I18n`) are acceptable. The test: would an agent trained on any mainstream language recognise it without context?
|
||||
|
||||
### 2. Comments as Usage Examples
|
||||
|
||||
The function signature tells WHAT. The comment shows HOW with real values.
|
||||
|
||||
```go
|
||||
// Detect the project type from files present
|
||||
setup.Detect("/path/to/project")
|
||||
|
||||
// Set up a workspace with auto-detected template
|
||||
setup.Run(setup.Options{Path: ".", Template: "auto"})
|
||||
|
||||
// Scaffold a PHP module workspace
|
||||
setup.Run(setup.Options{Path: "./my-module", Template: "php"})
|
||||
```
|
||||
|
||||
**Rule:** If a comment restates what the type signature already says, delete it. If a comment shows a concrete usage with realistic values, keep it.
|
||||
|
||||
**Rationale:** Agents learn from examples more effectively than from descriptions. A comment like "Run executes the setup process" adds zero information. A comment like `setup.Run(setup.Options{Path: ".", Template: "auto"})` teaches an agent exactly how to call the function.
|
||||
|
||||
### 3. Path Is Documentation
|
||||
|
||||
File and directory paths should be self-describing. An agent navigating the filesystem should understand what it is looking at without reading a README.
|
||||
|
||||
```
|
||||
flow/deploy/to/homelab.yaml — deploy TO the homelab
|
||||
flow/deploy/from/github.yaml — deploy FROM GitHub
|
||||
flow/code/review.yaml — code review flow
|
||||
template/file/go/struct.go.tmpl — Go struct file template
|
||||
template/dir/workspace/php/ — PHP workspace scaffold
|
||||
```
|
||||
|
||||
**Rule:** If an agent needs to read a file to understand what a directory contains, the directory naming has failed.
|
||||
|
||||
**Corollary:** The unified path convention (folder structure = HTTP route = CLI command = test path) is AX-native. One path, every surface.
|
||||
|
||||
### 4. Templates Over Freeform
|
||||
|
||||
When an agent generates code from a template, the output is constrained to known-good shapes. When an agent writes freeform, the output varies.
|
||||
|
||||
```go
|
||||
// Template-driven — consistent output
|
||||
lib.RenderFile("php/action", data)
|
||||
lib.ExtractDir("php", targetDir, data)
|
||||
|
||||
// Freeform — variance in output
|
||||
"write a PHP action class that..."
|
||||
```
|
||||
|
||||
**Rule:** For any code pattern that recurs, provide a template. Templates are guardrails for agents.
|
||||
|
||||
**Scope:** Templates apply to file generation, workspace scaffolding, config generation, and commit messages. They do NOT apply to novel logic — agents should write business logic freeform with the domain knowledge available.
|
||||
|
||||
### 5. Declarative Over Imperative
|
||||
|
||||
Agents reason better about declarations of intent than sequences of operations.
|
||||
|
||||
```yaml
|
||||
# Declarative — agent sees what should happen
|
||||
steps:
|
||||
- name: build
|
||||
flow: tools/docker-build
|
||||
with:
|
||||
context: "{{ .app_dir }}"
|
||||
image_name: "{{ .image_name }}"
|
||||
|
||||
- name: deploy
|
||||
flow: deploy/with/docker
|
||||
with:
|
||||
host: "{{ .host }}"
|
||||
```
|
||||
|
||||
```go
|
||||
// Imperative — agent must trace execution
|
||||
cmd := exec.Command("docker", "build", "--platform", "linux/amd64", "-t", imageName, ".")
|
||||
cmd.Dir = appDir
|
||||
if err := cmd.Run(); err != nil {
|
||||
return fmt.Errorf("docker build: %w", err)
|
||||
}
|
||||
```
|
||||
|
||||
**Rule:** Orchestration, configuration, and pipeline logic should be declarative (YAML/JSON). Implementation logic should be imperative (Go/PHP/TS). The boundary is: if an agent needs to compose or modify the logic, make it declarative.
|
||||
|
||||
### 6. Universal Types (Core Primitives)
|
||||
|
||||
Every component in the ecosystem accepts and returns the same primitive types. An agent processing any level of the tree sees identical shapes.
|
||||
|
||||
`Option` is a single key-value pair. `Options` is a collection. Any function that returns `Result` can accept `Options`.
|
||||
|
||||
```go
|
||||
// Option — the atom
|
||||
core.Option{K: "name", V: "brain"}
|
||||
|
||||
// Options — universal input (collection of Option)
|
||||
core.Options{
|
||||
{K: "name", V: "myapp"},
|
||||
{K: "port", V: 8080},
|
||||
}
|
||||
|
||||
// Result[T] — universal return
|
||||
core.Result[*Embed]{Value: emb, OK: true}
|
||||
```
|
||||
|
||||
Usage across subsystems — same shape everywhere:
|
||||
|
||||
```go
|
||||
// Create Core
|
||||
c := core.New(core.Options{{K: "name", V: "myapp"}})
|
||||
|
||||
// Mount embedded content
|
||||
c.Data().New(core.Options{
|
||||
{K: "name", V: "brain"},
|
||||
{K: "source", V: brainFS},
|
||||
{K: "path", V: "prompts"},
|
||||
})
|
||||
|
||||
// Register a transport handle
|
||||
c.Drive().New(core.Options{
|
||||
{K: "name", V: "api"},
|
||||
{K: "transport", V: "https://api.lthn.ai"},
|
||||
})
|
||||
|
||||
// Read back what was passed in
|
||||
c.Options().String("name") // "myapp"
|
||||
```

**Core primitive types:**

| Type | Purpose |
|------|---------|
| `core.Option` | Single key-value pair (the atom) |
| `core.Options` | Collection of Option (universal input) |
| `core.Result[T]` | Return value with OK/fail state (universal output) |
| `core.Config` | Runtime settings (what is active) |
| `core.Data` | Embedded or stored content from packages |
| `core.Drive` | Resource handle registry (transports) |
| `core.Service` | A managed component with lifecycle |

**Core struct subsystems:**

| Accessor | Analogy | Purpose |
|----------|---------|---------|
| `c.Options()` | argv | Input configuration used to create this Core |
| `c.Data()` | /mnt | Embedded assets mounted by packages |
| `c.Drive()` | /dev | Transport handles (API, MCP, SSH, VPN) |
| `c.Config()` | /etc | Configuration, settings, feature flags |
| `c.Fs()` | / | Local filesystem I/O (sandboxable) |
| `c.Error()` | — | Panic recovery and crash reporting (`ErrorPanic`) |
| `c.Log()` | — | Structured logging (`ErrorLog`) |
| `c.Service()` | — | Service registry and lifecycle |
| `c.Cli()` | — | CLI command framework |
| `c.IPC()` | — | Message bus |
| `c.I18n()` | — | Internationalisation |

**What this replaces:**

| Go Convention | Core AX | Why |
|--------------|---------|-----|
| `func With*(v) Option` | `core.Options{{K: k, V: v}}` | K/V pairs are parseable; option chains require tracing |
| `func Must*(v) T` | `core.Result[T]` | No hidden panics; errors flow through Core |
| `func *For[T](c) T` | `c.Service("name")` | String lookup is greppable; generics require type context |
| `val, err :=` everywhere | Single return via `core.Result` | Intent not obscured by error handling |
| `_ = err` | Never needed | Core handles all errors internally |
| `ErrPan` / `ErrLog` | `ErrorPanic` / `ErrorLog` | Full names — AX principle 1 |

## Applying AX to Existing Patterns

### File Structure

```
# AX-native: path describes content
core/agent/
├── go/       # Go source
├── php/      # PHP source
├── ui/       # Frontend source
├── claude/   # Claude Code plugin
└── codex/    # Codex plugin

# Not AX: generic names requiring README
src/
├── lib/
├── utils/
└── helpers/
```

### Error Handling

```go
// AX-native: errors are infrastructure, not application logic
svc := c.Service("brain")
cfg := c.Config().Get("database.host")
// Errors logged by Core. Code reads like a spec.

// Not AX: errors dominate the code
svc, err := c.ServiceFor[brain.Service]()
if err != nil {
    return fmt.Errorf("get brain service: %w", err)
}
cfg, err := c.Config().Get("database.host")
if err != nil {
    _ = err // silenced because "it'll be fine"
}
```

### API Design

```go
// AX-native: one shape, every surface
c := core.New(core.Options{
    {K: "name", V: "my-app"},
})
c.Service("process", processSvc)
c.Data().New(core.Options{{K: "name", V: "app"}, {K: "source", V: appFS}})

// Not AX: multiple patterns for the same thing
c, err := core.New(
    core.WithName("my-app"),
    core.WithService(factory1),
    core.WithAssets(appFS),
)
if err != nil { ... }
```

## Compatibility

AX conventions are valid, idiomatic Go/PHP/TS. They do not require language extensions, code generation, or non-standard tooling. An AX-designed codebase compiles, tests, and deploys with standard toolchains.

The conventions diverge from community patterns (functional options, Must/For, etc.) but do not violate language specifications. This is a style choice, not a fork.

## Adoption

AX applies to all new code in the Core ecosystem. Existing code migrates incrementally as it is touched — no big-bang rewrite.

Priority order:

1. **Public APIs** (package-level functions, struct constructors)
2. **File structure** (path naming, template locations)
3. **Internal fields** (struct field names, local variables)

## References

- dAppServer unified path convention (2024)
- CoreGO DTO pattern refactor (2026-03-18)
- Core primitives design (2026-03-19)
- Go Proverbs, Rob Pike (2015) — AX provides an updated lens

## Changelog

- 2026-03-20: Updated to match implementation — Option K/V atoms, Options as []Option, Data/Drive split, ErrorPanic/ErrorLog renames, subsystem table
- 2026-03-19: Initial draft
53
pkg/lib/workspace/default/.core/reference/app.go
Normal file
@ -0,0 +1,53 @@
// SPDX-License-Identifier: EUPL-1.2

// Application identity for the Core framework.
// Based on leaanthony/sail — Name, Filename, Path.

package core

import (
    "os/exec"
    "path/filepath"
)

// App holds the application identity and optional GUI runtime.
type App struct {
    // Name is the human-readable application name (e.g., "Core CLI").
    Name string

    // Version is the application version string (e.g., "1.2.3").
    Version string

    // Description is a short description of the application.
    Description string

    // Filename is the executable filename (e.g., "core").
    Filename string

    // Path is the absolute path to the executable.
    Path string

    // Runtime is the GUI runtime (e.g., Wails App).
    // Nil for CLI-only applications.
    Runtime any
}

// Find locates a program on PATH and returns a Result containing the App.
//
// r := core.Find("node", "Node.js")
// if r.OK { app := r.Value.(*App) }
func Find(filename, name string) Result {
    path, err := exec.LookPath(filename)
    if err != nil {
        return Result{err, false}
    }
    abs, err := filepath.Abs(path)
    if err != nil {
        return Result{err, false}
    }
    return Result{&App{
        Name:     name,
        Filename: filename,
        Path:     abs,
    }, true}
}
101
pkg/lib/workspace/default/.core/reference/array.go
Normal file
@ -0,0 +1,101 @@
// SPDX-License-Identifier: EUPL-1.2

// Generic slice operations for the Core framework.
// Based on leaanthony/slicer, rewritten with Go 1.18+ generics.

package core

// Array is a typed slice with common operations.
type Array[T comparable] struct {
    items []T
}

// NewArray creates an empty Array.
func NewArray[T comparable](items ...T) *Array[T] {
    return &Array[T]{items: items}
}

// Add appends values.
func (s *Array[T]) Add(values ...T) {
    s.items = append(s.items, values...)
}

// AddUnique appends values only if not already present.
func (s *Array[T]) AddUnique(values ...T) {
    for _, v := range values {
        if !s.Contains(v) {
            s.items = append(s.items, v)
        }
    }
}

// Contains returns true if the value is in the slice.
func (s *Array[T]) Contains(val T) bool {
    for _, v := range s.items {
        if v == val {
            return true
        }
    }
    return false
}

// Filter returns a new Array with elements matching the predicate.
func (s *Array[T]) Filter(fn func(T) bool) Result {
    filtered := &Array[T]{}
    for _, v := range s.items {
        if fn(v) {
            filtered.items = append(filtered.items, v)
        }
    }
    return Result{filtered, true}
}

// Each runs a function on every element.
func (s *Array[T]) Each(fn func(T)) {
    for _, v := range s.items {
        fn(v)
    }
}

// Remove removes the first occurrence of a value.
func (s *Array[T]) Remove(val T) {
    for i, v := range s.items {
        if v == val {
            s.items = append(s.items[:i], s.items[i+1:]...)
            return
        }
    }
}

// Deduplicate removes duplicate values, preserving order.
func (s *Array[T]) Deduplicate() {
    seen := make(map[T]struct{})
    result := make([]T, 0, len(s.items))
    for _, v := range s.items {
        if _, exists := seen[v]; !exists {
            seen[v] = struct{}{}
            result = append(result, v)
        }
    }
    s.items = result
}

// Len returns the number of elements.
func (s *Array[T]) Len() int {
    return len(s.items)
}

// Clear removes all elements.
func (s *Array[T]) Clear() {
    s.items = nil
}

// AsSlice returns a copy of the underlying slice.
func (s *Array[T]) AsSlice() []T {
    if s.items == nil {
        return nil
    }
    out := make([]T, len(s.items))
    copy(out, s.items)
    return out
}
169
pkg/lib/workspace/default/.core/reference/cli.go
Normal file
@ -0,0 +1,169 @@
// SPDX-License-Identifier: EUPL-1.2

// Cli is the CLI surface layer for the Core command tree.
// It reads commands from Core's registry and wires them to terminal I/O.
//
// Run the CLI:
//
// c := core.New(core.Options{{Key: "name", Value: "myapp"}})
// c.Command("deploy", handler)
// c.Cli().Run()
//
// The Cli resolves os.Args to a command path, parses flags,
// and calls the command's action with parsed options.
package core

import (
    "io"
    "os"
)

// Cli is the CLI surface for the Core command tree.
type Cli struct {
    core   *Core
    output io.Writer
    banner func(*Cli) string
}

// Print writes to the CLI output (defaults to os.Stdout).
//
// c.Cli().Print("hello %s", "world")
func (cl *Cli) Print(format string, args ...any) {
    Print(cl.output, format, args...)
}

// SetOutput sets the CLI output writer.
//
// c.Cli().SetOutput(os.Stderr)
func (cl *Cli) SetOutput(w io.Writer) {
    cl.output = w
}

// Run resolves os.Args to a command path and executes it.
//
// c.Cli().Run()
// c.Cli().Run("deploy", "to", "homelab")
func (cl *Cli) Run(args ...string) Result {
    if len(args) == 0 {
        args = os.Args[1:]
    }

    clean := FilterArgs(args)

    if cl.core == nil || cl.core.commands == nil {
        if cl.banner != nil {
            cl.Print(cl.banner(cl))
        }
        return Result{}
    }

    cl.core.commands.mu.RLock()
    cmdCount := len(cl.core.commands.commands)
    cl.core.commands.mu.RUnlock()

    if cmdCount == 0 {
        if cl.banner != nil {
            cl.Print(cl.banner(cl))
        }
        return Result{}
    }

    // Resolve command path from args
    var cmd *Command
    var remaining []string

    cl.core.commands.mu.RLock()
    for i := len(clean); i > 0; i-- {
        path := JoinPath(clean[:i]...)
        if c, ok := cl.core.commands.commands[path]; ok {
            cmd = c
            remaining = clean[i:]
            break
        }
    }
    cl.core.commands.mu.RUnlock()

    if cmd == nil {
        if cl.banner != nil {
            cl.Print(cl.banner(cl))
        }
        cl.PrintHelp()
        return Result{}
    }

    // Build options from remaining args
    opts := Options{}
    for _, arg := range remaining {
        key, val, valid := ParseFlag(arg)
        if valid {
            if Contains(arg, "=") {
                opts = append(opts, Option{Key: key, Value: val})
            } else {
                opts = append(opts, Option{Key: key, Value: true})
            }
        } else if !IsFlag(arg) {
            opts = append(opts, Option{Key: "_arg", Value: arg})
        }
    }

    if cmd.Action != nil {
        return cmd.Run(opts)
    }
    if cmd.Lifecycle != nil {
        return cmd.Start(opts)
    }
    return Result{E("core.Cli.Run", Concat("command \"", cmd.Path, "\" is not executable"), nil), false}
}

// PrintHelp prints available commands.
//
// c.Cli().PrintHelp()
func (cl *Cli) PrintHelp() {
    if cl.core == nil || cl.core.commands == nil {
        return
    }

    name := ""
    if cl.core.app != nil {
        name = cl.core.app.Name
    }
    if name != "" {
        cl.Print("%s commands:", name)
    } else {
        cl.Print("Commands:")
    }

    cl.core.commands.mu.RLock()
    defer cl.core.commands.mu.RUnlock()

    for path, cmd := range cl.core.commands.commands {
        if cmd.Hidden || (cmd.Action == nil && cmd.Lifecycle == nil) {
            continue
        }
        tr := cl.core.I18n().Translate(cmd.I18nKey())
        desc, _ := tr.Value.(string)
        if desc == "" || desc == cmd.I18nKey() {
            cl.Print("  %s", path)
        } else {
            cl.Print("  %-30s %s", path, desc)
        }
    }
}

// SetBanner sets the banner function.
//
// c.Cli().SetBanner(func(_ *core.Cli) string { return "My App v1.0" })
func (cl *Cli) SetBanner(fn func(*Cli) string) {
    cl.banner = fn
}

// Banner returns the banner string.
func (cl *Cli) Banner() string {
    if cl.banner != nil {
        return cl.banner(cl)
    }
    if cl.core != nil && cl.core.app != nil && cl.core.app.Name != "" {
        return cl.core.app.Name
    }
    return ""
}
208
pkg/lib/workspace/default/.core/reference/command.go
Normal file
@ -0,0 +1,208 @@
// SPDX-License-Identifier: EUPL-1.2

// Command is a DTO representing an executable operation.
// Commands don't know if they're root, child, or nested — the tree
// structure comes from composition via path-based registration.
//
// Register a command:
//
// c.Command("deploy", func(opts core.Options) core.Result {
//     return core.Result{"deployed", true}
// })
//
// Register a nested command:
//
// c.Command("deploy/to/homelab", handler)
//
// Description is an i18n key — derived from path if omitted:
//
// "deploy"            → "cmd.deploy.description"
// "deploy/to/homelab" → "cmd.deploy.to.homelab.description"
package core

import (
    "sync"
)

// CommandAction is the function signature for command handlers.
//
// func(opts core.Options) core.Result
type CommandAction func(Options) Result

// CommandLifecycle is implemented by commands that support managed lifecycle.
// Basic commands only need an action. Daemon commands implement Start/Stop/Signal
// via go-process.
type CommandLifecycle interface {
    Start(Options) Result
    Stop() Result
    Restart() Result
    Reload() Result
    Signal(string) Result
}

// Command is the DTO for an executable operation.
type Command struct {
    Name        string
    Description string           // i18n key — derived from path if empty
    Path        string           // "deploy/to/homelab"
    Action      CommandAction    // business logic
    Lifecycle   CommandLifecycle // optional — provided by go-process
    Flags       Options          // declared flags
    Hidden      bool
    commands    map[string]*Command // child commands (internal)
    mu          sync.RWMutex
}

// I18nKey returns the i18n key for this command's description.
//
// cmd with path "deploy/to/homelab" → "cmd.deploy.to.homelab.description"
func (cmd *Command) I18nKey() string {
    if cmd.Description != "" {
        return cmd.Description
    }
    path := cmd.Path
    if path == "" {
        path = cmd.Name
    }
    return Concat("cmd.", Replace(path, "/", "."), ".description")
}

// Run executes the command's action with the given options.
//
// result := cmd.Run(core.Options{{Key: "target", Value: "homelab"}})
func (cmd *Command) Run(opts Options) Result {
    if cmd.Action == nil {
        return Result{E("core.Command.Run", Concat("command \"", cmd.Path, "\" is not executable"), nil), false}
    }
    return cmd.Action(opts)
}

// Start delegates to the lifecycle implementation if available.
func (cmd *Command) Start(opts Options) Result {
    if cmd.Lifecycle != nil {
        return cmd.Lifecycle.Start(opts)
    }
    return cmd.Run(opts)
}

// Stop delegates to the lifecycle implementation.
func (cmd *Command) Stop() Result {
    if cmd.Lifecycle != nil {
        return cmd.Lifecycle.Stop()
    }
    return Result{}
}

// Restart delegates to the lifecycle implementation.
func (cmd *Command) Restart() Result {
    if cmd.Lifecycle != nil {
        return cmd.Lifecycle.Restart()
    }
    return Result{}
}

// Reload delegates to the lifecycle implementation.
func (cmd *Command) Reload() Result {
    if cmd.Lifecycle != nil {
        return cmd.Lifecycle.Reload()
    }
    return Result{}
}

// Signal delegates to the lifecycle implementation.
func (cmd *Command) Signal(sig string) Result {
    if cmd.Lifecycle != nil {
        return cmd.Lifecycle.Signal(sig)
    }
    return Result{}
}

// --- Command Registry (on Core) ---

// commandRegistry holds the command tree.
type commandRegistry struct {
    commands map[string]*Command
    mu       sync.RWMutex
}

// Command gets or registers a command by path.
//
// c.Command("deploy", Command{Action: handler})
// r := c.Command("deploy")
func (c *Core) Command(path string, command ...Command) Result {
    if len(command) == 0 {
        c.commands.mu.RLock()
        cmd, ok := c.commands.commands[path]
        c.commands.mu.RUnlock()
        return Result{cmd, ok}
    }

    if path == "" || HasPrefix(path, "/") || HasSuffix(path, "/") || Contains(path, "//") {
        return Result{E("core.Command", Concat("invalid command path: \"", path, "\""), nil), false}
    }

    c.commands.mu.Lock()
    defer c.commands.mu.Unlock()

    if existing, exists := c.commands.commands[path]; exists && (existing.Action != nil || existing.Lifecycle != nil) {
        return Result{E("core.Command", Concat("command \"", path, "\" already registered"), nil), false}
    }

    cmd := &command[0]
    cmd.Name = pathName(path)
    cmd.Path = path
    if cmd.commands == nil {
        cmd.commands = make(map[string]*Command)
    }

    // Preserve existing subtree when overwriting a placeholder parent
    if existing, exists := c.commands.commands[path]; exists {
        for k, v := range existing.commands {
            if _, has := cmd.commands[k]; !has {
                cmd.commands[k] = v
            }
        }
    }

    c.commands.commands[path] = cmd

    // Build parent chain — "deploy/to/homelab" creates "deploy" and "deploy/to" if missing
    parts := Split(path, "/")
    for i := len(parts) - 1; i > 0; i-- {
        parentPath := JoinPath(parts[:i]...)
        if _, exists := c.commands.commands[parentPath]; !exists {
            c.commands.commands[parentPath] = &Command{
                Name:     parts[i-1],
                Path:     parentPath,
                commands: make(map[string]*Command),
            }
        }
        c.commands.commands[parentPath].commands[parts[i]] = cmd
        cmd = c.commands.commands[parentPath]
    }

    return Result{OK: true}
}

// Commands returns all registered command paths.
//
// paths := c.Commands()
func (c *Core) Commands() []string {
    if c.commands == nil {
        return nil
    }
    c.commands.mu.RLock()
    defer c.commands.mu.RUnlock()
    var paths []string
    for k := range c.commands.commands {
        paths = append(paths, k)
    }
    return paths
}

// pathName extracts the last segment of a path.
// "deploy/to/homelab" → "homelab"
func pathName(path string) string {
    parts := Split(path, "/")
    return parts[len(parts)-1]
}
135
pkg/lib/workspace/default/.core/reference/config.go
Normal file
@ -0,0 +1,135 @@
// SPDX-License-Identifier: EUPL-1.2

// Settings, feature flags, and typed configuration for the Core framework.

package core

import (
    "sync"
)

// ConfigVar is a variable that can be set, unset, and queried for its state.
type ConfigVar[T any] struct {
    val T
    set bool
}

func (v *ConfigVar[T]) Get() T      { return v.val }
func (v *ConfigVar[T]) Set(val T)   { v.val = val; v.set = true }
func (v *ConfigVar[T]) IsSet() bool { return v.set }
func (v *ConfigVar[T]) Unset() {
    v.set = false
    var zero T
    v.val = zero
}

func NewConfigVar[T any](val T) ConfigVar[T] {
    return ConfigVar[T]{val: val, set: true}
}

// ConfigOptions holds configuration data.
type ConfigOptions struct {
    Settings map[string]any
    Features map[string]bool
}

func (o *ConfigOptions) init() {
    if o.Settings == nil {
        o.Settings = make(map[string]any)
    }
    if o.Features == nil {
        o.Features = make(map[string]bool)
    }
}

// Config holds configuration settings and feature flags.
type Config struct {
    *ConfigOptions
    mu sync.RWMutex
}

// Set stores a configuration value by key.
func (e *Config) Set(key string, val any) {
    e.mu.Lock()
    if e.ConfigOptions == nil {
        e.ConfigOptions = &ConfigOptions{}
    }
    e.ConfigOptions.init()
    e.Settings[key] = val
    e.mu.Unlock()
}

// Get retrieves a configuration value by key.
func (e *Config) Get(key string) Result {
    e.mu.RLock()
    defer e.mu.RUnlock()
    if e.ConfigOptions == nil || e.Settings == nil {
        return Result{}
    }
    val, ok := e.Settings[key]
    if !ok {
        return Result{}
    }
    return Result{val, true}
}

func (e *Config) String(key string) string { return ConfigGet[string](e, key) }
func (e *Config) Int(key string) int       { return ConfigGet[int](e, key) }
func (e *Config) Bool(key string) bool     { return ConfigGet[bool](e, key) }

// ConfigGet retrieves a typed configuration value.
func ConfigGet[T any](e *Config, key string) T {
    r := e.Get(key)
    if !r.OK {
        var zero T
        return zero
    }
    typed, _ := r.Value.(T)
    return typed
}

// --- Feature Flags ---

func (e *Config) Enable(feature string) {
    e.mu.Lock()
    if e.ConfigOptions == nil {
        e.ConfigOptions = &ConfigOptions{}
    }
    e.ConfigOptions.init()
    e.Features[feature] = true
    e.mu.Unlock()
}

func (e *Config) Disable(feature string) {
    e.mu.Lock()
    if e.ConfigOptions == nil {
        e.ConfigOptions = &ConfigOptions{}
    }
    e.ConfigOptions.init()
    e.Features[feature] = false
    e.mu.Unlock()
}

func (e *Config) Enabled(feature string) bool {
    e.mu.RLock()
    defer e.mu.RUnlock()
    if e.ConfigOptions == nil || e.Features == nil {
        return false
    }
    return e.Features[feature]
}

func (e *Config) EnabledFeatures() []string {
    e.mu.RLock()
    defer e.mu.RUnlock()
    if e.ConfigOptions == nil || e.Features == nil {
        return nil
    }
    var result []string
    for k, v := range e.Features {
        if v {
            result = append(result, k)
        }
    }
    return result
}
105
pkg/lib/workspace/default/.core/reference/contract.go
Normal file
@ -0,0 +1,105 @@
// SPDX-License-Identifier: EUPL-1.2

// Contracts, options, and type definitions for the Core framework.

package core

import (
    "context"
)

// Message is the type for IPC broadcasts (fire-and-forget).
type Message any

// Query is the type for read-only IPC requests.
type Query any

// Task is the type for IPC requests that perform side effects.
type Task any

// TaskWithIdentifier is an optional interface for tasks that need to know their assigned identifier.
type TaskWithIdentifier interface {
    Task
    SetTaskIdentifier(id string)
    GetTaskIdentifier() string
}

// QueryHandler handles Query requests. Returns Result{Value, OK}.
type QueryHandler func(*Core, Query) Result

// TaskHandler handles Task requests. Returns Result{Value, OK}.
type TaskHandler func(*Core, Task) Result

// Startable is implemented by services that need startup initialisation.
type Startable interface {
    OnStartup(ctx context.Context) error
}

// Stoppable is implemented by services that need shutdown cleanup.
type Stoppable interface {
    OnShutdown(ctx context.Context) error
}

// --- Action Messages ---

type ActionServiceStartup struct{}
type ActionServiceShutdown struct{}

type ActionTaskStarted struct {
    TaskIdentifier string
    Task           Task
}

type ActionTaskProgress struct {
    TaskIdentifier string
    Task           Task
    Progress       float64
    Message        string
}

type ActionTaskCompleted struct {
    TaskIdentifier string
    Task           Task
    Result         any
    Error          error
}

// --- Constructor ---

// New creates a Core instance.
//
// c := core.New(core.Options{
//     {Key: "name", Value: "myapp"},
// })
func New(opts ...Options) *Core {
    c := &Core{
        app:      &App{},
        data:     &Data{},
        drive:    &Drive{},
        fs:       &Fs{root: "/"},
        config:   &Config{ConfigOptions: &ConfigOptions{}},
        error:    &ErrorPanic{},
        log:      &ErrorLog{log: Default()},
        lock:     &Lock{},
        ipc:      &Ipc{},
        i18n:     &I18n{},
        services: &serviceRegistry{services: make(map[string]*Service)},
        commands: &commandRegistry{commands: make(map[string]*Command)},
    }
    c.context, c.cancel = context.WithCancel(context.Background())

    if len(opts) > 0 {
        cp := make(Options, len(opts[0]))
        copy(cp, opts[0])
        c.options = &cp
        name := cp.String("name")
        if name != "" {
            c.app.Name = name
        }
    }

    // Init Cli surface with Core reference
    c.cli = &Cli{core: c}

    return c
}
81
pkg/lib/workspace/default/.core/reference/core.go
Normal file
@ -0,0 +1,81 @@
// SPDX-License-Identifier: EUPL-1.2

// Package core is a dependency injection and service lifecycle framework for Go.
// This file defines the Core struct, accessors, and IPC/error wrappers.

package core

import (
    "context"
    "sync"
    "sync/atomic"
)

// --- Core Struct ---

// Core is the central application object that manages services, assets, and communication.
type Core struct {
    options  *Options         // c.Options() — Input configuration used to create this Core
    app      *App             // c.App() — Application identity + optional GUI runtime
    data     *Data            // c.Data() — Embedded/stored content from packages
    drive    *Drive           // c.Drive() — Resource handle registry (transports)
    fs       *Fs              // c.Fs() — Local filesystem I/O (sandboxable)
    config   *Config          // c.Config() — Configuration, settings, feature flags
    error    *ErrorPanic      // c.Error() — Panic recovery and crash reporting
    log      *ErrorLog        // c.Log() — Structured logging + error wrapping
    cli      *Cli             // c.Cli() — CLI surface layer
    commands *commandRegistry // c.Command("path") — Command tree
    services *serviceRegistry // c.Service("name") — Service registry
    lock     *Lock            // c.Lock("name") — Named mutexes
    ipc      *Ipc             // c.IPC() — Message bus for IPC
    i18n     *I18n            // c.I18n() — Internationalisation and locale collection

    context       context.Context
    cancel        context.CancelFunc
    taskIDCounter atomic.Uint64
    waitGroup     sync.WaitGroup
    shutdown      atomic.Bool
}

// --- Accessors ---

func (c *Core) Options() *Options        { return c.options }
func (c *Core) App() *App                { return c.app }
func (c *Core) Data() *Data              { return c.data }
func (c *Core) Drive() *Drive            { return c.drive }
func (c *Core) Embed() Result            { return c.data.Get("app") } // legacy — use Data()
func (c *Core) Fs() *Fs                  { return c.fs }
func (c *Core) Config() *Config          { return c.config }
func (c *Core) Error() *ErrorPanic       { return c.error }
func (c *Core) Log() *ErrorLog           { return c.log }
func (c *Core) Cli() *Cli                { return c.cli }
func (c *Core) IPC() *Ipc                { return c.ipc }
func (c *Core) I18n() *I18n              { return c.i18n }
func (c *Core) Context() context.Context { return c.context }
func (c *Core) Core() *Core              { return c }

// --- IPC (uppercase aliases) ---

func (c *Core) ACTION(msg Message) Result { return c.Action(msg) }
func (c *Core) QUERY(q Query) Result      { return c.Query(q) }
func (c *Core) QUERYALL(q Query) Result   { return c.QueryAll(q) }
func (c *Core) PERFORM(t Task) Result     { return c.Perform(t) }

// --- Error+Log ---

// LogError logs an error and returns the Result from ErrorLog.
func (c *Core) LogError(err error, op, msg string) Result {
    return c.log.Error(err, op, msg)
}

// LogWarn logs a warning and returns the Result from ErrorLog.
func (c *Core) LogWarn(err error, op, msg string) Result {
    return c.log.Warn(err, op, msg)
}

// Must logs and panics if err is not nil.
func (c *Core) Must(err error, op, msg string) {
    c.log.Must(err, op, msg)
}

// --- Global Instance ---
202
pkg/lib/workspace/default/.core/reference/data.go
Normal file
@ -0,0 +1,202 @@
// SPDX-License-Identifier: EUPL-1.2

// Data is the embedded/stored content system for core packages.
// Packages mount their embedded content here and other packages
// read from it by path.
//
// Mount a package's assets:
//
//	c.Data().New(core.Options{
//		{Key: "name", Value: "brain"},
//		{Key: "source", Value: brainFS},
//		{Key: "path", Value: "prompts"},
//	})
//
// Read from any mounted path:
//
//	content := c.Data().ReadString("brain/coding.md")
//	entries := c.Data().List("agent/flow")
//
// Extract a template directory:
//
//	c.Data().Extract("agent/workspace/default", "/tmp/ws", data)
package core

import (
	"io/fs"
	"path/filepath"
	"sync"
)

// Data manages mounted embedded filesystems from core packages.
type Data struct {
	mounts map[string]*Embed
	mu     sync.RWMutex
}

// New registers an embedded filesystem under a named prefix.
//
//	c.Data().New(core.Options{
//		{Key: "name", Value: "brain"},
//		{Key: "source", Value: brainFS},
//		{Key: "path", Value: "prompts"},
//	})
func (d *Data) New(opts Options) Result {
	name := opts.String("name")
	if name == "" {
		return Result{}
	}

	r := opts.Get("source")
	if !r.OK {
		return r
	}

	fsys, ok := r.Value.(fs.FS)
	if !ok {
		return Result{E("data.New", "source is not fs.FS", nil), false}
	}

	path := opts.String("path")
	if path == "" {
		path = "."
	}

	d.mu.Lock()
	defer d.mu.Unlock()

	if d.mounts == nil {
		d.mounts = make(map[string]*Embed)
	}

	mr := Mount(fsys, path)
	if !mr.OK {
		return mr
	}

	emb := mr.Value.(*Embed)
	d.mounts[name] = emb
	return Result{emb, true}
}

// Get returns the Embed for a named mount point.
//
//	r := c.Data().Get("brain")
//	if r.OK { emb := r.Value.(*Embed) }
func (d *Data) Get(name string) Result {
	d.mu.RLock()
	defer d.mu.RUnlock()
	if d.mounts == nil {
		return Result{}
	}
	emb, ok := d.mounts[name]
	if !ok {
		return Result{}
	}
	return Result{emb, true}
}

// resolve splits a path like "brain/coding.md" into mount name + relative path.
func (d *Data) resolve(path string) (*Embed, string) {
	d.mu.RLock()
	defer d.mu.RUnlock()

	parts := SplitN(path, "/", 2)
	if len(parts) < 2 {
		return nil, ""
	}
	if d.mounts == nil {
		return nil, ""
	}
	emb := d.mounts[parts[0]]
	return emb, parts[1]
}

// ReadFile reads a file by full path.
//
//	r := c.Data().ReadFile("brain/prompts/coding.md")
//	if r.OK { data := r.Value.([]byte) }
func (d *Data) ReadFile(path string) Result {
	emb, rel := d.resolve(path)
	if emb == nil {
		return Result{}
	}
	return emb.ReadFile(rel)
}

// ReadString reads a file as a string.
//
//	r := c.Data().ReadString("agent/flow/deploy/to/homelab.yaml")
//	if r.OK { content := r.Value.(string) }
func (d *Data) ReadString(path string) Result {
	r := d.ReadFile(path)
	if !r.OK {
		return r
	}
	return Result{string(r.Value.([]byte)), true}
}

// List returns directory entries at a path.
//
//	r := c.Data().List("agent/persona/code")
//	if r.OK { entries := r.Value.([]fs.DirEntry) }
func (d *Data) List(path string) Result {
	emb, rel := d.resolve(path)
	if emb == nil {
		return Result{}
	}
	r := emb.ReadDir(rel)
	if !r.OK {
		return r
	}
	return Result{r.Value, true}
}

// ListNames returns filenames (without extensions) at a path.
//
//	r := c.Data().ListNames("agent/flow")
//	if r.OK { names := r.Value.([]string) }
func (d *Data) ListNames(path string) Result {
	r := d.List(path)
	if !r.OK {
		return r
	}
	entries := r.Value.([]fs.DirEntry)
	var names []string
	for _, e := range entries {
		name := e.Name()
		if !e.IsDir() {
			name = TrimSuffix(name, filepath.Ext(name))
		}
		names = append(names, name)
	}
	return Result{names, true}
}

// Extract copies a template directory to targetDir.
//
//	r := c.Data().Extract("agent/workspace/default", "/tmp/ws", templateData)
func (d *Data) Extract(path, targetDir string, templateData any) Result {
	emb, rel := d.resolve(path)
	if emb == nil {
		return Result{}
	}
	r := emb.Sub(rel)
	if !r.OK {
		return r
	}
	return Extract(r.Value.(*Embed).FS(), targetDir, templateData)
}

// Mounts returns the names of all mounted content.
//
//	names := c.Data().Mounts()
func (d *Data) Mounts() []string {
	d.mu.RLock()
	defer d.mu.RUnlock()
	var names []string
	for k := range d.mounts {
		names = append(names, k)
	}
	return names
}
pkg/lib/workspace/default/.core/reference/docs/commands.md (new file, 177 lines)
@@ -0,0 +1,177 @@
---
title: Commands
description: Path-based command registration and CLI execution.
---

# Commands

Commands are one of the most AX-native parts of CoreGO. The path is the identity.

## Register a Command

```go
c.Command("deploy/to/homelab", core.Command{
	Action: func(opts core.Options) core.Result {
		target := opts.String("target")
		return core.Result{Value: "deploying to " + target, OK: true}
	},
})
```

## Command Paths

Paths must be clean:

- no empty path
- no leading slash
- no trailing slash
- no double slash

These paths are valid:

```text
deploy
deploy/to/homelab
workspace/create
```

These are rejected:

```text
/deploy
deploy/
deploy//to
```

## Parent Commands Are Auto-Created

When you register `deploy/to/homelab`, CoreGO also creates placeholder parents if they do not already exist:

- `deploy`
- `deploy/to`

This makes the path tree navigable without extra setup.
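A quick way to see this, as a sketch against the read-back API shown in this document (the contents of a placeholder command are framework-defined):

```go
c.Command("deploy/to/homelab", core.Command{ /* ... */ })

// The placeholder parent now resolves, even though it was never registered directly.
r := c.Command("deploy/to")
if r.OK {
	parent := r.Value.(*core.Command)
	_ = parent
}
```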
## Read a Command Back

```go
r := c.Command("deploy/to/homelab")
if r.OK {
	cmd := r.Value.(*core.Command)
	_ = cmd
}
```

## Run a Command Directly

```go
cmd := c.Command("deploy/to/homelab").Value.(*core.Command)

r := cmd.Run(core.Options{
	{Key: "target", Value: "uk-prod"},
})
```

If `Action` is nil, `Run` returns `Result{OK:false}` with a structured error.

## Run Through the CLI Surface

```go
r := c.Cli().Run("deploy", "to", "homelab", "--target=uk-prod", "--debug")
```

`Cli.Run` resolves the longest matching command path from the arguments, then converts the remaining args into `core.Options`.

## Flag Parsing Rules

### Double Dash

```text
--target=uk-prod  -> key "target", value "uk-prod"
--debug           -> key "debug", value true
```

### Single Dash

```text
-v    -> key "v", value true
-n=4  -> key "n", value "4"
```

### Positional Arguments

Non-flag arguments after the command path are stored as repeated `_arg` options.

```go
r := c.Cli().Run("workspace", "open", "alpha")
```

That produces an option like:

```go
core.Option{Key: "_arg", Value: "alpha"}
```

### Important Details

- flag values stay as strings
- `opts.Int("port")` only works if some code stored an actual `int`
- invalid flags such as `-verbose` and `--v` are ignored
## Help Output

`Cli.PrintHelp()` prints executable commands:

```go
c.Cli().PrintHelp()
```

It skips:

- hidden commands
- placeholder parents with no `Action` and no `Lifecycle`

Descriptions are resolved through `cmd.I18nKey()`.

## I18n Description Keys

If `Description` is empty, CoreGO derives a key from the path.

```text
deploy            -> cmd.deploy.description
deploy/to/homelab -> cmd.deploy.to.homelab.description
workspace/create  -> cmd.workspace.create.description
```

If `Description` is already set, CoreGO uses it as-is.

## Lifecycle Commands

Commands can also delegate to a lifecycle implementation.

```go
type daemonCommand struct{}

func (d *daemonCommand) Start(opts core.Options) core.Result { return core.Result{OK: true} }
func (d *daemonCommand) Stop() core.Result                   { return core.Result{OK: true} }
func (d *daemonCommand) Restart() core.Result                { return core.Result{OK: true} }
func (d *daemonCommand) Reload() core.Result                 { return core.Result{OK: true} }
func (d *daemonCommand) Signal(sig string) core.Result       { return core.Result{Value: sig, OK: true} }

c.Command("agent/serve", core.Command{
	Lifecycle: &daemonCommand{},
})
```

Important behavior:

- `Start` falls back to `Run` when `Lifecycle` is nil
- `Stop`, `Restart`, `Reload`, and `Signal` return an empty `Result` when `Lifecycle` is nil

## List Command Paths

```go
paths := c.Commands()
```

Like the service registry, the command registry is map-backed, so iteration order is not guaranteed.

@@ -0,0 +1,96 @@
---
title: Configuration
description: Constructor options, runtime settings, and feature flags.
---

# Configuration

CoreGO uses two different configuration layers:

- constructor-time `core.Options`
- runtime `c.Config()`

## Constructor-Time Options

```go
c := core.New(core.Options{
	{Key: "name", Value: "agent-workbench"},
})
```

### Current Behavior

- `New` accepts `opts ...Options`
- the current implementation copies only the first `Options` slice
- the `name` key is applied to `c.App().Name`

If you need more constructor data, put it in the first `core.Options` slice.

## Runtime Settings with `Config`

Use `c.Config()` for mutable process settings.

```go
c.Config().Set("workspace.root", "/srv/workspaces")
c.Config().Set("max_agents", 8)
c.Config().Set("debug", true)
```

Read them back with:

```go
root := c.Config().String("workspace.root")
maxAgents := c.Config().Int("max_agents")
debug := c.Config().Bool("debug")
raw := c.Config().Get("workspace.root")
```

### Important Details

- missing keys return zero values
- typed accessors do not coerce strings into ints or bools
- `Get` returns `core.Result`
## Feature Flags

`Config` also tracks named feature flags.

```go
c.Config().Enable("workspace.templates")
c.Config().Enable("agent.review")
c.Config().Disable("agent.review")
```

Read them with:

```go
enabled := c.Config().Enabled("workspace.templates")
features := c.Config().EnabledFeatures()
```

Feature names are case-sensitive.

## `ConfigVar[T]`

Use `ConfigVar[T]` when you need a typed value that can also represent “set versus unset”.

```go
theme := core.NewConfigVar("amber")

if theme.IsSet() {
	fmt.Println(theme.Get())
}

theme.Unset()
```

This is useful for package-local state where zero values are not enough to describe configuration presence.

## Recommended Pattern

Use the two layers for different jobs:

- put startup identity such as `name` into `core.Options`
- put mutable runtime values and feature switches into `c.Config()`

That keeps constructor intent separate from live process state.
pkg/lib/workspace/default/.core/reference/docs/errors.md (new file, 120 lines)
@@ -0,0 +1,120 @@
---
title: Errors
description: Structured errors, logging helpers, and panic recovery.
---

# Errors

CoreGO treats failures as structured operational data.

Repository convention: use `E()` instead of `fmt.Errorf` for framework and service errors.

## `Err`

The structured error type is:

```go
type Err struct {
	Operation string
	Message   string
	Cause     error
	Code      string
}
```

## Create Errors

### `E`

```go
err := core.E("workspace.Load", "failed to read workspace manifest", cause)
```

### `Wrap`

```go
err := core.Wrap(cause, "workspace.Load", "manifest parse failed")
```

### `WrapCode`

```go
err := core.WrapCode(cause, "WORKSPACE_INVALID", "workspace.Load", "manifest parse failed")
```

### `NewCode`

```go
err := core.NewCode("NOT_FOUND", "workspace not found")
```

## Inspect Errors

```go
op := core.Operation(err)
code := core.ErrorCode(err)
msg := core.ErrorMessage(err)
root := core.Root(err)
stack := core.StackTrace(err)
pretty := core.FormatStackTrace(err)
```

These helpers keep the operational chain visible without extra type assertions.
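One common use is branching on the error code without unwrapping by hand; a short sketch combining the constructors and inspectors above (the code strings are illustrative):

```go
err := core.WrapCode(cause, "WORKSPACE_INVALID", "workspace.Load", "manifest parse failed")

switch core.ErrorCode(err) {
case "WORKSPACE_INVALID":
	// recoverable: re-scaffold the workspace
case "NOT_FOUND":
	// create it instead
}
```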
## Join and Standard Wrappers

```go
combined := core.ErrorJoin(err1, err2)
same := core.Is(combined, err1)
```

`core.As` and `core.NewError` mirror the standard library for convenience.

## Log-and-Return Helpers

`Core` exposes two convenience wrappers:

```go
r1 := c.LogError(err, "workspace.Load", "workspace load failed")
r2 := c.LogWarn(err, "workspace.Load", "workspace load degraded")
```

These log through the default logger and return `core.Result`.

You can also use the underlying `ErrorLog` directly:

```go
r := c.Log().Error(err, "workspace.Load", "workspace load failed")
```

`Must` logs and then panics when the error is non-nil:

```go
c.Must(err, "workspace.Load", "workspace load failed")
```

## Panic Recovery

`ErrorPanic` handles process-safe panic capture.

```go
defer c.Error().Recover()
```

Run background work with recovery:

```go
c.Error().SafeGo(func() {
	panic("captured")
})
```

If `ErrorPanic` has a configured crash file path, it appends JSON crash reports and `Reports(n)` reads them back.

That crash file path is currently internal state on `ErrorPanic`, not a public constructor option on `Core.New()`.

## Logging and Error Context

The logging subsystem automatically extracts `op` and logical stack information from structured errors when those values are present in the key-value list.

That makes errors created with `E`, `Wrap`, or `WrapCode` much easier to follow in logs.

@@ -0,0 +1,208 @@
---
title: Getting Started
description: Build a first CoreGO application with the current API.
---

# Getting Started

This page shows the shortest path to a useful CoreGO application using the API that exists in this repository today.

## Install

```bash
go get dappco.re/go/core
```

## Create a Core

`New` takes zero or more `core.Options` slices, but the current implementation only reads the first one. In practice, treat the constructor as `core.New(core.Options{...})`.

```go
package main

import "dappco.re/go/core"

func main() {
	c := core.New(core.Options{
		{Key: "name", Value: "agent-workbench"},
	})

	_ = c
}
```

The `name` option is copied into `c.App().Name`.

## Register a Service

Services are registered explicitly with a name and a `core.Service` DTO.

```go
c.Service("audit", core.Service{
	OnStart: func() core.Result {
		core.Info("audit service started", "app", c.App().Name)
		return core.Result{OK: true}
	},
	OnStop: func() core.Result {
		core.Info("audit service stopped", "app", c.App().Name)
		return core.Result{OK: true}
	},
})
```

This registry stores `core.Service` values. It is a lifecycle registry, not a typed object container.

## Register a Query, Task, and Command

```go
type workspaceCountQuery struct{}

type createWorkspaceTask struct {
	Name string
}

c.RegisterQuery(func(_ *core.Core, q core.Query) core.Result {
	switch q.(type) {
	case workspaceCountQuery:
		return core.Result{Value: 1, OK: true}
	}
	return core.Result{}
})

c.RegisterTask(func(_ *core.Core, t core.Task) core.Result {
	switch task := t.(type) {
	case createWorkspaceTask:
		path := "/tmp/agent-workbench/" + task.Name
		return core.Result{Value: path, OK: true}
	}
	return core.Result{}
})

c.Command("workspace/create", core.Command{
	Action: func(opts core.Options) core.Result {
		return c.PERFORM(createWorkspaceTask{
			Name: opts.String("name"),
		})
	},
})
```

## Start the Runtime

```go
if !c.ServiceStartup(context.Background(), nil).OK {
	panic("startup failed")
}
```

`ServiceStartup` returns `core.Result`, not `error`.

## Run Through the CLI Surface

```go
r := c.Cli().Run("workspace", "create", "--name=alpha")
if r.OK {
	fmt.Println("created:", r.Value)
}
```

For flags with values, the CLI stores the value as a string. `--name=alpha` becomes `opts.String("name") == "alpha"`.

## Query the System

```go
count := c.QUERY(workspaceCountQuery{})
if count.OK {
	fmt.Println("workspace count:", count.Value)
}
```

## Shut Down Cleanly

```go
_ = c.ServiceShutdown(context.Background())
```

Shutdown cancels `c.Context()`, broadcasts `ActionServiceShutdown{}`, waits for background tasks to finish, and then runs service stop hooks.

## Full Example

```go
package main

import (
	"context"
	"fmt"

	"dappco.re/go/core"
)

type workspaceCountQuery struct{}

type createWorkspaceTask struct {
	Name string
}

func main() {
	c := core.New(core.Options{
		{Key: "name", Value: "agent-workbench"},
	})

	c.Config().Set("workspace.root", "/tmp/agent-workbench")
	c.Config().Enable("workspace.templates")

	c.Service("audit", core.Service{
		OnStart: func() core.Result {
			core.Info("service started", "service", "audit")
			return core.Result{OK: true}
		},
		OnStop: func() core.Result {
			core.Info("service stopped", "service", "audit")
			return core.Result{OK: true}
		},
	})

	c.RegisterQuery(func(_ *core.Core, q core.Query) core.Result {
		switch q.(type) {
		case workspaceCountQuery:
			return core.Result{Value: 1, OK: true}
		}
		return core.Result{}
	})

	c.RegisterTask(func(_ *core.Core, t core.Task) core.Result {
		switch task := t.(type) {
		case createWorkspaceTask:
			path := c.Config().String("workspace.root") + "/" + task.Name
			return core.Result{Value: path, OK: true}
		}
		return core.Result{}
	})

	c.Command("workspace/create", core.Command{
		Action: func(opts core.Options) core.Result {
			return c.PERFORM(createWorkspaceTask{
				Name: opts.String("name"),
			})
		},
	})

	if !c.ServiceStartup(context.Background(), nil).OK {
		panic("startup failed")
	}

	created := c.Cli().Run("workspace", "create", "--name=alpha")
	fmt.Println("created:", created.Value)

	count := c.QUERY(workspaceCountQuery{})
	fmt.Println("workspace count:", count.Value)

	_ = c.ServiceShutdown(context.Background())
}
```

## Next Steps

- Read [primitives.md](primitives.md) next so the repeated shapes are clear.
- Read [commands.md](commands.md) if you are building a CLI-first system.
- Read [messaging.md](messaging.md) if services need to collaborate without direct imports.
pkg/lib/workspace/default/.core/reference/docs/index.md (new file, 112 lines)
@@ -0,0 +1,112 @@
---
title: CoreGO
description: AX-first documentation for the CoreGO framework.
---

# CoreGO

CoreGO is the foundation layer for the Core ecosystem. It gives you one container, one command tree, one message bus, and a small set of shared primitives that repeat across the whole framework.

The current module path is `dappco.re/go/core`.

## AX View

CoreGO already follows the main AX ideas from RFC-025:

- predictable names such as `Core`, `Service`, `Command`, `Options`, `Result`, `Message`
- path-shaped command registration such as `deploy/to/homelab`
- one repeated input shape (`Options`) and one repeated return shape (`Result`)
- comments and examples that show real usage instead of restating the type signature

## What CoreGO Owns

| Surface | Purpose |
|---------|---------|
| `Core` | Central container and access point |
| `Service` | Managed lifecycle component |
| `Command` | Path-based command tree node |
| `ACTION`, `QUERY`, `PERFORM` | Decoupled communication between components |
| `Data`, `Drive`, `Fs`, `Config`, `I18n`, `Cli` | Built-in subsystems for common runtime work |
| `E`, `Wrap`, `ErrorLog`, `ErrorPanic` | Structured failures and panic recovery |

## Quick Example

```go
package main

import (
	"context"
	"fmt"

	"dappco.re/go/core"
)

type flushCacheTask struct {
	Name string
}

func main() {
	c := core.New(core.Options{
		{Key: "name", Value: "agent-workbench"},
	})

	c.Service("cache", core.Service{
		OnStart: func() core.Result {
			core.Info("cache ready", "app", c.App().Name)
			return core.Result{OK: true}
		},
		OnStop: func() core.Result {
			core.Info("cache stopped", "app", c.App().Name)
			return core.Result{OK: true}
		},
	})

	c.RegisterTask(func(_ *core.Core, task core.Task) core.Result {
		switch task.(type) {
		case flushCacheTask:
			return core.Result{Value: "cache flushed", OK: true}
		}
		return core.Result{}
	})

	c.Command("cache/flush", core.Command{
		Action: func(opts core.Options) core.Result {
			return c.PERFORM(flushCacheTask{Name: opts.String("name")})
		},
	})

	if !c.ServiceStartup(context.Background(), nil).OK {
		panic("startup failed")
	}

	r := c.Cli().Run("cache", "flush", "--name=session-store")
	fmt.Println(r.Value)

	_ = c.ServiceShutdown(context.Background())
}
```

## Documentation Paths

| Path | Covers |
|------|--------|
| [getting-started.md](getting-started.md) | First runnable CoreGO app |
| [primitives.md](primitives.md) | `Options`, `Result`, `Service`, `Message`, `Query`, `Task` |
| [services.md](services.md) | Service registry, service locks, runtime helpers |
| [commands.md](commands.md) | Path-based commands and CLI execution |
| [messaging.md](messaging.md) | `ACTION`, `QUERY`, `QUERYALL`, `PERFORM`, `PerformAsync` |
| [lifecycle.md](lifecycle.md) | Startup, shutdown, context, background task draining |
| [configuration.md](configuration.md) | Constructor options, config state, feature flags |
| [subsystems.md](subsystems.md) | `App`, `Data`, `Drive`, `Fs`, `I18n`, `Cli` |
| [errors.md](errors.md) | Structured errors, logging helpers, panic recovery |
| [testing.md](testing.md) | Test naming and framework-level testing patterns |
| [pkg/core.md](pkg/core.md) | Package-level reference summary |
| [pkg/log.md](pkg/log.md) | Logging reference for the root package |
| [pkg/PACKAGE_STANDARDS.md](pkg/PACKAGE_STANDARDS.md) | AX package-authoring guidance |

## Good Reading Order

1. Start with [getting-started.md](getting-started.md).
2. Learn the repeated shapes in [primitives.md](primitives.md).
3. Pick the integration path you need next: [services.md](services.md), [commands.md](commands.md), or [messaging.md](messaging.md).
4. Use [subsystems.md](subsystems.md), [errors.md](errors.md), and [testing.md](testing.md) as reference pages while building.
pkg/lib/workspace/default/.core/reference/docs/lifecycle.md
Normal file
111
pkg/lib/workspace/default/.core/reference/docs/lifecycle.md
Normal file
|
|
@ -0,0 +1,111 @@
|
|||
---
|
||||
title: Lifecycle
|
||||
description: Startup, shutdown, context ownership, and background task draining.
|
||||
---
|
||||
|
||||
# Lifecycle
|
||||
|
||||
CoreGO manages lifecycle through `core.Service` callbacks, not through reflection or implicit interfaces.
|
||||
|
||||
## Service Hooks
|
||||
|
||||
```go
|
||||
c.Service("cache", core.Service{
|
||||
OnStart: func() core.Result {
|
||||
return core.Result{OK: true}
|
||||
},
|
||||
OnStop: func() core.Result {
|
||||
return core.Result{OK: true}
|
||||
},
|
||||
})
|
||||
```
|
||||
|
||||
Only services with `OnStart` appear in `Startables()`. Only services with `OnStop` appear in `Stoppables()`.
|
||||
|
||||
## `ServiceStartup`
|
||||
|
||||
```go
|
||||
r := c.ServiceStartup(context.Background(), nil)
|
||||
```
|
||||
|
||||
### What It Does
|
||||
|
||||
1. clears the shutdown flag
|
||||
2. stores a new cancellable context on `c.Context()`
|
||||
3. runs each `OnStart`
|
||||
4. broadcasts `ActionServiceStartup{}`
|
||||
|
||||
### Failure Behavior
|
||||
|
||||
- if the input context is already cancelled, startup returns that error
|
||||
- if any `OnStart` returns `OK:false`, startup stops immediately and returns that result
|
||||
|
||||
## `ServiceShutdown`
|
||||
|
||||
```go
|
||||
r := c.ServiceShutdown(context.Background())
|
||||
```
|
||||
|
||||
### What It Does
|
||||
|
||||
1. sets the shutdown flag
|
||||
2. cancels `c.Context()`
|
||||
3. broadcasts `ActionServiceShutdown{}`
|
||||
4. waits for background tasks created by `PerformAsync`
|
||||
5. runs each `OnStop`
|
||||
|
||||
### Failure Behavior
|
||||
|
||||
- if draining background tasks hits the shutdown context deadline, shutdown returns that context error
|
||||
- when service stop hooks fail, CoreGO returns the first error it sees
|
||||
|
||||
## Ordering
|
||||
|
||||
The current implementation builds `Startables()` and `Stoppables()` by iterating over a map-backed registry.
|
||||
|
||||
That means lifecycle order is not guaranteed today.
|
||||
|
||||
If your application needs strict startup or shutdown ordering, orchestrate it explicitly inside a smaller number of service callbacks instead of relying on registry order.
|
||||
|
||||
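One hedged sketch of that pattern: register a single coordinator service whose hooks call your components in an explicit order (the `startDatabase`/`startCache` helpers are illustrative, not framework APIs):

```go
c.Service("boot", core.Service{
	OnStart: func() core.Result {
		// Explicit order: storage first, then the layer that depends on it.
		if r := startDatabase(); !r.OK {
			return r
		}
		return startCache()
	},
	OnStop: func() core.Result {
		// Reverse order on the way down.
		if r := stopCache(); !r.OK {
			return r
		}
		return stopDatabase()
	},
})
```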
## `c.Context()`

`ServiceStartup` creates the context returned by `c.Context()`.

Use it for background work that should stop when the application shuts down:

```go
c.Service("watcher", core.Service{
	OnStart: func() core.Result {
		go func(ctx context.Context) {
			<-ctx.Done()
		}(c.Context())
		return core.Result{OK: true}
	},
})
```

## Built-In Lifecycle Actions

You can listen for lifecycle state changes through the action bus.

```go
c.RegisterAction(func(_ *core.Core, msg core.Message) core.Result {
	switch msg.(type) {
	case core.ActionServiceStartup:
		core.Info("core startup completed")
	case core.ActionServiceShutdown:
		core.Info("core shutdown started")
	}
	return core.Result{OK: true}
})
```

## Background Task Draining

`ServiceShutdown` waits for the internal task waitgroup to finish before calling stop hooks.

This is what makes `PerformAsync` safe for long-running work that should complete before teardown.

## `OnReload`

`Service` includes an `OnReload` callback field, but CoreGO does not currently expose a top-level lifecycle runner for reload operations.
pkg/lib/workspace/default/.core/reference/docs/messaging.md (new file, 171 lines)
@@ -0,0 +1,171 @@
---
|
||||
title: Messaging
|
||||
description: ACTION, QUERY, QUERYALL, PERFORM, and async task flow.
|
||||
---
|
||||
|
||||
# Messaging
|
||||
|
||||
CoreGO uses one message bus for broadcasts, lookups, and work dispatch.
|
||||
|
||||
## Message Types
|
||||
|
||||
```go
|
||||
type Message any
|
||||
type Query any
|
||||
type Task any
|
||||
```
|
||||
|
||||
Your own structs define the protocol.
|
||||
|
||||
```go
|
||||
type repositoryIndexed struct {
|
||||
Name string
|
||||
}
|
||||
|
||||
type repositoryCountQuery struct{}
|
||||
|
||||
type syncRepositoryTask struct {
|
||||
Name string
|
||||
}
|
||||
```
|
||||
|
||||

## `ACTION`

`ACTION` is a broadcast.

```go
c.RegisterAction(func(_ *core.Core, msg core.Message) core.Result {
	switch m := msg.(type) {
	case repositoryIndexed:
		core.Info("repository indexed", "name", m.Name)
		return core.Result{OK: true}
	}
	return core.Result{OK: true}
})

r := c.ACTION(repositoryIndexed{Name: "core-go"})
```

### Behavior

- all registered action handlers are called in their current registration order
- if a handler returns `OK:false`, dispatch stops and that `Result` is returned
- if no handler fails, `ACTION` returns `Result{OK:true}`
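The short-circuit rule above can be modeled in plain Go. This is a minimal sketch of the dispatch semantics, with `Result` reduced to a local struct; it is not the CoreGO implementation:

```go
package main

import "fmt"

type result struct {
	Value any
	OK    bool
}

type handler func(msg any) result

// action mirrors ACTION: every handler runs in registration order,
// and dispatch stops at the first OK:false result.
func action(handlers []handler, msg any) result {
	for _, h := range handlers {
		if r := h(msg); !r.OK {
			return r
		}
	}
	return result{OK: true}
}

func main() {
	calls := 0
	handlers := []handler{
		func(any) result { calls++; return result{OK: true} },
		func(any) result { calls++; return result{Value: "rejected", OK: false} },
		func(any) result { calls++; return result{OK: true} }, // never reached
	}
	r := action(handlers, struct{}{})
	fmt.Println(r.OK, r.Value, calls) // false rejected 2
}
```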

## `QUERY`

`QUERY` is first-match request-response.

```go
c.RegisterQuery(func(_ *core.Core, q core.Query) core.Result {
	switch q.(type) {
	case repositoryCountQuery:
		return core.Result{Value: 42, OK: true}
	}
	return core.Result{}
})

r := c.QUERY(repositoryCountQuery{})
```

### Behavior

- handlers run until one returns `OK:true`
- the first successful result wins
- if nothing handles the query, CoreGO returns an empty `Result`
## `QUERYALL`

`QUERYALL` collects every successful non-nil response.

```go
r := c.QUERYALL(repositoryCountQuery{})
results := r.Value.([]any)
```

### Behavior

- every query handler is called
- only `OK:true` results with non-nil `Value` are collected
- the call itself returns `OK:true` even when the result list is empty
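The collection rule can be sketched the same way in plain Go. The `queryAll` helper below is illustrative, not the framework code:

```go
package main

import "fmt"

type result struct {
	Value any
	OK    bool
}

type handler func(q any) result

// queryAll mirrors QUERYALL: every handler runs, only OK:true results
// with a non-nil Value are collected, and the call itself is OK:true.
func queryAll(handlers []handler, q any) result {
	var values []any
	for _, h := range handlers {
		if r := h(q); r.OK && r.Value != nil {
			values = append(values, r.Value)
		}
	}
	return result{Value: values, OK: true}
}

func main() {
	handlers := []handler{
		func(any) result { return result{Value: 42, OK: true} },
		func(any) result { return result{} }, // not collected
		func(any) result { return result{Value: 7, OK: true} },
	}
	r := queryAll(handlers, struct{}{})
	fmt.Println(r.OK, r.Value.([]any)) // true [42 7]
}
```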

## `PERFORM`

`PERFORM` dispatches a task to the first handler that accepts it.

```go
c.RegisterTask(func(_ *core.Core, t core.Task) core.Result {
	switch task := t.(type) {
	case syncRepositoryTask:
		return core.Result{Value: "synced " + task.Name, OK: true}
	}
	return core.Result{}
})

r := c.PERFORM(syncRepositoryTask{Name: "core-go"})
```

### Behavior

- handlers run until one returns `OK:true`
- the first successful result wins
- if nothing handles the task, CoreGO returns an empty `Result`
## `PerformAsync`

`PerformAsync` runs a task in a background goroutine and returns a generated task identifier.

```go
r := c.PerformAsync(syncRepositoryTask{Name: "core-go"})
taskID := r.Value.(string)
```

### Generated Events

Async execution emits three action messages:

| Message | When |
|---------|------|
| `ActionTaskStarted` | just before background execution begins |
| `ActionTaskProgress` | whenever `Progress` is called |
| `ActionTaskCompleted` | after the task finishes or panics |

Example listener:

```go
c.RegisterAction(func(_ *core.Core, msg core.Message) core.Result {
	switch m := msg.(type) {
	case core.ActionTaskCompleted:
		core.Info("task completed", "task", m.TaskIdentifier, "err", m.Error)
	}
	return core.Result{OK: true}
})
```

## Progress Updates

```go
c.Progress(taskID, 0.5, "indexing commits", syncRepositoryTask{Name: "core-go"})
```

That broadcasts `ActionTaskProgress`.

## `TaskWithIdentifier`

Tasks that implement `TaskWithIdentifier` receive the generated ID before dispatch.

```go
type trackedTask struct {
	ID   string
	Name string
}

func (t *trackedTask) SetTaskIdentifier(id string) { t.ID = id }
func (t *trackedTask) GetTaskIdentifier() string   { return t.ID }
```
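The interface mechanics can be shown without the framework. The `assignIdentifier` helper is a stand-in for the pre-dispatch step, not a CoreGO function:

```go
package main

import "fmt"

// TaskWithIdentifier mirrors the interface described above.
type TaskWithIdentifier interface {
	SetTaskIdentifier(id string)
	GetTaskIdentifier() string
}

type trackedTask struct {
	ID   string
	Name string
}

func (t *trackedTask) SetTaskIdentifier(id string) { t.ID = id }
func (t *trackedTask) GetTaskIdentifier() string   { return t.ID }

// assignIdentifier sketches what happens before dispatch: tasks that
// implement TaskWithIdentifier receive the generated ID.
func assignIdentifier(task any, id string) {
	if t, ok := task.(TaskWithIdentifier); ok {
		t.SetTaskIdentifier(id)
	}
}

func main() {
	task := &trackedTask{Name: "core-go"}
	assignIdentifier(task, "task-0001")
	fmt.Println(task.GetTaskIdentifier()) // task-0001
}
```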

## Shutdown Interaction

When shutdown has started, `PerformAsync` returns an empty `Result` instead of scheduling more work.

This is why `ServiceShutdown` can safely drain the outstanding background tasks before stopping services.
@ -0,0 +1,138 @@
# AX Package Standards

This page describes how to build packages on top of CoreGO in the style described by RFC-025.

## 1. Prefer Predictable Names

Use names that tell an agent what the thing is without translation.

Good:

- `RepositoryService`
- `RepositoryServiceOptions`
- `WorkspaceCountQuery`
- `SyncRepositoryTask`

Avoid shortening names unless the abbreviation is already universal.

## 2. Put Real Usage in Comments

Write comments that show a real call with realistic values.

Good:

```go
// Sync a repository into the local workspace cache.
// svc.SyncRepository("core-go", "/srv/repos/core-go")
```

Avoid comments that only repeat the signature.

## 3. Keep Paths Semantic

If a command or template lives at a path, let the path explain the intent.

Good:

```text
deploy/to/homelab
workspace/create
template/workspace/go
```

That keeps the CLI, tests, docs, and message vocabulary aligned.

## 4. Reuse CoreGO Primitives

At Core boundaries, prefer the shared shapes:

- `core.Options` for lightweight input
- `core.Result` for output
- `core.Service` for lifecycle registration
- `core.Message`, `core.Query`, `core.Task` for bus protocols

Inside your package, typed structs are still good. Use `ServiceRuntime[T]` when you want typed package options plus a `Core` reference.

```go
type repositoryServiceOptions struct {
	BaseDirectory string
}

type repositoryService struct {
	*core.ServiceRuntime[repositoryServiceOptions]
}
```

## 5. Prefer Explicit Registration

Register services and commands with names and paths that stay readable in grep results.

```go
c.Service("repository", core.Service{...})
c.Command("repository/sync", core.Command{...})
```

## 6. Use the Bus for Decoupling

When one package needs another package’s behavior, prefer queries and tasks over tight package coupling.

```go
type repositoryCountQuery struct{}

type syncRepositoryTask struct {
	Name string
}
```

That keeps the protocol visible in code and easy for agents to follow.
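The decoupling benefit can be modeled in plain Go: provider and consumer share only the query type, never each other's packages. The `bus` type and the count value here are illustrative, not the CoreGO implementation:

```go
package main

import "fmt"

// Shared message vocabulary; both sides depend only on this type.
type repositoryCountQuery struct{}

type result struct {
	Value any
	OK    bool
}

// bus models first-match QUERY dispatch.
type bus struct {
	queryHandlers []func(q any) result
}

func (b *bus) registerQuery(h func(q any) result) {
	b.queryHandlers = append(b.queryHandlers, h)
}

func (b *bus) query(q any) result {
	for _, h := range b.queryHandlers {
		if r := h(q); r.OK {
			return r
		}
	}
	return result{}
}

func main() {
	b := &bus{}

	// Provider side: answers the shared query type.
	b.registerQuery(func(q any) result {
		if _, ok := q.(repositoryCountQuery); ok {
			return result{Value: 3, OK: true}
		}
		return result{}
	})

	// Consumer side: knows only the query type, not the provider package.
	fmt.Println(b.query(repositoryCountQuery{}).Value) // 3
}
```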

## 7. Use Structured Errors

Use `core.E`, `core.Wrap`, and `core.WrapCode`.

```go
return core.Result{
	Value: core.E("repository.Sync", "git fetch failed", err),
	OK:    false,
}
```

Do not introduce free-form `fmt.Errorf` chains in framework code.

## 8. Keep Testing Names Predictable

Follow the repository pattern:

- `_Good`
- `_Bad`
- `_Ugly`

Example:

```go
func TestRepositorySync_Good(t *testing.T) {}
func TestRepositorySync_Bad(t *testing.T)  {}
func TestRepositorySync_Ugly(t *testing.T) {}
```

## 9. Prefer Stable Shapes Over Clever APIs

For package APIs, avoid patterns that force an agent to infer too much hidden control flow.

Prefer:

- clear structs
- explicit names
- path-based commands
- visible message types

Avoid:

- implicit global state unless it is truly a default service
- panic-hiding constructors
- dense option chains when a small explicit struct would do

## 10. Document the Current Reality

If the implementation is in transition, document what the code does now, not the API shape you plan to have later.

That keeps agents correct on first pass, which is the real AX metric.