docs: remove implemented plans, annotate partial ones

18 plan files deleted (absorbed into core.help docs).
4 kept with implementation notes (lint MCP, AltumCode Layer 2).

Co-Authored-By: Virgil <virgil@lethean.io>
Snider 2026-03-14 08:09:20 +00:00
parent 681c88795f
commit d66ff46312
23 changed files with 8 additions and 12514 deletions


@@ -1,123 +0,0 @@
# Design: core/docs with go-help Engine
**Date**: 6 Mar 2026
**Status**: Approved
**Replaces**: Hugo/Docsy docs-site, standalone go-help repo
**Domain**: core.help (BunnyCDN)
## Problem
Documentation lives in three disconnected repos: go-help (engine), docs (content), docs-site (Hugo wrapper). Hugo needs Node/PostCSS in CI, adds Docsy as a Go module dependency, and produces a site that can't be served from inside a Go binary. We want one tool that embeds docs in binaries AND generates the public site.
## Decision
Merge go-help into `core/docs` as `pkg/help`. Replace Hugo/Docsy with go-help's native static generator. Use go-html's HLCRF compositor for the layout wrapper. Archive go-help and docs-site on forge.
## Architecture
```
core/docs/                (forge.lthn.ai/core/docs)
  pkg/help/               go-help library
    catalog.go            topic store + search
    search.go             inverted index, stemming, fuzzy
    parser.go             YAML frontmatter + section extraction
    render.go             goldmark markdown -> HTML
    server.go             HTTP server (HTML + JSON API)
    generate.go           static site generator
    layout.go             NEW: go-html HLCRF wrapper
    ingest.go             CLI help text -> Topic
    stemmer.go            Porter-style stemmer
    templates/            embedded HTML templates (updated)
  content/                aggregated docs tree (built by core docs collect)
    cli/
    go/
    php/
    api/
    packages/
    deploy/
  cmd/docs-server/        optional standalone server binary
  go.mod                  forge.lthn.ai/core/docs
```
### Dependency chain
```
pkg/help -> go-html -> go-i18n -> go-inference
pkg/help -> goldmark
pkg/help -> yaml.v3
```
No database, no network dependency, no external service at runtime.
## Content Ownership
Docs live in their package repos under a `docs/` folder (convention, zero config). The `core/docs/content/` tree is the aggregated output, built by go-devops:
1. `core docs collect` — walks workspace, copies each repo's `docs/` into `content/{repo-name}/`
2. `core docs build` — go-help parses collected tree, generates static site to `dist/`
3. `core docs deploy` — pushes `dist/` to BunnyCDN at core.help
For binary builds (CoreGUI, core-php), the binary embeds only the docs from its own deps (determined by go.mod imports, not the full workspace).
## Dual-Mode Serving
### In-app (CoreGUI webview / `core help serve`)
`pkg/help.NewServer()` serves from `//go:embed` content. No network. Runs on localhost, CoreGUI opens a webview to it. Fragment URLs work: `#rate-limiting` scrolls to the section.
### Public (core.help)
Static HTML generated by `pkg/help.Generate()`, deployed to BunnyCDN. Same URLs, same fragments. External docs can deep-link: `core.help/topics/cli-dev-work#sync-interval`.
## Layout: go-html HLCRF
Replaces go-help's current `html/template` files with go-html compositor:
| Slot | Element | Content |
|------|---------|---------|
| H | `<header role="banner">` | Nav bar + search input + "core.help" branding |
| L | `<aside role="complementary">` | Topic tree grouped by tag, collapsible sections |
| C | `<main role="main">` | Rendered Markdown with section anchors (id attrs on headings) |
| F | `<footer role="contentinfo">` | Licence, version, links |
Semantic HTML with ARIA roles. Deterministic `data-block` IDs for CSS targeting. Dark theme CSS stays inline (no external stylesheets).
## Fragment Linking
go-help's `GenerateID()` produces URL-safe slugs from headings:
- `"Rate Limiting"` -> `rate-limiting`
- `"API / Rate Limits"` -> `api-rate-limits`
Every heading rendered with an `id` attribute. Works identically in-app and on the public site.
The JSON API (`/api/topics/{id}`) returns sections with IDs, so a client can request a subsection programmatically.
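The slug behaviour described above can be sketched as follows — lowercase, with runs of non-alphanumeric characters collapsing to a single dash. This mirrors the two documented examples but is not the actual go-help `GenerateID()` source:

```go
package main

import (
	"fmt"
	"strings"
)

// slugID produces a URL-safe anchor from a heading, matching the
// documented behaviour of GenerateID. Sketch only.
func slugID(heading string) string {
	var b strings.Builder
	dash := true // suppress leading dashes
	for _, r := range strings.ToLower(heading) {
		if (r >= 'a' && r <= 'z') || (r >= '0' && r <= '9') {
			b.WriteRune(r)
			dash = false
		} else if !dash {
			b.WriteByte('-')
			dash = true
		}
	}
	return strings.TrimSuffix(b.String(), "-")
}

func main() {
	fmt.Println(slugID("Rate Limiting"))     // rate-limiting
	fmt.Println(slugID("API / Rate Limits")) // api-rate-limits
}
```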
## Search
No changes to the search engine:
- Inverted index with stemming and fuzzy matching (Levenshtein, max distance 2)
- Phrase search via quoted strings
- Relevance scoring: title 10x, phrase 8x, section 5x, tag 3x
- Static site: `search-index.json` with client-side JS
- In-app server: server-side `catalog.Search()`
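The relevance multipliers combine linearly. A toy sketch of the weighting (the real scorer in `search.go` folds this into the inverted-index walk; the `hits` type is invented for illustration):

```go
package main

import "fmt"

// hits counts where a query term matched within one topic.
type hits struct {
	Title, Phrase, Section, Tag int
}

// score applies the documented multipliers: title 10x, phrase 8x,
// section 5x, tag 3x.
func score(h hits) int {
	return 10*h.Title + 8*h.Phrase + 5*h.Section + 3*h.Tag
}

func main() {
	fmt.Println(score(hits{Title: 1, Section: 2})) // 20
}
```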
## Repo Lifecycle
| Repo | Action |
|------|--------|
| go-help | Archive on forge after merge |
| docs-site | Archive on forge |
| docs | Becomes Go module, gains `pkg/help/` + `go.mod` |
## Not In Scope
- i18n for docs content (English only; go-i18n is a transitive dep of go-html, not used for content)
- Authentication (core.help is public, in-app is local)
- Wiki sync (separate concern: kb.yaml / OpenBrain)
- Database (catalog is in-memory)
## Testing
- All existing go-help tests (~94% coverage) move to `pkg/help/`
- New tests for go-html HLCRF layout rendering (output structure, section anchor IDs)
- Integration test: parse `docs/` folder -> generate static site -> verify file structure + search index

File diff suppressed because it is too large


@@ -1,158 +0,0 @@
# Studio: Multimedia Pipeline Design
**Date:** 8 March 2026
**Status:** Approved
## Goal
Local AI multimedia pipeline for video remixing, content creation, and voice interaction. Runs as a CorePHP service (Studio) dispatching GPU work to homelab infrastructure. First client: OF agency remixing existing footage into TikTok-ready variants.
## Architecture
Studio is a job orchestrator. LEM handles creative decisions (smart layer), ffmpeg and GPU services handle execution (dumb layer). LEM never touches video frames — it produces JSON manifests that the execution layer consumes mechanically.
```
Studio (CorePHP, lthn.ai/lthn.sh)
├── Livewire UI (studio.lthn.ai)
├── Artisan Commands (CLI)
└── API Routes (/api/studio/*)
      ↓
Studio Actions (RemixVideo, GenerateManifest, etc.)
      ↓
Redis Job Queue
├── Ollama (LEM fleet) ─── Creative decisions, scripts, captions
├── Whisper Service ────── Transcribe source footage, STT
├── TTS Service ────────── Voiceover generation
├── ffmpeg Worker ──────── Render manifests to video
└── ComfyUI (Phase 2) ─── Image gen, thumbnails, overlays
```
All GPU services are Docker containers on the homelab (or any GPU server). Studio dispatches over HTTP. No local GPU dependency — remote-first from day one.
## Library & Cataloguing
Source material is catalogued across four stores:
- **PG** (`studio_assets`): Metadata — filename, duration, resolution, tags (season/theme/mood), workspace
- **Qdrant**: Vector embeddings from Whisper transcripts + CLIP image embeddings (phase 2). Semantic search
- **Filesystem**: Raw files on homelab storage, PG references paths
- **.md catalogue files**: Human-readable collection descriptions, style guides, brand notes. LEM reads as context
Query flow:
```
Brief ("summer lollipop TikTok, 15s, upbeat")
→ LEM queries PG for tagged assets
→ LEM queries Qdrant for semantic matches
→ LEM reads collection .md for style context
→ LEM outputs manifest JSON
```
## Manifest Format
LEM produces, ffmpeg consumes. No AI in execution.
```json
{
"template": "tiktok-15s",
"clips": [
{"asset_id": 42, "start": 3.2, "end": 8.1, "order": 1},
{"asset_id": 17, "start": 0.0, "end": 5.5, "order": 2}
],
"captions": [
{"text": "Summer vibes only", "at": 0.5, "duration": 3, "style": "bold-center"}
],
"audio": {"track": "original", "fade_in": 0.5},
"output": {"format": "mp4", "resolution": "1080x1920", "fps": 30}
}
```
Variants: LEM produces multiple manifests from the same brief. Worker renders each independently.
## GPU Services (Homelab)
| Service | Container | Port | Model | Purpose |
|---------|-----------|------|-------|---------|
| Ollama | studio-ollama | 11434 | LEM fleet | Creative decisions, scripts, captions |
| Whisper | studio-whisper | 9100 | whisper-large-v3-turbo | Transcribe footage, STT |
| TTS | studio-tts | 9200 | Kokoro/Parler | Voiceover generation |
| ffmpeg Worker | studio-worker | — | n/a | Queue consumer, renders manifests |
| ComfyUI | studio-comfyui | 8188 | Flux/SD3.5 | Image gen, thumbnails (Phase 2) |
Shared with existing homelab: noc-net Docker network, Traefik, PG, Qdrant. Each service exposes REST, Studio POSTs work and gets callbacks.
Deployment: Ansible playbook per service, ROCm Docker images for GPU services.
## CorePHP Module
`app/Mod/Studio/` — same patterns as LEM module.
**Actions:**
- `CatalogueAsset::run()` — ingest, extract metadata, generate embeddings
- `GenerateManifest::run()` — brief + library → LEM → manifest JSON
- `RenderManifest::run()` — dispatch to ffmpeg worker
- `TranscribeAsset::run()` — send to Whisper, store transcript
- `SynthesiseSpeech::run()` — send to TTS, return audio
**Artisan commands:**
- `studio:catalogue` — batch ingest directory
- `studio:remix` — brief in, rendered videos out
- `studio:transcribe` — batch transcribe library
**API routes** (`/api/studio/*`):
- `POST /remix` — submit brief, get job ID
- `GET /remix/{id}` — poll status, get output URLs
- `POST /assets` — upload/catalogue
- `GET /assets` — search library
**Livewire UI:**
- Asset browser with tag/search
- Remix form — pick assets or let LEM choose, enter brief, select template
- Job status + preview
- Download/share
**Config:** `config/studio.php` — GPU endpoints, templates, Qdrant collection, storage paths.
## Phased Delivery
### Phase 1 — Foundation (before April)
- Studio module scaffolding (actions, routes, commands)
- Asset cataloguing (upload, PG metadata, Whisper transcripts)
- Whisper service on homelab
- `studio:transcribe` end to end
- Basic Livewire asset browser
### Phase 2 — Remix Pipeline
- Manifest format finalised
- LEM integration via Ollama (brief → manifest)
- ffmpeg worker on homelab
- `studio:remix` CLI + API
- Livewire remix form + job status
### Phase 3 — Voice & TTS
- TTS service on homelab (Kokoro)
- Voice interface: Whisper STT → LEM → TTS
- Voiceover generation for scripts
### Phase 4 — Visual Generation
- ComfyUI on homelab with Flux/SD3.5
- Thumbnail generation
- Image overlays in manifests
- Video generation via Wan2.1 (experimental)
### Phase 5 — Production
- Full library from agency
- Authentik account for client
- studio.lthn.ai live
- Usage tracking via 66analytics
Phase 1 + 2 = April demo. Upload videos, enter brief, get remixed TikToks back.
## Key Decisions
- **Smart/dumb separation**: LEM produces prompts and manifests (creative), ffmpeg executes (mechanical). Value is in the creative layer.
- **Remote-first GPU**: All inference on homelab/GPU server, never local. Easy to scale to cloud later.
- **Manifest-driven**: JSON contract between LEM and execution. Either side can evolve independently.
- **Same Action pattern**: CLI and API call identical actions. UI is just a thin Livewire layer.
- **Existing infra**: PG, Redis, Qdrant, Ollama, Traefik, Authentik — all already deployed.

File diff suppressed because it is too large


@@ -1,92 +0,0 @@
# core/mcp Extraction
**Goal:** Consolidate MCP code into `core/mcp` — Go MCP server from go-ai + PHP MCP from php-mcp. Produces `core-mcp` binary.
**Pattern:** Polyglot repo like `core/agent` — Go at root, PHP in `src/php/`, composer.json at root as `lthn/mcp`.
---
### Task 1: Create core/mcp repo on forge, clone locally
- Create `core/mcp` repo on forge via API
- Clone to `/Users/snider/Code/core/mcp/`
- Add to `~/Code/go.work`
### Task 2: Move Go MCP package from go-ai
**Move:** `go-ai/mcp/` → `core/mcp/pkg/mcp/`
All files: `mcp.go`, `registry.go`, `subsystem.go`, `bridge.go`, transports (stdio, tcp, unix), all `tools_*.go`, all `*_test.go`, `brain/`, `ide/`
**Move:** `go-ai/cmd/mcpcmd/` → `core/mcp/cmd/mcpcmd/`
**Move:** `go-ai/cmd/brain-seed/` → `core/mcp/cmd/brain-seed/`
Create `go.mod` as `forge.lthn.ai/core/mcp`. Deps from go-ai: go-sdk, gorilla/websocket, go-ml, go-rag, go-inference, go-process, go-i18n, gin.
Find-replace `forge.lthn.ai/core/go-ai/mcp` → `forge.lthn.ai/core/mcp/pkg/mcp` in all moved files.
### Task 3: Create cmd/core-mcp/main.go
```go
package main

import (
	"forge.lthn.ai/core/cli/pkg/cli"

	mcpcmd "forge.lthn.ai/core/mcp/cmd/mcpcmd"
)

func main() {
	cli.Main(
		cli.WithCommands("mcp", mcpcmd.AddMCPCommands),
	)
}
```
Add `.core/build.yaml`:
```yaml
project:
  name: core-mcp
  binary: core-mcp
```
### Task 4: Move PHP from php-mcp into src/php/
Copy all PHP source from `/Users/snider/Code/core/php-mcp/` → `core/mcp/src/php/`
Create `composer.json` at root:
```json
{
  "name": "lthn/mcp",
  "description": "Model Context Protocol server for Laravel + standalone Go binary",
  "license": "EUPL-1.2",
  "require": { "php": "^8.2", "lthn/php": "*" },
  "autoload": { "psr-4": { "Core\\Mcp\\": "src/php/" } },
  "replace": { "core/php-mcp": "self.version", "lthn/php-mcp": "self.version" }
}
```
Add `.gitattributes` to exclude Go from composer dist.
### Task 5: Update consumers
- `core/agent` — change import `go-ai/mcp` → `core/mcp/pkg/mcp` in `pkg/loop/tools_mcp.go`
- `core/cli` — change `go-ai/cmd/mcpcmd` → `core/mcp/cmd/mcpcmd` import
- App `composer.json` — change `core/php-mcp` → `lthn/mcp`, VCS url → `core/mcp.git`
### Task 6: Clean up go-ai
- Delete `go-ai/mcp/` directory
- Delete `go-ai/cmd/mcpcmd/` and `go-ai/cmd/brain-seed/`
- Remove unused deps from go-ai's go.mod (go-sdk, websocket, etc.)
- go-ai keeps: `ai/`, `cmd/{daemon,embed-bench,lab,metrics,rag,security}`
### Task 7: Register on Packagist, verify
- Submit `core/mcp` to Packagist as `lthn/mcp`
- `go build ./cmd/core-mcp` — verify binary builds
- `go test ./...` — verify tests pass
- Archive `core/php-mcp` on forge
---
**Repos affected:** core/mcp (new), go-ai (shrinks), core/agent (import update), core/cli (import update), php-mcp (archived), app composer.json


@@ -1,149 +0,0 @@
# Daemon Process Management Extraction — Design
## Goal
Move daemon runtime primitives from `core/cli` and daemon CLI commands from `go-ai` into their natural homes: runtime types into `go-process`, generic CLI commands into `core/cli` as a reusable command builder.
## Problem
Daemon lifecycle management is scattered across three repos:
| Repo | What it has | Problem |
|------|-------------|---------|
| `cli/pkg/cli/daemon.go` | PIDFile, HealthServer, Daemon, Mode detection | Process management primitives don't belong in a CLI/TUI library |
| `go-ai/cmd/daemon/cmd.go` | start/stop/status/run CLI + MCP foreground | Generic daemon commands hardcoded to one consumer |
| `go-process` | Process spawning, signals, output, DAG runner | Missing self-as-daemon management |
## Design
### go-process gains daemon runtime types
New files in `go-process` root package:
- **`daemon.go`** — `Daemon`, `DaemonOptions`, `Mode`, `DetectMode()`
- **`pidfile.go`** — `PIDFile` (acquire, release, read, stale detection)
- **`health.go`** — `HealthServer`, `HealthCheck` type
These are standalone types alongside `Process` and `Runner`. A `Daemon` manages *this process* as a long-running service. A `Process` manages *child processes*. Same domain, complementary concerns.
**Types extracted from `cli/pkg/cli/daemon.go`:**
```go
// Mode represents how the process was launched.
type Mode int

const (
	ModeInteractive Mode = iota // TTY attached
	ModePipe                    // stdin/stdout piped
	ModeDaemon                  // Detached background
)

// PIDFile manages a PID lock file for daemon processes.
type PIDFile struct { ... }

func NewPIDFile(path string) *PIDFile
func (p *PIDFile) Acquire() error
func (p *PIDFile) Release() error
func (p *PIDFile) Read() (int, bool) // Read PID + check if running

// HealthServer provides HTTP /health and /ready endpoints.
type HealthServer struct { ... }

func NewHealthServer(addr string) *HealthServer
func (h *HealthServer) AddCheck(check HealthCheck)
func (h *HealthServer) SetReady(ready bool)
func (h *HealthServer) Start() error
func (h *HealthServer) Stop(ctx context.Context) error

// Daemon orchestrates PIDFile + HealthServer + signal handling.
type Daemon struct { ... }

type DaemonOptions struct {
	PIDFile         string
	ShutdownTimeout time.Duration
	HealthAddr      string
	HealthChecks    []HealthCheck
	OnReload        func()
}

func NewDaemon(opts DaemonOptions) *Daemon
func (d *Daemon) Start() error
func (d *Daemon) Run(ctx context.Context) error
func (d *Daemon) Stop() error
func (d *Daemon) SetReady(ready bool)
```
**New helper from `go-ai/cmd/daemon/cmd.go`:**
```go
// WaitForHealth polls a health endpoint until it responds OK or timeout.
func WaitForHealth(addr string, timeout time.Duration) bool
```
### core/cli gains generic daemon CLI commands
New file `cmd/daemon/cmd.go` in `core/cli`:
```go
// DaemonCommandConfig configures the generic daemon CLI commands.
type DaemonCommandConfig struct {
	RunForeground func(ctx context.Context) error // Business logic callback
	PIDFile       string                          // Default PID file path
	HealthAddr    string                          // Default health address
}
// AddDaemonCommand registers start/stop/status/run subcommands.
func AddDaemonCommand(root *cli.Command, cfg DaemonCommandConfig)
```
Subcommands:
- **`start`** — Re-exec binary as detached process, wait for health
- **`stop`** — Read PID file, send SIGTERM, wait for exit
- **`status`** — Check PID file + health endpoint, display status
- **`run`** — Run foreground (calls `cfg.RunForeground`), manages Daemon lifecycle
All commands import `go-process` for PIDFile, Daemon, WaitForHealth.
### go-ai shrinks
`go-ai/cmd/daemon/` deleted entirely. Registration becomes:
```go
import daemon "forge.lthn.ai/core/cli/cmd/daemon"

daemon.AddDaemonCommand(root, daemon.DaemonCommandConfig{
	RunForeground: func(ctx context.Context) error {
		svc := mcp.New(mcp.WithSubsystem(...))
		return startMCP(ctx, svc, cfg)
	},
	PIDFile:    cfg.PIDFile,
	HealthAddr: cfg.HealthAddr,
})
```
`startMCP()` stays in go-ai — it's MCP-specific business logic.
### Deletion
- `cli/pkg/cli/daemon.go` — deleted (types move to go-process)
- `go-ai/cmd/daemon/cmd.go` — deleted (commands move to cli, MCP wiring stays in go-ai)
### What doesn't change
- go-process existing API (Process, Runner, RunSpec, Service, exec/) — untouched
- go-process core/go dependency — already present
- Any other consumer of cli/daemon.go gets updated to import from go-process
## Dependencies
```
go-process (gains daemon types, no new deps)
└── core/go (existing)
core/cli (gains daemon commands)
└── go-process (new — for PIDFile, Daemon, WaitForHealth)
go-ai (shrinks)
└── core/cli (existing — now uses cli's daemon commands)
```
## Testing
- go-process: Unit tests for PIDFile, HealthServer, Daemon, WaitForHealth, Mode detection (ported from cli + go-ai tests)
- core/cli: Integration tests for daemon commands (mock RunForeground callback)
- go-ai: Verify MCP wiring still works after refactor

File diff suppressed because it is too large


@@ -1,174 +0,0 @@
# Daemon Registry & Project Manifest — Design
## Goal
Unified `core start/stop/list` for managing multiple background daemons, driven by a `.core/manifest.yaml` project identity file. Runtime state tracked in `~/.core/daemons/`. Release snapshots frozen as `core.json` for marketplace indexing.
## Problem
Daemon management is per-binary with no central awareness. Each consumer (go-ai, core-ide, go-html gallery, LEM chat, blockchain node) picks its own PID path and health port. No way to:
- List all running daemons across projects
- Start a project's services from one command
- Let the marketplace know what a project can run
## Design
### Manifest Schema (go-scm/manifest)
`.core/manifest.yaml` is the project identity file. Extends the existing manifest (UI layout, permissions, modules) with a `daemons` section:
```yaml
code: photo-browser
name: Photo Browser
version: 0.1.0
description: Browse and serve local photo collections
daemons:
  serve:
    binary: core-php
    args: [php, serve]
    health: "127.0.0.1:0"
    default: true
  worker:
    binary: core-mlx
    args: [worker, start]
    health: "127.0.0.1:0"
layout: HLCRF
slots:
  C: photo-grid
  L: folder-tree
permissions:
  read: [./photos/]
modules: [core/media, core/fs]
```
Fields per daemon entry:
- `binary` — executable name (auto-detected if omitted)
- `args` — arguments passed to the binary
- `health` — health check address (port 0 = dynamic)
- `default` — marks the daemon that `core start` runs with no args
### Runtime Registry (go-process)
When a daemon starts, a registration file is written to `~/.core/daemons/`. Removed on stop.
File naming: `{code}-{daemon}.json`
```go
type DaemonEntry struct {
	Code    string    `json:"code"`
	Daemon  string    `json:"daemon"`
	PID     int       `json:"pid"`
	Health  string    `json:"health"`
	Project string    `json:"project"`
	Binary  string    `json:"binary"`
	Started time.Time `json:"started"`
}

type Registry struct { ... }

func NewRegistry() *Registry
func (r *Registry) Register(entry DaemonEntry) error
func (r *Registry) Unregister(code, daemon string) error
func (r *Registry) List() ([]DaemonEntry, error)
func (r *Registry) Get(code, daemon string) (*DaemonEntry, bool)
```
`List()` checks each entry's PID — if dead, removes the stale file and skips it.
`Daemon.Start()` gains an optional registry hook — auto-registers on Start, auto-unregisters on Stop.
### CLI Commands (cli)
Top-level commands, not under `core daemon`:
**`core start [daemon-name]`**
1. Find `.core/manifest.yaml` in cwd or parent dirs
2. Parse manifest, find daemon entry (default if no name given)
3. Check registry — already running? say so
4. Exec the declared binary with args, detached
5. Wait for health, register in `~/.core/daemons/`
**`core stop [daemon-name]`**
1. With name: look up in registry by code+name, send SIGTERM
2. Without name: stop all daemons for current project's code
3. Unregister on exit
**`core list`**
1. Scan `~/.core/daemons/`, prune stale, print table
```
CODE            DAEMON    PID    HEALTH           PROJECT
photo-browser   serve     48291  127.0.0.1:54321  /Users/snider/Code/photo-app
core            mcp       51002  127.0.0.1:9101   -
core-ide        headless  51100  127.0.0.1:9878   -
```
**`core restart [daemon-name]`** — stop then start.
### Release Snapshot (go-devops)
`core build release` generates `core.json` at repo root from `.core/manifest.yaml`:
```json
{
  "schema": 1,
  "code": "photo-browser",
  "name": "Photo Browser",
  "version": "0.1.0",
  "commit": "a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2",
  "tag": "v0.1.0",
  "built": "2026-03-09T15:00:00Z",
  "daemons": { ... },
  "layout": "HLCRF",
  "slots": { ... },
  "permissions": { ... },
  "modules": [ ... ]
}
```
Differences from YAML manifest:
- `schema` version for forward compatibility
- `commit` — full SHA, immutable
- `tag` — release tag
- `built` — timestamp
Marketplace indexes `core.json` from tagged releases on forge. Self-describing package listings, no manual catalogue updates.
## Where Code Lives
| Component | Repo | Why |
|-----------|------|-----|
| `DaemonEntry`, `Registry` | go-process | Runtime state, alongside PIDFile/Daemon |
| Manifest schema (+ `daemons`) | go-scm/manifest | Already owns the manifest type |
| `core start/stop/list` | cli | Top-level CLI commands |
| `core.json` generation | go-devops | Part of release pipeline |
| Marketplace indexing | go-scm/marketplace | Already owns the catalogue |
## Dependencies
```
go-process (gains Registry, no new deps)
cli (gains start/stop/list commands)
├── go-process (existing — Registry, ReadPID)
└── go-scm/manifest (new — parse .core/manifest.yaml)
go-devops (gains core.json generation)
└── go-scm/manifest (new — read manifest for snapshot)
```
## What Doesn't Change
- Existing `core daemon start/stop/status` in go-ai stays as MCP-specific consumer
- Existing `Daemon`, `PIDFile`, `HealthServer` in go-process — untouched
- Existing `DaemonCommandConfig` / `AddDaemonCommand` in cli — untouched
- go-scm plugin system (plugin.json, registry.json) — separate concern
## Testing
- go-process: Unit tests for Registry (register, unregister, list, stale pruning)
- go-scm: Unit tests for extended manifest parsing (daemons section)
- cli: Integration tests for start/stop/list with mock manifest
- go-devops: Unit test for core.json generation from manifest

File diff suppressed because it is too large


@@ -1,144 +0,0 @@
# Go-Blockchain Modernisation Design
## Goal
Modernise `forge.lthn.ai/core/go-blockchain` from a standalone binary with stdlib `flag` and bare goroutines into a proper `core-chain` binary using `cli.Main()`, DI services, and go-process daemon lifecycle.
## Architecture
The refactor migrates go-blockchain to the standard Core CLI patterns without touching internal blockchain logic (chain/, consensus/, crypto/, wire/, types/, etc.). The P2P sync loop and wallet scanner become `core.Service` implementations managed by the DI container, with the sync service optionally running as a go-process daemon in headless mode.
## Current State
- **Entry point**: `cmd/chain/main.go` — stdlib `flag`, direct `store.New()`, bare `go syncLoop()`, inline `frame.Run()`
- **Stale go.mod**: Replace directives point to `/home/claude/Code/core/*` (different machine)
- **No build config**: No `.core/build.yaml`, not in `go.work`
- **Dependencies**: core/cli, core/go-p2p, core/go-store, bubbletea, testify, x/crypto
## Design
### 1. Entry Point: `cli.Main()` Migration
New `cmd/core-chain/main.go`:
```go
package main

import (
	"forge.lthn.ai/core/cli/pkg/cli"

	blockchain "forge.lthn.ai/core/go-blockchain"
)

func main() {
	cli.Main(
		cli.WithCommands("chain", blockchain.AddChainCommands),
	)
}
```
`AddChainCommands()` registers subcommands on the `chain` parent cobra command:
| Subcommand | Description |
|------------|-------------|
| `chain explorer` | TUI block explorer (current default mode) |
| `chain sync` | Headless P2P sync (daemon-capable) |
| `chain mine` | Mining (existing mining/ package) |
Persistent flags on `chain` parent: `--data-dir`, `--seed`, `--testnet`.
Old `cmd/chain/` directory is removed after migration.
### 2. SyncService as `core.Service`
Wraps the current `syncLoop`/`syncOnce` logic:
```go
type SyncService struct {
	*core.ServiceRuntime[SyncServiceOptions]
}

type SyncServiceOptions struct {
	DataDir string
	Seed    string
	Testnet bool
	Chain   *chain.Chain
}

func (s *SyncService) OnStartup(ctx context.Context) error {
	go s.syncLoop(ctx)
	return nil
}

func (s *SyncService) OnShutdown() error { return nil }
```
In headless mode (`core-chain chain sync`), runs as a **go-process Daemon**:
- PID file at `~/.core/daemons/core-chain-sync.pid`
- Auto-registered in daemon registry
- `core-chain chain sync --stop` to halt
In TUI mode (`core-chain chain explorer`), SyncService starts as a background service within the same process — no daemon wrapper needed.
### 3. WalletService as `core.Service`
Same pattern as SyncService. Wraps existing `wallet/` scanner logic. Only instantiated when wallet-related subcommands are invoked.
### 4. DI Container Wiring
```go
func AddChainCommands(parent *cobra.Command) {
	// Parse persistent flags, create store, chain
	// Wire services into core.New():
	c, _ := core.New(
		core.WithService(NewSyncService),
		core.WithService(NewWalletService),
	)
}
```
The `chain.Chain` instance and `store.Store` are created once and injected into services via the DI container — no globals.
### 5. Wire Protocol Extraction (Deferred)
The `wire/` package is generic Levin binary serialization, reusable by go-p2p and potentially other modules. Candidate for extraction to `forge.lthn.ai/core/go-wire` as a separate module.
**Deferred to a follow-up task** — initial refactor keeps wire/ in-tree.
### 6. Build & Workspace Integration
- **`.core/build.yaml`**: `binary: core-chain`, targets: `darwin/arm64`, `linux/amd64`
- **go.mod**: Fix replace directives from `/home/claude/Code/core/*` to `/Users/snider/Code/core/*`
- **go.work**: Add `./go-blockchain` entry
- **Build**: `core build` or `go build -o ./bin/core-chain ./cmd/core-chain`
### 7. Unchanged Packages
These packages contain blockchain domain logic and are not touched:
- `chain/` — block storage, sync algorithm, validation
- `consensus/` — proof-of-work consensus rules
- `crypto/` — cryptographic primitives
- `difficulty/` — difficulty adjustment
- `mining/` — block mining
- `types/` — block, transaction, header types
- `wire/` — binary serialization (extraction deferred)
- `p2p/` — Levin protocol encoding (called from SyncService)
- `tui/` — TUI models (wired into `cli.NewFrame("HCF")` from explorer subcommand)
- `config/` — network configs, hard forks, client version
- `wallet/` — wallet scanning (wrapped by WalletService)
- `rpc/` — RPC client types
## Binary
Standalone `core-chain` binary (not integrated into main `core` binary). Rationale: go-blockchain pulls in x/crypto and potentially CGo-dependent dependencies that shouldn't bloat the core CLI.
## Key References
| File | Role |
|------|------|
| `cmd/chain/main.go` | Current entry point (to be replaced) |
| `chain/sync.go` | Sync logic (to be wrapped by SyncService) |
| `wallet/scanner.go` | Wallet scanner (to be wrapped by WalletService) |
| `tui/` | TUI models (rewired to explorer subcommand) |
| `core/cli/pkg/cli/main.go` | `cli.Main()` pattern |
| `core/go-process/daemon.go` | Daemon lifecycle |


@@ -1,873 +0,0 @@
# Go-Blockchain Modernisation Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Modernise go-blockchain from a standalone flag-based binary into a proper `core-chain` binary using `cli.Main()`, DI services, and go-process daemon lifecycle.
**Architecture:** The refactor wraps existing blockchain logic (chain/, consensus/, crypto/, wire/, etc.) in Core framework patterns without modifying domain code. The P2P sync loop becomes a `core.Service` with optional daemon mode via go-process. The CLI entry point migrates from stdlib `flag` to `cli.Main()` + `cli.WithCommands()`. A new `AddChainCommands()` registration function provides `explorer`, `sync`, and `mine` subcommands.
**Tech Stack:** Go 1.26, `forge.lthn.ai/core/cli` (cobra + bubbletea), `forge.lthn.ai/core/go` (DI container), `forge.lthn.ai/core/go-process` (daemon lifecycle), `forge.lthn.ai/core/go-store` (SQLite), `forge.lthn.ai/core/go-p2p` (Levin protocol)
---
### Task 1: Fix go.mod and add to go.work
**Context:** The go.mod has stale replace directives pointing to `/home/claude/Code/core/*` (a different machine). These need to point to `/Users/snider/Code/core/*` for local workspace resolution. The module also needs to be added to go.work.
**Files:**
- Modify: `/Users/snider/Code/core/go-blockchain/go.mod:59-67`
- Modify: `/Users/snider/Code/go.work`
**Step 1: Fix replace directives in go.mod**
Open `/Users/snider/Code/core/go-blockchain/go.mod` and replace all `/home/claude/Code/core/` paths with `/Users/snider/Code/core/`:
```
replace forge.lthn.ai/core/cli => /Users/snider/Code/core/cli
replace forge.lthn.ai/core/go => /Users/snider/Code/core/go
replace forge.lthn.ai/core/go-crypt => /Users/snider/Code/core/go-crypt
replace forge.lthn.ai/core/go-p2p => /Users/snider/Code/core/go-p2p
replace forge.lthn.ai/core/go-store => /Users/snider/Code/core/go-store
```
**Step 2: Add go-blockchain to go.work**
```bash
cd /Users/snider/Code && go work use ./core/go-blockchain
```
**Step 3: Verify the module resolves**
```bash
cd /Users/snider/Code/core/go-blockchain && GOWORK=/Users/snider/Code/go.work go build ./...
```
Expected: Build succeeds (existing code compiles).
**Step 4: Commit**
```bash
cd /Users/snider/Code/core/go-blockchain
git add go.mod
git commit -m "fix: update go.mod replace directives for local workspace"
```
Also commit go.work change:
```bash
cd /Users/snider/Code
git add go.work
git commit -m "chore: add go-blockchain to workspace"
```
---
### Task 2: Create AddChainCommands registration function
**Context:** This is the core of the migration. Instead of `main()` creating everything directly, we create an `AddChainCommands(root *cobra.Command)` function that registers a `chain` parent command with persistent flags and subcommands. The `chain` parent command holds shared state (data dir, seed, testnet flag, chain config).
**Files:**
- Create: `/Users/snider/Code/core/go-blockchain/commands.go`
- Test: `/Users/snider/Code/core/go-blockchain/commands_test.go`
**Step 1: Write the test**
Create `/Users/snider/Code/core/go-blockchain/commands_test.go`:
```go
package blockchain

import (
	"testing"

	"github.com/spf13/cobra"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestAddChainCommands_Good_RegistersParent(t *testing.T) {
	root := &cobra.Command{Use: "test"}
	AddChainCommands(root)

	// Should have a "chain" subcommand
	chainCmd, _, err := root.Find([]string{"chain"})
	require.NoError(t, err)
	assert.Equal(t, "chain", chainCmd.Name())
}

func TestAddChainCommands_Good_HasSubcommands(t *testing.T) {
	root := &cobra.Command{Use: "test"}
	AddChainCommands(root)
	chainCmd, _, _ := root.Find([]string{"chain"})

	// Should have explorer and sync subcommands
	var names []string
	for _, sub := range chainCmd.Commands() {
		names = append(names, sub.Name())
	}
	assert.Contains(t, names, "explorer")
	assert.Contains(t, names, "sync")
}

func TestAddChainCommands_Good_PersistentFlags(t *testing.T) {
	root := &cobra.Command{Use: "test"}
	AddChainCommands(root)
	chainCmd, _, _ := root.Find([]string{"chain"})

	// Should have persistent flags
	assert.NotNil(t, chainCmd.PersistentFlags().Lookup("data-dir"))
	assert.NotNil(t, chainCmd.PersistentFlags().Lookup("seed"))
	assert.NotNil(t, chainCmd.PersistentFlags().Lookup("testnet"))
}
```
**Step 2: Run test to verify it fails**
```bash
cd /Users/snider/Code/core/go-blockchain && GOWORK=/Users/snider/Code/go.work go test -run TestAddChainCommands -v .
```
Expected: FAIL — `AddChainCommands` not defined.
**Step 3: Write the implementation**
Create `/Users/snider/Code/core/go-blockchain/commands.go`:
```go
// Copyright (c) 2017-2026 Lethean (https://lt.hn)
//
// Licensed under the European Union Public Licence (EUPL) version 1.2.
// SPDX-License-Identifier: EUPL-1.2

package blockchain

import (
	"fmt"
	"os"
	"path/filepath"

	"forge.lthn.ai/core/go-blockchain/config"
	"github.com/spf13/cobra"
)

// AddChainCommands registers the "chain" command group with explorer
// and sync subcommands.
func AddChainCommands(root *cobra.Command) {
	var (
		dataDir string
		seed    string
		testnet bool
	)

	chainCmd := &cobra.Command{
		Use:   "chain",
		Short: "Lethean blockchain node",
		Long:  "Manage the Lethean blockchain — sync, explore, and mine.",
	}

	chainCmd.PersistentFlags().StringVar(&dataDir, "data-dir", defaultDataDir(), "blockchain data directory")
	chainCmd.PersistentFlags().StringVar(&seed, "seed", "seeds.lthn.io:36942", "seed peer address (host:port)")
	chainCmd.PersistentFlags().BoolVar(&testnet, "testnet", false, "use testnet")

	chainCmd.AddCommand(
		newExplorerCmd(&dataDir, &seed, &testnet),
		newSyncCmd(&dataDir, &seed, &testnet),
	)
	root.AddCommand(chainCmd)
}

// resolveConfig returns the chain config and forks for the current network.
func resolveConfig(testnet bool, seed *string) (config.ChainConfig, []config.HardFork) {
	if testnet {
		if *seed == "seeds.lthn.io:36942" {
			*seed = "localhost:46942"
		}
		return config.Testnet, config.TestnetForks
	}
	return config.Mainnet, config.MainnetForks
}

func defaultDataDir() string {
	home, err := os.UserHomeDir()
	if err != nil {
		return ".lethean"
	}
	return filepath.Join(home, ".lethean", "chain")
}

// ensureDataDir creates the data directory if it doesn't exist.
func ensureDataDir(dataDir string) error {
	if err := os.MkdirAll(dataDir, 0o755); err != nil {
		return fmt.Errorf("create data dir: %w", err)
	}
	return nil
}
```
**Step 4: Run tests to verify they pass**
```bash
cd /Users/snider/Code/core/go-blockchain && GOWORK=/Users/snider/Code/go.work go test -run TestAddChainCommands -v .
```
Expected: PASS (3 tests).
**Step 5: Commit**
```bash
cd /Users/snider/Code/core/go-blockchain
git add commands.go commands_test.go
git commit -m "feat: add AddChainCommands registration function"
```
---
### Task 3: Create explorer subcommand
**Context:** The explorer subcommand is the TUI block explorer — the current default mode of the binary. It creates a store, chain, node, TUI models, and runs `cli.NewFrame("HCF")`. This replaces the bulk of the current `main()`.
**Files:**
- Create: `/Users/snider/Code/core/go-blockchain/cmd_explorer.go`
**Step 1: Write the implementation**
Create `/Users/snider/Code/core/go-blockchain/cmd_explorer.go`:
```go
// Copyright (c) 2017-2026 Lethean (https://lt.hn)
//
// Licensed under the European Union Public Licence (EUPL) version 1.2.
// SPDX-License-Identifier: EUPL-1.2

package blockchain

import (
	"context"
	"fmt"
	"os"
	"os/signal"
	"path/filepath"

	cli "forge.lthn.ai/core/cli/pkg/cli"
	"forge.lthn.ai/core/go-blockchain/chain"
	"forge.lthn.ai/core/go-blockchain/tui"
	store "forge.lthn.ai/core/go-store"
	"github.com/spf13/cobra"
)

func newExplorerCmd(dataDir, seed *string, testnet *bool) *cobra.Command {
	return &cobra.Command{
		Use:   "explorer",
		Short: "TUI block explorer",
		Long:  "Interactive terminal block explorer with live sync status.",
		RunE: func(cmd *cobra.Command, args []string) error {
			return runExplorer(*dataDir, *seed, *testnet)
		},
	}
}

func runExplorer(dataDir, seed string, testnet bool) error {
	if err := ensureDataDir(dataDir); err != nil {
		return err
	}

	dbPath := filepath.Join(dataDir, "chain.db")
	s, err := store.New(dbPath)
	if err != nil {
		// Return the error so cobra reports it; log.Fatalf here would
		// bypass RunE error handling and skip deferred cleanup.
		return fmt.Errorf("open store: %w", err)
	}
	defer s.Close()

	c := chain.New(s)
	cfg, forks := resolveConfig(testnet, &seed)

	ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt)
	defer cancel()

	// Start P2P sync in background.
	go syncLoop(ctx, c, &cfg, forks, seed)

	node := tui.NewNode(c)
	status := tui.NewStatusModel(node)
	explorer := tui.NewExplorerModel(c)
	hints := tui.NewKeyHintsModel()

	frame := cli.NewFrame("HCF")
	frame.Header(status)
	frame.Content(explorer)
	frame.Footer(hints)
	frame.Run()
	return nil
}
```
**Step 2: Verify it compiles**
```bash
cd /Users/snider/Code/core/go-blockchain && GOWORK=/Users/snider/Code/go.work go build ./...
```
Expected: Build succeeds.
**Step 3: Commit**
```bash
cd /Users/snider/Code/core/go-blockchain
git add cmd_explorer.go
git commit -m "feat: add explorer subcommand (TUI block explorer)"
```
---
### Task 4: Create sync subcommand with daemon support
**Context:** The sync subcommand runs the P2P sync loop headless (no TUI). When `--daemon` is passed, it runs as a go-process Daemon with PID file and registry entry. `--stop` sends a signal to stop a running daemon.
**Files:**
- Create: `/Users/snider/Code/core/go-blockchain/cmd_sync.go`
- Create: `/Users/snider/Code/core/go-blockchain/sync_service.go`
**Step 1: Add go-process dependency**
```bash
cd /Users/snider/Code/core/go-blockchain && GOWORK=/Users/snider/Code/go.work go get forge.lthn.ai/core/go-process
```
Add replace directive to go.mod:
```
replace forge.lthn.ai/core/go-process => /Users/snider/Code/core/go-process
```
**Step 2: Write sync_service.go — the sync loop extracted from main.go**
Create `/Users/snider/Code/core/go-blockchain/sync_service.go`:
```go
// Copyright (c) 2017-2026 Lethean (https://lt.hn)
//
// Licensed under the European Union Public Licence (EUPL) version 1.2.
// SPDX-License-Identifier: EUPL-1.2

package blockchain

import (
	"context"
	"crypto/rand"
	"encoding/binary"
	"fmt"
	"log"
	"net"
	"time"

	"forge.lthn.ai/core/go-blockchain/chain"
	"forge.lthn.ai/core/go-blockchain/config"
	"forge.lthn.ai/core/go-blockchain/p2p"
	levin "forge.lthn.ai/core/go-p2p/node/levin"
)

// syncLoop continuously syncs the chain from the seed peer.
// It retries on error and polls every 30s when synced.
func syncLoop(ctx context.Context, c *chain.Chain, cfg *config.ChainConfig, forks []config.HardFork, seed string) {
	opts := chain.SyncOptions{
		VerifySignatures: false,
		Forks:            forks,
	}
	for {
		select {
		case <-ctx.Done():
			return
		default:
		}

		if err := syncOnce(ctx, c, cfg, opts, seed); err != nil {
			log.Printf("sync: %v (retrying in 10s)", err)
			select {
			case <-ctx.Done():
				return
			case <-time.After(10 * time.Second):
			}
			continue
		}

		// Synced — wait before polling again.
		select {
		case <-ctx.Done():
			return
		case <-time.After(30 * time.Second):
		}
	}
}

func syncOnce(ctx context.Context, c *chain.Chain, cfg *config.ChainConfig, opts chain.SyncOptions, seed string) error {
	conn, err := net.DialTimeout("tcp", seed, 10*time.Second)
	if err != nil {
		return fmt.Errorf("dial %s: %w", seed, err)
	}
	defer conn.Close()

	lc := levin.NewConnection(conn)

	var peerIDBuf [8]byte
	_, _ = rand.Read(peerIDBuf[:]) // crypto/rand.Read does not fail in practice
	peerID := binary.LittleEndian.Uint64(peerIDBuf[:])

	localHeight, _ := c.Height()
	req := p2p.HandshakeRequest{
		NodeData: p2p.NodeData{
			NetworkID: cfg.NetworkID,
			PeerID:    peerID,
			LocalTime: time.Now().Unix(),
			MyPort:    0,
		},
		PayloadData: p2p.CoreSyncData{
			CurrentHeight:  localHeight,
			ClientVersion:  config.ClientVersion,
			NonPruningMode: true,
		},
	}

	payload, err := p2p.EncodeHandshakeRequest(&req)
	if err != nil {
		return fmt.Errorf("encode handshake: %w", err)
	}
	if err := lc.WritePacket(p2p.CommandHandshake, payload, true); err != nil {
		return fmt.Errorf("write handshake: %w", err)
	}

	hdr, data, err := lc.ReadPacket()
	if err != nil {
		return fmt.Errorf("read handshake: %w", err)
	}
	if hdr.Command != uint32(p2p.CommandHandshake) {
		return fmt.Errorf("unexpected command %d", hdr.Command)
	}

	var resp p2p.HandshakeResponse
	if err := resp.Decode(data); err != nil {
		return fmt.Errorf("decode handshake: %w", err)
	}

	localSync := p2p.CoreSyncData{
		CurrentHeight:  localHeight,
		ClientVersion:  config.ClientVersion,
		NonPruningMode: true,
	}
	p2pConn := chain.NewLevinP2PConn(lc, resp.PayloadData.CurrentHeight, localSync)
	return c.P2PSync(ctx, p2pConn, opts)
}
```
**Step 3: Write cmd_sync.go — the sync subcommand**
Create `/Users/snider/Code/core/go-blockchain/cmd_sync.go`:
```go
// Copyright (c) 2017-2026 Lethean (https://lt.hn)
//
// Licensed under the European Union Public Licence (EUPL) version 1.2.
// SPDX-License-Identifier: EUPL-1.2

package blockchain

import (
	"context"
	"fmt"
	"log"
	"os"
	"os/signal"
	"path/filepath"
	"syscall"

	"forge.lthn.ai/core/go-blockchain/chain"
	"forge.lthn.ai/core/go-process"
	store "forge.lthn.ai/core/go-store"
	"github.com/spf13/cobra"
)

func newSyncCmd(dataDir, seed *string, testnet *bool) *cobra.Command {
	var (
		daemon bool
		stop   bool
	)

	cmd := &cobra.Command{
		Use:   "sync",
		Short: "Headless P2P chain sync",
		Long:  "Sync the blockchain from P2P peers without the TUI explorer.",
		RunE: func(cmd *cobra.Command, args []string) error {
			if stop {
				return stopSyncDaemon(*dataDir)
			}
			if daemon {
				return runSyncDaemon(*dataDir, *seed, *testnet)
			}
			return runSyncForeground(*dataDir, *seed, *testnet)
		},
	}
	cmd.Flags().BoolVar(&daemon, "daemon", false, "run as background daemon")
	cmd.Flags().BoolVar(&stop, "stop", false, "stop a running sync daemon")
	return cmd
}

func runSyncForeground(dataDir, seed string, testnet bool) error {
	if err := ensureDataDir(dataDir); err != nil {
		return err
	}

	dbPath := filepath.Join(dataDir, "chain.db")
	s, err := store.New(dbPath)
	if err != nil {
		return fmt.Errorf("open store: %w", err)
	}
	defer s.Close()

	c := chain.New(s)
	cfg, forks := resolveConfig(testnet, &seed)

	ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer cancel()

	log.Println("Starting headless P2P sync...")
	syncLoop(ctx, c, &cfg, forks, seed)
	log.Println("Sync stopped.")
	return nil
}

func runSyncDaemon(dataDir, seed string, testnet bool) error {
	if err := ensureDataDir(dataDir); err != nil {
		return err
	}

	pidFile := filepath.Join(dataDir, "sync.pid")
	d := process.NewDaemon(process.DaemonOptions{
		PIDFile:  pidFile,
		Registry: process.DefaultRegistry(),
		RegistryEntry: process.DaemonEntry{
			Code:   "forge.lthn.ai/core/go-blockchain",
			Daemon: "sync",
		},
	})
	if err := d.Start(); err != nil {
		return fmt.Errorf("daemon start: %w", err)
	}

	dbPath := filepath.Join(dataDir, "chain.db")
	s, err := store.New(dbPath)
	if err != nil {
		_ = d.Stop()
		return fmt.Errorf("open store: %w", err)
	}
	defer s.Close()

	c := chain.New(s)
	cfg, forks := resolveConfig(testnet, &seed)

	ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer cancel()

	d.SetReady(true)
	log.Println("Sync daemon started.")

	// Run sync loop in a goroutine; daemon.Run blocks until signal.
	go syncLoop(ctx, c, &cfg, forks, seed)
	return d.Run(ctx)
}

func stopSyncDaemon(dataDir string) error {
	pidFile := filepath.Join(dataDir, "sync.pid")
	pid, err := process.ReadPID(pidFile)
	if err != nil {
		return fmt.Errorf("no running sync daemon found: %w", err)
	}

	proc, err := os.FindProcess(pid)
	if err != nil {
		return fmt.Errorf("find process %d: %w", pid, err)
	}
	if err := proc.Signal(syscall.SIGTERM); err != nil {
		return fmt.Errorf("signal process %d: %w", pid, err)
	}
	log.Printf("Sent SIGTERM to sync daemon (PID %d)", pid)
	return nil
}
```
**Step 4: Verify it compiles**
```bash
cd /Users/snider/Code/core/go-blockchain && GOWORK=/Users/snider/Code/go.work go build ./...
```
Expected: Build succeeds.
**Step 5: Commit**
```bash
cd /Users/snider/Code/core/go-blockchain
git add sync_service.go cmd_sync.go go.mod go.sum
git commit -m "feat: add sync subcommand with daemon support"
```
---
### Task 5: Create cmd/core-chain/main.go entry point
**Context:** The new standalone binary entry point. Uses `cli.WithAppName("core-chain")` and `cli.Main()` with `WithCommands()`.
**Files:**
- Create: `/Users/snider/Code/core/go-blockchain/cmd/core-chain/main.go`
**Step 1: Create the directory**
```bash
mkdir -p /Users/snider/Code/core/go-blockchain/cmd/core-chain
```
**Step 2: Write the entry point**
Create `/Users/snider/Code/core/go-blockchain/cmd/core-chain/main.go`:
```go
// Copyright (c) 2017-2026 Lethean (https://lt.hn)
//
// Licensed under the European Union Public Licence (EUPL) version 1.2.
// SPDX-License-Identifier: EUPL-1.2

package main

import (
	cli "forge.lthn.ai/core/cli/pkg/cli"
	blockchain "forge.lthn.ai/core/go-blockchain"
)

func main() {
	cli.WithAppName("core-chain")
	cli.Main(
		cli.WithCommands("chain", blockchain.AddChainCommands),
	)
}
```
**Step 3: Build the binary**
```bash
cd /Users/snider/Code/core/go-blockchain && GOWORK=/Users/snider/Code/go.work go build -o ./bin/core-chain ./cmd/core-chain
```
Expected: Produces `bin/core-chain` binary.
**Step 4: Verify help output**
```bash
./bin/core-chain chain --help
```
Expected output should show:
```
Manage the Lethean blockchain — sync, explore, and mine.

Usage:
  core-chain chain [command]

Available Commands:
  explorer    TUI block explorer
  sync        Headless P2P chain sync

Flags:
      --data-dir string   blockchain data directory (default "~/.lethean/chain")
      --seed string       seed peer address (host:port) (default "seeds.lthn.io:36942")
      --testnet           use testnet
```
**Step 5: Commit**
```bash
cd /Users/snider/Code/core/go-blockchain
git add cmd/core-chain/main.go
git commit -m "feat: add core-chain binary entry point with cli.Main()"
```
---
### Task 6: Remove old cmd/chain/main.go
**Context:** The old entry point is now replaced by `cmd/core-chain/main.go` + the package-level `commands.go`, `cmd_explorer.go`, `cmd_sync.go`, and `sync_service.go`. The sync logic was moved to `sync_service.go` (package-level), and `defaultDataDir` was moved to `commands.go`.
**Files:**
- Delete: `/Users/snider/Code/core/go-blockchain/cmd/chain/main.go`
- Delete: `/Users/snider/Code/core/go-blockchain/cmd/chain/` (directory)
**Step 1: Remove old entry point**
```bash
rm -rf /Users/snider/Code/core/go-blockchain/cmd/chain
```
**Step 2: Verify build still works**
```bash
cd /Users/snider/Code/core/go-blockchain && GOWORK=/Users/snider/Code/go.work go build ./...
```
Expected: Build succeeds (old cmd/chain is gone, cmd/core-chain is the only entry point).
**Step 3: Run all tests**
```bash
cd /Users/snider/Code/core/go-blockchain && GOWORK=/Users/snider/Code/go.work go test ./...
```
Expected: All tests pass.
**Step 4: Commit**
```bash
cd /Users/snider/Code/core/go-blockchain
git add -A cmd/chain
git commit -m "refactor: remove old cmd/chain entry point (replaced by cmd/core-chain)"
```
---
### Task 7: Add .core/build.yaml
**Context:** Every Core ecosystem binary needs a `.core/build.yaml` for the `core build` system. This tells the build system the binary name, targets, and ldflags.
**Files:**
- Create: `/Users/snider/Code/core/go-blockchain/.core/build.yaml`
**Step 1: Create the build config**
```bash
mkdir -p /Users/snider/Code/core/go-blockchain/.core
```
Create `/Users/snider/Code/core/go-blockchain/.core/build.yaml`:
```yaml
project: core-chain
binary: core-chain
main: ./cmd/core-chain

targets:
  - os: darwin
    arch: arm64
  - os: linux
    arch: amd64

ldflags:
  - -s -w
  - -X forge.lthn.ai/core/cli/pkg/cli.AppVersion={{.Version}}
  - -X forge.lthn.ai/core/cli/pkg/cli.BuildCommit={{.Commit}}
  - -X forge.lthn.ai/core/cli/pkg/cli.BuildDate={{.Date}}
```
**Step 2: Verify build via core build**
```bash
cd /Users/snider/Code/core/go-blockchain && core build
```
Expected: Produces `core-chain` binary in `./bin/`.
**Step 3: Commit**
```bash
cd /Users/snider/Code/core/go-blockchain
git add .core/build.yaml
git commit -m "chore: add .core/build.yaml for core-chain binary"
```
---
### Task 8: Final verification and push
**Context:** End-to-end verification that everything works: build, tests, binary help output.
**Files:** None (verification only).
**Step 1: Clean build**
```bash
cd /Users/snider/Code/core/go-blockchain && GOWORK=/Users/snider/Code/go.work go build ./...
```
Expected: Clean build, no errors.
**Step 2: Run all tests**
```bash
cd /Users/snider/Code/core/go-blockchain && GOWORK=/Users/snider/Code/go.work go test ./...
```
Expected: All tests pass.
**Step 3: Build binary**
```bash
cd /Users/snider/Code/core/go-blockchain && GOWORK=/Users/snider/Code/go.work go build -o ./bin/core-chain ./cmd/core-chain
```
Expected: Binary built successfully.
**Step 4: Verify CLI help**
```bash
./bin/core-chain --help
./bin/core-chain chain --help
./bin/core-chain chain explorer --help
./bin/core-chain chain sync --help
```
Expected: Clean help output with correct app name and subcommands.
**Step 5: Push to forge**
```bash
cd /Users/snider/Code/core/go-blockchain && git push origin main
```
Expected: Push succeeds to forge.
---
## File Summary
| Action | File | Purpose |
|--------|------|---------|
| Modify | `go.mod` | Fix replace directives |
| Create | `commands.go` | `AddChainCommands()` + shared helpers |
| Create | `commands_test.go` | Tests for command registration |
| Create | `cmd_explorer.go` | TUI block explorer subcommand |
| Create | `sync_service.go` | Extracted sync loop (from old main.go) |
| Create | `cmd_sync.go` | Headless sync subcommand with daemon support |
| Create | `cmd/core-chain/main.go` | Standalone binary entry point |
| Delete | `cmd/chain/main.go` | Old entry point (replaced) |
| Create | `.core/build.yaml` | Build system config |
## Dependency Changes
| Dependency | Status |
|------------|--------|
| `forge.lthn.ai/core/go-process` | **New** — daemon lifecycle, PID file, registry |
| `forge.lthn.ai/core/cli` | Existing — now used for `cli.Main()` + `WithCommands()` |
| `forge.lthn.ai/core/go-store` | Existing — unchanged |
| `forge.lthn.ai/core/go-p2p` | Existing — unchanged |


@ -1,180 +0,0 @@
# go-devops Decomposition Design
**Goal:** Break `core/go-devops` (31K LOC monolith) into logical, independently-versioned packages now that the Core Go framework is stable as a base layer.
**Architecture:** Extract 4 loosely-coupled packages into dedicated repos. Keep the tightly-coupled build/release pipeline as go-devops's core purpose. Redirect imports so consumers pull from the right place.
**Tech Stack:** Go 1.26, existing Core Go ecosystem conventions (.core/build.yaml, cli.Main(), forge SSH).
---
## Current State
go-devops is a 31K LOC catch-all containing:
| Package | LOC | Purpose | Coupling |
|---------|-----|---------|----------|
| `build/` + `build/builders/` + `build/signing/` | 6.8K | Cross-compilation framework, 8 language builders | **Tight** (core purpose) |
| `release/` + `release/publishers/` | 5.9K | Release orchestration, 8 publishers, changelog | **Tight** (core purpose) |
| `sdk/` + `sdk/generators/` | 1.3K | OpenAPI SDK generation, breaking change detection | **Tight** (release feature) |
| `cmd/` (13 packages) | 13.2K | CLI commands | **Tight** (CLI layer) |
| `devkit/` | 1.2K | Code quality checks (coverage, complexity, vuln) | **Loose** (stdlib only) |
| `infra/` | 1.2K | Hetzner Cloud + CloudNS provider APIs | **Loose** (stdlib only) |
| `ansible/` | 3.7K | Pure Go Ansible playbook engine | **Loose** (go-log only) |
| `container/` | 1.3K | LinuxKit VM/hypervisor abstraction | **Loose** (go-io only) |
| `devops/` + `devops/sources/` | 1.7K | LinuxKit dev environment manager | **Medium** (container/) |
| `deploy/` | 0.4K | Coolify + Python deployment wrappers | **Medium** (release) |
## Decomposition Plan
### Phase 1: Extract to New Repos
#### 1.1 `devkit/` → merge into `core/lint`
**Why:** devkit's Finding, CoverageReport, ComplexityResult types align directly with lint's existing Finding type. The native AST complexity analyzer (`complexity.go`) is exactly lint's planned "layer 2" (AST-based detection). Coverage and vulncheck parsing are structured analysis — same domain.
**What moves:**
- `devkit/complexity.go``core/lint/pkg/lint/complexity.go` (AST-based cyclomatic complexity)
- `devkit/coverage.go``core/lint/pkg/lint/coverage.go` (coverage snapshot + regression tracking)
- `devkit/vulncheck.go``core/lint/pkg/lint/vulncheck.go` (govulncheck JSON parsing)
- `devkit/devkit.go` types → align with existing `lint.Finding`, add `TODO`, `SecretLeak` types
- `devkit/devkit.go` tool wrappers (Lint, RaceDetect, etc.) → `core/lint/pkg/lint/tools.go` (subprocess wrappers)
**What stays:** `cmd/qa/` stays in go-devops but changes imports from `devkit` to `core/lint`.
**Type alignment:**
```
devkit.Finding → lint.Finding (already equivalent)
devkit.TODO → new lint catalog rule (detection: regex)
devkit.SecretLeak → new lint catalog rule (detection: regex)
devkit.ComplexFunc → lint.ComplexityResult (new type)
```
**New lint detection types:** `regex` (existing), `ast` (complexity), `tool` (subprocess wrappers)
#### 1.2 `infra/``core/go-infra`
**Why:** Pure stdlib, zero go-devops coupling. Generic infrastructure provider APIs (Hetzner Cloud, CloudNS). Reusable by any service that needs infrastructure management — not just devops.
**What moves:**
- `infra/hetzner.go``core/go-infra/hetzner/`
- `infra/cloudns.go``core/go-infra/cloudns/`
- `infra/api.go``core/go-infra/pkg/api/` (retry + rate-limit HTTP client)
**Dependencies:** stdlib only (net/http, encoding/json, math/rand). No framework deps.
**Consumers:** `cmd/prod/`, `cmd/monitor/` → change imports.
#### 1.3 `ansible/``core/go-ansible`
**Why:** Pure Go Ansible playbook engine (3.7K LOC). Only depends on go-log + golang.org/x/crypto for SSH. Generically useful — any Go service that needs to orchestrate remote servers. Currently used by deploy commands, but could power Lethean node provisioning, homelab automation, CI pipelines.
**What moves:**
- `ansible/ansible.go``core/go-ansible/ansible.go`
- `ansible/ssh.go``core/go-ansible/ssh.go`
- `ansible/playbook.go``core/go-ansible/playbook.go`
- `ansible/vars.go``core/go-ansible/vars.go`
- `ansible/handlers.go``core/go-ansible/handlers.go`
**Dependencies:** go-log, golang.org/x/crypto (SSH).
**Consumers:** `cmd/deploy/` → change imports.
#### 1.4 `container/` + `devops/``core/go-container`
**Why:** LinuxKit VM/container abstraction with QEMU (Linux) and Hyperkit (macOS) support. Strategic for Lethean network — TIM (Terminal Isolation Matrix) uses immutable LinuxKit images for node security. Distroless, read-only filesystems, single-binary containers.
**What moves:**
- `container/``core/go-container/` (VM manager, hypervisor abstraction)
- `devops/``core/go-container/devenv/` (dev environment manager, image sources)
- `devops/sources/``core/go-container/sources/` (CDN, GitHub image fetching)
**Dependencies:** go-io, go-config, Borg (for git-based image sources).
**Consumers:** `cmd/vm/` → change imports. Future: Lethean node runtime.
**Lethean context:** Network nodes run from immutable LinuxKit images to guarantee environment security. Read-only root filesystem, signed images, minimal attack surface. The container package provides the local hypervisor abstraction for running these images on dev machines and validators.
### Phase 2: Reorganise Within go-devops
After extraction, go-devops becomes a focused **build + release + deploy** tool:
```
go-devops/
├── build/ # Project detection + cross-compilation
│ ├── builders/ # 8 language-specific builders
│ └── signing/ # Code signing (GPG, codesign, signtool)
├── release/ # Release orchestration + changelog
│ └── publishers/ # 8 publisher implementations
├── sdk/ # OpenAPI SDK generation
│ └── generators/ # 4 language generators
├── deploy/ # Deployment strategies (Coolify)
└── cmd/ # CLI commands
├── build/ # core build
├── release/ # core release
├── deploy/ # core deploy
├── dev/ # core dev (multi-repo git workflow)
├── setup/ # core setup (GitHub org bootstrap)
├── prod/ # core prod (imports go-infra)
├── qa/ # core qa (imports core/lint)
├── ci/ # core ci
├── docs/ # core docs
├── sdk/ # core sdk
├── vm/ # core vm (imports go-container)
├── monitor/ # core monitor
└── gitcmd/ # core git (aliases)
```
**Reduced dependencies:** go-devops drops stdlib-only packages from its tree. External deps like go-embed-python, kin-openapi, oasdiff stay (they're build/release/SDK specific).
### Phase 3: Update Consumers
| Consumer | Current Import | New Import |
|----------|---------------|------------|
| `cmd/qa/` | `go-devops/devkit` | `core/lint/pkg/lint` |
| `cmd/prod/` | `go-devops/infra` | `core/go-infra` |
| `cmd/deploy/` | `go-devops/ansible` | `core/go-ansible` |
| `cmd/vm/` | `go-devops/container` | `core/go-container` |
| `cmd/vm/` | `go-devops/devops` | `core/go-container/devenv` |
| `core/cli/cmd/gocmd/cmd_qa.go` | `go-devops/cmd/qa` | `core/lint/pkg/lint` |
## Dependency Graph (After)
```
core/lint (standalone, zero deps)
^
|
core/go-infra (standalone, stdlib only)
^
|
core/go-ansible (go-log, x/crypto)
^
|
core/go-container (go-io, go-config, Borg)
^
|
go-devops (build + release + deploy)
imports: core/lint, go-infra, go-ansible, go-container
imports: go-io, go-log, go-scm, go-i18n, cli
imports: kin-openapi, oasdiff, go-embed-python (build/release specific)
```
## Execution Order
1. **devkit → core/lint** (smallest, highest value — already have lint repo)
2. **infra → go-infra** (zero deps, clean extract)
3. **ansible → go-ansible** (go-log only, clean extract)
4. **container + devops → go-container** (slightly more deps, needs Borg)
5. **Update go-devops imports** (remove extracted packages, point to new repos)
6. **Update core/cli imports** (cmd_qa.go uses devkit)
## Vanity Import Server
`cmd/vanity-import/` is a standalone HTTP service for Go vanity imports (dappco.re → forge repos). Extract to its own repo or binary. Not blocking — low priority.
## Risk Mitigation
- **Backward compatibility**: Keep type aliases in go-devops during transition (`type Finding = lint.Finding`)
- **go.work**: All repos in workspace, so local development works immediately
- **Testing**: Each extraction verified by `go test ./...` in both source and destination
- **Incremental**: One extraction at a time, commit + push + tag before next


@ -1,5 +1,7 @@
# Lint Pattern Catalog & Polish Skill Design
> **Partial implementation (14 Mar 2026):** Layer 1 (`core/lint` -- catalog, matcher, scanner, CLI) is fully implemented and documented at `docs/tools/lint/index.md`. Layer 2 (MCP subsystem in `go-ai`) and Layer 3 (Claude Code polish skill in `core/agent`) are NOT implemented. This plan is retained for those remaining layers.
**Goal:** A structured pattern catalog (`core/lint`) that captures recurring code quality findings as regex rules, exposes them via MCP tools in `go-ai`, and orchestrates multi-AI code review via a Claude Code skill in `core/agent`.
**Architecture:** Three layers — a standalone catalog+matcher library (`core/lint`), an MCP subsystem in `go-ai` that exposes lint tools to agents, and a Claude Code plugin in `core/agent` that orchestrates the "polish" workflow (deterministic checks + AI reviewers + feedback loop into the catalog).


@ -1,5 +1,7 @@
# Lint Pattern Catalog Implementation Plan
> **Fully implemented (14 Mar 2026).** All tasks in this plan are complete. The `core/lint` module ships 18 rules across 3 catalogs, with a working CLI and embedded YAML. This plan is retained alongside the design doc, which tracks the remaining MCP and polish skill layers.
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Build `core/lint` — a standalone Go library + CLI that loads YAML pattern catalogs and runs regex-based code checks, seeded with 18 patterns from the March 2026 ecosystem sweep.

View file

@ -1,155 +0,0 @@
# Plug Package Extraction Design
**Goal:** Extract `app/Plug/*` categories from the Laravel app into 8 independent `core/php-plug-*` packages on forge, restoring the original package split that was flattened during the GitHub to Forge migration. Move shared contracts into `core/php` as the framework base.
**Pattern:** Same as CoreGO — `core/php` is the foundation framework (like `core/go`), with domain packages depending on it.
---
## Current State
**Framework (`core/php` `src/Plug/`)** — 7 files:
- `Boot.php` — Registry singleton registration
- `Registry.php` — Auto-discovery, capability checking
- `Response.php` — Standardised operation response
- `Concern/BuildsResponse.php` — Response builder trait
- `Concern/ManagesTokens.php` — OAuth token management trait
- `Concern/UsesHttp.php` — HTTP client helpers trait
- `Enum/Status.php` — OK, UNAUTHORIZED, RATE_LIMITED, etc.
**App (`app/Plug/`)** — 140 files across 10 categories:
| Category | Providers | Files |
|----------|-----------|-------|
| Contract | 8 interfaces (Authenticable, Postable, Deletable, Readable, Commentable, Listable, MediaUploadable, Refreshable) | 8 |
| Social | LinkedIn, Meta, Pinterest, Reddit, TikTok, Twitter, VK, YouTube | ~48 |
| Web3 | Bluesky, Farcaster, Lemmy, Mastodon, Nostr, Threads | ~28 |
| Content | Devto, Hashnode, Medium, Wordpress | ~18 |
| Chat | Discord, Slack, Telegram | ~14 |
| Business | GoogleMyBusiness | ~5 |
| Cdn | Bunny (Purge, Stats, CdnManager) + contracts | 5 |
| Storage | Bunny (Browse, Delete, Download, Upload, VBucket, StorageManager) + contracts | 8 |
| Stock | Unsplash (Search, Photo, Collection, Download, Exception, Jobs) | 6 |
## Namespace Alignment
The app currently uses `Plug\` namespace while core/php uses `Core\Plug\`. This extraction aligns everything under `Core\Plug\*` to match the framework convention.
## Target State
### 1. Contracts move into `core/php`
`src/Plug/Contract/` gains 8 interfaces from the app:
```
core/php/src/Plug/Contract/
├── Authenticable.php
├── Commentable.php
├── Deletable.php
├── Listable.php
├── MediaUploadable.php
├── Postable.php
├── Readable.php
└── Refreshable.php
```
Namespace: `Core\Plug\Contract\` (aligned with framework).
### 2. Eight new packages on forge
Each package has its own repo at `forge.lthn.ai/core/php-plug-{name}`.
```
core/php-plug-social/
├── composer.json # requires core/php
├── CLAUDE.md
└── src/
├── LinkedIn/{Auth,Post,Delete,Media,Pages,Read}.php
├── Meta/{Auth,Post,Delete,Media,Pages,Read}.php
├── Pinterest/{Auth,Post,Delete,Media,Boards,Read}.php
├── Reddit/{Auth,Post,Delete,Media,Read,Subreddits}.php
├── TikTok/{Auth,Post,Read}.php
├── Twitter/{Auth,Post,Delete,Media,Read}.php
├── VK/{Auth,Post,Delete,Media,Groups,Read}.php
└── YouTube/{Auth,Post,Delete,Comment,Read}.php
```
PSR-4 mapping per package:
| Package | Composer Name | Namespace | Autoload |
|---------|--------------|-----------|----------|
| `core/php-plug-social` | `core/php-plug-social` | `Core\Plug\Social\` | `src/` |
| `core/php-plug-web3` | `core/php-plug-web3` | `Core\Plug\Web3\` | `src/` |
| `core/php-plug-content` | `core/php-plug-content` | `Core\Plug\Content\` | `src/` |
| `core/php-plug-chat` | `core/php-plug-chat` | `Core\Plug\Chat\` | `src/` |
| `core/php-plug-business` | `core/php-plug-business` | `Core\Plug\Business\` | `src/` |
| `core/php-plug-cdn` | `core/php-plug-cdn` | `Core\Plug\Cdn\` | `src/` |
| `core/php-plug-storage` | `core/php-plug-storage` | `Core\Plug\Storage\` | `src/` |
| `core/php-plug-stock` | `core/php-plug-stock` | `Core\Plug\Stock\` | `src/` |
Cdn and Storage packages include their own sub-contracts (`Core\Plug\Cdn\Contract\*`, `Core\Plug\Storage\Contract\*`) since those are domain-specific (Purgeable, HasStats, Browseable, Uploadable, etc.) rather than shared Plug contracts.
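As an illustration, a domain-specific contract is just a small interface; the method signature below is an assumption, not the package's actual API:

```php
<?php

// Illustrative only: the real contract lives at
// Core\Plug\Cdn\Contract\Purgeable and may declare different methods.
interface Purgeable
{
    /** Purge one or more URLs from the CDN cache. */
    public function purge(string ...$urls): bool;
}
```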
### 3. Registry update
`Registry::discover()` currently scans `__DIR__` for category subdirectories. After extraction, providers live in composer-installed paths, so filesystem scanning no longer finds them.
**Chosen:** Each package registers its providers via a service provider that calls `Registry::register()`. The Registry gains a `register(string $identifier, string $category, string $name, string $namespace)` method. No filesystem scanning needed.
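A sketch of the self-registration flow, using the four-argument `register()` signature from Task 1 of the implementation plan; the identifier and metadata shapes here are illustrative:

```php
<?php

// Minimal sketch of the Registry with programmatic registration.
// Identifier and metadata shapes are illustrative.
final class Registry
{
    /** @var array<string, array<string, ?string>> */
    private array $providers = [];

    public function register(string $identifier, string $category, string $name, string $namespace): void
    {
        $this->providers[$identifier] = [
            'category'  => $category,
            'name'      => $name,
            'namespace' => $namespace,
            'path'      => null,
        ];
    }

    /** @return string[] */
    public function identifiers(): array
    {
        return array_keys($this->providers);
    }
}

// What a package service provider's boot() would call:
$registry = new Registry();
$registry->register('social.twitter', 'Social', 'Twitter', 'Core\\Plug\\Social\\Twitter');
```

Each plug package ships one such service provider, so installing the package is all that is needed for its providers to appear in the registry.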
### 4. App cleanup
`app/Plug/` is deleted entirely:
- `Boot.php` — redundant (core/php has one)
- `Registry.php` — redundant (core/php has one)
- `Response.php` — redundant (core/php has one)
- `Contract/` — moved to core/php
- All category dirs — moved to packages
The app's `composer.json` gains the 8 new packages as dependencies.
## Dependency Graph
```
core/php (framework)
├── src/Plug/Contract/* ← shared interfaces
├── src/Plug/Registry.php ← provider registry
├── src/Plug/Response.php ← standardised response
├── src/Plug/Concern/* ← shared traits
└── src/Plug/Enum/Status.php ← status enum
core/php-plug-social ─┐
core/php-plug-web3 ─┤
core/php-plug-content ─┤
core/php-plug-chat ─┤ all depend on core/php
core/php-plug-business ─┤
core/php-plug-cdn ─┤
core/php-plug-storage ─┤
core/php-plug-stock ─┘
```
## Namespace Mapping
All provider code is renamed from `Plug\*` to `Core\Plug\*`:
```php
// Before (in app): Plug\Social\Twitter\Post
// After (in package): Core\Plug\Social\Twitter\Post
// Before: use Plug\Contract\Postable;
// After: use Core\Plug\Contract\Postable;
// Before: use Plug\Concern\UsesHttp;
// After: use Core\Plug\Concern\UsesHttp;
```
The app's `"Plug\\" => "app/Plug/"` autoload entry is removed entirely.
## Composer Repository Config
Each package uses forge's Composer repository:
```json
{
"type": "vcs",
"url": "ssh://git@forge.lthn.ai:2223/core/php-plug-social.git"
}
```

View file

@ -1,379 +0,0 @@
# Plug Package Extraction Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Extract `app/Plug/*` from the Laravel app into 8 independent `core/php-plug-*` repos on forge, with contracts moving to `core/php` and all namespaces aligned to `Core\Plug\*`.
**Architecture:** The app's `Plug\` namespace was a flattened copy of what should be `Core\Plug\*` in the framework. Contracts (shared interfaces) go into `core/php`. Each category (Social, Web3, etc.) becomes its own composer package. The app's `app/Plug/` directory is deleted entirely.
**Tech Stack:** PHP 8.2+, Composer, Laravel, PSR-4 autoloading, Forge SSH repos
---
### Task 1: Move contracts into `core/php`
**Context:** The 8 shared interfaces that all providers implement currently live in `app/Plug/Contract/` with `Plug\Contract\` namespace. They belong in the framework at `core/php/src/Plug/Contract/` with `Core\Plug\Contract\` namespace.
**Files:**
- Create: `/Users/snider/Code/core/php/src/Plug/Contract/Authenticable.php`
- Create: `/Users/snider/Code/core/php/src/Plug/Contract/Commentable.php`
- Create: `/Users/snider/Code/core/php/src/Plug/Contract/Deletable.php`
- Create: `/Users/snider/Code/core/php/src/Plug/Contract/Listable.php`
- Create: `/Users/snider/Code/core/php/src/Plug/Contract/MediaUploadable.php`
- Create: `/Users/snider/Code/core/php/src/Plug/Contract/Postable.php`
- Create: `/Users/snider/Code/core/php/src/Plug/Contract/Readable.php`
- Create: `/Users/snider/Code/core/php/src/Plug/Contract/Refreshable.php`
- Modify: `/Users/snider/Code/core/php/src/Plug/Registry.php` — add `register()` method
**Step 1:** Copy contracts from app to core/php
```bash
mkdir -p /Users/snider/Code/core/php/src/Plug/Contract
cp /Users/snider/Code/lab/host.uk.com/app/Plug/Contract/*.php /Users/snider/Code/core/php/src/Plug/Contract/
```
**Step 2:** Update namespace in all 8 contracts from `Plug\Contract` → `Core\Plug\Contract` and `use Plug\Response` → `use Core\Plug\Response`
```bash
cd /Users/snider/Code/core/php/src/Plug/Contract
sed -i '' 's/namespace Plug\\Contract;/namespace Core\\Plug\\Contract;/' *.php
sed -i '' 's/use Plug\\Response;/use Core\\Plug\\Response;/' *.php
```
**Step 3:** Add `register()` method to Registry so packages can self-register instead of relying on filesystem scanning
In `/Users/snider/Code/core/php/src/Plug/Registry.php`, add after the `discover()` method:
```php
/**
* Register a provider programmatically.
*
* Used by plug packages to self-register without filesystem scanning.
*/
public function register(string $identifier, string $category, string $name, string $namespace): void
{
$this->providers[$identifier] = [
'category' => $category,
'name' => $name,
'namespace' => $namespace,
'path' => null,
];
}
```
**Step 4:** Verify
```bash
cd /Users/snider/Code/core/php
grep -r "namespace Core\\Plug\\Contract" src/Plug/Contract/
# Should show 8 files with Core\Plug\Contract namespace
```
**Step 5:** Commit
```bash
cd /Users/snider/Code/core/php
git add src/Plug/Contract/ src/Plug/Registry.php
git commit -m "feat(plug): add shared contracts and Registry::register()"
```
---
### Task 2: Create `core/php-plug-social`
**Context:** The largest package — 8 social media providers (LinkedIn, Meta, Pinterest, Reddit, TikTok, Twitter, VK, YouTube), ~48 files. This is the template for all other packages.
**Files:**
- Source: `/Users/snider/Code/lab/host.uk.com/app/Plug/Social/` (all files)
- Create: `/Users/snider/Code/core/php-plug-social/` (new repo)
**Step 1:** Create repo on forge
```bash
# Create repo via Forgejo API
curl -X POST "https://forge.lthn.ai/api/v1/orgs/core/repos" \
-H "Authorization: token $(cat ~/.config/forge/token)" \
-H "Content-Type: application/json" \
-d '{"name":"php-plug-social","description":"Social media provider integrations (Twitter, Meta, LinkedIn, etc.)","private":true,"auto_init":true}'
```
**Step 2:** Clone and set up directory structure
```bash
cd /Users/snider/Code/core
git clone ssh://git@forge.lthn.ai:2223/core/php-plug-social.git
cd php-plug-social
mkdir -p src
```
**Step 3:** Create `composer.json`
```json
{
"name": "core/php-plug-social",
"description": "Social media provider integrations for the Plug framework",
"type": "library",
"license": "EUPL-1.2",
"require": {
"php": "^8.2",
"core/php": "^1.0"
},
"autoload": {
"psr-4": {
"Core\\Plug\\Social\\": "src/"
}
},
"minimum-stability": "dev",
"prefer-stable": true,
"repositories": [
{
"type": "vcs",
"url": "ssh://git@forge.lthn.ai:2223/core/php.git"
}
]
}
```
**Step 4:** Copy provider files
```bash
cp -r /Users/snider/Code/lab/host.uk.com/app/Plug/Social/* src/
```
**Step 5:** Update namespaces in all files — two changes per file:
1. `namespace Plug\Social\{Provider}` → `namespace Core\Plug\Social\{Provider}`
2. `use Plug\{...}` → `use Core\Plug\{...}`
```bash
cd /Users/snider/Code/core/php-plug-social
# Update namespace declarations
find src -name "*.php" -exec sed -i '' 's/namespace Plug\\Social\\/namespace Core\\Plug\\Social\\/' {} \;
# Update use statements for framework classes
find src -name "*.php" -exec sed -i '' 's/use Plug\\Concern\\/use Core\\Plug\\Concern\\/' {} \;
find src -name "*.php" -exec sed -i '' 's/use Plug\\Contract\\/use Core\\Plug\\Contract\\/' {} \;
find src -name "*.php" -exec sed -i '' 's/use Plug\\Response;/use Core\\Plug\\Response;/' {} \;
find src -name "*.php" -exec sed -i '' 's/use Plug\\Enum\\/use Core\\Plug\\Enum\\/' {} \;
# Update internal cross-references (e.g., new Media in Twitter\Post)
find src -name "*.php" -exec sed -i '' 's/use Plug\\Social\\/use Core\\Plug\\Social\\/' {} \;
```
**Step 6:** Verify namespaces are correct
```bash
grep -r "namespace " src/ | head -20
# Every line should show Core\Plug\Social\{Provider}
grep -r "use Plug\\\\" src/
# Should return nothing — all should be Core\Plug\*
```
**Step 7:** Commit and push
```bash
cd /Users/snider/Code/core/php-plug-social
git add .
git commit -m "feat: extract social providers from app/Plug/Social"
git push origin main
```
---
### Task 3: Create `core/php-plug-web3`
**Same pattern as Task 2.** 6 providers (Bluesky, Farcaster, Lemmy, Mastodon, Nostr, Threads), ~28 files.
**Step 1:** Create repo on forge (name: `php-plug-web3`, description: "Decentralised/Web3 provider integrations")
**Step 2:** Clone, create `composer.json` (same template, change name/description/namespace to `Core\\Plug\\Web3\\`)
**Step 3:** Copy files
```bash
cp -r /Users/snider/Code/lab/host.uk.com/app/Plug/Web3/* src/
```
**Step 4:** Update namespaces
```bash
find src -name "*.php" -exec sed -i '' 's/namespace Plug\\Web3\\/namespace Core\\Plug\\Web3\\/' {} \;
find src -name "*.php" -exec sed -i '' 's/use Plug\\Concern\\/use Core\\Plug\\Concern\\/' {} \;
find src -name "*.php" -exec sed -i '' 's/use Plug\\Contract\\/use Core\\Plug\\Contract\\/' {} \;
find src -name "*.php" -exec sed -i '' 's/use Plug\\Response;/use Core\\Plug\\Response;/' {} \;
find src -name "*.php" -exec sed -i '' 's/use Plug\\Enum\\/use Core\\Plug\\Enum\\/' {} \;
find src -name "*.php" -exec sed -i '' 's/use Plug\\Web3\\/use Core\\Plug\\Web3\\/' {} \;
```
**Step 5:** Verify, commit, push
---
### Task 4: Create `core/php-plug-content`
**Same pattern.** 4 providers (Devto, Hashnode, Medium, Wordpress), ~18 files.
**Namespace:** `Core\\Plug\\Content\\`
**Copy:** `app/Plug/Content/*` → `src/`
**Sed replacements:** Same pattern — `Plug\Content\` → `Core\Plug\Content\`, plus framework imports.
---
### Task 5: Create `core/php-plug-chat`
**Same pattern.** 3 providers (Discord, Slack, Telegram), ~14 files.
**Namespace:** `Core\\Plug\\Chat\\`
**Copy:** `app/Plug/Chat/*` → `src/`
**Note:** Chat providers use `Postable` but not `ManagesTokens` in some cases (Slack, Discord use webhook-style auth). Verify `use` statements are correct after sed.
---
### Task 6: Create `core/php-plug-business`
**Same pattern.** 1 provider (GoogleMyBusiness), ~5 files.
**Namespace:** `Core\\Plug\\Business\\`
**Copy:** `app/Plug/Business/*` → `src/`
---
### Task 7: Create `core/php-plug-cdn`
**Different from social/web3 — includes its own domain-specific contracts.**
**Files:**
- Source: `/Users/snider/Code/lab/host.uk.com/app/Plug/Cdn/`
- Includes: `Contract/HasStats.php`, `Contract/Purgeable.php`, `CdnManager.php`, `Bunny/Purge.php`, `Bunny/Stats.php`
**Namespace:** `Core\\Plug\\Cdn\\`
**Additional sed:** `use Plug\\Cdn\\Contract\\` → `use Core\\Plug\\Cdn\\Contract\\`
---
### Task 8: Create `core/php-plug-storage`
**Same pattern as CDN — includes domain-specific contracts.**
**Files:**
- Source: `/Users/snider/Code/lab/host.uk.com/app/Plug/Storage/`
- Includes: `Contract/{Browseable,Deletable,Downloadable,Uploadable}.php`, `StorageManager.php`, `Bunny/{Browse,Delete,Download,Upload,VBucket}.php`
**Namespace:** `Core\\Plug\\Storage\\`
**Additional sed:** `use Plug\\Storage\\Contract\\` → `use Core\\Plug\\Storage\\Contract\\`
---
### Task 9: Create `core/php-plug-stock`
**Same pattern.** 1 provider (Unsplash), ~6 files. Has a Jobs subdirectory.
**Namespace:** `Core\\Plug\\Stock\\`
**Copy:** `app/Plug/Stock/*` → `src/`
**Additional sed:** `use Plug\\Stock\\` → `use Core\\Plug\\Stock\\` (for internal cross-references like `TriggerDownload` → `Download`)
---
### Task 10: Clean up the Laravel app
**Context:** After all packages are extracted, remove the flattened code from the app and wire up the new packages.
**Files:**
- Delete: `/Users/snider/Code/lab/host.uk.com/app/Plug/` (entire directory)
- Modify: `/Users/snider/Code/lab/host.uk.com/composer.json` — add 8 new requires + repos, remove `Plug\\` autoload
- Modify: `/Users/snider/Code/lab/host.uk.com/app/Boot.php` — remove `Plug\Boot` reference
**Step 1:** Remove `app/Plug/` entirely
```bash
rm -rf /Users/snider/Code/lab/host.uk.com/app/Plug
```
**Step 2:** Update `composer.json`
Remove from `autoload.psr-4`:
```json
"Plug\\": "app/Plug/",
```
Add to `require`:
```json
"core/php-plug-social": "dev-main",
"core/php-plug-web3": "dev-main",
"core/php-plug-content": "dev-main",
"core/php-plug-chat": "dev-main",
"core/php-plug-business": "dev-main",
"core/php-plug-cdn": "dev-main",
"core/php-plug-storage": "dev-main",
"core/php-plug-stock": "dev-main"
```
Add to `repositories`:
```json
{"type": "vcs", "url": "ssh://git@forge.lthn.ai:2223/core/php-plug-social.git"},
{"type": "vcs", "url": "ssh://git@forge.lthn.ai:2223/core/php-plug-web3.git"},
{"type": "vcs", "url": "ssh://git@forge.lthn.ai:2223/core/php-plug-content.git"},
{"type": "vcs", "url": "ssh://git@forge.lthn.ai:2223/core/php-plug-chat.git"},
{"type": "vcs", "url": "ssh://git@forge.lthn.ai:2223/core/php-plug-business.git"},
{"type": "vcs", "url": "ssh://git@forge.lthn.ai:2223/core/php-plug-cdn.git"},
{"type": "vcs", "url": "ssh://git@forge.lthn.ai:2223/core/php-plug-storage.git"},
{"type": "vcs", "url": "ssh://git@forge.lthn.ai:2223/core/php-plug-stock.git"}
```
**Step 3:** Update `app/Boot.php` — remove the `Plug\Boot::class` provider registration
**Step 4:** Run composer update
```bash
cd /Users/snider/Code/lab/host.uk.com
composer update
```
**Step 5:** Clear caches and verify
```bash
php artisan config:clear
php artisan cache:clear
php artisan route:clear
```
**Step 6:** Commit
```bash
cd /Users/snider/Code/lab/host.uk.com
git add -A
git commit -m "refactor: replace app/Plug with core/php-plug-* packages"
```
---
### Task 11: Push `core/php` contracts and verify
**Step 1:** Push core/php with new contracts
```bash
cd /Users/snider/Code/core/php
git push origin main
```
**Step 2:** Verify all packages resolve
```bash
cd /Users/snider/Code/lab/host.uk.com
composer update --dry-run 2>&1 | head -30
```
**Step 3:** Run the app
```bash
# Visit lthn.test in browser — should load without errors
php artisan tinker --execute="app('plug.registry')->identifiers()"
```

File diff suppressed because it is too large

View file

@ -1,95 +0,0 @@
# Unified QA Provider — core/lint
**Date:** 2026-03-09
**Status:** Approved
## Problem
PHP QA tooling (~2,150 LOC) lives in `core/php` alongside dev server, deploy, and service management code. Go QA tooling (~3,500 LOC) lives in `core/lint`. Two separate QA entry points (`core php qa` and `core qa`) fragment the developer experience.
## Solution
Make `core/lint` the unified QA provider for all languages. Extract PHP QA library and CLI code from `core/php` into `core/lint` under language sub-packages.
## Architecture
```
lint/
├── pkg/detect/ # Project type detection
│ └── detect.go # IsPHPProject(), IsGoProject(), DetectAll()
├── pkg/lint/ # Go analysis (unchanged)
│ ├── complexity.go
│ ├── coverage.go
│ ├── scanner.go
│ ├── tools.go
│ ├── vulncheck.go
│ └── ...
├── pkg/php/ # PHP analysis (from core/php quality.go)
│ ├── format.go # Pint (DetectFormatter, Format)
│ ├── analyse.go # PHPStan/Larastan, Psalm
│ ├── audit.go # Composer/npm audit
│ ├── security.go # .env + filesystem security checks
│ ├── refactor.go # Rector
│ ├── mutation.go # Infection
│ ├── pipeline.go # QA pipeline stages
│ └── runner.go # QARunner orchestration (go-process)
├── cmd/qa/ # Unified CLI
│ ├── cmd_qa.go # Root — auto-detects project type
│ ├── cmd_docblock.go # (existing Go)
│ ├── cmd_health.go # (existing Go)
│ ├── cmd_php.go # PHP: fmt, stan, psalm, audit, security, rector, infection
│ └── ...
└── cmd/core-lint/main.go
```
## Key Decisions
### Project Detection (`pkg/detect/`)
- Uses `go-io` Medium for filesystem checks
- Exports `IsPHPProject(dir)`, `IsGoProject(dir)`, `DetectAll(dir) []ProjectType`
- Both `pkg/lint` and `pkg/php` import this shared package
### PHP Library (`pkg/php/`)
- Pure library, no CLI coupling
- Option structs in, result structs out
- Replaces `getMedium()` with `io.NewMedium()` directly
- No dependency on `core/php` — fully standalone
- Tools: Pint, PHPStan/Larastan, Psalm, Rector, Infection, composer/npm audit, security
### QA Runner (`pkg/php/runner.go`)
- Uses `go-process` for subprocess orchestration with dependency ordering
- Stages: quick (audit, fmt, stan), standard (psalm, test), full (rector, infection)
- JSON output mode for CI
### Unified CLI (`cmd/qa/`)
- `core qa` auto-detects: Go project → Go checks, PHP project → PHP checks, both → both
- Individual tools: `core qa fmt`, `core qa stan`, `core qa psalm`, etc.
- Existing Go commands unchanged
### core/php Cleanup
- Remove: `quality.go`, `cmd_quality.go`, `cmd_qa_runner.go`, `qa.yaml`
- `core php qa` removed (users run `core qa`)
- core/php retains: dev server, deploy, build, services, container, FrankenPHP
## Dependencies
lint gains:
- `go-io` (already present)
- `go-process` (new — for QA runner subprocess orchestration)
- `go-i18n` (already present)
## Migration
| Source (core/php) | Destination (core/lint) |
|---|---|
| `quality.go` (format section) | `pkg/php/format.go` |
| `quality.go` (analyse section) | `pkg/php/analyse.go` |
| `quality.go` (audit section) | `pkg/php/audit.go` |
| `quality.go` (security section) | `pkg/php/security.go` |
| `quality.go` (rector section) | `pkg/php/refactor.go` |
| `quality.go` (infection section) | `pkg/php/mutation.go` |
| `quality.go` (pipeline section) | `pkg/php/pipeline.go` |
| `cmd_qa_runner.go` | `pkg/php/runner.go` |
| `cmd_quality.go` (all commands) | `cmd/qa/cmd_php.go` |
| `qa.yaml` | `pkg/php/qa.yaml` (embedded) |
| `IsPHPProject()` from detect.go | `pkg/detect/detect.go` |

View file

@ -1,389 +0,0 @@
# Unified QA Provider Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Make `core/lint` the unified QA provider for Go and PHP by extracting PHP QA code from `core/php` into `core/lint`.
**Architecture:** Language sub-packages under `pkg/`: `pkg/detect/` for project detection, `pkg/lint/` for Go (unchanged), `pkg/php/` for PHP analysis. `cmd/qa/` gains PHP subcommands. Uses `go-io` Medium interface for filesystem, `go-process` for QA pipeline orchestration.
**Tech Stack:** Go 1.26, go-io (Medium interface), go-process (subprocess runner), go-i18n, cli
---
### Task 1: Create `pkg/detect/` — Project Type Detection
**Files:**
- Create: `pkg/detect/detect.go`
- Create: `pkg/detect/detect_test.go`
**Step 1: Write the failing test**
```go
package detect
import (
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
)
func TestIsGoProject_Good(t *testing.T) {
dir := t.TempDir()
os.WriteFile(filepath.Join(dir, "go.mod"), []byte("module test"), 0644)
assert.True(t, IsGoProject(dir))
}
func TestIsGoProject_Bad(t *testing.T) {
dir := t.TempDir()
assert.False(t, IsGoProject(dir))
}
func TestIsPHPProject_Good(t *testing.T) {
dir := t.TempDir()
os.WriteFile(filepath.Join(dir, "composer.json"), []byte("{}"), 0644)
assert.True(t, IsPHPProject(dir))
}
func TestIsPHPProject_Bad(t *testing.T) {
dir := t.TempDir()
assert.False(t, IsPHPProject(dir))
}
func TestDetectAll_Good(t *testing.T) {
dir := t.TempDir()
os.WriteFile(filepath.Join(dir, "go.mod"), []byte("module test"), 0644)
os.WriteFile(filepath.Join(dir, "composer.json"), []byte("{}"), 0644)
types := DetectAll(dir)
assert.Contains(t, types, Go)
assert.Contains(t, types, PHP)
}
```
**Step 2: Run test to verify it fails**
Run: `cd /Users/snider/Code && GOWORK=go.work go test ./core/lint/pkg/detect/... -v`
Expected: FAIL — functions not defined
**Step 3: Write minimal implementation**
```go
// Package detect identifies project types by examining filesystem markers.
package detect
import "os"
// ProjectType identifies a project's language/framework.
type ProjectType string
const (
Go ProjectType = "go"
PHP ProjectType = "php"
)
// IsGoProject returns true if dir contains a go.mod file.
func IsGoProject(dir string) bool {
_, err := os.Stat(dir + "/go.mod")
return err == nil
}
// IsPHPProject returns true if dir contains a composer.json file.
func IsPHPProject(dir string) bool {
_, err := os.Stat(dir + "/composer.json")
return err == nil
}
// DetectAll returns all detected project types in the directory.
func DetectAll(dir string) []ProjectType {
var types []ProjectType
if IsGoProject(dir) {
types = append(types, Go)
}
if IsPHPProject(dir) {
types = append(types, PHP)
}
return types
}
```
**Step 4: Run test to verify it passes**
Run: `cd /Users/snider/Code && GOWORK=go.work go test ./core/lint/pkg/detect/... -v`
Expected: PASS
**Step 5: Commit**
```
feat(lint): add pkg/detect — project type detection
```
---
### Task 2: Create `pkg/php/format.go` — Pint Formatter
**Files:**
- Create: `pkg/php/format.go`
- Create: `pkg/php/format_test.go`
**Step 1: Write the failing test**
```go
package php
import (
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
)
func TestDetectFormatter_Good(t *testing.T) {
dir := t.TempDir()
// Create vendor/bin/pint
vendorBin := filepath.Join(dir, "vendor", "bin")
os.MkdirAll(vendorBin, 0755)
os.WriteFile(filepath.Join(vendorBin, "pint"), []byte("#!/bin/sh"), 0755)
ft, found := DetectFormatter(dir)
assert.True(t, found)
assert.Equal(t, FormatterPint, ft)
}
func TestDetectFormatter_Bad(t *testing.T) {
dir := t.TempDir()
_, found := DetectFormatter(dir)
assert.False(t, found)
}
```
**Step 2: Run test to verify it fails**
Run: `cd /Users/snider/Code && GOWORK=go.work go test ./core/lint/pkg/php/... -v`
Expected: FAIL
**Step 3: Write implementation**
Extract from `core/php/quality.go` lines 16-232 (FormatOptions, FormatterType, DetectFormatter, Format, buildPintCommand). Replace `getMedium()` with `io.Local`. Change package to `php`.
Key changes from source:
- `package php` (same package name as in core/php, but a different module)
- Replace `getMedium()``io.Local`
- Replace `cli.Sprintf``fmt.Sprintf`
- Keep `cli.WrapVerb`, `cli.Err` for error returns
**Step 4: Run test to verify it passes**
Run: `cd /Users/snider/Code && GOWORK=go.work go test ./core/lint/pkg/php/... -v`
Expected: PASS
**Step 5: Commit**
```
feat(lint): add pkg/php/format — Pint formatter detection and execution
```
---
### Task 3: Create `pkg/php/analyse.go` — PHPStan + Psalm
**Files:**
- Create: `pkg/php/analyse.go`
- Create: `pkg/php/analyse_test.go`
Extract from `core/php/quality.go`:
- `AnalyseOptions`, `AnalyserType`, `DetectAnalyser`, `Analyse`, `buildPHPStanCommand` (lines 37-266)
- `PsalmOptions`, `PsalmType`, `DetectPsalm`, `RunPsalm` (lines 272-370)
Same `getMedium()` → `io.Local` replacement.
Test: create temp dirs with mock `vendor/bin/phpstan`, verify `DetectAnalyser` finds it.
**Commit:**
```
feat(lint): add pkg/php/analyse — PHPStan, Larastan, Psalm
```
---
### Task 4: Create `pkg/php/audit.go` — Security Auditing
**Files:**
- Create: `pkg/php/audit.go`
- Create: `pkg/php/audit_test.go`
Extract from `core/php/quality.go`:
- `AuditOptions`, `AuditResult`, `AuditAdvisory`, `RunAudit`, `runComposerAudit`, `runNpmAudit` (lines 376-521)
Test: verify `AuditResult` struct fields, test JSON parsing of mock composer audit output.
**Commit:**
```
feat(lint): add pkg/php/audit — composer and npm security auditing
```
---
### Task 5: Create `pkg/php/security.go` — Security Checks
**Files:**
- Create: `pkg/php/security.go`
- Create: `pkg/php/security_test.go`
Extract from `core/php/quality.go`:
- `SecurityOptions`, `SecurityResult`, `SecurityCheck`, `SecuritySummary` (lines 781-817)
- `RunSecurityChecks`, `runEnvSecurityChecks`, `runFilesystemSecurityChecks` (lines 819-994)
Test: create temp .env with `APP_DEBUG=true`, verify check catches it.
**Commit:**
```
feat(lint): add pkg/php/security — .env and filesystem security checks
```
---
### Task 6: Create `pkg/php/refactor.go` + `pkg/php/mutation.go`
**Files:**
- Create: `pkg/php/refactor.go` — Rector
- Create: `pkg/php/mutation.go` — Infection
- Create: `pkg/php/refactor_test.go`
- Create: `pkg/php/mutation_test.go`
Extract Rector (lines 527-598) and Infection (lines 604-693) from `core/php/quality.go`.
**Commit:**
```
feat(lint): add pkg/php/refactor + mutation — Rector and Infection
```
---
### Task 7: Create `pkg/php/test.go` — Test Runner
**Files:**
- Create: `pkg/php/test.go`
- Create: `pkg/php/test_test.go`
Extract from `core/php/testing.go`:
- `TestOptions`, `TestRunner`, `DetectTestRunner`, `RunTests`, `buildPestCommand`, `buildPHPUnitCommand`
**Commit:**
```
feat(lint): add pkg/php/test — Pest and PHPUnit runner
```
---
### Task 8: Create `pkg/php/pipeline.go` + `pkg/php/runner.go` — QA Orchestration
**Files:**
- Create: `pkg/php/pipeline.go`
- Create: `pkg/php/runner.go`
- Create: `pkg/php/pipeline_test.go`
Extract from `core/php/quality.go`:
- `QAOptions`, `QAStage`, `QACheckResult`, `QAResult`, `GetQAStages`, `GetQAChecks` (lines 699-775) → `pipeline.go`
Extract from `core/php/cmd_qa_runner.go`:
- `QARunner`, `QARunResult`, `QACheckRunResult`, `NewQARunner`, `Run`, `BuildSpecs`, `buildSpec` → `runner.go`
This task adds `go-process` dependency to lint's go.mod.
**Commit:**
```
feat(lint): add pkg/php/pipeline + runner — QA orchestration
```
---
### Task 9: Add PHP commands to `cmd/qa/`
**Files:**
- Create: `cmd/qa/cmd_php.go`
- Modify: `cmd/qa/cmd_qa.go` — add auto-detect logic
Extract CLI commands from `core/php/cmd_quality.go`:
- `addPHPFmtCommand` → `core qa fmt` (detect: PHP → pint, Go → gofmt)
- `addPHPStanCommand` → `core qa stan`
- `addPHPPsalmCommand` → `core qa psalm`
- `addPHPAuditCommand` → `core qa audit`
- `addPHPSecurityCommand` → `core qa security`
- `addPHPRectorCommand` → `core qa rector`
- `addPHPInfectionCommand` → `core qa infection`
- `addPHPTestCommand` → `core qa test` (detect: PHP → pest/phpunit, Go → go test)
- `addPHPQACommand` → integrated into main `core qa` with auto-detect
Auto-detect in `cmd_qa.go`:
```go
types := detect.DetectAll(cwd)
for _, t := range types {
switch t {
case detect.Go:
// run Go checks (existing)
case detect.PHP:
// run PHP checks (new)
}
}
```
**Commit:**
```
feat(lint): add PHP QA commands to core qa
```
---
### Task 10: Update go.mod, build, test
**Files:**
- Modify: `go.mod` — add `go-process` dep
- Modify: `cmd/qa/cmd_qa.go` — register PHP subcommands
**Step 1:** Run `go mod tidy`
**Step 2:** Run `go build ./...`
**Step 3:** Run `go test ./...`
**Step 4:** Verify `core qa --help` shows new PHP commands
**Commit:**
```
feat(lint): wire up PHP QA, update dependencies
```
---
### Task 11: Remove QA code from core/php
**Files:**
- Delete: `core/php/quality.go`
- Delete: `core/php/quality_test.go`
- Delete: `core/php/quality_extended_test.go`
- Delete: `core/php/cmd_quality.go`
- Delete: `core/php/cmd_qa_runner.go`
- Delete: `core/php/qa.yaml`
- Modify: `core/php/cmd_commands.go` — remove QA command registrations
**Step 1:** Remove files
**Step 2:** Remove QA command registrations from `cmd_commands.go`
**Step 3:** Run `go build ./...` on core/php to verify nothing breaks
**Step 4:** Run `go build ./...` on core/lint to verify still compiles
**Commit:**
```
refactor(php): remove QA code — moved to core/lint
```
---
### Task 12: Final verification and tag
**Step 1:** Build all: `GOWORK=go.work go build ./core/lint/... ./core/php/...`
**Step 2:** Test all: `GOWORK=go.work go test ./core/lint/...`
**Step 3:** Verify CLI: build core binary, run `core qa --help`
**Step 4:** Tag lint v0.3.0
**Commit:**
```
chore(lint): tag v0.3.0 — unified QA provider
```

View file

@ -1,5 +1,7 @@
# AltumCode Update Checker — Design
> **Note:** Layer 1 (version detection via PHP artisan) is implemented and documented at `docs/docs/php/packages/uptelligence.md`. Layer 2 (browser-automated downloads via Claude Code skill) is NOT yet implemented.
## Problem
Host UK runs 4 AltumCode SaaS products and 13 plugins across two marketplaces (CodeCanyon + LemonSqueezy). Checking for updates and downloading them is a manual process: ~50 clicks across two marketplace UIs, moving 16+ zip files, extracting to the right directories. This eats a morning of momentum every update cycle.

View file

@ -1,5 +1,7 @@
# AltumCode Update Checker Implementation Plan
> **Note:** Layer 1 (Tasks 1-2, 4: version checking + seeder + sync command) is implemented and documented at `docs/docs/php/packages/uptelligence.md`. Task 3 (Claude Code browser skill for Layer 2 downloads) is NOT yet implemented.
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Add AltumCode product + plugin version checking to uptelligence, and create a Claude Code skill for browser-automated downloads from LemonSqueezy and CodeCanyon.

View file

@ -1,192 +0,0 @@
# Scheduled Actions Design
## Goal
Allow CorePHP Actions to declare their own schedule via PHP 8.1 attributes, persist schedules to the database for runtime control, and auto-discover them during deploy — replacing the need for manual `routes/console.php` entries and enabling admin visibility.
## Architecture
**Attribute-driven, database-backed scheduling.** Actions declare defaults with `#[Scheduled]`. A sync command persists them to a `scheduled_actions` table. The scheduler reads the table at runtime. Admin panel provides visibility and control.
**Tech Stack:** PHP 8.1 attributes, Laravel Scheduler, Eloquent, existing CorePHP module scanner paths.
---
## Components
### 1. `#[Scheduled]` Attribute
**File:** `src/Core/Actions/Scheduled.php`
```php
#[Attribute(Attribute::TARGET_CLASS)]
class Scheduled
{
public function __construct(
public string $frequency, // 'everyMinute', 'dailyAt:09:00', 'weeklyOn:1,09:00'
public ?string $timezone = null, // 'Europe/London' — null uses app default
public bool $withoutOverlapping = true,
public bool $runInBackground = true,
) {}
}
```
The `frequency` string maps to Laravel Schedule methods. Colon-separated arguments:
- `dailyAt:09:00` &rarr; `->dailyAt('09:00')`
- `weeklyOn:1,09:00` &rarr; `->weeklyOn(1, '09:00')`
- `everyMinute` &rarr; `->everyMinute()`
- `hourly` &rarr; `->hourly()`
- `monthlyOn:1,00:00` &rarr; `->monthlyOn(1, '00:00')`
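The mapping above is mechanical: split on the first colon for the method name, then on commas for the arguments. A sketch of the parsing that the model's `frequencyMethod()` / `frequencyArgs()` would perform (the helper name is illustrative; argument types are left as strings):

```php
<?php

// Sketch: 'weeklyOn:1,09:00' => ['weeklyOn', ['1', '09:00']].
// Argument types are left as strings; PHP coerces numeric strings.
function parseFrequency(string $frequency): array
{
    [$method, $args] = array_pad(explode(':', $frequency, 2), 2, null);

    return [$method, $args === null ? [] : explode(',', $args)];
}
```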
### 2. `scheduled_actions` Table
```
scheduled_actions
├── id BIGINT PK
├── action_class VARCHAR(255) UNIQUE — fully qualified class name
├── frequency VARCHAR(100) — from attribute, admin-editable
├── timezone VARCHAR(50) NULL
├── without_overlapping BOOLEAN DEFAULT true
├── run_in_background BOOLEAN DEFAULT true
├── is_enabled BOOLEAN DEFAULT true — toggle in admin
├── last_run_at TIMESTAMP NULL
├── next_run_at TIMESTAMP NULL — computed from frequency
├── created_at TIMESTAMP
├── updated_at TIMESTAMP
```
No tenant scoping — these are system-level platform schedules, not per-user.
### 3. `ScheduledAction` Model
**File:** `src/Core/Actions/ScheduledAction.php`
Eloquent model with:
- `scopeEnabled()` — where `is_enabled = true`
- `markRun()` — updates `last_run_at`, computes `next_run_at`
- `frequencyMethod()` / `frequencyArgs()` — parses `frequency` string
### 4. `ScheduledActionScanner`
**File:** `src/Core/Actions/ScheduledActionScanner.php`
Scans module paths for classes with `#[Scheduled]` attribute using `ReflectionClass::getAttributes()`.
Reuses the same scan paths as `ModuleScanner`:
- `app/Core`, `app/Mod`, `app/Website` (application)
- `src/Core`, `src/Mod` (framework)
Returns: `array<class-string, Scheduled>` — map of class name to attribute instance.
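The attribute lookup itself is a few lines of reflection. A sketch, with a cut-down `Scheduled` attribute for illustration (the real one carries four parameters):

```php
<?php

// Cut-down attribute for illustration; the real Scheduled attribute
// also carries timezone and overlap/background flags.
#[Attribute(Attribute::TARGET_CLASS)]
class Scheduled
{
    public function __construct(public string $frequency) {}
}

#[Scheduled(frequency: 'hourly')]
class ExampleAction {}

/**
 * @param list<class-string> $classes candidate classes from the scan paths
 * @return array<class-string, Scheduled>
 */
function scanScheduled(array $classes): array
{
    $found = [];
    foreach ($classes as $class) {
        $attrs = (new ReflectionClass($class))->getAttributes(Scheduled::class);
        if ($attrs !== []) {
            $found[$class] = $attrs[0]->newInstance();
        }
    }

    return $found;
}
```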
### 5. `schedule:sync` Command
**File:** `src/Core/Console/Commands/ScheduleSyncCommand.php`
```
php artisan schedule:sync
```
- Runs `ScheduledActionScanner`
- Upserts `scheduled_actions` rows:
- **New classes** &rarr; insert with attribute defaults
- **Removed classes** &rarr; set `is_enabled = false` (don't delete)
- **Existing rows manually edited** &rarr; preserve the override (only overwrite if frequency matches the previous attribute default)
- Prints summary: `3 added, 1 disabled, 12 unchanged`
- Run during deploy/migration
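The override-preservation rule is the subtle part: sync only rewrites a row's frequency when the stored value still equals the previous attribute default (which implies the previous default must be known at sync time). A sketch of that decision, with hypothetical parameter names:

```php
<?php

// Sketch: should schedule:sync overwrite a row's frequency?
// An admin override (stored differs from the old default) is preserved.
function shouldOverwriteFrequency(string $stored, string $previousDefault, string $newDefault): bool
{
    return $stored === $previousDefault && $stored !== $newDefault;
}
```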
### 6. `ScheduleServiceProvider`
**File:** `src/Core/Actions/ScheduleServiceProvider.php`
Registered in framework boot, console context only.
- Queries `scheduled_actions` where `is_enabled = true`
- For each row:
```php
Schedule::call(fn () => $row->action_class::run())
->$frequencyMethod(...$frequencyArgs)
->withoutOverlapping() // if set
->runInBackground() // if set
->timezone($timezone) // if set
```
- Updates `last_run_at` via `after()` callback
---
## Flow
### Deploy/Migration
```
artisan schedule:sync
├── ScheduledActionScanner scans #[Scheduled] attributes
├── Upsert scheduled_actions table
└── Summary: "3 added, 1 disabled, 12 unchanged"
```
### Runtime (every minute)
```
artisan schedule:run
└── ScheduleServiceProvider
├── Query scheduled_actions WHERE is_enabled = true
├── For each: Schedule::call(fn () => ActionClass::run())
└── After each: update last_run_at, compute next_run_at
```
### Admin Panel (future, not MVP)
Table view of `scheduled_actions` with enable/disable toggle, frequency editing, last_run_at display.
---
## Usage Example
```php
<?php
declare(strict_types=1);
namespace Mod\Social\Actions;
use Core\Actions\Action;
use Core\Actions\Scheduled;
#[Scheduled(frequency: 'dailyAt:09:00', timezone: 'Europe/London')]
class PublishDiscordDigest
{
use Action;
public function handle(): void
{
// Gather yesterday's commits across repos
// Summarise changes
// Post to Discord webhook
}
}
```
No Boot registration needed. No `routes/console.php` entry. The scanner discovers it, `schedule:sync` persists it, the scheduler runs it.
---
## Migration Strategy
- **Existing `routes/console.php` commands** stay as-is. No breaking changes.
- **New scheduled work** uses `#[Scheduled]` actions going forward.
- **Over time**, existing commands can be migrated to actions at natural touch points.
## First Consumers
- Discord daily digest (summarise repo changes, post to Lethean Discord)
- Social media scheduled posting triggers
- Image resizing queue triggers (VIP feature)
- AltumCode cron replacements (longer term — wget loops work for now)
- Sync operations (biolinks, analytics data, etc.)
## Non-Goals (MVP)
- Per-tenant scheduling (system-level only for now)
- Admin panel UI (just the table/model/command/provider)
- Caching scanner results (premature optimisation)
- Replacing existing `routes/console.php` entries (gradual migration)
