Compare commits

...

31 commits
v0.1.0 ... dev

Author SHA1 Message Date
user.email
c3597da9cc fix(rfc-025): complete quality gate — 10 imports, fmt→Println, verification updated
- Principle 9 table: 10 disallowed imports (was 9, added fmt and strings)
- fmt.Println→Println in AX TDD example (practicing what we preach)
- Verification script: grep pattern matches all 10 disallowed imports
- Added string concat check to verification
- Removed magic method check (replaced by import check)
- Adoption list: updated with imports, string ops, examples, comments

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-25 19:46:40 +00:00
user.email
7e5fc0f93f fix(rfc-025): add io to Principle 9 quality gate
io bypasses stream primitives. Core provides:
- core.ReadAll(reader) — read all + close
- core.WriteAll(writer, content) — write + close
- core.CloseStream(v) — close any Closer

9 disallowed imports in the quality gate. Zero violations in core/go.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-25 19:31:18 +00:00
user.email
827739cb9f fix(rfc-025): add os to Principle 9 quality gate
os bypasses Fs/Env primitives. Core provides:
- c.Fs().Write/Read/List/EnsureDir/TempDir/DeleteAll
- core.Env() for environment variables
- core.DirFS() for fs.FS from directory

Validated: core/go tests have zero os imports.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-25 19:23:43 +00:00
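The seam this commit describes can be sketched as a single interface that all file access routes through; method names follow the commit message, but signatures are assumptions:

```go
package main

import "fmt"

// fs sketches the boundary behind c.Fs(): one interface to audit,
// so any direct "os" import becomes a lint signal. Hypothetical shape.
type fs interface {
	Write(path string, data []byte) error
	Read(path string) ([]byte, error)
}

// memFS is a toy in-memory implementation for the sketch.
type memFS struct{ files map[string][]byte }

func (m *memFS) Write(path string, data []byte) error {
	m.files[path] = data
	return nil
}

func (m *memFS) Read(path string) ([]byte, error) {
	b, ok := m.files[path]
	if !ok {
		return nil, fmt.Errorf("not found: %s", path)
	}
	return b, nil
}

func main() {
	var f fs = &memFS{files: map[string][]byte{}}
	f.Write("a.txt", []byte("hi"))
	b, _ := f.Read("a.txt")
	fmt.Println(string(b)) // hi
}
```

A side benefit of the seam: tests can swap in `memFS` and never touch the real disk.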
user.email
4a5e5bbd1a fix(rfc-025): add path/filepath + errors to Principle 9 quality gate
path/filepath bypasses core.Path() security boundary.
errors bypasses core.NewError()/core.Is()/core.As().

Both now in the disallowed imports table. Validated by dogfooding
core/go's own tests — zero filepath, zero errors imports remaining.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-25 18:58:35 +00:00
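A sketch of the security boundary that `core.Path()` is said to enforce, rejecting paths that escape a root. The helper below is hypothetical, not the real primitive (and it uses `path/filepath` precisely because it is the one place allowed to):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// securePath resolves p under root and refuses anything that escapes it.
// Hypothetical illustration of the core.Path() boundary.
func securePath(root, p string) (string, error) {
	full := filepath.Join(root, p) // Join also cleans ".." segments
	if !strings.HasPrefix(full, filepath.Clean(root)+string(filepath.Separator)) {
		return "", fmt.Errorf("path escapes root: %s", p)
	}
	return full, nil
}

func main() {
	p, err := securePath("/srv/data", "notes/a.md")
	fmt.Println(p, err)
	_, err = securePath("/srv/data", "../etc/passwd")
	fmt.Println(err != nil) // traversal rejected
}
```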
user.email
2507f144a3 feat(rfc-025): add Principle 7b — Example Tests as AX TDD
Example functions serve triple duty: test, godoc, documentation seed.
Write the Example first — if it's awkward, the API is wrong.

Convention: one {source}_example_test.go per source file.
Quality gate: source file without example file = missing documentation.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-25 18:39:51 +00:00
user.email
80563d97ec fix(rfc-025): pass 5 — adoption updated, verification section, references
- Adoption: reflects core/go fully migrated, updated priority list with
  Actions, JSON, Validation
- Added Verification section with 4 mechanical audit scripts
- References: added RFC.md, consumer RFCs, RFC-004, RFC-021
- Removed stale references (DTO refactor, primitives design dates)

An agent can now audit AX compliance with copy-paste bash commands.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-25 17:53:41 +00:00
user.email
67deb87070 fix(rfc-025): pass 4 — security in motivation, magic method noted, Task as declarative Go
- Motivation: exec.Command risk now includes "no entitlement, path traversal"
- Principle 5: added Task as Go-native declarative equivalent to YAML steps
- Principle 6: HandleIPCEvents explicitly noted as "one remaining magic method"
- Structure: added "Operational Principles" divider before Principles 8-10
- 4 issues, increasingly subtle — convergence continues

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-25 17:51:50 +00:00
user.email
b95e265d28 fix(rfc-025): pass 3 — Draft→Active, ServiceRuntime consistency, security callout
- Status: Draft → Active (10 principles, validated, governs ecosystem)
- Adoption: exec.Command → c.Process() (not go-process)
- Command Registration: uses OnStartup + s.Core() (ServiceRuntime pattern)
- Process example: s.core → s.Core() (ServiceRuntime, not manual field)
- Process anti-pattern: added "path traversal risk" + "no entitlement" callout
- Added security note: AX model IS the security model — Actions gate through entitlements

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-25 17:48:47 +00:00
user.email
297d920fed fix(rfc-025): pass 2 — stale examples, self-contradiction, missing entitlement pattern
- Principle 2: NewPrep/SetCore examples → actual Core comment style
- Principle 3: proc.go (deleted) → handlers.go, added docs/RFC.md
- Principle 5: fmt.Errorf → core.E() (was contradicting Principle 9)
- Principle 6: Service Registration → ServiceRuntime pattern
- Error Handling: SetCore manual pattern → ServiceRuntime
- Added: Permission Gating example (AX vs scattered if-statements)

The RFC now practices what it preaches — no example contradicts a rule.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-25 17:45:51 +00:00
user.email
49584e8b4c fix(rfc-025): IPC example → named Actions, JSON primitive now exists
- Replaced anonymous broadcast example with named Action pattern
- Added Task composition example
- Moved ACTION/QUERY to "Legacy Layer" subsection
- Principle 9: encoding/json now has core.JSONMarshal() replacement

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-25 17:41:08 +00:00
user.email
05c4fc0a78 feat(rfc-025): update AX spec to v0.8.0 — 3 new principles
Updated all code examples to match core/go v0.8.0 implementation:
- process.NewService → process.Register
- process.RunWithOptions → c.Process().RunIn()
- PERFORM removed from subsystem table
- Startable/Stoppable return Result
- Added Process, API, Action, Task, Entitled, RegistryOf to subsystem table

New principles from validated session patterns:
- Principle 8: RFC as Domain Load — load spec at session start
- Principle 9: Primitives as Quality Gates — imports are the lint rule
- Principle 10: Registration + Entitlement — two-layer permission model

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-25 17:35:21 +00:00
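Principle 10's two-layer model, registration plus entitlement, could look like this (all names below are assumptions for illustration, not the core/go API):

```go
package main

import "fmt"

// action pairs a named operation with the entitlement that gates it.
type action struct {
	name        string
	entitlement string
	run         func() string
}

type core struct {
	actions      map[string]action
	entitlements map[string]bool
}

// invoke enforces both layers: the action must be registered,
// and the caller must hold the matching entitlement.
func (c *core) invoke(name string) (string, error) {
	a, ok := c.actions[name]
	if !ok {
		return "", fmt.Errorf("action not registered: %s", name)
	}
	if !c.entitlements[a.entitlement] {
		return "", fmt.Errorf("missing entitlement: %s", a.entitlement)
	}
	return a.run(), nil
}

func main() {
	c := &core{
		actions: map[string]action{
			"fs.read": {name: "fs.read", entitlement: "fs", run: func() string { return "ok" }},
		},
		entitlements: map[string]bool{"fs": true},
	}
	out, err := c.invoke("fs.read")
	fmt.Println(out, err) // ok <nil>
}
```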
user.email
fef469ecc9 docs(go): update index.md to match v0.8.0 API
Rewrites core/go documentation to reflect current implementation:
- New() returns *Core (not error tuple)
- Startable/Stoppable return Result (not error)
- Named Actions, Task composition, Process primitive
- Registry[T] universal collection
- Correct import paths (dappco.re/go/core)
- All subsystem accessors documented
- Links to RFC.md as authoritative spec

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-25 15:27:06 +00:00
user.email
0ca63712dc feat: add llm.txt + docs/RFC.md — agent entry points and spec index
llm.txt: standard entry point for agents landing on the repo
docs/RFC.md: categorised index of all 28 RFCs with status

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-25 10:52:46 +00:00
user.email
4509fc5719 feat(rfc-025): major update — align all examples to v0.7.0 API
Every code example now matches the actual implementation:
- Option{Key,Value} not Option{K,V}
- core.New(core.WithService(...)) not core.New(core.Options{})
- core.Result (no generics) not core.Result[T]
- Subsystem table matches actual Core accessors
- Service registration shows real factory pattern
- IPC examples use actual messages package

New content:
- Process execution rule: go-process not os/exec (Principle 6)
- Command extraction pattern (closures → named methods)
- Full IPC event-driven communication example
- Updated adoption priority (added test naming + process execution)
- Aligned file structure to actual core/agent layout

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-25 10:46:47 +00:00
user.email
6cf8588092 feat(rfc): add Principle 7 — Tests as Behavioural Specification
Test{File}_{Function}_{Good,Bad,Ugly} naming convention.
Tests ARE the spec — machine-queryable, no prose needed.
Missing categories = gaps in the specification.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-25 08:52:10 +00:00
user.email
d89425534a docs 2026-03-24 14:56:51 +00:00
user.email
80b774429a chore: sync dependencies for v0.1.6
Some checks failed
Build and Deploy / deploy (push) Failing after 6s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-17 17:55:38 +00:00
user.email
ea6a49359c chore: sync dependencies for v0.1.5
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-17 17:51:01 +00:00
user.email
02536b3a13 chore: sync dependencies for v0.1.4
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-16 22:18:12 +00:00
user.email
9744714be3 fix(help): use go-log E() pattern for error handling in catalog
Replace fmt.Errorf calls with log.E() structured errors in
LoadContentDir and Get, providing operation context for the
error chain.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-15 17:39:56 +00:00
user.email
791d64833d chore: sync go.mod dependencies
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-15 15:39:01 +00:00
user.email
ed8b61cc5b docs: add all .core/ config files to configuration reference
Added workspace.yaml, work.yaml, git.yaml, kb.yaml, test.yaml, and
manifest.yaml documentation. Added quick reference table with scope,
package, and discovery pattern for all 12 config file types. Expanded
directory structure to show user/workspace/project scopes.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-15 13:17:03 +00:00
user.email
b2f1921db0 chore: add .core/ and .idea/ to .gitignore
2026-03-15 10:17:49 +00:00
user.email
5ab72b2b71 fix: update stale import paths and dependency versions from extraction
Resolve stale forge.lthn.ai/core/cli v0.1.0 references (tag never existed,
earliest is v0.0.1) and regenerate go.sum via workspace-aware go mod tidy.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-14 13:38:59 +00:00
user.email
a0b86767b6 docs: update gui architecture overview
Replace stub page with comprehensive overview of the IPC-based package
structure covering all 16 sub-packages, platform insulation pattern,
service registration, config wiring, and MCP integration.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-14 08:21:37 +00:00
user.email
05305d9870 docs: update documentation from implemented plans
Add new pages: scheduled-actions, studio, plug, uptelligence.
Update: go-blockchain, go-devops, go-process, mcp, lint, docs engine.
Update nav and indexes.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-14 08:09:17 +00:00
user.email
0ee4c15ee2 docs: add CLAUDE.md project instructions
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-13 13:38:01 +00:00
user.email
0f7dd895c3 docs(agent): split plugin docs into Claude, Codex, Gemini, LEM pages
Replaces the single plugin table with dedicated pages per AI platform:
- Claude Code: full marketplace + npm distribution reference
- OpenAI Codex: AGENTS.md structure and plugin inventory
- Google Gemini: CLI extension and MCP server
- LEM: local inference integration and community compute

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-13 11:04:19 +00:00
Snider
e4f3c3e731 docs: sync verified repo documentation to core.help
Update all 29 Go package pages, 4 tool pages (agent, mcp, ide, lint),
TypeScript, and Go framework index with rich content from individual
repo docs/. Add lint to Tools nav.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 13:20:52 +00:00
Snider
5b5beaf36f refactor: move agent, mcp, ide, cli, api under tools/ directory
URLs now /tools/agent/, /tools/mcp/, /tools/cli/, etc.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-11 11:50:26 +00:00
Snider
a61000db6e feat: migrate docs site to Zensical with full nav and new sections
Replace Hugo+Docsy with Zensical (MkDocs Material). Restructure all
content under docs/ with explicit nav. Add 19 new Go package pages,
plus Agent, MCP, CoreTS, IDE, GUI, and AI (LEM) sections. PHP sidebar
restructured with collapsible Guides/Reference groups. Homepage now
has sidebar with Where to Start guide and Community links.

Tabs: Home | Go | PHP | TS | GUI | AI | Tools | Deploy | Publish

Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-11 11:48:44 +00:00
381 changed files with 42055 additions and 418 deletions


@@ -0,0 +1,39 @@
name: Build and Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Install Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - name: Install Zensical
        run: pip install zensical
      - name: Build
        run: zensical build
      - name: Deploy to BunnyCDN
        run: |
          pip install s3cmd
          cat > ~/.s3cfg <<EOCFG
          [default]
          access_key = $AWS_ACCESS_KEY_ID
          secret_key = $AWS_SECRET_ACCESS_KEY
          host_base = storage.bunnycdn.com
          host_bucket = %(bucket)s.storage.bunnycdn.com
          use_https = True
          EOCFG
          s3cmd sync site/ s3://core-help/ --delete-removed --acl-public
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.BUNNY_STORAGE_ZONE }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.BUNNY_STORAGE_KEY }}

.gitignore vendored Normal file (+5)

@@ -0,0 +1,5 @@
site/
.cache/
.DS_Store
.core/
.idea/

CLAUDE.md Normal file (+57)

@@ -0,0 +1,57 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Documentation platform for the Core ecosystem (CLI, Go packages, PHP modules, MCP tools). Published at https://core.help. Two main components:
1. **`docs/`** — Markdown source files (217+) with YAML frontmatter, organized by section (Go, PHP, TS, GUI, AI, Tools, Deploy, Publish)
2. **`pkg/help/`** — Go library for help content management: parsing, search, HTTP serving, and static site generation
## Common Commands
```bash
# Run all tests
go test ./...
# Run a single test
go test ./pkg/help/ -run TestFunctionName
# Run benchmarks
go test ./pkg/help/ -bench .
# Build the static documentation site (requires Python + zensical)
pip install zensical
zensical build
```
## Architecture: `pkg/help/`
The Go help library is display-agnostic — it can serve HTML, expose a JSON API, or generate a static site from the same content.
**Data flow:** Markdown files → `ParseTopic()` (parser.go) → `Topic` structs → `Catalog` (catalog.go) → consumed by Server, Search, or Generate.
Key types and their roles:
- **`Topic`/`Frontmatter`** (topic.go) — Data model. Topics have ID, title, content, sections, tags, related links, and sort order. Frontmatter is parsed from YAML `---` blocks.
- **`Catalog`** (catalog.go) — Topic registry with `Add`, `Get`, `List`, `Search`. `LoadContentDir()` recursively loads `.md` files from a directory. `DefaultCatalog()` provides built-in starter topics.
- **`searchIndex`** (search.go) — Full-text search with TF-IDF scoring, prefix matching, fuzzy matching, stemming (Porter stemmer in stemmer.go), and phrase detection. Title matches are boosted.
- **`Server`** (server.go) — HTTP handler with HTML routes (`/`, `/topics/{id}`, `/search`) and JSON API routes (`/api/topics`, `/api/topics/{id}`, `/api/search`).
- **`Generate*`** (generate.go) — Static site generator producing index, topic pages, 404, and `search-index.json` for client-side search.
- **`Render*`/`Layout*`** (render.go, layout.go) — HTML rendering using `forge.lthn.ai/core/go-html` (HLCRF layout pattern with dark theme).
- **`IngestHelp`** (ingest.go) — Converts Go CLI `--help` text output into structured `Topic` objects.
## Site Configuration
`zensical.toml` defines the doc site structure — navigation tree, theme settings, markdown extensions (admonition, mermaid, tabbed content, code highlighting). Zensical is a Python-based static site generator.
## CI/CD
Forgejo workflow (`.forgejo/workflows/deploy.yml`): on push to `main`, builds with `zensical build` and deploys the `site/` directory to BunnyCDN via s3cmd.
## Conventions
- License: EUPL-1.2 (SPDX headers in source files)
- Go module: `forge.lthn.ai/core/docs`
- Tests use `testify/assert` and `testify/require`
- Markdown files use YAML frontmatter (`title`, `tags`, `related`, `order` fields)

BIN
build/.DS_Store vendored

Binary file not shown.


@@ -1,110 +0,0 @@
# core docs
Documentation management across repositories.
## Usage
```bash
core docs <command> [flags]
```
## Commands
| Command | Description |
|---------|-------------|
| `list` | List documentation across repos |
| `sync` | Sync documentation to output directory |
## docs list
Show documentation coverage across all repos.
```bash
core docs list [flags]
```
### Flags
| Flag | Description |
|------|-------------|
| `--registry` | Path to repos.yaml |
### Output
```
Repo README CLAUDE CHANGELOG docs/
──────────────────────────────────────────────────────────────────────
core ✓ ✓ — 12 files
core-php ✓ ✓ ✓ 8 files
core-images ✓ — — —
Coverage: 3 with docs, 0 without
```
## docs sync
Sync documentation from all repos to an output directory.
```bash
core docs sync [flags]
```
### Flags
| Flag | Description |
|------|-------------|
| `--registry` | Path to repos.yaml |
| `--output` | Output directory (default: ./docs-build) |
| `--dry-run` | Show what would be synced |
### Output Structure
```
docs-build/
└── packages/
├── core/
│ ├── index.md # from README.md
│ ├── claude.md # from CLAUDE.md
│ ├── changelog.md # from CHANGELOG.md
│ ├── build.md # from docs/build.md
│ └── ...
└── core-php/
├── index.md
└── ...
```
### Example
```bash
# Preview what will be synced
core docs sync --dry-run
# Sync to default output
core docs sync
# Sync to custom directory
core docs sync --output ./site/content
```
## What Gets Synced
For each repo, the following files are collected:
| Source | Destination |
|--------|-------------|
| `README.md` | `index.md` |
| `CLAUDE.md` | `claude.md` |
| `CHANGELOG.md` | `changelog.md` |
| `docs/*.md` | `*.md` |
## Integration with core.help
The synced docs are used to build https://core.help:
1. Run `core docs sync --output ../core-php/docs/packages`
2. VitePress builds the combined documentation
3. Deploy to core.help
## See Also
- [Configuration](../../configuration.md) - Project configuration


@@ -1,98 +0,0 @@
# Core Go
Core is a Go framework for the host-uk ecosystem - build, release, and deploy Go, Wails, PHP, and container workloads.
## Installation
```bash
# Via Go (recommended)
go install github.com/host-uk/core/cmd/core@latest
# Or download binary from releases
curl -Lo core https://github.com/host-uk/core/releases/latest/download/core-$(go env GOOS)-$(go env GOARCH)
chmod +x core && sudo mv core /usr/local/bin/
# Verify
core doctor
```
See [Getting Started](getting-started.md) for all installation options including building from source.
## Command Reference
See [CLI](/build/cli/) for full command documentation.
| Command | Description |
|---------|-------------|
| [go](/build/cli/go/) | Go development (test, fmt, lint, cov) |
| [php](/build/cli/php/) | Laravel/PHP development |
| [build](/build/cli/build/) | Build Go, Wails, Docker, LinuxKit projects |
| [ci](/build/cli/ci/) | Publish releases (dry-run by default) |
| [sdk](/build/cli/sdk/) | SDK generation and validation |
| [dev](/build/cli/dev/) | Multi-repo workflow + dev environment |
| [pkg](/build/cli/pkg/) | Package search and install |
| [vm](/build/cli/vm/) | LinuxKit VM management |
| [docs](/build/cli/docs/) | Documentation management |
| [setup](/build/cli/setup/) | Clone repos from registry |
| [doctor](/build/cli/doctor/) | Check development environment |
## Quick Start
```bash
# Go development
core go test # Run tests
core go test --coverage # With coverage
core go fmt # Format code
core go lint # Lint code
# Build
core build # Auto-detect and build
core build --targets linux/amd64,darwin/arm64
# Release (dry-run by default)
core ci # Preview release
core ci --we-are-go-for-launch # Actually publish
# Multi-repo workflow
core dev work # Status + commit + push
core dev work --status # Just show status
# PHP development
core php dev # Start dev environment
core php test # Run tests
```
## Configuration
Core uses `.core/` directory for project configuration:
```
.core/
├── release.yaml # Release targets and settings
├── build.yaml # Build configuration (optional)
└── linuxkit/ # LinuxKit templates
```
And `repos.yaml` in workspace root for multi-repo management.
## Guides
- [Getting Started](getting-started.md) - Installation and first steps
- [Workflows](workflows.md) - Common task sequences
- [Troubleshooting](troubleshooting.md) - When things go wrong
- [Migration](migration.md) - Moving from legacy tools
## Reference
- [Configuration](configuration.md) - All config options
- [Glossary](glossary.md) - Term definitions
## Claude Code Skill
Install the skill to teach Claude Code how to use the Core CLI:
```bash
curl -fsSL https://raw.githubusercontent.com/host-uk/core/main/.claude/skills/core/install.sh | bash
```
See [skill/](skill/) for details.

BIN
build/php/.DS_Store vendored

Binary file not shown.


@@ -1,55 +0,0 @@
# Discovery: L1 Packages vs Standalone php-* Modules
**Issue:** #3
**Date:** 2026-02-21
**Status:** Complete findings filed as issues #4, #5, #6, #7
## L1 Packages (Boot.php files under src/Core/)
| Package | Path | Has Standalone? |
|---------|------|----------------|
| Activity | `src/Core/Activity/` | No |
| Bouncer | `src/Core/Bouncer/` | No |
| Bouncer/Gate | `src/Core/Bouncer/Gate/` | No |
| Cdn | `src/Core/Cdn/` | No |
| Config | `src/Core/Config/` | No |
| Console | `src/Core/Console/` | No |
| Front | `src/Core/Front/` | No (root) |
| Front/Admin | `src/Core/Front/Admin/` | Partial `core/php-admin` extends |
| Front/Api | `src/Core/Front/Api/` | Partial `core/php-api` extends |
| Front/Cli | `src/Core/Front/Cli/` | No |
| Front/Client | `src/Core/Front/Client/` | No |
| Front/Components | `src/Core/Front/Components/` | No |
| Front/Mcp | `src/Core/Front/Mcp/` | Intentional `core/php-mcp` fills |
| Front/Stdio | `src/Core/Front/Stdio/` | No |
| Front/Web | `src/Core/Front/Web/` | No |
| Headers | `src/Core/Headers/` | No |
| Helpers | `src/Core/Helpers/` | No |
| Lang | `src/Core/Lang/` | No |
| Mail | `src/Core/Mail/` | No |
| Media | `src/Core/Media/` | No |
| Search | `src/Core/Search/` | No (admin search is separate concern) |
| Seo | `src/Core/Seo/` | No |
## Standalone Repos
| Repo | Package | Namespace | Relationship |
|------|---------|-----------|-------------|
| `core/php-tenant` | `host-uk/core-tenant` | `Core\Tenant\` | Extension |
| `core/php-admin` | `host-uk/core-admin` | `Core\Admin\` | Extends Front/Admin |
| `core/php-api` | `host-uk/core-api` | `Core\Api\` | Extends Front/Api |
| `core/php-content` | `host-uk/core-content` | `Core\Mod\Content\` | Extension |
| `core/php-commerce` | `host-uk/core-commerce` | `Core\Mod\Commerce\` | Extension |
| `core/php-agentic` | `host-uk/core-agentic` | `Core\Mod\Agentic\` | Extension |
| `core/php-mcp` | `host-uk/core-mcp` | `Core\Mcp\` | Fills Front/Mcp shell |
| `core/php-developer` | `host-uk/core-developer` | `Core\Developer\` | Extension (also needs core-admin) |
| `core/php-devops` | *(DevOps tooling)* | N/A | Not a PHP module |
## Overlaps Found
See issues filed:
- **#4** `Front/Api` rate limiting vs `core/php-api` `RateLimitApi` middleware double rate limiting risk
- **#5** `Core\Search` vs `core/php-admin` search subsystem dual registries
- **#6** `Core\Activity` UI duplicated in `core/php-admin` and `core/php-developer`
- **#7** Summary issue with full analysis

docs/RFC.md Normal file (+67)

@@ -0,0 +1,67 @@
# RFC Index — Lethean Ecosystem Specifications
> Request For Contribution — design specifications that define how the ecosystem works.
> Each RFC is detailed enough that an agent can implement the described system from the document alone.
## How to Read
Start with the category that matches your task. Each RFC is self-contained — you don't need to read them in order. If you're contributing code, read RFC-025 (Agent Experience) first — it defines the conventions all code must follow.
## Core Framework
| RFC | Title | Status |
|-----|-------|--------|
| [RFC-021](specs/RFC-021-CORE-PLATFORM-ARCHITECTURE.md) | Core Platform Architecture | Draft |
| [RFC-025](specs/RFC-025-AGENT-EXPERIENCE.md) | Agent Experience (AX) Design Principles | Draft |
| [RFC-002](specs/RFC-002-EVENT-DRIVEN-MODULES.md) | Event-Driven Module Loading | Implemented |
| [RFC-003](specs/RFC-003-CONFIG-CHANNELS.md) | Config Channels | Implemented |
| [RFC-004](specs/RFC-004-ENTITLEMENTS.md) | Entitlements and Feature System | Implemented |
| [RFC-024](specs/RFC-024-ISSUE-TRACKER.md) | Issue Tracker and Sprint System | Draft |
## Commerce and Products
| RFC | Title | Status |
|-----|-------|--------|
| [RFC-005](specs/RFC-005-COMMERCE-MATRIX.md) | Commerce Entity Matrix | Implemented |
| [RFC-006](specs/RFC-006-COMPOUND-SKU.md) | Compound SKU Format | Implemented |
| [RFC-001](specs/RFC-001-HLCRF-COMPOSITOR.md) | HLCRF Compositor | Implemented |
## Cryptography and Security
| RFC | Title | Status |
|-----|-------|--------|
| [RFC-011](specs/RFC-011-OSS-DRM.md) | Open Source DRM for Independent Artists | Proposed |
| [RFC-007](specs/RFC-007-LTHN-HASH.md) | LTHN Quasi-Salted Hash Algorithm | Implemented |
| [RFC-008](specs/RFC-008-PRE-OBFUSCATION-LAYER.md) | Pre-Obfuscation Layer Protocol for AEAD Ciphers | Implemented |
| [RFC-009](specs/RFC-009-SIGIL-TRANSFORMATION.md) | Sigil Transformation Framework | Implemented |
| [RFC-010](specs/RFC-010-TRIX-CONTAINER.md) | TRIX Binary Container Format | Implemented |
| [RFC-015](specs/RFC-015-STIM.md) | STIM Encrypted Container Format | Implemented |
| [RFC-016](specs/RFC-016-TRIX-PGP.md) | TRIX PGP Encryption Format | Implemented |
| [RFC-017](specs/RFC-017-LTHN-KEY-DERIVATION.md) | LTHN Key Derivation | Implemented |
| [RFC-019](specs/RFC-019-STMF.md) | STMF Secure To-Me Form | Implemented |
| [RFC-020](specs/RFC-020-WASM-API.md) | WASM Decryption API | Implemented |
## Data and Messaging
| RFC | Title | Status |
|-----|-------|--------|
| [RFC-012](specs/RFC-012-SMSG-FORMAT.md) | SMSG Container Format | Implemented |
| [RFC-013](specs/RFC-013-DATANODE.md) | DataNode In-Memory Filesystem | Implemented |
| [RFC-014](specs/RFC-014-TIM.md) | Terminal Isolation Matrix (TIM) | Implemented |
| [RFC-018](specs/RFC-018-BORGFILE.md) | Borgfile Compilation | Implemented |
## Lethean Network (Legacy)
| RFC | Title | Status |
|-----|-------|--------|
| [RFC-0001](specs/RFC-0001-network-overview.md) | Lethean Network Overview | Implemented |
| [RFC-0002](specs/RFC-0002-service-descriptor-protocol.md) | Service Descriptor Protocol (SDP) | Implemented |
| [RFC-0003](specs/RFC-0003-exit-node-architecture.md) | Exit Node Architecture | Implemented |
| [RFC-0004](specs/RFC-0004-payment-dispatcher-protocol.md) | Payment and Dispatcher Protocol | Implemented |
| [RFC-0005](specs/RFC-0005-client-protocol.md) | Client Protocol | Implemented |
## Contributing
New RFCs follow the numbering scheme `RFC-NNN-TITLE.md` (3-digit, uppercase title). Use RFC-011 (OSS DRM) as the reference for detail level — an agent should be able to implement the system from the document alone.
All contributions must follow [RFC-025: Agent Experience](specs/RFC-025-AGENT-EXPERIENCE.md).

docs/ai/index.md Normal file (+57)

@@ -0,0 +1,57 @@
---
title: AI
description: LEM — Lethean Evaluation Model, training pipeline, scoring, and inference
---
# AI
`forge.lthn.ai/lthn/lem`
LEM (Lethean Evaluation Model) is an AI training and evaluation platform. A 1-billion-parameter model trained with 5 axioms consistently outperforms untrained models 27 times its size. 29 models tested, 3,000+ individual runs, two independent probe sets. Fully reproducible on Apple Silicon.
## Benchmark Highlights
| Model | Params | v2 Score | Notes |
|-------|--------|----------|-------|
| Gemma3 12B + LEK kernel | 12B | **23.66** | Best kernel-boosted |
| Gemma3 27B + LEK kernel | 27B | 23.26 | |
| **LEK-Gemma3 1B baseline** | **1B** | **21.74** | **Axioms in weights** |
| Base Gemma3 4B | 4B | 21.12 | Untrained |
| Base Gemma3 12B | 12B | 20.47 | Untrained |
## Packages
### pkg/lem
The core engine — 75+ files covering the full pipeline:
- **Training**: distillation, conversation generation, attention analysis, grammar integration
- **Scoring**: heuristic probes, tiered scoring, coverage analysis, judge evaluation
- **Inference**: Metal and mlx-lm backends, worker pool, client API
- **Data**: InfluxDB time-series storage, Parquet export, zstd compression, ingestion
- **Publishing**: Forgejo, Hugging Face, Docker registry integration
- **Analytics**: cluster analysis, feature extraction, metrics, comparison tools
### pkg/lab
LEM Lab — model store, configuration, local experimentation environment.
### pkg/heuristic
Standalone heuristic scoring engine for probe evaluation.
## Binaries
| Command | Purpose |
|---------|---------|
| `lemcmd` | Main CLI — training, scoring, publishing |
| `scorer` | Standalone scoring binary |
| `lem-desktop` | Desktop app (LEM Lab UI) |
| `composure-convert` | Training data format conversion |
| `dedup-check` | Dataset deduplication checker |
## Repository
- **Source**: [forge.lthn.ai/lthn/lem](https://forge.lthn.ai/lthn/lem)
- **Go module**: `forge.lthn.ai/lthn/lem`
- **Models**: [huggingface.co/lthn](https://huggingface.co/lthn)

docs/getting-started.md Normal file (+32)

@@ -0,0 +1,32 @@
---
title: Where to Start
description: Choose your path through the Core platform based on what you're building
---
# Where to Start
Not sure where to begin? Pick the path that matches what you're building.
## I want to build a Go application
1. Read the [Go overview](go/index.md) to understand the DI framework
2. Follow the [Getting Started guide](go/getting-started.md) for setup
3. Explore [Go Packages](go/packages/index.md) for AI, ML, crypto, and more
## I want to build a PHP application
1. Read the [PHP overview](php/index.md) for the framework architecture
2. Follow [Installation](php/getting-started/installation.md) and [Quick Start](php/getting-started/quick-start.md)
3. Learn about [Modules](php/framework/modules.md) and [Events](php/framework/events.md)
## I want to use the CLI
1. Read the [CLI overview](cli/index.md) for all available commands
2. Run `core doctor` to check your environment
3. Explore [Multi-Repo tools](cli/dev/index.md) for managing multiple repositories
## I want to deploy
1. Choose a [deployment target](deploy/index.md) — Docker, PHP, or LinuxKit
2. Set up CI with [`core ci`](cli/ci/index.md)
3. [Publish](publish/index.md) to package managers, registries, or GitHub


@@ -1,15 +1,60 @@
# Configuration
Core uses `.core/` directory for project configuration. Config files are auto-discovered — commands need zero arguments.
## Quick Reference
| File | Scope | Package | Purpose |
|------|-------|---------|---------|
| `~/.core/config.yaml` | User | go-config | Global settings (Viper) |
| `.core/build.yaml` | Project | go-build | Build targets, flags, signing |
| `.core/release.yaml` | Project | go-build | Publishers, changelog, SDK gen |
| `.core/php.yaml` | Project | go-build | PHP/Laravel dev, test, deploy |
| `.core/test.yaml` | Project | go-container | Named test commands |
| `.core/manifest.yaml` | App | go-scm | Providers, daemons, permissions |
| `.core/workspace.yaml` | Workspace | agent | Active package, paths |
| `.core/repos.yaml` | Workspace | go-scm | Repo registry + dependencies |
| `.core/work.yaml` | Workspace | go-scm | Sync policy, agent heartbeat |
| `.core/git.yaml` | Machine | go-scm | Local git state (gitignored) |
| `.core/kb.yaml` | Workspace | go-scm | Wiki mirror, Qdrant search |
| `.core/linuxkit/*.yml` | Project | go-container | VM templates |
**Scopes:**
- **User** (`~/.core/`) — global settings, persists across all projects
- **Project** (`{repo}/.core/`) — per-repository config, checked into git
- **Workspace** (`{workspace}/.core/`) — multi-repo workspace config, checked into git
- **Machine** (`{workspace}/.core/`) — per-machine state, gitignored
**Discovery patterns:**
- **Fixed path** — `build.yaml`, `release.yaml`, `test.yaml`, `manifest.yaml`
- **Walk-up** — `workspace.yaml`, `repos.yaml` (search current dir → parents → home)
- **Direct load** — `work.yaml`, `git.yaml`, `kb.yaml` (from workspace root)
## Directory Structure
```
.core/
├── release.yaml # Release configuration
├── build.yaml # Build configuration (optional)
├── php.yaml # PHP configuration (optional)
└── linuxkit/ # LinuxKit templates
~/.core/ # User-level (global)
├── config.yaml # Global settings
├── plugins/ # Plugin discovery
├── known_hosts # SSH known hosts
└── linuxkit/ # User LinuxKit templates
{workspace}/.core/ # Workspace-level (shared)
├── workspace.yaml # Active package, paths
├── repos.yaml # Repository registry
├── work.yaml # Sync policy, agent heartbeat
├── git.yaml # Machine-local git state (gitignored)
└── kb.yaml # Knowledge base config
{project}/.core/ # Project-level (per-repo)
├── build.yaml # Build configuration
├── release.yaml # Release configuration
├── php.yaml # PHP/Laravel configuration
├── test.yaml # Test commands
├── manifest.yaml # Application manifest
└── linuxkit/ # LinuxKit templates
├── server.yml
└── dev.yml
```
@ -298,6 +343,168 @@ repos:
| `product` | User-facing applications | Foundation + modules |
| `template` | Starter templates | Any |
## workspace.yaml
Workspace-level configuration. Discovered by walking up from CWD.
```yaml
version: 1
# Active package for unified commands
active: core-php
# Default package types for setup
default_only:
- foundation
- module
# Paths
packages_dir: ./packages
# Workspace settings
settings:
suggest_core_commands: true
show_active_in_prompt: true
```
**Package:** `forge.lthn.ai/core/agent` · **Discovery:** walk-up from CWD
## work.yaml
Team sync policy. Checked into git (shared across team).
```yaml
version: 1
sync:
interval: 5m
auto_pull: true
auto_push: false
clone_missing: true
agent:
heartbeat_interval: 30s
stale_after: 10m
overlap_warning: true
triggers:
on_activate: sync
on_commit: push
scheduled: "*/5 * * * *"
```
**Package:** `forge.lthn.ai/core/go-scm/repos` · **Discovery:** `{workspaceRoot}/.core/work.yaml`
## git.yaml
Machine-local git state. **Gitignored** — not shared across machines.
```yaml
version: 1
repos:
core-php:
branch: main
remote: origin
last_pull: "2026-03-15T10:00:00Z"
last_push: "2026-03-15T09:45:00Z"
ahead: 0
behind: 0
core-tenant:
branch: main
remote: origin
last_pull: "2026-03-15T10:00:00Z"
agent:
name: cladius
last_heartbeat: "2026-03-15T10:05:00Z"
```
**Package:** `forge.lthn.ai/core/go-scm/repos` · **Discovery:** `{workspaceRoot}/.core/git.yaml`
## kb.yaml
Knowledge base configuration. Controls wiki mirroring and vector search.
```yaml
version: 1
wiki:
enabled: true
directory: kb # Relative to .core/
remote: "ssh://git@forge.lthn.ai:2223/core/wiki.git"
search:
qdrant:
host: qdrant.lthn.sh
port: 6334
collection: openbrain
ollama:
url: http://ollama.lthn.sh
model: embeddinggemma
top_k: 10
```
**Package:** `forge.lthn.ai/core/go-scm/repos` · **Discovery:** `{workspaceRoot}/.core/kb.yaml`
## test.yaml
Named test commands per project. Auto-detected if not present.
```yaml
version: 1
commands:
unit:
run: composer test -- --filter=Unit
env:
APP_ENV: testing
integration:
run: composer test -- --filter=Integration
env:
APP_ENV: testing
DB_DATABASE: test_db
all:
run: composer test
```
**Auto-detection chain** (if no `test.yaml`): `composer.json` → `package.json` → `go.mod` → `pytest` → `Taskfile`
**Package:** `forge.lthn.ai/core/go-container/devenv` · **Discovery:** `{projectDir}/.core/test.yaml`
## manifest.yaml
Application manifest for providers, daemons, and permissions. Supports ed25519 signature verification.
```yaml
version: 1
app:
name: my-provider
namespace: my-provider
description: Custom service provider
providers:
- namespace: my-provider
port: 9900
binary: ./bin/my-provider
args: ["serve"]
elements:
- tag: my-provider-panel
source: /assets/my-provider.js
daemons:
- name: worker
command: ./bin/worker
restart: always
permissions:
- net.listen
- fs.read
```
**Package:** `forge.lthn.ai/core/go-scm/manifest` · **Discovery:** `{appRoot}/.core/manifest.yaml`
---
## Environment Variables

docs/go/index.md Normal file

@ -0,0 +1,243 @@
---
title: Core Go Framework
description: Dependency injection, service lifecycle, and permission framework for Go.
---
# Core Go Framework
Core (`dappco.re/go/core`) is a dependency injection, service lifecycle, and permission framework for Go. It provides a typed service registry, lifecycle hooks, a message-passing bus, named actions with task composition, and an entitlement primitive for permission gating.
This is the foundation layer of the ecosystem. It has no CLI, no GUI, and minimal dependencies (stdlib + go-io + go-log).
## Installation
```bash
go get dappco.re/go/core
```
Requires Go 1.26 or later.
## Quick Example
```go
package main
import "dappco.re/go/core"
func main() {
c := core.New(
core.WithOption("name", "my-app"),
core.WithService(mypackage.Register),
core.WithService(monitor.Register),
core.WithServiceLock(),
)
c.Run()
}
```
`core.New()` returns `*Core`. Services register via factory functions. `Run()` calls `ServiceStartup`, runs the CLI, then `ServiceShutdown`. For error handling, use `RunE()`, which returns an `error` instead of calling `os.Exit`.
## Service Registration
```go
func Register(c *core.Core) core.Result {
svc := &MyService{
ServiceRuntime: core.NewServiceRuntime(c, MyOptions{}),
}
return core.Result{Value: svc, OK: true}
}
// In main:
core.New(core.WithService(mypackage.Register))
```
Services implement `Startable` and/or `Stoppable` for lifecycle hooks:
```go
type Startable interface {
OnStartup(ctx context.Context) Result
}
type Stoppable interface {
OnShutdown(ctx context.Context) Result
}
```
## Subsystem Accessors
```go
c.Options() // *Options — input configuration
c.App() // *App — application identity
c.Config() // *Config — runtime settings, feature flags
c.Data() // *Data — embedded assets (Registry[*Embed])
c.Drive() // *Drive — transport handles (Registry[*DriveHandle])
c.Fs() // *Fs — filesystem I/O (sandboxable)
c.Cli() // *Cli — CLI command framework
c.IPC() // *Ipc — message bus internals
c.Log() // *ErrorLog — structured logging
c.Error() // *ErrorPanic — panic recovery
c.I18n() // *I18n — internationalisation
c.Process() // *Process — managed execution (Action sugar)
c.Context() // context.Context — Core's lifecycle context
c.Env(key) // string — environment variable (cached at init)
```
## Primitive Types
```go
// Option — the atom
core.Option{Key: "name", Value: "brain"}
// Options — universal input
opts := core.NewOptions(
core.Option{Key: "name", Value: "myapp"},
core.Option{Key: "port", Value: 8080},
)
opts.String("name") // "myapp"
opts.Int("port") // 8080
// Result — universal output
core.Result{Value: svc, OK: true}
```
## IPC — Message Passing
```go
// Broadcast (fire-and-forget)
c.ACTION(messages.AgentCompleted{Agent: "codex", Status: "completed"})
// Query (first responder)
r := c.QUERY(MyQuery{Name: "brain"})
// Register handler
c.RegisterAction(func(c *core.Core, msg core.Message) core.Result {
if ev, ok := msg.(messages.AgentCompleted); ok { /* handle */ }
return core.Result{OK: true}
})
```
## Named Actions
```go
// Register
c.Action("git.log", func(ctx context.Context, opts core.Options) core.Result {
dir := opts.String("dir")
return c.Process().RunIn(ctx, dir, "git", "log", "--oneline")
})
// Invoke
r := c.Action("git.log").Run(ctx, core.NewOptions(
core.Option{Key: "dir", Value: "/repo"},
))
// Check capability
if c.Action("process.run").Exists() { /* can run commands */ }
```
## Task Composition
```go
c.Task("deploy", core.TaskDef{
Steps: []core.Step{
{Action: "go.build"},
{Action: "go.test"},
{Action: "docker.push"},
{Action: "ansible.deploy", Async: true},
},
})
r := c.Task("deploy").Run(ctx, c, opts)
```
Sequential steps stop on failure. Async steps fire without blocking. `Input: "previous"` pipes the last step's output to the next.
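The stop-on-failure and `Input: "previous"` semantics can be modelled with a small standalone runner. `Result` and `Step` here are simplified stand-ins — the real `TaskDef` steps invoke registered Actions rather than inline functions:

```go
package main

import "fmt"

// Result and Step are simplified stand-ins for the core types.
type Result struct {
	Value any
	OK    bool
}

type Step struct {
	Name  string
	Input string // "previous" pipes the prior step's output in
	Run   func(input any) Result
}

// runSequential stops at the first failed step and, when a step declares
// Input: "previous", passes it the preceding step's output.
func runSequential(steps []Step) Result {
	var last Result
	for _, s := range steps {
		var in any
		if s.Input == "previous" {
			in = last.Value
		}
		last = s.Run(in)
		if !last.OK {
			return last // stop on failure
		}
	}
	return last
}

func main() {
	r := runSequential([]Step{
		{Name: "build", Run: func(any) Result { return Result{Value: "binary", OK: true} }},
		{Name: "push", Input: "previous", Run: func(in any) Result {
			return Result{Value: fmt.Sprintf("pushed %v", in), OK: true}
		}},
	})
	fmt.Println(r.Value)
}
```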
## Process Primitive
```go
// Run a command (delegates to Action "process.run")
r := c.Process().Run(ctx, "git", "log", "--oneline")
r := c.Process().RunIn(ctx, "/repo", "go", "test", "./...")
// Permission by registration:
// No go-process registered → c.Process().Run() returns Result{OK: false}
// go-process registered → executes the command
```
## Registry[T]
Thread-safe named collection — the universal brick for all registries.
```go
r := core.NewRegistry[*MyService]()
r.Set("brain", brainSvc)
r.Get("brain") // Result{brainSvc, true}
r.Has("brain") // true
r.Names() // insertion order
r.List("process.*") // glob match
r.Each(func(name string, svc *MyService) { ... })
r.Lock() // fully frozen
r.Seal() // no new keys, updates OK
// Cross-cutting queries
c.RegistryOf("services").Names()
c.RegistryOf("actions").List("process.*")
```
## Commands
```go
c.Command("deploy/to/homelab", core.Command{
Action: handler,
Managed: "process.daemon", // go-process provides lifecycle
})
```
Path = hierarchy. `deploy/to/homelab` becomes `myapp deploy to homelab` in CLI.
## Utilities
```go
core.ID() // "id-1-a3f2b1" — unique identifier
core.ValidateName("brain") // Result{OK: true}
core.SanitisePath("../../x") // "x"
core.E("op", "msg", err) // structured error
fs.WriteAtomic(path, data) // write-to-temp-then-rename
fs.NewUnrestricted() // full filesystem access
fs.Root() // sandbox root path
```
## Error Handling
All errors use `core.E()`:
```go
return core.E("service.Method", "what failed", underlyingErr)
```
Never use `fmt.Errorf`, `errors.New`, or `log.*`. Core handles all error reporting.
## Documentation
| Page | Covers |
|------|--------|
| [Getting Started](getting-started.md) | Installing the Core CLI, first build |
| [Configuration](configuration.md) | Config files, environment variables |
| [Workflows](workflows.md) | Common task sequences |
| [Packages](packages/index.md) | Ecosystem package reference |
## API Specification
The full API contract lives in [`docs/RFC.md`](https://forge.lthn.ai/core/go/src/branch/dev/docs/RFC.md) — 3,800+ lines covering all 21 sections, 108 findings, and implementation plans.
## Dependencies
Core is deliberately minimal:
- `dappco.re/go/core/io` — abstract storage (local, S3, SFTP, WebDAV)
- `dappco.re/go/core/log` — structured logging
- `github.com/stretchr/testify` — test assertions (test-only)
## Licence
EUPL-1.2

docs/go/packages/go-ai.md Normal file

@ -0,0 +1,130 @@
---
title: go-ai Overview
description: The AI integration hub for the Lethean Go ecosystem — MCP server, metrics, and facade.
---
# go-ai
**Module**: `forge.lthn.ai/core/go-ai`
**Language**: Go 1.26
**Licence**: EUPL-1.2
go-ai is the **integration hub** for the Lethean AI stack. It imports specialised modules and exposes them as a unified MCP server with IDE bridge support, metrics recording, and a thin AI facade.
## Architecture
```
AI Clients (Claude, Cursor, any MCP-capable IDE)
| MCP JSON-RPC (stdio / TCP / Unix)
v
[ go-ai MCP Server ] <-- this module
| | |
| | +-- ide/ subsystem --> Laravel core-agentic (WebSocket)
| +-- go-rag -----------------> Qdrant + Ollama
+-- go-ml ---------------------------> inference backends (go-mlx, go-rocm, ...)
Core CLI (forge.lthn.ai/core/cli) bootstraps and wires everything
```
go-ai is a pure library module. It contains no `main` package. The Core CLI (`core mcp serve`) imports `forge.lthn.ai/core/go-ai/mcp`, constructs an `mcp.Service`, and calls `Run()`.
## Package Layout
```
go-ai/
+-- ai/ # AI facade: RAG queries and JSONL metrics
| +-- ai.go # Package documentation and composition overview
| +-- rag.go # QueryRAGForTask() with graceful degradation
| +-- metrics.go # Event, Record(), ReadEvents(), Summary()
|
+-- cmd/ # CLI command registrations
| +-- daemon/ # core daemon (MCP server lifecycle)
| +-- metrics/ # core ai metrics viewer
| +-- rag/ # re-exports go-rag CLI commands
| +-- security/ # security scanning tools (deps, alerts, secrets, scan, jobs)
| +-- lab/ # homelab monitoring dashboard
| +-- embed-bench/ # embedding model benchmark utility
|
+-- docs/ # This documentation
```
The MCP server and all its tool subsystems are provided by the separate `forge.lthn.ai/core/mcp` module. go-ai wires that server together with the `ai/` facade and the CLI command registrations.
## Imported Modules
| Module | Purpose |
|--------|---------|
| `forge.lthn.ai/core/go-ml` | Inference backends, scoring engine |
| `forge.lthn.ai/core/go-rag` | Vector search, embeddings |
| `forge.lthn.ai/core/go-inference` | Shared TextModel/Backend interfaces |
| `forge.lthn.ai/core/go-process` | Process lifecycle management |
| `forge.lthn.ai/core/go-log` | Structured logging with security levels |
| `forge.lthn.ai/core/go-io` | Sandboxed filesystem abstraction |
| `forge.lthn.ai/core/go-i18n` | Internationalisation |
## Quick Start
go-ai is not run directly. It is consumed by the Core CLI:
```bash
# Start the MCP server on stdio (default)
core mcp serve
# Start on TCP
core mcp serve --mcp-transport tcp --mcp-addr 127.0.0.1:9100
# Run as a background daemon
core daemon start
# View AI metrics
core ai metrics --since 7d
```
## Documentation
| Page | Description |
|------|-------------|
| [MCP Server](mcp-server.md) | Protocol implementation, transports, tool registration |
| [ML Pipeline](ml-pipeline.md) | ML scoring, model management, inference backends |
| [RAG Pipeline](rag.md) | Retrieval-augmented generation, vector search |
| [Agentic Client](agentic.md) | Security scanning, metrics, CLI commands |
| [IDE Bridge](ide-bridge.md) | IDE integration, WebSocket bridge to Laravel |
## Build and Test
```bash
go test ./... # Run all tests
go test -run TestName ./... # Run a single test
go test -v -race ./... # Verbose with race detector
go build ./... # Verify compilation (library -- no binary)
go vet ./... # Vet
```
Tests follow the `_Good`, `_Bad`, `_Ugly` suffix convention:
- `_Good` -- Happy path, valid input
- `_Bad` -- Expected error conditions
- `_Ugly` -- Panics and edge cases
## Dependencies
### Direct
| Module | Role |
|--------|------|
| `forge.lthn.ai/core/cli` | CLI framework (cobra-based command registration) |
| `forge.lthn.ai/core/go-api` | API server framework |
| `forge.lthn.ai/core/go-i18n` | Internationalisation strings |
| `forge.lthn.ai/core/go-inference` | Shared inference interfaces |
| `forge.lthn.ai/core/go-io` | Filesystem abstraction |
| `forge.lthn.ai/core/go-log` | Structured logging |
| `forge.lthn.ai/core/go-ml` | ML scoring and inference |
| `forge.lthn.ai/core/go-process` | Process lifecycle |
| `forge.lthn.ai/core/go-rag` | RAG pipeline |
| `github.com/modelcontextprotocol/go-sdk` | MCP Go SDK |
| `github.com/gorilla/websocket` | WebSocket client (IDE bridge) |
| `github.com/gin-gonic/gin` | HTTP router |
### Indirect (via go-ml and go-rag)
`go-mlx`, `go-rocm`, `go-duckdb`, `parquet-go`, `ollama`, `qdrant/go-client`, and the Arrow ecosystem are transitive dependencies not imported directly by go-ai.


@ -0,0 +1,159 @@
---
title: go-ansible
description: A pure Go Ansible playbook engine -- parses YAML playbooks, inventories, and roles, then executes tasks on remote hosts via SSH without requiring Python.
---
# go-ansible
`forge.lthn.ai/core/go-ansible` is a pure Go implementation of an Ansible playbook engine. It parses standard Ansible YAML playbooks, inventories, and roles, then executes tasks against remote hosts over SSH -- with no dependency on Python or the upstream `ansible-playbook` binary.
## Module Path
```
forge.lthn.ai/core/go-ansible
```
Requires **Go 1.26+**.
## Quick Start
### As a Library
```go
package main
import (
"context"
"fmt"
ansible "forge.lthn.ai/core/go-ansible"
)
func main() {
// Create an executor rooted at the playbook directory
executor := ansible.NewExecutor("/path/to/project")
defer executor.Close()
// Load inventory
if err := executor.SetInventory("/path/to/inventory.yml"); err != nil {
panic(err)
}
// Optionally set extra variables
executor.SetVar("deploy_version", "1.2.3")
// Optionally limit to specific hosts
executor.Limit = "web1"
// Set up callbacks for progress reporting
executor.OnTaskStart = func(host string, task *ansible.Task) {
fmt.Printf("TASK [%s] on %s\n", task.Name, host)
}
executor.OnTaskEnd = func(host string, task *ansible.Task, result *ansible.TaskResult) {
if result.Failed {
fmt.Printf(" FAILED: %s\n", result.Msg)
} else if result.Changed {
fmt.Printf(" changed\n")
} else {
fmt.Printf(" ok\n")
}
}
// Run the playbook
ctx := context.Background()
if err := executor.Run(ctx, "/path/to/playbook.yml"); err != nil {
panic(err)
}
}
```
### As a CLI Command
The package ships with a CLI integration under `cmd/ansible/` that registers a `core ansible` subcommand:
```bash
# Run a playbook
core ansible playbooks/deploy.yml -i inventory/production.yml
# Limit to a single host
core ansible site.yml -l web1
# Pass extra variables
core ansible deploy.yml -e "version=1.2.3" -e "env=prod"
# Dry run (check mode)
core ansible deploy.yml --check
# Increase verbosity
core ansible deploy.yml -vvv
# Test SSH connectivity to a host
core ansible test server.example.com -u root -i ~/.ssh/id_ed25519
```
**CLI flags:**
| Flag | Short | Description |
|------|-------|-------------|
| `--inventory` | `-i` | Inventory file or directory |
| `--limit` | `-l` | Restrict execution to matching hosts |
| `--tags` | `-t` | Only run tasks tagged with these values (comma-separated) |
| `--skip-tags` | | Skip tasks tagged with these values |
| `--extra-vars` | `-e` | Set additional variables (`key=value`, repeatable) |
| `--verbose` | `-v` | Increase verbosity (stack for more: `-vvv`) |
| `--check` | | Dry-run mode -- no changes are made |
## Package Layout
```
go-ansible/
types.go Core data types: Playbook, Play, Task, Inventory, Host, Facts
parser.go YAML parser for playbooks, inventories, tasks, roles
executor.go Execution engine: module dispatch, templating, conditions, loops
modules.go 41 module implementations (shell, apt, docker-compose, etc.)
ssh.go SSH client with key/password auth, become/sudo, file transfer
types_test.go Tests for data types and YAML unmarshalling
parser_test.go Tests for the YAML parser
executor_test.go Tests for the executor engine
ssh_test.go Tests for SSH client construction
mock_ssh_test.go Mock SSH infrastructure for module tests
modules_*_test.go Module-specific tests (cmd, file, svc, infra, adv)
cmd/
ansible/
cmd.go CLI command registration
ansible.go CLI implementation (flags, callbacks, output formatting)
```
## Supported Modules
41 module handlers are implemented, covering the most commonly used Ansible modules:
| Category | Modules |
|----------|---------|
| **Command execution** | `shell`, `command`, `raw`, `script` |
| **File operations** | `copy`, `template`, `file`, `lineinfile`, `blockinfile`, `stat`, `slurp`, `fetch`, `get_url` |
| **Package management** | `apt`, `apt_key`, `apt_repository`, `package`, `pip` |
| **Service management** | `service`, `systemd` |
| **User and group** | `user`, `group` |
| **HTTP** | `uri` |
| **Source control** | `git` |
| **Archive** | `unarchive` |
| **System** | `hostname`, `sysctl`, `cron`, `reboot`, `setup` |
| **Flow control** | `debug`, `fail`, `assert`, `set_fact`, `pause`, `wait_for`, `meta`, `include_vars` |
| **Community** | `community.general.ufw`, `ansible.posix.authorized_key`, `community.docker.docker_compose` |
Both fully-qualified collection names (e.g. `ansible.builtin.shell`) and short-form names (e.g. `shell`) are accepted.
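Both spellings can appear in the same playbook. A minimal task list (host and package names are hypothetical):

```yaml
- hosts: web
  tasks:
    - name: Install nginx (short-form module name)
      apt:
        name: nginx
        state: present

    - name: Allow HTTP (fully-qualified collection name)
      community.general.ufw:
        rule: allow
        port: "80"
```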
## Dependencies
| Module | Purpose |
|--------|---------|
| `forge.lthn.ai/core/cli` | CLI framework (command registration, flags, styled output) |
| `forge.lthn.ai/core/go-log` | Structured logging and contextual error helper (`log.E()`) |
| `golang.org/x/crypto` | SSH protocol implementation (`crypto/ssh`, `crypto/ssh/knownhosts`) |
| `gopkg.in/yaml.v3` | YAML parsing for playbooks, inventories, and role files |
| `github.com/stretchr/testify` | Test assertions (test-only) |
## Licence
EUPL-1.2

docs/go/packages/go-api.md Normal file

@ -0,0 +1,173 @@
---
title: go-api
description: Gin-based REST framework with OpenAPI generation, middleware composition, and SDK codegen for the Lethean Go ecosystem.
---
<!-- SPDX-License-Identifier: EUPL-1.2 -->
# go-api
**Module path:** `forge.lthn.ai/core/go-api`
**Language:** Go 1.26
**Licence:** EUPL-1.2
go-api is a REST framework built on top of [Gin](https://github.com/gin-gonic/gin). It provides
an `Engine` that subsystems plug into via the `RouteGroup` interface. Each ecosystem package
(go-ai, go-ml, go-rag, and others) registers its own route group, and go-api handles the HTTP
plumbing: middleware composition, response envelopes, WebSocket and SSE integration, GraphQL
hosting, Authentik identity, OpenAPI 3.1 specification generation, and client SDK codegen.
go-api is a library. It has no `main` package and produces no binary on its own. Callers
construct an `Engine`, register route groups, and call `Serve()`.
---
## Quick Start
```go
package main
import (
"context"
"os/signal"
"syscall"
api "forge.lthn.ai/core/go-api"
)
func main() {
engine, _ := api.New(
api.WithAddr(":8080"),
api.WithBearerAuth("my-secret-token"),
api.WithCORS("*"),
api.WithRequestID(),
api.WithSecure(),
api.WithSlog(nil),
api.WithSwagger("My API", "A service description", "1.0.0"),
)
engine.Register(myRoutes) // any RouteGroup implementation
ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
defer stop()
_ = engine.Serve(ctx) // blocks until context is cancelled, then shuts down gracefully
}
```
The default listen address is `:8080`. A built-in `GET /health` endpoint is always present.
Every feature beyond panic recovery requires an explicit `With*()` option.
---
## Implementing a RouteGroup
Any type that satisfies the `RouteGroup` interface can register endpoints:
```go
type Routes struct{ service *mypackage.Service }
func (r *Routes) Name() string { return "mypackage" }
func (r *Routes) BasePath() string { return "/v1/mypackage" }
func (r *Routes) RegisterRoutes(rg *gin.RouterGroup) {
rg.GET("/items", r.listItems)
rg.POST("/items", r.createItem)
}
func (r *Routes) listItems(c *gin.Context) {
items, _ := r.service.List(c.Request.Context())
c.JSON(200, api.OK(items))
}
```
Register with the engine:
```go
engine.Register(&Routes{service: svc})
```
---
## Package Layout
| File | Purpose |
|------|---------|
| `api.go` | `Engine` struct, `New()`, `build()`, `Serve()`, `Handler()`, `Channels()` |
| `options.go` | All `With*()` option functions (25 options) |
| `group.go` | `RouteGroup`, `StreamGroup`, `DescribableGroup` interfaces; `RouteDescription` |
| `response.go` | `Response[T]`, `Error`, `Meta`, `OK()`, `Fail()`, `FailWithDetails()`, `Paginated()` |
| `middleware.go` | `bearerAuthMiddleware()`, `requestIDMiddleware()` |
| `authentik.go` | `AuthentikUser`, `AuthentikConfig`, `GetUser()`, `RequireAuth()`, `RequireGroup()` |
| `websocket.go` | `wrapWSHandler()` helper |
| `sse.go` | `SSEBroker`, `NewSSEBroker()`, `Publish()`, `Handler()`, `Drain()`, `ClientCount()` |
| `cache.go` | `cacheStore`, `cacheEntry`, `cacheWriter`, `cacheMiddleware()` |
| `brotli.go` | `brotliHandler`, `newBrotliHandler()`; compression level constants |
| `graphql.go` | `graphqlConfig`, `GraphQLOption`, `WithPlayground()`, `WithGraphQLPath()`, `mountGraphQL()` |
| `i18n.go` | `I18nConfig`, `WithI18n()`, `i18nMiddleware()`, `GetLocale()`, `GetMessage()` |
| `tracing.go` | `WithTracing()`, `NewTracerProvider()` |
| `swagger.go` | `swaggerSpec`, `registerSwagger()`; sequence counter for multi-instance safety |
| `openapi.go` | `SpecBuilder`, `Build()`, `buildPaths()`, `buildTags()`, `envelopeSchema()` |
| `export.go` | `ExportSpec()`, `ExportSpecToFile()` |
| `bridge.go` | `ToolDescriptor`, `ToolBridge`, `NewToolBridge()`, `Add()`, `Describe()`, `Tools()` |
| `codegen.go` | `SDKGenerator`, `Generate()`, `Available()`, `SupportedLanguages()` |
| `cmd/api/` | CLI subcommands: `core api spec` and `core api sdk` |
---
## Dependencies
### Direct
| Module | Role |
|--------|------|
| `github.com/gin-gonic/gin` | HTTP router and middleware engine |
| `github.com/gin-contrib/cors` | CORS policy middleware |
| `github.com/gin-contrib/secure` | Security headers (HSTS, X-Frame-Options, nosniff) |
| `github.com/gin-contrib/gzip` | Gzip response compression |
| `github.com/gin-contrib/slog` | Structured request logging via `log/slog` |
| `github.com/gin-contrib/timeout` | Per-request deadline enforcement |
| `github.com/gin-contrib/static` | Static file serving |
| `github.com/gin-contrib/sessions` | Cookie-backed server sessions |
| `github.com/gin-contrib/authz` | Casbin policy-based authorisation |
| `github.com/gin-contrib/httpsign` | HTTP Signatures verification |
| `github.com/gin-contrib/location/v2` | Reverse proxy header detection |
| `github.com/gin-contrib/pprof` | Go profiling endpoints |
| `github.com/gin-contrib/expvar` | Runtime metrics endpoint |
| `github.com/casbin/casbin/v2` | Policy-based access control engine |
| `github.com/coreos/go-oidc/v3` | OIDC provider discovery and JWT validation |
| `github.com/andybalholm/brotli` | Brotli compression |
| `github.com/gorilla/websocket` | WebSocket upgrade support |
| `github.com/swaggo/gin-swagger` | Swagger UI handler |
| `github.com/swaggo/files` | Swagger UI static assets |
| `github.com/swaggo/swag` | Swagger spec registry |
| `github.com/99designs/gqlgen` | GraphQL schema execution (gqlgen) |
| `go.opentelemetry.io/otel` | OpenTelemetry tracing SDK |
| `go.opentelemetry.io/contrib/.../otelgin` | OpenTelemetry Gin instrumentation |
| `golang.org/x/text` | BCP 47 language tag matching |
| `gopkg.in/yaml.v3` | YAML export of OpenAPI specs |
| `forge.lthn.ai/core/cli` | CLI command registration (for `cmd/api/` subcommands) |
### Ecosystem position
go-api sits at the base of the Lethean HTTP stack. It has no imports from other Lethean
ecosystem modules (beyond `core/cli` for the CLI subcommands). Other packages import go-api
to expose their functionality as REST endpoints:
```
Application main / Core CLI
|
v
go-api Engine <-- this module
| | |
| | +-- OpenAPI spec --> SDKGenerator --> openapi-generator-cli
| +-- ToolBridge --> go-ai / go-ml / go-rag route groups
+-- RouteGroups ----------> any package implementing RouteGroup
```
---
## Further Reading
- [Architecture](architecture.md) -- internals, key types, data flow, middleware stack
- [Development](development.md) -- building, testing, contributing, coding standards


@ -0,0 +1,189 @@
---
title: Lethean Go Blockchain
description: Pure Go implementation of the Lethean CryptoNote/Zano-fork blockchain protocol.
---
# Lethean Go Blockchain
`go-blockchain` is a Go reimplementation of the Lethean blockchain protocol. It provides pure-Go implementations of chain logic, data structures, consensus rules, wallet operations, and networking, delegating only mathematically complex cryptographic operations (ring signatures, Bulletproofs+, Zarcanum proofs) to a cleaned C++ library via CGo.
**Module path:** `forge.lthn.ai/core/go-blockchain`
**Licence:** [European Union Public Licence (EUPL) version 1.2](https://joinup.ec.europa.eu/software/page/eupl/licence-eupl)
## Lineage
```
CryptoNote (van Saberhagen, 2013)
|
IntenseCoin (2017)
|
Lethean (2017-present)
|
Zano rebase (2025) -- privacy upgrades: Zarcanum, CLSAG, Bulletproofs+, confidential assets
|
go-blockchain -- Go reimplementation of the Zano-fork protocol
```
The Lethean mainnet launched on **2026-02-12** with genesis timestamp `1770897600` (12:00 UTC). The chain runs a hybrid PoW/PoS consensus with 120-second block targets.
## Binary
The repo produces a standalone `core-chain` binary via `cmd/core-chain/main.go`.
It uses `cli.Main()` and `cli.WithCommands()` from the Core CLI framework,
keeping the potentially heavy CGo dependencies out of the main `core` binary.
```bash
# Build
core build # uses .core/build.yaml
go build -o ./bin/core-chain ./cmd/core-chain
# TUI block explorer
core-chain chain explorer
# Headless P2P sync
core-chain chain sync
# Sync as a background daemon
core-chain chain sync --daemon
# Stop a running sync daemon
core-chain chain sync --stop
```
Persistent flags on the `chain` parent command:
| Flag | Default | Description |
|------|---------|-------------|
| `--data-dir` | `~/.lethean/chain` | Blockchain data directory |
| `--seed` | `seeds.lthn.io:36942` | Seed peer address |
| `--testnet` | `false` | Use testnet parameters |
The sync subcommand supports daemon mode via go-process, with PID file
locking at `{data-dir}/sync.pid` and automatic registration in the daemon
registry (`~/.core/daemons/`).
## Package structure
```
go-blockchain/
cmd/
core-chain/ Standalone binary entry point (cli.Main)
commands.go AddChainCommands() registration + shared helpers
cmd_explorer.go TUI block explorer subcommand
cmd_sync.go Headless sync subcommand with daemon support
sync_service.go Extracted P2P sync loop
config/ Chain parameters (mainnet/testnet), hardfork schedule
types/ Core data types: Hash, PublicKey, Address, Block, Transaction
wire/ Binary serialisation (consensus-critical, bit-identical to C++)
crypto/ CGo bridge to libcryptonote (ring sigs, BP+, Zarcanum, stealth)
difficulty/ PoW + PoS difficulty adjustment (LWMA variant)
consensus/ Three-layer block/transaction validation
chain/ Blockchain storage, block/tx validation, mempool
p2p/ Levin TCP protocol, peer discovery, handshake
rpc/ Daemon and wallet JSON-RPC client
wallet/ Key management, output scanning, tx construction
mining/ Solo PoW miner (RandomX nonce grinding)
tui/ Terminal dashboard (bubbletea + lipgloss)
.core/
build.yaml Build system config (targets: darwin/arm64, linux/amd64)
```
## Design Principles
1. **Consensus-critical code must be bit-identical** to the C++ implementation. The `wire/` package produces exactly the same binary output as the C++ serialisation for the same input.
2. **No global state.** Chain parameters are passed via `config.ChainConfig` structs, not package-level globals. `Mainnet` and `Testnet` are pre-defined instances.
3. **Interfaces at boundaries.** The `chain/` package defines interfaces for storage backends; the `wallet/` package uses Scanner, Signer, Builder, and RingSelector interfaces for v1/v2+ extensibility.
4. **Test against real chain data.** Wherever possible, tests use actual mainnet block and transaction hex blobs as test vectors, ensuring compatibility with the C++ node.
## Quick Start
```go
import (
"fmt"
"forge.lthn.ai/core/go-blockchain/config"
"forge.lthn.ai/core/go-blockchain/rpc"
"forge.lthn.ai/core/go-blockchain/types"
)
// Query the daemon
client := rpc.NewClient("http://localhost:36941")
info, err := client.GetInfo()
if err != nil {
panic(err)
}
fmt.Printf("Height: %d, Difficulty: %d\n", info.Height, info.PowDifficulty)
// Decode an address
addr, prefix, err := types.DecodeAddress("iTHN...")
if err != nil {
panic(err)
}
fmt.Printf("Spend key: %s\n", addr.SpendPublicKey)
fmt.Printf("Auditable: %v\n", addr.IsAuditable())
// Check hardfork version at a given height
version := config.VersionAtHeight(config.MainnetForks, 15000)
fmt.Printf("Active hardfork at height 15000: HF%d\n", version)
```
## CGo Boundary
The `crypto/` package is the **only** package that crosses the CGo boundary. All other packages are pure Go.
```
Go side C++ side (libcryptonote + librandomx)
+---------+ +---------------------------+
| crypto/ | --- CGo calls ---> | cn_fast_hash() |
| | | generate_key_derivation |
| | | generate_key_image |
| | | check_ring_signature |
| | | CLSAG_GG/GGX/GGXXG_verify|
| | | bulletproof_plus_verify |
| | | zarcanum_verify |
| | | randomx_hash |
+---------+ +---------------------------+
```
When CGo is disabled, stub implementations return errors, allowing the rest of the codebase to compile and run tests that do not require real cryptographic operations.
## Development Phases
The project follows a 9-phase development plan. See the [wiki Development Phases page](https://forge.lthn.ai/core/go-blockchain/wiki/Development-Phases) for detailed phase descriptions.
| Phase | Scope | Status |
|-------|-------|--------|
| 0 | Config + Types | Complete |
| 1 | Wire Serialisation | Complete |
| 2 | CGo Crypto Bridge | Complete |
| 3 | P2P Protocol | Complete |
| 4 | RPC Client | Complete |
| 5 | Chain Storage | Complete |
| 6 | Wallet Core | Complete |
| 7 | Consensus Rules | Complete |
| 8 | Mining | Complete |
## Dependencies
| Module | Purpose |
|--------|---------|
| `forge.lthn.ai/core/cli` | CLI framework (`cli.Main`, cobra, bubbletea TUI) |
| `forge.lthn.ai/core/go` | DI container and service lifecycle |
| `forge.lthn.ai/core/go-process` | Daemon lifecycle, PID file, registry (sync daemon mode) |
| `forge.lthn.ai/core/go-store` | SQLite storage backend for chain data |
| `forge.lthn.ai/core/go-p2p` | Levin protocol implementation |
| `forge.lthn.ai/core/go-crypt` | Cryptographic utilities |
| `golang.org/x/crypto` | SSH, additional crypto primitives |
| `github.com/stretchr/testify` | Test assertions (test-only) |
## Further reading
- [Architecture](architecture.md) -- Package dependencies, CGo boundary, data structures
- [Cryptography](cryptography.md) -- Crypto primitives, hashing, signatures, proofs
- [Networking](networking.md) -- P2P protocol, peer discovery, message types
- [RPC Reference](rpc.md) -- Daemon and wallet JSON-RPC API
- [Chain Parameters](parameters.md) -- Tokenomics, emission, hardfork schedule

---
title: go-build
description: Build system, release pipeline, and SDK generation for the Core ecosystem.
---
# go-build
`forge.lthn.ai/core/go-build` is the build, release, and SDK generation toolkit for Core projects. It provides:
- **Auto-detecting builders** for Go, Wails, Docker, LinuxKit, C++, and Taskfile projects
- **Cross-compilation** with per-target archiving (tar.gz, tar.xz, zip) and SHA-256 checksums
- **Code signing** -- macOS codesign with notarisation, GPG detached signatures, Windows signtool (placeholder)
- **Release automation** -- semantic versioning from git tags, conventional-commit changelogs, multi-target publishing
- **SDK generation** -- OpenAPI spec diffing for breaking-change detection, code generation for TypeScript, Python, Go, and PHP
- **CLI integration** -- registers `core build`, `core ci`, and `core sdk` commands via the Core CLI framework
## Module Path
```
forge.lthn.ai/core/go-build
```
Requires **Go 1.26+**.
## Quick Start
### Build a project
From any project directory containing a recognised marker file:
```bash
core build # Auto-detect type, build for configured targets
core build --targets linux/amd64 # Single target
core build --ci # JSON output for CI pipelines
core build --verbose # Detailed step-by-step output
```
The builder is chosen by marker-file priority:
| Marker file | Builder |
|-------------------|------------|
| `wails.json` | Wails |
| `go.mod` | Go |
| `package.json` | Node (stub)|
| `composer.json` | PHP (stub) |
| `CMakeLists.txt` | C++ |
| `Dockerfile` | Docker |
| `linuxkit.yml` | LinuxKit |
| `Taskfile.yml` | Taskfile |
### Release artifacts
```bash
core build release --we-are-go-for-launch # Build + archive + checksum + publish
core build release # Dry-run (default without the flag)
core build release --draft --prerelease # Mark as draft pre-release
```
### Publish pre-built artifacts
After `core build` has populated `dist/`:
```bash
core ci # Dry-run publish from dist/
core ci --we-are-go-for-launch # Actually publish
core ci --version v1.2.3 # Override version
```
### Generate changelogs
```bash
core ci changelog # From latest tag to HEAD
core ci changelog --from v0.1.0 --to v0.2.0
core ci version # Show determined next version
core ci init # Scaffold .core/release.yaml
```
### SDK operations
```bash
core build sdk # Generate SDKs for all configured languages
core build sdk --lang typescript # Single language
core sdk diff --base v1.0.0 --spec api/openapi.yaml # Breaking-change check
core sdk validate # Validate OpenAPI spec
```
## Package Layout
```
forge.lthn.ai/core/go-build/
|
|-- cmd/
| |-- build/ CLI commands for `core build` (build, from-path, pwa, sdk, release)
| |-- ci/ CLI commands for `core ci` (init, changelog, version, publish)
| +-- sdk/ CLI commands for `core sdk` (diff, validate)
|
+-- pkg/
|-- build/ Core build types, config loading, discovery, archiving, checksums
| |-- builders/ Builder implementations (Go, Wails, Docker, LinuxKit, C++, Taskfile)
| +-- signing/ Code-signing implementations (macOS codesign, GPG, Windows stub)
|
|-- release/ Release orchestration, versioning, changelog, config
| +-- publishers/ Publisher implementations (GitHub, Docker, npm, Homebrew, Scoop, AUR, Chocolatey, LinuxKit)
|
+-- sdk/ OpenAPI SDK generation and breaking-change diffing
+-- generators/ Language generators (TypeScript, Python, Go, PHP)
```
## Configuration Files
Build and release behaviour is driven by two YAML files in the `.core/` directory.
### `.core/build.yaml`
Controls compilation targets, flags, and signing:
```yaml
version: 1
project:
name: myapp
description: My application
main: ./cmd/myapp
binary: myapp
build:
cgo: false
flags: ["-trimpath"]
ldflags: ["-s", "-w"]
env: []
targets:
- os: linux
arch: amd64
- os: linux
arch: arm64
- os: darwin
arch: arm64
- os: windows
arch: amd64
sign:
enabled: true
gpg:
key: $GPG_KEY_ID
macos:
identity: $CODESIGN_IDENTITY
notarize: false
apple_id: $APPLE_ID
team_id: $APPLE_TEAM_ID
app_password: $APPLE_APP_PASSWORD
```
When no `.core/build.yaml` exists, sensible defaults apply (CGO off, `-trimpath -s -w`, four standard targets).
### `.core/release.yaml`
Controls versioning, changelog filtering, publishers, and SDK generation:
```yaml
version: 1
project:
name: myapp
repository: owner/repo
build:
targets:
- os: linux
arch: amd64
- os: darwin
arch: arm64
publishers:
- type: github
draft: false
prerelease: false
- type: homebrew
tap: owner/homebrew-tap
- type: docker
registry: ghcr.io
image: owner/myapp
tags: ["latest", "{{.Version}}"]
changelog:
include: [feat, fix, perf, refactor]
exclude: [chore, docs, style, test, ci]
sdk:
spec: api/openapi.yaml
languages: [typescript, python, go, php]
output: sdk
diff:
enabled: true
fail_on_breaking: false
```
## Dependencies
| Dependency | Purpose |
|---|---|
| `forge.lthn.ai/core/cli` | CLI command registration and TUI styling |
| `forge.lthn.ai/core/go-io` | Filesystem abstraction (`io.Medium`, `io.Local`) |
| `forge.lthn.ai/core/go-i18n` | Internationalised CLI labels |
| `forge.lthn.ai/core/go-log` | Structured error logging |
| `github.com/Snider/Borg` | XZ compression for tar.xz archives |
| `github.com/getkin/kin-openapi` | OpenAPI spec loading and validation |
| `github.com/oasdiff/oasdiff` | OpenAPI diff and breaking-change detection |
| `gopkg.in/yaml.v3` | YAML config parsing |
| `github.com/leaanthony/debme` | Embedded filesystem anchoring (PWA templates) |
| `github.com/leaanthony/gosod` | Template extraction for PWA builds |
| `golang.org/x/net` | HTML parsing for PWA manifest detection |
| `golang.org/x/text` | Changelog section title casing |
## Licence
EUPL-1.2

---
title: go-cache
description: File-based caching with TTL expiry, storage-agnostic via the go-io Medium interface.
---
# go-cache
`go-cache` is a lightweight, storage-agnostic caching library for Go. It stores
JSON-serialised entries with automatic TTL expiry and path-traversal protection.
**Module path:** `forge.lthn.ai/core/go-cache`
**Licence:** EUPL-1.2
## Quick Start
```go
import (
"fmt"
"time"
"forge.lthn.ai/core/go-cache"
)
func main() {
// Create a cache with default settings:
// - storage: local filesystem (io.Local)
// - directory: .core/cache/ in the working directory
// - TTL: 1 hour
c, err := cache.New(nil, "", 0)
if err != nil {
panic(err)
}
// Store a value
err = c.Set("user/profile", map[string]string{
"name": "Alice",
"role": "admin",
})
if err != nil {
panic(err)
}
// Retrieve it (returns false if missing or expired)
var profile map[string]string
found, err := c.Get("user/profile", &profile)
if err != nil {
panic(err)
}
if found {
fmt.Println(profile["name"]) // Alice
}
}
```
## Package Layout
| File | Purpose |
|-----------------|-------------------------------------------------------------|
| `cache.go` | Core types (`Cache`, `Entry`), CRUD operations, key helpers |
| `cache_test.go` | Tests covering set/get, expiry, delete, clear, defaults |
| `go.mod` | Module definition (Go 1.26) |
## Dependencies
| Module | Version | Role |
|-------------------------------|---------|---------------------------------------------|
| `forge.lthn.ai/core/go-io` | v0.0.3 | Storage abstraction (`Medium` interface) |
| `forge.lthn.ai/core/go-log` | v0.0.1 | Structured logging (indirect, via `go-io`) |
There are no other runtime dependencies. The test suite uses the standard
library only (plus the `MockMedium` from `go-io`).
## Key Concepts
### Storage Backends
The cache does not read or write files directly. All I/O goes through the
`io.Medium` interface defined in `go-io`. This means the same cache logic works
against:
- **Local filesystem** (`io.Local`) -- the default
- **SQLite KV store** (`store.Medium` from `go-io/store`)
- **S3-compatible storage** (`go-io/s3`)
- **In-memory mock** (`io.NewMockMedium()`) -- ideal for tests
Pass any `Medium` implementation as the first argument to `cache.New()`.
### TTL and Expiry
Every entry records both `cached_at` and `expires_at` timestamps. On `Get()`,
if the current time is past `expires_at`, the entry is treated as a cache miss
-- no stale data is ever returned. The default TTL is one hour
(`cache.DefaultTTL`).
### GitHub Cache Keys
The package includes two helper functions that produce consistent cache keys
for GitHub API data:
```go
cache.GitHubReposKey("host-uk") // "github/host-uk/repos"
cache.GitHubRepoKey("host-uk", "core") // "github/host-uk/core/meta"
```
These are convenience helpers used by other packages in the ecosystem (such as
`go-devops`) to avoid key duplication when caching GitHub responses.

---
title: go-config
description: Layered configuration management for the Core framework with file, environment, and in-memory resolution.
---
# go-config
`forge.lthn.ai/core/go-config` provides layered configuration management for applications built on the Core framework. It resolves values through a priority chain -- defaults, file, environment variables, flags -- so that the same codebase works identically across local development, CI, and production without code changes.
## Module Path
```
forge.lthn.ai/core/go-config
```
Requires **Go 1.26+**.
## Quick Start
### Standalone usage
```go
package main
import (
"fmt"
config "forge.lthn.ai/core/go-config"
)
func main() {
cfg, err := config.New() // loads ~/.core/config.yaml if it exists
if err != nil {
panic(err)
}
// Write a value and persist it
_ = cfg.Set("dev.editor", "vim")
_ = cfg.Commit()
// Read it back
var editor string
_ = cfg.Get("dev.editor", &editor)
fmt.Println(editor) // "vim"
}
```
### As a Core framework service
```go
import (
config "forge.lthn.ai/core/go-config"
"forge.lthn.ai/core/go/pkg/core"
)
app, _ := core.New(
core.WithService(config.NewConfigService),
)
// The config service loads automatically during OnStartup.
// Retrieve it later via core.ServiceFor[*config.Service](app).
```
## Package Layout
| File | Purpose |
|-----------------|----------------------------------------------------------------|
| `config.go` | Core `Config` struct -- layered Get/Set, file load, commit |
| `env.go` | Environment variable iteration and prefix-based loading |
| `service.go` | Framework service wrapper with lifecycle (`Startable`) support |
| `config_test.go`| Tests following the `_Good` / `_Bad` / `_Ugly` convention |
## Dependencies
| Module | Role |
|-----------------------------------|-----------------------------------------|
| `forge.lthn.ai/core/go` | Core framework (`core.Config` interface, `ServiceRuntime`) |
| `forge.lthn.ai/core/go-io` | Storage abstraction (`Medium` for reading/writing files) |
| `forge.lthn.ai/core/go-log` | Contextual error helper (`E()`) |
| `github.com/spf13/viper` | Underlying configuration engine |
| `gopkg.in/yaml.v3` | YAML serialisation for `Commit()` |
## Configuration Priority
Values are resolved in ascending priority order:
1. **Defaults** -- hardcoded fallbacks (via `Set()` before any file load)
2. **File** -- YAML loaded from `~/.core/config.yaml` (or a custom path)
3. **Environment variables** -- prefixed with `CORE_CONFIG_` by default
4. **Explicit Set()** -- in-memory overrides applied at runtime
Environment variables always override file values. An explicit `Set()` call overrides everything.
## Key Access
All keys use **dot notation** for nested values:
```go
cfg.Set("a.b.c", "deep")
var val string
cfg.Get("a.b.c", &val) // "deep"
```
This maps to YAML structure:
```yaml
a:
b:
c: deep
```
## Environment Variable Mapping
Environment variables are mapped to dot-notation keys by:
1. Stripping the prefix (default `CORE_CONFIG_`)
2. Lowercasing
3. Replacing `_` with `.`
For example, `CORE_CONFIG_DEV_EDITOR=nano` resolves to key `dev.editor` with value `"nano"`.
You can change the prefix with `WithEnvPrefix`:
```go
cfg, _ := config.New(config.WithEnvPrefix("MYAPP"))
// MYAPP_SETTING=secret -> key "setting"
```
## Persisting Changes
`Set()` only writes to memory. Call `Commit()` to flush changes to disk:
```go
cfg.Set("dev.editor", "vim")
cfg.Commit() // writes to ~/.core/config.yaml
```
`Commit()` only persists values that were loaded from the file or explicitly set via `Set()`. Environment variable values are never leaked into the config file.
## Licence
EUPL-1.2

---
title: go-container
description: Container runtime, LinuxKit image builder, and portable development environment management for Go.
---
# go-container
`forge.lthn.ai/core/go-container` provides a container runtime built on LinuxKit and lightweight hypervisors. It manages the full lifecycle of LinuxKit virtual machines -- from building images with embedded templates, to running them via QEMU or Hyperkit, to offering a portable development environment with shell access, project mounting, test execution, and Claude AI integration.
This is **not** a Docker wrapper. It runs real VMs from LinuxKit images (ISO, qcow2, VMDK, raw) using platform-native acceleration (KVM on Linux, HVF on macOS, Hyperkit where available).
## Module path
```
forge.lthn.ai/core/go-container
```
Requires **Go 1.26+**.
## Quick start
### Run a VM from an image
```go
import (
	"context"
	"fmt"
	"log"

	container "forge.lthn.ai/core/go-container"
	"forge.lthn.ai/core/go-io"
)
manager, err := container.NewLinuxKitManager(io.Local)
if err != nil {
log.Fatal(err)
}
ctx := context.Background()
c, err := manager.Run(ctx, "/path/to/image.qcow2", container.RunOptions{
Name: "my-vm",
Memory: 2048,
CPUs: 2,
SSHPort: 2222,
Detach: true,
})
if err != nil {
log.Fatal(err)
}
fmt.Printf("Started container %s (PID %d)\n", c.ID, c.PID)
```
### Use the development environment
```go
import (
	"context"
	"log"

	"forge.lthn.ai/core/go-container/devenv"
	"forge.lthn.ai/core/go-io"
)
dev, err := devenv.New(io.Local)
if err != nil {
log.Fatal(err)
}
// Boot the dev environment (downloads image if needed)
ctx := context.Background()
if err := dev.Boot(ctx, devenv.DefaultBootOptions()); err != nil {
	log.Fatal(err)
}
// Open an SSH shell
if err := dev.Shell(ctx, devenv.ShellOptions{}); err != nil {
	log.Fatal(err)
}
// Run tests inside the VM
if err := dev.Test(ctx, "/path/to/project", devenv.TestOptions{}); err != nil {
	log.Fatal(err)
}
```
### Build and run from a LinuxKit template
```go
import container "forge.lthn.ai/core/go-container"
// List available templates (built-in + user-defined)
templates := container.ListTemplates()
// Apply variables to a template
content, err := container.ApplyTemplate("core-dev", map[string]string{
"SSH_KEY": "ssh-ed25519 AAAA...",
"MEMORY": "4096",
"HOSTNAME": "my-dev-box",
})
```
## Package layout
| Package | Import path | Purpose |
|---------|-------------|---------|
| `container` (root) | `forge.lthn.ai/core/go-container` | Container struct, Manager interface, hypervisor abstraction, LinuxKit manager, state persistence, template engine |
| `devenv` | `forge.lthn.ai/core/go-container/devenv` | Portable dev environment orchestration: boot, shell, serve, test, Claude sandbox, image management |
| `sources` | `forge.lthn.ai/core/go-container/sources` | Image download backends: CDN and GitHub Releases with progress reporting |
| `cmd/vm` | `forge.lthn.ai/core/go-container/cmd/vm` | CLI commands (`core vm run`, `core vm ps`, `core vm stop`, `core vm logs`, `core vm exec`, `core vm templates`) |
## Dependencies
| Module | Purpose |
|--------|---------|
| `forge.lthn.ai/core/go-io` | File system abstraction (`Medium` interface), process utilities |
| `forge.lthn.ai/core/go-config` | Configuration loading (used by `devenv` for `~/.core/config.yaml`) |
| `forge.lthn.ai/core/go-i18n` | Internationalised UI strings (used by `cmd/vm`) |
| `forge.lthn.ai/core/cli` | CLI framework (used by `cmd/vm` for command registration) |
| `github.com/stretchr/testify` | Test assertions |
| `gopkg.in/yaml.v3` | YAML parsing for test configuration |
The root `container` package has only two direct dependencies: `go-io` and the standard library. The `devenv` and `cmd/vm` packages pull in the heavier dependencies.
## CLI commands
When registered via `cmd/vm`, the following commands become available under `core vm`:
| Command | Description |
|---------|-------------|
| `core vm run [image]` | Run a VM from an image file or `--template` |
| `core vm ps` | List running VMs (`-a` for all including stopped) |
| `core vm stop <id>` | Stop a running VM by ID or name (supports partial matching) |
| `core vm logs <id>` | View VM logs (`-f` to follow) |
| `core vm exec <id> <cmd>` | Execute a command inside the VM via SSH |
| `core vm templates` | List available LinuxKit templates |
| `core vm templates show <name>` | Display a template's full YAML |
| `core vm templates vars <name>` | Show a template's required and optional variables |
## Built-in templates
Two LinuxKit templates are embedded in the binary:
- **core-dev** -- Full development environment with Go, Node.js, PHP, Docker-in-LinuxKit, and SSH access
- **server-php** -- Production PHP server with FrankenPHP, Caddy reverse proxy, and health checks
User-defined templates can be placed in `.core/linuxkit/` (workspace-relative) or `~/.core/linuxkit/` (global). They are discovered automatically and merged with the built-in set.
## Licence
EUPL-1.2. See [LICENSE](../LICENSE) for the full text.

---
title: go-crypt
description: Cryptographic primitives, authentication, and trust policy engine for the Lethean agent platform.
---
# go-crypt
**Module**: `forge.lthn.ai/core/go-crypt`
**Licence**: EUPL-1.2
**Language**: Go 1.26
Cryptographic primitives, authentication, and trust policy engine for the
Lethean agent platform. Provides symmetric encryption, password hashing,
OpenPGP authentication with both online and air-gapped modes, RSA key
management, deterministic content hashing, and a three-tier agent access
control system with an audit log and approval queue.
## Quick Start
```go
import (
"forge.lthn.ai/core/go-crypt/crypt"
"forge.lthn.ai/core/go-crypt/auth"
"forge.lthn.ai/core/go-crypt/trust"
)
```
### Encrypt and Decrypt Data
The default cipher is XChaCha20-Poly1305 with Argon2id key derivation. A
random salt and nonce are generated automatically and prepended to the
ciphertext.
```go
// Encrypt with XChaCha20-Poly1305 + Argon2id KDF
ciphertext, err := crypt.Encrypt(plaintext, []byte("my passphrase"))
// Decrypt
plaintext, err := crypt.Decrypt(ciphertext, []byte("my passphrase"))
// Or use AES-256-GCM instead
ciphertext, err := crypt.EncryptAES(plaintext, []byte("my passphrase"))
plaintext, err := crypt.DecryptAES(ciphertext, []byte("my passphrase"))
```
### Hash and Verify Passwords
```go
// Hash with Argon2id (recommended)
hash, err := crypt.HashPassword("hunter2")
// Returns: $argon2id$v=19$m=65536,t=3,p=4$<salt>$<hash>
// Verify (constant-time comparison)
match, err := crypt.VerifyPassword("hunter2", hash)
```
### OpenPGP Authentication
```go
// Create an authenticator backed by a storage medium
a := auth.New(medium,
auth.WithSessionStore(sqliteStore),
auth.WithSessionTTL(8 * time.Hour),
)
// Register a user (generates PGP keypair, stores credentials)
user, err := a.Register("alice", "password123")
// Password-based login (bypasses PGP challenge-response)
session, err := a.Login(userID, "password123")
// Validate a session token
session, err := a.ValidateSession(token)
```
### Trust Policy Evaluation
```go
// Set up a registry and register agents
registry := trust.NewRegistry()
registry.Register(trust.Agent{
Name: "Athena",
Tier: trust.TierFull,
})
registry.Register(trust.Agent{
Name: "Clotho",
Tier: trust.TierVerified,
ScopedRepos: []string{"core/*"},
})
// Evaluate capabilities
engine := trust.NewPolicyEngine(registry)
result := engine.Evaluate("Athena", trust.CapPushRepo, "core/go-crypt")
// result.Decision == trust.Allow
result = engine.Evaluate("Clotho", trust.CapMergePR, "core/go-crypt")
// result.Decision == trust.NeedsApproval
```
## Package Layout
| Package | Import Path | Description |
|---------|-------------|-------------|
| `crypt` | `go-crypt/crypt` | High-level encrypt/decrypt (ChaCha20 + AES), password hashing, HMAC, checksums, key derivation |
| `crypt/chachapoly` | `go-crypt/crypt/chachapoly` | Standalone ChaCha20-Poly1305 AEAD wrapper |
| `crypt/lthn` | `go-crypt/crypt/lthn` | RFC-0004 quasi-salted deterministic hash for content identifiers |
| `crypt/pgp` | `go-crypt/crypt/pgp` | OpenPGP key generation, encryption, decryption, signing, verification |
| `crypt/rsa` | `go-crypt/crypt/rsa` | RSA-OAEP-SHA256 key generation and encryption (2048+ bit) |
| `crypt/openpgp` | `go-crypt/crypt/openpgp` | Service wrapper implementing the `core.Crypt` interface with IPC support |
| `auth` | `go-crypt/auth` | OpenPGP challenge-response authentication, session management, key rotation/revocation |
| `trust` | `go-crypt/trust` | Agent trust model, policy engine, approval queue, audit log |
| `cmd/crypt` | `go-crypt/cmd/crypt` | CLI commands: `crypt encrypt`, `crypt decrypt`, `crypt hash`, `crypt keygen`, `crypt checksum` |
## CLI Commands
The `cmd/crypt` package registers a `crypt` command group with the `core` CLI:
```bash
# Encrypt a file (ChaCha20-Poly1305 by default)
core crypt encrypt myfile.txt -p "passphrase"
core crypt encrypt myfile.txt --aes -p "passphrase"
# Decrypt
core crypt decrypt myfile.txt.enc -p "passphrase"
# Hash a password
core crypt hash "my password" # Argon2id
core crypt hash "my password" --bcrypt # Bcrypt
# Verify a password against a hash
core crypt hash "my password" --verify '$argon2id$v=19$...'
# Generate a random key
core crypt keygen # 32 bytes, hex
core crypt keygen -l 64 --base64 # 64 bytes, base64
# Compute file checksums
core crypt checksum myfile.txt # SHA-256
core crypt checksum myfile.txt --sha512
core crypt checksum myfile.txt --verify "abc123..."
```
## Dependencies
| Module | Role |
|--------|------|
| `forge.lthn.ai/core/go` | Framework: `core.E` error helper, `core.Crypt` interface, `io.Medium` storage abstraction |
| `forge.lthn.ai/core/go-store` | SQLite KV store for persistent session storage |
| `forge.lthn.ai/core/go-io` | `io.Medium` interface used by the auth package |
| `forge.lthn.ai/core/go-log` | Contextual error wrapping via `core.E()` |
| `forge.lthn.ai/core/cli` | CLI framework for the `cmd/crypt` commands |
| `github.com/ProtonMail/go-crypto` | OpenPGP implementation (actively maintained, post-quantum research) |
| `golang.org/x/crypto` | Argon2id, ChaCha20-Poly1305, scrypt, HKDF, bcrypt |
| `github.com/stretchr/testify` | Test assertions (`assert`, `require`) |
No C toolchain or CGo is required. All cryptographic operations use pure Go
implementations.
## Further Reading
- [Architecture](architecture.md) -- internals, data flow, algorithm reference
- [Development](development.md) -- building, testing, contributing
- [History](history.md) -- completed phases, security audit findings, known limitations

---
title: go-devops
description: Multi-repo development workflows, deployment, and release snapshot generation for the Lethean ecosystem.
---
# go-devops
`forge.lthn.ai/core/go-devops` provides multi-repo development workflow
commands (`core dev`), deployment orchestration, documentation sync, and
release snapshot generation (`core.json`).
**Module**: `forge.lthn.ai/core/go-devops`
**Go**: 1.26
**Licence**: EUPL-1.2
## Decomposition
go-devops was originally a 31K LOC monolith covering builds, releases,
infrastructure, Ansible, containers, and code quality. It has since been
decomposed into focused, independently-versioned packages:
| Extracted package | Former location | What moved |
|-------------------|-----------------|------------|
| [go-build](go-build.md) | `build/`, `release/`, `sdk/` | Cross-compilation, code signing, release publishing, SDK generation |
| [go-infra](go-infra.md) | `infra/` | Hetzner Cloud/Robot, CloudNS provider APIs, `infra.yaml` config |
| [go-ansible](go-ansible.md) | `ansible/` | Pure Go Ansible playbook engine (41 modules, SSH) |
| [go-container](go-container.md) | `container/`, `devops/` | LinuxKit VM management, dev environments, image sources |
The `devkit/` package (cyclomatic complexity, coverage, vulnerability scanning)
was merged into `core/lint`.
After decomposition, go-devops retains multi-repo orchestration, deployment,
documentation sync, and manifest snapshot generation.
## What it does
| Area | Summary |
|------|---------|
| **Multi-repo workflows** | Status, commit, push, pull across all repos in a `repos.yaml` workspace |
| **GitHub integration** | Issue listing, PR review status, CI workflow checks |
| **Documentation sync** | Collect docs from multi-repo workspaces into a central location |
| **Deployment** | Coolify PaaS integration |
| **Release snapshots** | Generate `core.json` from `.core/manifest.yaml` for marketplace indexing |
| **Setup** | Repository and CI bootstrapping |
## Package layout
```
go-devops/
├── cmd/ CLI command registrations
│ ├── dev/ Multi-repo workflow commands (work, health, commit, push, pull)
│ ├── docs/ Documentation sync and listing
│ ├── deploy/ Coolify deployment commands
│ ├── setup/ Repository and CI bootstrapping
│ └── gitcmd/ Git helpers
├── deploy/ Deployment integrations (Coolify PaaS)
└── snapshot/ Frozen release manifest generation (core.json)
```
## CLI commands
go-devops registers commands into the `core` CLI binary (built from `forge.lthn.ai/core/cli`). Key commands:
```bash
# Multi-repo development
core dev health # Quick summary across all repos
core dev work # Combined status, commit, push workflow
core dev commit # Claude-assisted commits for dirty repos
core dev push # Push repos with unpushed commits
core dev pull # Pull repos behind remote
# GitHub integration
core dev issues # List open issues across repos
core dev reviews # PRs needing review
core dev ci # GitHub Actions status
# Documentation
core docs list # Scan repos for docs
core docs sync # Copy docs to central location
core docs sync --target gohelp # Sync to go-help format
# Deployment
core deploy servers # List Coolify servers
core deploy apps # List Coolify applications
# Setup
core setup repo # Generate .core/ configuration for a repo
core setup ci # Bootstrap CI configuration
```
## Release snapshots
The `snapshot` package generates a frozen `core.json` manifest from
`.core/manifest.yaml`, embedding the git commit SHA, tag, and build
timestamp. This file is consumed by the marketplace for self-describing
package listings.
```json
{
"schema": 1,
"code": "photo-browser",
"name": "Photo Browser",
"version": "0.1.0",
"commit": "a1b2c3d4...",
"tag": "v0.1.0",
"built": "2026-03-09T15:00:00Z",
"daemons": { ... },
"modules": [ ... ]
}
```
## Further reading
- [go-build](go-build.md) -- Build system, release pipeline, SDK generation
- [go-infra](go-infra.md) -- Infrastructure provider APIs
- [go-ansible](go-ansible.md) -- Pure Go Ansible playbook engine
- [go-container](go-container.md) -- LinuxKit VM management
- [Doc Sync](sync.md) -- Documentation sync across multi-repo workspaces

---
title: go-forge
description: Full-coverage Go client for the Forgejo API with generics-based CRUD, pagination, and code-generated types.
---
# go-forge
`forge.lthn.ai/core/go-forge` is a Go client library for the [Forgejo](https://forgejo.org) REST API. It provides typed access to 18 API domains (repositories, issues, pull requests, organisations, and more) through a single top-level `Forge` client. Types are generated directly from Forgejo's `swagger.v1.json` specification, keeping the library in lockstep with the server.
**Module path:** `forge.lthn.ai/core/go-forge`
**Go version:** 1.26+
**Licence:** EUPL-1.2
## Quick start
```go
package main
import (
"context"
"fmt"
"log"
"forge.lthn.ai/core/go-forge"
)
func main() {
// Create a client with your Forgejo URL and API token.
f := forge.NewForge("https://forge.lthn.ai", "your-token")
ctx := context.Background()
// List repositories for an organisation (first page, 50 per page).
result, err := f.Repos.List(ctx, forge.Params{"org": "core"}, forge.DefaultList)
if err != nil {
log.Fatal(err)
}
for _, repo := range result.Items {
fmt.Println(repo.Name)
}
// Get a single repository.
repo, err := f.Repos.Get(ctx, forge.Params{"owner": "core", "repo": "go-forge"})
if err != nil {
log.Fatal(err)
}
fmt.Printf("%s — %s\n", repo.FullName, repo.Description)
}
```
### Configuration from environment
If you prefer to resolve the URL and token from environment variables rather than hard-coding them, use `NewForgeFromConfig`:
```go
// Priority: flags > env (FORGE_URL, FORGE_TOKEN) > defaults (http://localhost:3000)
f, err := forge.NewForgeFromConfig("", "", forge.WithUserAgent("my-tool/1.0"))
if err != nil {
log.Fatal(err) // no token configured
}
```
Environment variables:
| Variable | Purpose | Default |
|---------------|--------------------------------------|--------------------------|
| `FORGE_URL` | Base URL of the Forgejo instance | `http://localhost:3000` |
| `FORGE_TOKEN` | API token for authentication | (none -- required) |
## Package layout
```
go-forge/
├── client.go HTTP client, auth, error handling, rate limits
├── config.go Config resolution: flags > env > defaults
├── forge.go Top-level Forge struct aggregating all 18 services
├── resource.go Generic Resource[T, C, U] for CRUD operations
├── pagination.go ListPage, ListAll, ListIter — paginated requests
├── params.go Path variable resolution ({owner}/{repo} -> values)
├── repos.go RepoService — repositories, forks, transfers, mirrors
├── issues.go IssueService — issues, comments, labels, reactions
├── pulls.go PullService — pull requests, merges, reviews
├── orgs.go OrgService — organisations, members
├── users.go UserService — users, followers, stars
├── teams.go TeamService — teams, members, repositories
├── admin.go AdminService — site admin, cron, user management
├── branches.go BranchService — branches, branch protections
├── releases.go ReleaseService — releases, assets, tags
├── labels.go LabelService — repo and org labels
├── webhooks.go WebhookService — repo and org webhooks
├── notifications.go NotificationService — notifications, threads
├── packages.go PackageService — package registry
├── actions.go ActionsService — CI/CD secrets, variables, dispatches
├── contents.go ContentService — file read/write/delete
├── wiki.go WikiService — wiki pages
├── commits.go CommitService — statuses, notes
├── misc.go MiscService — markdown, licences, gitignore, version
├── types/ 229 generated Go types from swagger.v1.json
│ ├── generate.go go:generate directive
│ ├── repo.go Repository, CreateRepoOption, EditRepoOption, ...
│ ├── issue.go Issue, CreateIssueOption, ...
│ ├── pr.go PullRequest, CreatePullRequestOption, ...
│ └── ... (36 files total, grouped by domain)
├── cmd/forgegen/ Code generator: swagger spec -> types/*.go
│ ├── main.go CLI entry point
│ ├── parser.go Swagger spec parsing, type extraction, CRUD pair detection
│ └── generator.go Template-based Go source file generation
└── testdata/
└── swagger.v1.json Forgejo API specification (input for codegen)
```
## Services
The `Forge` struct exposes 18 service fields, each handling a different API domain:
| Service | Struct | Embedding | Domain |
|-----------------|---------------------|----------------------------------|--------------------------------------|
| `Repos` | `RepoService` | `Resource[Repository, ...]` | Repositories, forks, transfers |
| `Issues` | `IssueService` | `Resource[Issue, ...]` | Issues, comments, labels, reactions |
| `Pulls` | `PullService` | `Resource[PullRequest, ...]` | Pull requests, merges, reviews |
| `Orgs` | `OrgService` | `Resource[Organization, ...]` | Organisations, members |
| `Users` | `UserService` | `Resource[User, ...]` | Users, followers, stars |
| `Teams` | `TeamService` | `Resource[Team, ...]` | Teams, members, repos |
| `Admin` | `AdminService` | (standalone) | Site admin, cron, user management |
| `Branches` | `BranchService` | `Resource[Branch, ...]` | Branches, protections |
| `Releases` | `ReleaseService` | `Resource[Release, ...]` | Releases, assets, tags |
| `Labels` | `LabelService` | (standalone) | Repo and org labels |
| `Webhooks` | `WebhookService` | `Resource[Hook, ...]` | Repo and org webhooks |
| `Notifications` | `NotificationService` | (standalone) | Notifications, threads |
| `Packages` | `PackageService` | (standalone) | Package registry |
| `Actions` | `ActionsService` | (standalone) | CI/CD secrets, variables, dispatches |
| `Contents` | `ContentService` | (standalone) | File read/write/delete |
| `Wiki` | `WikiService` | (standalone) | Wiki pages |
| `Commits` | `CommitService` | (standalone) | Commit statuses, git notes |
| `Misc` | `MiscService` | (standalone) | Markdown, licences, gitignore, version |
Services that embed `Resource[T, C, U]` inherit `List`, `ListAll`, `Iter`, `Get`, `Create`, `Update`, and `Delete` methods automatically. Standalone services have hand-written methods because their API endpoints are heterogeneous and do not fit a uniform CRUD pattern.
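The embedding pattern can be shown with a self-contained sketch. The field names and method signatures below are illustrative, not the library's actual `Resource[T, C, U]` API; the point is that Go method promotion gives the embedding service the CRUD methods for free:

```go
package main

import "fmt"

// Resource is a minimal stand-in for the generic CRUD helper described
// above, reduced to one type parameter for brevity.
type Resource[T any] struct {
	items map[int64]T
}

func (r *Resource[T]) Create(id int64, v T) { r.items[id] = v }

func (r *Resource[T]) Get(id int64) (T, bool) {
	v, ok := r.items[id]
	return v, ok
}

type Repository struct{ Name string }

// RepoService embeds Resource and inherits Create/Get with T = Repository.
type RepoService struct {
	Resource[Repository]
}

func main() {
	svc := RepoService{Resource[Repository]{items: map[int64]Repository{}}}
	svc.Create(1, Repository{Name: "core"})
	repo, ok := svc.Get(1)
	fmt.Println(ok, repo.Name) // true core
}
```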
## Dependencies
This module has **zero external dependencies**. It relies solely on the Go standard library (`net/http`, `encoding/json`, `context`, `iter`, etc.) and requires Go 1.26 or later.
```
module forge.lthn.ai/core/go-forge
go 1.26.0
```

docs/go/packages/go-git.md
---
title: go-git
description: Multi-repository Git operations library for Go with parallel status checking and Core framework integration.
---
# go-git
**Module:** `forge.lthn.ai/core/go-git`
**Go version:** 1.26+
**Licence:** [EUPL-1.2](../LICENSE.md)
## What it does
go-git is a Go library for orchestrating Git operations across multiple repositories. It was extracted from `forge.lthn.ai/core/go-scm/git/` into a standalone module.
The library provides two layers:
1. **Standalone functions** -- pure Git operations that depend only on the standard library.
2. **Core service integration** -- a `Service` type that plugs into the Core DI framework, exposing Git operations via the query/task message bus.
Typical use cases include multi-repo status dashboards, batch push/pull workflows, and CI tooling that needs to inspect many repositories at once.
## Quick start
### Standalone usage (no framework)
```go
package main
import (
	"context"
	"fmt"

	git "forge.lthn.ai/core/go-git"
)

func main() {
	statuses := git.Status(context.Background(), git.StatusOptions{
		Paths: []string{"/home/dev/repo-a", "/home/dev/repo-b"},
		Names: map[string]string{
			"/home/dev/repo-a": "repo-a",
			"/home/dev/repo-b": "repo-b",
		},
	})

	for _, s := range statuses {
		if s.Error != nil {
			fmt.Printf("%s: error: %v\n", s.Name, s.Error)
			continue
		}
		fmt.Printf("%s [%s]: modified=%d untracked=%d staged=%d ahead=%d behind=%d\n",
			s.Name, s.Branch, s.Modified, s.Untracked, s.Staged, s.Ahead, s.Behind)
	}
}
```
### With the Core framework
```go
package main
import (
	"fmt"

	"forge.lthn.ai/core/go/pkg/core"
	git "forge.lthn.ai/core/go-git"
)

func main() {
	c, err := core.New(
		core.WithService(git.NewService(git.ServiceOptions{
			WorkDir: "/home/dev/projects",
		})),
	)
	if err != nil {
		panic(err)
	}

	// Query status via the message bus.
	result, err := c.Query(git.QueryStatus{
		Paths: []string{"/home/dev/projects/repo-a"},
		Names: map[string]string{"/home/dev/projects/repo-a": "repo-a"},
	})
	if err != nil {
		panic(err)
	}

	statuses := result.([]git.RepoStatus)
	for _, s := range statuses {
		fmt.Printf("%s: dirty=%v ahead=%v\n", s.Name, s.IsDirty(), s.HasUnpushed())
	}
}
```
## Package layout
| File | Purpose |
|------|---------|
| `git.go` | Standalone Git operations -- `Status`, `Push`, `Pull`, `PushMultiple`, error types. Zero framework dependencies. |
| `service.go` | Core framework integration -- `Service`, query types (`QueryStatus`, `QueryDirtyRepos`, `QueryAheadRepos`), task types (`TaskPush`, `TaskPull`, `TaskPushMultiple`). |
| `git_test.go` | Tests for standalone operations using real temporary Git repositories. |
| `service_test.go` | Tests for `Service` filtering helpers (`DirtyRepos`, `AheadRepos`, iterators). |
| `service_extra_test.go` | Integration tests for `Service` query/task handlers against the Core framework. |
## Dependencies
| Dependency | Purpose |
|------------|---------|
| `forge.lthn.ai/core/go/pkg/core` | DI container, `ServiceRuntime`, query/task bus (used only by `service.go`). |
| `github.com/stretchr/testify` | Assertions in tests (test-only). |
The standalone layer (`git.go`) uses only the Go standard library. It shells out to the system `git` binary -- there is no embedded Git implementation.
## Build targets
Defined in `.core/build.yaml`:
| OS | Architecture |
|----|-------------|
| Linux | amd64 |
| Linux | arm64 |
| Darwin | arm64 |
| Windows | amd64 |

---
title: go-html
description: HLCRF DOM compositor with grammar pipeline integration for type-safe server-side HTML generation and optional WASM client rendering.
---
# go-html
`go-html` is a pure-Go library for building HTML documents as type-safe node trees and rendering them to string output. It provides a five-slot layout compositor (Header, Left, Content, Right, Footer -- abbreviated HLCRF), a responsive multi-variant wrapper, a server-side grammar analysis pipeline, a Web Component code generator, and an optional WASM module for client-side rendering.
**Module path:** `forge.lthn.ai/core/go-html`
**Go version:** 1.26
**Licence:** EUPL-1.2
## Quick Start
```go
package main

import (
	"fmt"

	html "forge.lthn.ai/core/go-html"
)

// Item is a minimal example type for the list rendered below.
type Item struct{ Name string }

func main() {
	items := []Item{{Name: "First"}, {Name: "Second"}}

	page := html.NewLayout("HCF").
		H(html.El("nav", html.Text("nav.label"))).
		C(html.El("article",
			html.El("h1", html.Text("page.title")),
			html.Each(items, func(item Item) html.Node {
				return html.El("li", html.Text(item.Name))
			}),
		)).
		F(html.El("footer", html.Text("footer.copyright")))

	fmt.Println(page.Render(html.NewContext()))
}
```
This builds a Header-Content-Footer layout with semantic HTML elements (`<header>`, `<main>`, `<footer>`), ARIA roles, and deterministic `data-block` path identifiers. Text nodes pass through the `go-i18n` translation layer and are HTML-escaped by default.
## Package Layout
| Path | Purpose |
|------|---------|
| `node.go` | `Node` interface and all node types: `El`, `Text`, `Raw`, `If`, `Unless`, `Each`, `EachSeq`, `Switch`, `Entitled` |
| `layout.go` | HLCRF compositor with semantic HTML elements and ARIA roles |
| `responsive.go` | Multi-variant breakpoint wrapper (`data-variant` containers) |
| `context.go` | Rendering context: identity, locale, entitlements, i18n service |
| `render.go` | `Render()` convenience function |
| `path.go` | `ParseBlockID()` for decoding `data-block` path attributes |
| `pipeline.go` | `StripTags`, `Imprint`, `CompareVariants` (server-side only, `!js` build tag) |
| `codegen/codegen.go` | Web Component class generation (closed Shadow DOM) |
| `cmd/codegen/main.go` | Build-time CLI: JSON slot map on stdin, JS bundle on stdout |
| `cmd/wasm/main.go` | WASM entry point exporting `renderToString()` to JavaScript |
## Key Concepts
**Node tree** -- All renderable units implement `Node`, a single-method interface: `Render(ctx *Context) string`. The library composes nodes into trees using `El()` for elements, `Text()` for translated text, and control-flow constructors (`If`, `Unless`, `Each`, `Switch`, `Entitled`).
**HLCRF Layout** -- A five-slot compositor that maps to semantic HTML: `<header>` (H), `<aside>` (L/R), `<main>` (C), `<footer>` (F). The variant string controls which slots render: `"HLCRF"` for all five, `"HCF"` for three, `"C"` for content only. Layouts nest: placing a `Layout` inside another layout's slot produces hierarchical `data-block` paths like `L-0-C-0`.
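Hierarchical paths like `L-0-C-0` decompose into alternating slot/index pairs. A self-contained sketch of that decoding (the real `ParseBlockID` lives in `path.go` and its return type may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// step is one level of a data-block path: which slot, which child index.
type step struct {
	Slot  string
	Index string
}

// parseBlockID splits a hierarchical data-block path such as "L-0-C-0"
// into (slot, index) pairs, outermost layout first.
func parseBlockID(id string) []step {
	parts := strings.Split(id, "-")
	var steps []step
	for i := 0; i+1 < len(parts); i += 2 {
		steps = append(steps, step{Slot: parts[i], Index: parts[i+1]})
	}
	return steps
}

func main() {
	fmt.Println(parseBlockID("L-0-C-0")) // [{L 0} {C 0}]
}
```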
**Responsive variants** -- `Responsive` wraps multiple `Layout` instances with named breakpoints (e.g. `"desktop"`, `"mobile"`). Each variant renders inside a `<div data-variant="name">` container for CSS or JavaScript targeting.
**Grammar pipeline** -- Server-side only. `Imprint()` renders a node tree to HTML, strips tags, tokenises the plain text via `go-i18n/reversal`, and returns a `GrammarImprint` for semantic analysis. `CompareVariants()` computes pairwise similarity scores across responsive variants.
**Web Component codegen** -- `cmd/codegen/` generates ES2022 Web Component classes with closed Shadow DOM from a JSON slot-to-tag mapping. This is a build-time tool, not used at runtime.
## Dependencies
```
forge.lthn.ai/core/go-html
forge.lthn.ai/core/go-i18n (direct, all builds)
forge.lthn.ai/core/go-inference (indirect, via go-i18n)
forge.lthn.ai/core/go-i18n/reversal (server builds only, !js)
github.com/stretchr/testify (test only)
```
Both `go-i18n` and `go-inference` must be present on the local filesystem. The `go.mod` uses `replace` directives pointing to sibling directories (`../go-i18n`, `../go-inference`).
## Further Reading
- [Architecture](architecture.md) -- Node interface, HLCRF layout internals, responsive compositor, grammar pipeline, WASM module, codegen CLI
- [Development](development.md) -- Building, testing, benchmarks, WASM builds, coding standards, contribution guide

---
title: go-i18n Grammar Engine
description: Grammar-aware internationalisation for Go with forward composition and reverse decomposition.
---
# go-i18n Grammar Engine
`forge.lthn.ai/core/go-i18n` is a **grammar engine** for Go. Unlike flat key-value translation systems, it composes grammatically correct output from verbs, nouns, and articles -- and can reverse the process, decomposing inflected text back into base forms with grammatical metadata.
This is the foundation for the Poindexter classification pipeline and the LEM scoring system.
## Architecture
| Layer | Package | Purpose |
|-------|---------|---------|
| Forward | Root (`i18n`) | Compose grammar-aware messages: `T()`, `PastTense()`, `Gerund()`, `Pluralize()`, `Article()` |
| Reverse | `reversal/` | Decompose text back to base forms with tense/number metadata |
| Imprint | `reversal/` | Lossy feature vector projection for grammar fingerprinting |
| Multiply | `reversal/` | Deterministic training data augmentation |
| Classify | Root (`i18n`) | 1B model domain classification pipeline |
| Data | `locales/` | Grammar tables (JSON) -- only `gram.*` data |
## Quick Start
```go
import i18n "forge.lthn.ai/core/go-i18n"
// Initialise the default service (uses embedded en.json)
svc, err := i18n.New()
if err != nil {
	panic(err)
}
i18n.SetDefault(svc)
// Forward composition
i18n.T("i18n.progress.build") // "Building..."
i18n.T("i18n.done.delete", "cache") // "Cache deleted"
i18n.T("i18n.count.file", 5) // "5 files"
i18n.PastTense("commit") // "committed"
i18n.Article("SSH") // "an"
```
```go
import "forge.lthn.ai/core/go-i18n/reversal"
// Reverse decomposition
tok := reversal.NewTokeniser()
tokens := tok.Tokenise("Deleted the configuration files")
// Grammar fingerprinting
imp := reversal.NewImprint(tokens)
sim := imp.Similar(otherImp) // 0.0-1.0
// Training data augmentation
m := reversal.NewMultiplier()
variants := m.Expand("Delete the file") // 4-7 grammatical variants
```
## Documentation
- [Forward API](forward-api.md) -- `T()`, grammar primitives, namespace handlers, Subject builder
- [Reversal Engine](reversal.md) -- 3-tier tokeniser, matching, morphology rules, round-trip verification
- [GrammarImprint](grammar-imprint.md) -- Lossy feature vectors, weighted cosine similarity, reference distributions
- [Locale JSON Schema](locale-schema.md) -- `en.json` structure, grammar table contract, sacred rules
- [Multiplier](multiplier.md) -- Deterministic variant generation, case preservation, round-trip guarantee
## Key Design Decisions
**Grammar engine, not translation file manager.** Consumers bring their own translations. go-i18n provides the grammatical composition and decomposition primitives.
**3-tier lookup.** All grammar lookups follow the same pattern: JSON locale data (tier 1) takes precedence over irregular Go maps (tier 2), which take precedence over regular morphology rules (tier 3). This lets locale files override any built-in rule.
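The precedence can be sketched as a first-match-wins chain. This is an illustrative shape, not the engine's actual code; the maps and rule function are stand-ins for the locale data, irregular tables, and morphology rules:

```go
package main

import "fmt"

// lookup is a sketch of the 3-tier precedence: locale JSON data first,
// then an irregular built-in map, then a regular morphology rule.
func lookup(word string, locale, irregular map[string]string, rule func(string) string) string {
	if v, ok := locale[word]; ok {
		return v // tier 1: locale file overrides everything
	}
	if v, ok := irregular[word]; ok {
		return v // tier 2: built-in irregular forms
	}
	return rule(word) // tier 3: regular rule
}

func main() {
	ed := func(s string) string { return s + "ed" }
	irregular := map[string]string{"go": "went"}
	fmt.Println(lookup("go", nil, irregular, ed))   // went
	fmt.Println(lookup("walk", nil, irregular, ed)) // walked
}
```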
**Round-trip verification.** The reversal engine verifies tier 3 candidates by applying the forward function and checking the result matches the original. This eliminates phantom base forms like "walke" or "processe".
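The round-trip check itself is tiny. A sketch under a toy forward rule (not the engine's actual code): a candidate base form is accepted only when the forward function maps it back to the original word:

```go
package main

import "fmt"

// pastTense is a toy tier-3 regular rule: append "ed".
func pastTense(base string) string { return base + "ed" }

// verifiedBase accepts a candidate base form only if applying the
// forward rule reproduces the original inflected word. This is what
// rules out phantom bases.
func verifiedBase(candidate, original string, forward func(string) string) bool {
	return forward(candidate) == original
}

func main() {
	fmt.Println(verifiedBase("walk", "walked", pastTense))  // true
	fmt.Println(verifiedBase("walke", "walked", pastTense)) // false: "walkeed" != "walked"
}
```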
**Lossy imprints.** GrammarImprint intentionally discards content, preserving only grammatical structure. Two texts with similar grammar produce similar imprints regardless of subject matter. This is a privacy-preserving proxy for semantic similarity.
## Running Tests
```bash
go test ./... # All tests
go test -v ./reversal/ # Reversal engine tests
go test -bench=. ./... # Benchmarks
```
## Status
- **Phase 1** (Harden): Dual-class disambiguation -- design approved, implementation in progress
- **Phase 2** (Reference Distributions): 1B pre-classification pipeline + imprint calibration
- **Phase 3** (Multi-Language): French grammar tables

---
title: go-inference
description: Shared interfaces for text generation backends in the Core Go ecosystem.
---
# go-inference
Module: `forge.lthn.ai/core/go-inference`
go-inference defines the shared contract between GPU-specific inference backends and their consumers. It contains the interfaces, types, and registry that let a consumer load a model and generate text without knowing which GPU runtime is underneath.
## Why it exists
The Core Go ecosystem has multiple inference backends:
- **go-mlx** — Apple Metal on macOS (darwin/arm64), native GPU memory access
- **go-rocm** — AMD ROCm on Linux (linux/amd64), llama-server subprocess
- **go-ml** — scoring engine, also wraps llama.cpp HTTP as a third backend path
And multiple consumers:
- **go-ai** — MCP hub exposing inference via 30+ agent tools
- **go-i18n** — domain classification via Gemma3-1B
- **go-ml** — training pipeline, scoring engine
Without a shared interface layer, every consumer would need to import every backend directly, dragging in CGO bindings, Metal frameworks, and ROCm libraries on platforms that cannot use them.
go-inference breaks that coupling. A backend imports go-inference and implements its interfaces. A consumer imports go-inference and programs against those interfaces. Neither needs to know about the other at compile time.
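This decoupling typically works through a registry: backend packages register a factory at init time, and consumers look backends up by name without importing them. A self-contained sketch of the pattern (the real registry's API lives in `inference.go` and may differ):

```go
package main

import "fmt"

// Backend is a stand-in for the shared interface.
type Backend interface{ Name() string }

type mock struct{}

func (mock) Name() string { return "mock" }

// registry maps backend names to factories; backend packages populate
// it from their init() functions.
var registry = map[string]func() Backend{}

func Register(name string, factory func() Backend) { registry[name] = factory }

// Open constructs a backend by name without importing its package.
func Open(name string) (Backend, bool) {
	f, ok := registry[name]
	if !ok {
		return nil, false
	}
	return f(), true
}

func main() {
	Register("mock", func() Backend { return mock{} })
	b, ok := Open("mock")
	fmt.Println(ok, b.Name()) // true mock
}
```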
## Zero dependencies
The package imports only the Go standard library. The sole exception is `testify` in the test tree. This is a deliberate constraint — the package sits at the base of a dependency graph where backends pull in heavyweight GPU libraries. None of those concerns belong in the interface layer.
## Ecosystem position
```
            go-inference (this package)
                     |
        ┌── implemented by ──┐
        |                    |
     go-mlx               go-rocm
(darwin/arm64,        (linux/amd64,
 Metal GPU)            AMD ROCm)
        |                    |
        └─── consumed by ────┘
                     |
                  go-ml
    (scoring engine, llama.cpp HTTP)
                     |
                  go-ai
          (MCP hub, 30+ tools)
                     |
                 go-i18n
         (domain classification)
## Package layout
| File | Purpose |
|------|---------|
| `inference.go` | `TextModel`, `Backend` interfaces, backend registry, `LoadModel()` entry point |
| `options.go` | `GenerateConfig`, `LoadConfig`, functional options (`WithMaxTokens`, `WithBackend`, etc.) |
| `training.go` | `TrainableModel`, `LoRAConfig`, `Adapter` interfaces, `LoadTrainable()` |
| `discover.go` | `Discover()` scans directories for model files (config.json + *.safetensors) |
## Quick start
```go
import (
	"context"
	"fmt"
	"log"

	"forge.lthn.ai/core/go-inference"
)

// Load a model (auto-detects the best available backend)
m, err := inference.LoadModel("/path/to/model/")
if err != nil {
	log.Fatal(err)
}
defer m.Close()

// Stream tokens
ctx := context.Background()
for tok := range m.Generate(ctx, "Once upon a time", inference.WithMaxTokens(128)) {
	fmt.Print(tok.Text)
}
if err := m.Err(); err != nil {
	log.Fatal(err)
}
```
## Further reading
- [Interfaces](interfaces.md) — `TextModel`, `Backend`, `TrainableModel`, `AttentionInspector`
- [Types](types.md) — `Token`, `GenerateConfig`, `LoadConfig`, `LoRAConfig`, and all supporting structs
- [Backends](backends.md) — How the registry works, how to implement a new backend
## Stability contract
This package is the shared contract. Changes here affect go-mlx, go-rocm, and go-ml simultaneously. The rules:
1. **Never change** existing method signatures on `TextModel` or `Backend`.
2. **Only add** methods when two or more consumers have a concrete need.
3. **New capability** is expressed as separate interfaces that embed `TextModel`, not by extending `TextModel` itself. Consumers opt in via type assertion.
4. **New fields** on `GenerateConfig` or `LoadConfig` are safe — zero-value defaults preserve backwards compatibility.
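Rule 3 can be sketched with stand-in types. `Warmable` below is hypothetical (not a real go-inference interface); it shows the shape: a capability interface embeds the base interface, and consumers discover it via type assertion:

```go
package main

import "fmt"

// TextModel is a stand-in for the base interface.
type TextModel interface{ Generate(prompt string) string }

// Warmable is a hypothetical capability interface: it embeds TextModel
// rather than extending it, so models without it stay valid.
type Warmable interface {
	TextModel
	Warmup()
}

type fastModel struct{ warmed bool }

func (m *fastModel) Generate(p string) string { return "out:" + p }
func (m *fastModel) Warmup()                  { m.warmed = true }

func main() {
	var m TextModel = &fastModel{}
	// Consumers opt in to the capability via type assertion.
	if w, ok := m.(Warmable); ok {
		w.Warmup()
	}
	fmt.Println(m.Generate("hi")) // out:hi
}
```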
## Requirements
- Go 1.26+ (uses `iter.Seq`, `maps`, `slices`)
- No CGO, no build tags, no platform constraints
- Licence: EUPL-1.2

---
title: go-infra
description: Infrastructure provider API clients and YAML-based configuration for managing production environments.
---
# go-infra
`forge.lthn.ai/core/go-infra` provides typed Go clients for infrastructure provider APIs (Hetzner Cloud, Hetzner Robot, CloudNS) and a declarative YAML configuration layer for describing production topology. It also ships CLI commands for production management (`core prod`) and security monitoring (`core monitor`).
The library has no framework dependencies beyond the Go standard library, YAML parsing, and testify for tests. All HTTP communication goes through a shared `APIClient` that handles retries, exponential backoff, and rate-limit compliance automatically.
## Module Path
```
forge.lthn.ai/core/go-infra
```
Requires **Go 1.26+**.
## Quick Start
### Using the API Clients Directly
```go
import (
	"context"
	"os"

	infra "forge.lthn.ai/core/go-infra"
)

ctx := context.Background()

// Hetzner Cloud -- list all servers
hc := infra.NewHCloudClient(os.Getenv("HCLOUD_TOKEN"))
servers, err := hc.ListServers(ctx)

// Hetzner Robot -- list dedicated servers
hr := infra.NewHRobotClient(user, password)
dedicated, err := hr.ListServers(ctx)

// CloudNS -- ensure a DNS record exists
dns := infra.NewCloudNSClient(authID, authPassword)
changed, err := dns.EnsureRecord(ctx, "example.com", "www", "A", "1.2.3.4", 300)
```
### Loading Infrastructure Configuration
```go
import "forge.lthn.ai/core/go-infra"
// Auto-discover infra.yaml by walking up from the current directory
cfg, path, err := infra.Discover(".")
// Or load a specific file
cfg, err := infra.Load("/path/to/infra.yaml")
// Query the configuration
appServers := cfg.AppServers()
for name, host := range appServers {
fmt.Printf("%s: %s (%s)\n", name, host.IP, host.Role)
}
```
### CLI Commands
When registered with the `core` CLI binary, go-infra provides two command groups:
```bash
# Production infrastructure management
core prod status # Health check all hosts, services, and load balancer
core prod setup # Phase 1 foundation: discover topology, create LB, configure DNS
core prod setup --dry-run # Preview what setup would do
core prod setup --step=dns # Run a single setup step
core prod dns list # List DNS records for a zone
core prod dns set www A 1.2.3.4 # Create or update a DNS record
core prod lb status # Show load balancer status and target health
core prod lb create # Create load balancer from infra.yaml
core prod ssh noc # SSH into a named host
# Security monitoring (aggregates GitHub Security findings)
core monitor # Scan current repo
core monitor --all # Scan all repos in registry
core monitor --repo core-php # Scan a specific repo
core monitor --severity high # Filter by severity
core monitor --json # JSON output
```
## Package Layout
| Path | Description |
|------|-------------|
| `client.go` | Shared HTTP API client with retry, exponential backoff, and rate-limit handling |
| `config.go` | YAML infrastructure configuration parser and typed config structs |
| `hetzner.go` | Hetzner Cloud API (servers, load balancers, snapshots) and Hetzner Robot API (dedicated servers) |
| `cloudns.go` | CloudNS DNS API (zones, records, ACME challenge helpers) |
| `cmd/prod/` | CLI commands for production infrastructure management (`core prod`) |
| `cmd/monitor/` | CLI commands for security finding aggregation (`core monitor`) |
## Dependencies
### Direct
| Module | Purpose |
|--------|---------|
| `forge.lthn.ai/core/cli` | CLI framework (cobra-based command registration) |
| `forge.lthn.ai/core/go-ansible` | SSH client used by `core prod status` for host health checks |
| `forge.lthn.ai/core/go-i18n` | Internationalisation strings for monitor command |
| `forge.lthn.ai/core/go-io` | Filesystem abstraction used by monitor's registry lookup |
| `forge.lthn.ai/core/go-log` | Structured error logging |
| `forge.lthn.ai/core/go-scm` | Repository registry for multi-repo monitoring |
| `gopkg.in/yaml.v3` | YAML parsing for `infra.yaml` |
| `github.com/stretchr/testify` | Test assertions |
The core library types (`config.go`, `client.go`, `hetzner.go`, `cloudns.go`) only depend on the standard library and `gopkg.in/yaml.v3`. The heavier dependencies (`cli`, `go-ansible`, `go-scm`, etc.) are confined to the `cmd/` packages.
## Environment Variables
| Variable | Used by | Description |
|----------|---------|-------------|
| `HCLOUD_TOKEN` | `prod setup`, `prod status`, `prod lb` | Hetzner Cloud API bearer token |
| `HETZNER_ROBOT_USER` | `prod setup` | Hetzner Robot API username |
| `HETZNER_ROBOT_PASS` | `prod setup` | Hetzner Robot API password |
| `CLOUDNS_AUTH_ID` | `prod setup`, `prod dns` | CloudNS sub-auth user ID |
| `CLOUDNS_AUTH_PASSWORD` | `prod setup`, `prod dns` | CloudNS auth password |
## Licence
EUPL-1.2

docs/go/packages/go-io.md
---
title: go-io
description: Unified storage abstraction for Go with pluggable backends — local filesystem, S3, SQLite, in-memory, and key-value.
---
# go-io
`forge.lthn.ai/core/go-io` is a storage abstraction library that provides a single `Medium` interface for reading and writing files across different backends. Write your code against `Medium` once, then swap between local disk, S3, SQLite, or in-memory storage without changing a line of business logic.
The library also includes `sigil`, a composable data-transformation pipeline for encoding, compression, hashing, and authenticated encryption.
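The composable-pipeline idea behind sigil can be sketched with plain function composition. The `Transform`/`Chain` names and steps here are illustrative, not the package's actual types:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// Transform is one composable data-transformation step.
type Transform func([]byte) []byte

// Chain composes transforms left to right into a single step.
func Chain(ts ...Transform) Transform {
	return func(b []byte) []byte {
		for _, t := range ts {
			b = t(b)
		}
		return b
	}
}

// tag prefixes a version marker; encode base64-encodes the result.
func tag(b []byte) []byte    { return append([]byte("v1:"), b...) }
func encode(b []byte) []byte { return []byte(base64.StdEncoding.EncodeToString(b)) }

func main() {
	pipeline := Chain(tag, encode)
	fmt.Println(string(pipeline([]byte("hello")))) // djE6aGVsbG8=
}
```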
## Quick Start
```go
import (
	io "forge.lthn.ai/core/go-io"
	"forge.lthn.ai/core/go-io/node"
	"forge.lthn.ai/core/go-io/s3"
)
// Use the pre-initialised local filesystem (unsandboxed, rooted at "/").
content, _ := io.Local.Read("/etc/hostname")
// Create a sandboxed medium restricted to a single directory.
sandbox, _ := io.NewSandboxed("/var/data/myapp")
_ = sandbox.Write("config.yaml", "key: value")
// In-memory filesystem with tar serialisation.
mem := node.New()
mem.AddData("hello.txt", []byte("world"))
tarball, _ := mem.ToTar()
// S3 backend (requires an *s3.Client from the AWS SDK).
bucket, _ := s3.New("my-bucket", s3.WithClient(awsClient), s3.WithPrefix("uploads/"))
_ = bucket.Write("photo.jpg", rawData)
```
## Package Layout
| Package | Import Path | Purpose |
|---------|-------------|---------|
| `io` (root) | `forge.lthn.ai/core/go-io` | `Medium` interface, helper functions, `MockMedium` for tests |
| `local` | `forge.lthn.ai/core/go-io/local` | Local filesystem backend with path sandboxing and symlink-escape protection |
| `s3` | `forge.lthn.ai/core/go-io/s3` | Amazon S3 / S3-compatible backend (Garage, MinIO, etc.) |
| `sqlite` | `forge.lthn.ai/core/go-io/sqlite` | SQLite-backed virtual filesystem (pure Go driver, no CGO) |
| `node` | `forge.lthn.ai/core/go-io/node` | In-memory filesystem implementing both `Medium` and `fs.FS`, with tar round-tripping |
| `datanode` | `forge.lthn.ai/core/go-io/datanode` | Thread-safe in-memory `Medium` backed by Borg's DataNode, with snapshot/restore |
| `store` | `forge.lthn.ai/core/go-io/store` | Group-namespaced key-value store (SQLite), with a `Medium` adapter and Go template rendering |
| `sigil` | `forge.lthn.ai/core/go-io/sigil` | Composable data transformations: encoding, compression, hashing, XChaCha20-Poly1305 encryption |
| `workspace` | `forge.lthn.ai/core/go-io/workspace` | Encrypted workspace service integrated with the Core DI container |
## The Medium Interface
Every storage backend implements the same 18-method interface:
```go
type Medium interface {
	// Content operations
	Read(path string) (string, error)
	Write(path, content string) error
	FileGet(path string) (string, error) // alias for Read
	FileSet(path, content string) error  // alias for Write

	// Streaming (for large files)
	ReadStream(path string) (io.ReadCloser, error)
	WriteStream(path string) (io.WriteCloser, error)
	Open(path string) (fs.File, error)
	Create(path string) (io.WriteCloser, error)
	Append(path string) (io.WriteCloser, error)

	// Directory operations
	EnsureDir(path string) error
	List(path string) ([]fs.DirEntry, error)

	// Metadata
	Stat(path string) (fs.FileInfo, error)
	Exists(path string) bool
	IsFile(path string) bool
	IsDir(path string) bool

	// Mutation
	Delete(path string) error
	DeleteAll(path string) error
	Rename(oldPath, newPath string) error
}
```
All backends implement this interface fully. Backends where a method has no natural equivalent (e.g., `EnsureDir` on S3) provide a safe no-op.
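The payoff of the single interface is that business logic never names a backend. A self-contained sketch with the interface trimmed to two methods (the full 18-method contract is listed above) and a toy in-memory backend:

```go
package main

import "fmt"

// Medium is trimmed to two methods for illustration.
type Medium interface {
	Read(path string) (string, error)
	Write(path, content string) error
}

// memMedium is a toy in-memory backend.
type memMedium map[string]string

func (m memMedium) Read(path string) (string, error) { return m[path], nil }

func (m memMedium) Write(path, content string) error {
	m[path] = content
	return nil
}

// saveConfig works unchanged against any backend: disk, S3, SQLite, or
// this in-memory map.
func saveConfig(m Medium) error { return m.Write("config.yaml", "key: value") }

func main() {
	m := memMedium{}
	if err := saveConfig(m); err != nil {
		panic(err)
	}
	v, _ := m.Read("config.yaml")
	fmt.Println(v) // key: value
}
```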
## Cross-Medium Operations
The root package provides helper functions that accept any `Medium`:
```go
// Copy a file between any two backends.
err := io.Copy(localMedium, "source.txt", s3Medium, "dest.txt")
// Read/Write wrappers that take an explicit medium.
content, err := io.Read(medium, "path")
err := io.Write(medium, "path", "content")
```
## Dependencies
| Dependency | Role |
|------------|------|
| `forge.lthn.ai/core/go-log` | Structured error helper (`E()`) |
| `forge.lthn.ai/Snider/Borg` | DataNode in-memory FS (used by `datanode` package) |
| `github.com/aws/aws-sdk-go-v2` | S3 client (used by `s3` package) |
| `golang.org/x/crypto` | BLAKE2, SHA-3, RIPEMD-160, XChaCha20-Poly1305 (used by `sigil`) |
| `modernc.org/sqlite` | Pure Go SQLite driver (used by `sqlite` and `store`) |
| `github.com/stretchr/testify` | Test assertions |
Go version: **1.26.0**
Licence: **EUPL-1.2**

---
title: go-log
description: Structured logging and error handling for Core applications
---
# go-log
`forge.lthn.ai/core/go-log` provides structured logging and contextual error
handling for Go applications built on the Core framework. It is a small,
zero-dependency library (only `testify` at test time) that replaces ad-hoc
`fmt.Println` / `log.Printf` calls with level-filtered, key-value structured
output and a rich error type that carries operation context through the call
stack.
## Quick Start
```go
import "forge.lthn.ai/core/go-log"
// Use the package-level default logger straight away
log.SetLevel(log.LevelDebug)
log.Info("server started", "port", 8080)
log.Warn("high latency", "ms", 320)
log.Error("request failed", "err", err)
// Security events are always visible at Error level
log.Security("brute force detected", "ip", "10.0.0.1", "attempts", 47)
```
### Creating a Custom Logger
```go
logger := log.New(log.Options{
	Level:      log.LevelInfo,
	Output:     os.Stdout,
	RedactKeys: []string{"password", "token", "secret"},
})
logger.Info("login", "user", "admin", "password", "hunter2")
// Output: 14:32:01 [INF] login user="admin" password="[REDACTED]"
```
### Structured Errors
```go
// Create an error with operational context
err := log.E("db.Connect", "connection refused", underlyingErr)
// Wrap errors as they bubble up through layers
err = log.Wrap(err, "user.Save", "failed to persist user")
// Inspect the chain
log.Op(err) // "user.Save"
log.Root(err) // the original underlyingErr
log.StackTrace(err) // ["user.Save", "db.Connect"]
log.FormatStackTrace(err) // "user.Save -> db.Connect"
```
### Combined Log-and-Return
```go
if err != nil {
	// Logs at Error level AND returns a wrapped error -- one line instead of three
	return log.LogError(err, "handler.Process", "request failed")
}
```
## Package Layout
| File | Purpose |
|------|---------|
| `log.go` | Logger type, log levels, key-value formatting, redaction, default logger, `Username()` helper |
| `errors.go` | `Err` structured error type, creation helpers (`E`, `Wrap`, `WrapCode`, `NewCode`), introspection (`Op`, `ErrCode`, `Root`, `StackTrace`), combined log-and-return helpers (`LogError`, `LogWarn`, `Must`) |
| `log_test.go` | Tests for the Logger: level filtering, key-value output, redaction, injection prevention, security logging |
| `errors_test.go` | Tests for structured errors: creation, wrapping, code propagation, introspection, stack traces, log-and-return helpers |
## Dependencies
| Module | Purpose |
|--------|---------|
| Go standard library only | Runtime -- no external dependencies |
| `github.com/stretchr/testify` | Test assertions (test-only) |
The package deliberately avoids external runtime dependencies. Log rotation is
supported through an optional `RotationWriterFactory` hook that can be wired up
by `core/go-io` or any other provider -- go-log itself carries no file-rotation
code.
## Module Path
```
forge.lthn.ai/core/go-log
```
Requires **Go 1.26+** (uses `iter.Seq` from the standard library).
## Licence
EUPL-1.2

docs/go/packages/go-ml.md
---
title: go-ml
description: ML inference backends, scoring engine, and agent orchestrator for Go.
---
# go-ml
`forge.lthn.ai/core/go-ml` provides pluggable inference backends, a multi-suite scoring engine with ethics-aware probes, GGUF model management, and a concurrent worker pipeline for batch evaluation.
**Module**: `forge.lthn.ai/core/go-ml`
**Size**: ~7,500 LOC across 41 Go files, 6 test files
**Licence**: EUPL-1.2
## Core Capabilities
| Area | Description |
|------|-------------|
| **Inference backends** | MLX (Metal GPU), llama.cpp (subprocess), HTTP (Ollama/vLLM/OpenAI-compatible) |
| **Scoring engine** | Heuristic (regex), semantic (LLM judge), exact match, ethics probes |
| **Agent orchestrator** | Multi-model scoring runs with concurrent worker pool |
| **Model management** | GGUF format parsing, MLX-to-PEFT conversion, Ollama model creation |
| **Data pipeline** | DuckDB storage, Parquet I/O, InfluxDB metrics, HuggingFace publishing |
## Dependencies
| Module | Purpose |
|--------|---------|
| `core/go` | Framework services, lifecycle, process management |
| `core/go-inference` | Shared `TextModel`/`Backend`/`Token` interfaces |
| `core/go-mlx` | Native Metal GPU inference (darwin/arm64) |
| `core/go-process` | Subprocess management for llama-server |
| `core/go-log` | Structured error helpers |
| `core/go-api` | REST API route registration |
| `go-duckdb` | Embedded analytics database |
| `parquet-go` | Columnar data format |
## Architecture Overview
The package is organised around four layers:
```
Backend Layer — inference.go, backend_http.go, backend_llama.go, backend_mlx.go
Pluggable backends behind a common Backend interface.
Scoring Layer — score.go, heuristic.go, judge.go, exact.go, probes.go
Multi-suite concurrent scoring engine.
Agent Layer — agent_execute.go, agent_eval.go, agent_influx.go, agent_ssh.go
Orchestrates checkpoint discovery, evaluation, and result publishing.
Data Layer — db.go, influx.go, parquet.go, export.go, io.go
Storage, metrics, and training data pipeline.
```
## Service Registration
`go-ml` integrates with the Core DI framework via `Service`:
```go
import (
	core "forge.lthn.ai/core/go/pkg/core"
	ml "forge.lthn.ai/core/go-ml"
)

c, _ := core.New(
	core.WithName("ml", ml.NewService(ml.Options{
		OllamaURL:   "http://localhost:11434",
		JudgeURL:    "http://localhost:11434",
		JudgeModel:  "qwen3:8b",
		Suites:      "all",
		Concurrency: 4,
	})),
)
```
On startup, the service registers configured backends and initialises the scoring engine. It implements `Startable` and `Stoppable` for lifecycle integration.
### Service Methods
```go
svc.Generate(ctx, "ollama", "Explain LoRA", ml.DefaultGenOpts())
svc.ScoreResponses(ctx, responses)
svc.RegisterBackend("custom", myBackend)
svc.Backend("ollama")
svc.DefaultBackend()
svc.Judge()
svc.Engine()
```
## REST API
The `api` sub-package provides Gin-based REST endpoints:
| Method | Path | Description |
|--------|------|-------------|
| `GET` | `/v1/ml/backends` | List registered backends with availability |
| `GET` | `/v1/ml/status` | Service readiness, backend list, judge availability |
| `POST` | `/v1/ml/generate` | Generate text against a named backend |
WebSocket channels: `ml.generate`, `ml.status`.
## Quick Start
```go
ctx := context.Background()

// Create an HTTP backend pointing at Ollama
backend := ml.NewHTTPBackend("http://localhost:11434", "qwen3:8b")

// Generate text
result, err := backend.Generate(ctx, "What is LoRA?", ml.DefaultGenOpts())
if err != nil {
	log.Fatal(err)
}
fmt.Println(result.Text)

// Score a batch of responses
judge := ml.NewJudge(backend)
engine := ml.NewEngine(judge, 4, "heuristic,semantic")
scores := engine.ScoreAll(ctx, responses)
```
## Further Reading
- [Scoring Engine](scoring.md) -- Heuristic analysis, LLM judge, probes, benchmarks
- [Backends](backends.md) -- HTTP, llama.cpp, MLX, and the inference adapter
- [Training Pipeline](training.md) -- Data export, LoRA conversion, adapter management
- [Model Management](models.md) -- GGUF parsing, Ollama integration, checkpoint discovery

docs/go/packages/go-mlx.md
---
title: go-mlx
description: Native Metal GPU inference and training for Go on Apple Silicon.
---
# go-mlx
`forge.lthn.ai/core/go-mlx` provides native Apple Metal GPU inference and LoRA fine-tuning for Go. It wraps Apple's [MLX](https://github.com/ml-explore/mlx) framework through the [mlx-c](https://github.com/ml-explore/mlx-c) C API, implementing the `inference.Backend` interface from `forge.lthn.ai/core/go-inference`.
**Platform:** darwin/arm64 only (Apple Silicon M1-M4). A stub provides `MetalAvailable() bool` returning false on all other platforms.
## Quick Start
```go
import (
	"context"
	"fmt"

	"forge.lthn.ai/core/go-inference"
	_ "forge.lthn.ai/core/go-mlx" // registers "metal" backend via init()
)

func main() {
	m, err := inference.LoadModel("/path/to/safetensors/model/")
	if err != nil {
		panic(err)
	}
	defer m.Close()

	ctx := context.Background()
	for tok := range m.Generate(ctx, "What is 2+2?", inference.WithMaxTokens(128)) {
		fmt.Print(tok.Text)
	}
	if err := m.Err(); err != nil {
		panic(err)
	}
}
```
The blank import (`_ "forge.lthn.ai/core/go-mlx"`) auto-registers the Metal backend. All interaction goes through the `go-inference` interfaces -- go-mlx itself exports only Metal-specific memory controls.
## Features
- **Streaming inference** -- token-by-token generation via `iter.Seq[Token]` (range-over-func)
- **Multi-turn chat** -- native chat templates for Gemma 3, Qwen 2/3, and Llama 3
- **Batch inference** -- `Classify` (prefill-only) and `BatchGenerate` (autoregressive) for multiple prompts
- **LoRA fine-tuning** -- low-rank adaptation with AdamW optimiser and gradient checkpointing
- **Quantisation** -- transparent support for 4-bit and 8-bit quantised models via `QuantizedMatmul`
- **Attention inspection** -- extract post-RoPE K vectors from the KV cache for analysis
- **Performance metrics** -- prefill/decode tokens per second, GPU memory usage
## Supported Models
Models must be in **HuggingFace safetensors format** (not GGUF). Architecture is auto-detected from `config.json`:
| Architecture | `model_type` values | Tested sizes |
|-------------|---------------------|-------------|
| Gemma 3 | `gemma3`, `gemma3_text`, `gemma2` | 1B, 4B, 27B |
| Qwen 3 | `qwen3`, `qwen2` | 8B+ |
| Llama 3 | `llama` | 8B+ |
## Package Layout
| Package | Purpose |
|---------|---------|
| Root (`mlx`) | Public API: Metal backend registration, memory controls, training type exports |
| `internal/metal/` | All CGO code: array ops, model loaders, generation, training primitives |
| `mlxlm/` | Alternative subprocess backend via Python's mlx-lm (no CGO required) |
## Metal Memory Controls
These control the Metal allocator directly, not individual models:
```go
import mlx "forge.lthn.ai/core/go-mlx"
mlx.SetCacheLimit(4 << 30) // 4 GB cache limit
mlx.SetMemoryLimit(32 << 30) // 32 GB hard limit
mlx.ClearCache() // release cached memory between chat turns
fmt.Printf("active: %d MB, peak: %d MB\n",
	mlx.GetActiveMemory()/1024/1024,
	mlx.GetPeakMemory()/1024/1024)
```
| Function | Purpose |
|----------|---------|
| `SetCacheLimit(bytes)` | Soft limit on the allocator cache |
| `SetMemoryLimit(bytes)` | Hard ceiling on Metal memory |
| `SetWiredLimit(bytes)` | Wired memory limit |
| `GetActiveMemory()` | Current live allocations in bytes |
| `GetPeakMemory()` | High-water mark since last reset |
| `GetCacheMemory()` | Cached (not yet freed) memory |
| `ClearCache()` | Release cached memory to the OS |
| `ResetPeakMemory()` | Reset the high-water mark |
| `GetDeviceInfo()` | Metal GPU hardware information |
## Performance Baseline
Measured on M3 Ultra (60-core GPU, 96 GB unified memory):
| Operation | Throughput |
|-----------|-----------|
| Gemma3-1B 4-bit prefill | 246 tok/s |
| Gemma3-1B 4-bit decode | 82 tok/s |
| Gemma3-1B 4-bit classify (4 prompts) | 152 prompts/s |
| DeepSeek R1 7B 4-bit decode | 27 tok/s |
| Llama 3.1 8B 4-bit decode | 30 tok/s |
## Documentation
- [Architecture](architecture.md) -- CGO binding layer, lazy evaluation, memory model, attention, KV cache
- [Models](models.md) -- model loading, supported architectures, tokenisation, chat templates
- [Training](training.md) -- LoRA fine-tuning, gradient computation, AdamW optimiser, loss functions
- [Build Guide](build.md) -- prerequisites, CMake setup, build tags, testing
## Downstream Consumers
| Package | Role |
|---------|------|
| `forge.lthn.ai/core/go-ml` | Imports go-inference + go-mlx for the Metal backend training loop |
| `forge.lthn.ai/core/go-i18n` | Gemma3-1B domain classification (Phase 2a) |
| `forge.lthn.ai/core/go-rocm` | Sibling AMD GPU backend, same go-inference interfaces |
## Licence
EUPL-1.2

---
title: go-p2p Overview
description: P2P mesh networking layer for the Lethean network.
---
# go-p2p
P2P networking layer for the Lethean network. Encrypted WebSocket mesh with UEPS wire protocol.
**Module:** `forge.lthn.ai/core/go-p2p`
**Go:** 1.26
**Licence:** EUPL-1.2
## Package Structure
```
go-p2p/
├── node/ P2P mesh: identity, transport, peers, protocol, controller, dispatcher
│ └── levin/ Levin binary protocol (header, storage, varint, connection)
├── ueps/ UEPS wire protocol (RFC-021): TLV packet builder and stream reader
└── logging/ Structured levelled logger with component scoping
```
## What Each Piece Does
| Component | File(s) | Purpose |
|-----------|---------|---------|
| [Identity](identity.md) | `identity.go` | X25519 keypair, node ID derivation, HMAC-SHA256 challenge-response |
| [Transport](transport.md) | `transport.go` | Encrypted WebSocket connections, SMSG encryption, rate limiting |
| [Discovery](discovery.md) | `peer.go` | Peer registry, KD-tree selection, score tracking, allowlist auth |
| [UEPS](ueps.md) | `ueps/packet.go`, `ueps/reader.go` | TLV wire protocol with HMAC integrity (RFC-021) |
| [Routing](routing.md) | `dispatcher.go` | Intent-based packet routing with threat circuit breaker |
| [TIM Bundles](tim.md) | `bundle.go` | Encrypted deployment bundles, tar extraction with Zip Slip defence |
| Messages | `message.go` | Message envelope, payload types, protocol version negotiation |
| Protocol | `protocol.go` | Response validation, structured error handling |
| Controller | `controller.go` | Request-response correlation, remote peer operations |
| Worker | `worker.go` | Incoming message dispatch, miner/profile management interfaces |
| Buffer Pool | `bufpool.go` | `sync.Pool`-backed JSON encoding for hot paths |
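The identity layer's HMAC-SHA256 challenge-response can be sketched with the standard library alone. The X25519 key agreement is elided here -- assume both peers already hold the derived `secret` -- and the helper names are illustrative, not the go-p2p API:

```go
package main

import (
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// respond computes the HMAC-SHA256 tag a peer returns for a challenge.
func respond(secret, challenge []byte) []byte {
	mac := hmac.New(sha256.New, secret)
	mac.Write(challenge)
	return mac.Sum(nil)
}

// verify checks a peer's response in constant time.
func verify(secret, challenge, response []byte) bool {
	return hmac.Equal(respond(secret, challenge), response)
}

func main() {
	secret := []byte("placeholder-for-x25519-derived-key")
	challenge := make([]byte, 32)
	rand.Read(challenge)

	resp := respond(secret, challenge)
	fmt.Println(verify(secret, challenge, resp))              // true
	fmt.Println(verify([]byte("wrong-key"), challenge, resp)) // false
}
```

A fresh random challenge per handshake is what prevents replay: a recorded response is useless against a new challenge.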
## Dependencies
| Module | Purpose |
|--------|---------|
| `forge.lthn.ai/Snider/Borg` | STMF crypto (keypairs), SMSG encryption, TIM bundle format |
| `forge.lthn.ai/Snider/Poindexter` | KD-tree peer scoring and nearest-neighbour selection |
| `github.com/gorilla/websocket` | WebSocket transport |
| `github.com/google/uuid` | Message and peer ID generation |
| `github.com/adrg/xdg` | XDG base directory paths for key and config storage |
## Message Protocol
Every message is a JSON-encoded `Message` struct transported over WebSocket. After handshake, all messages are SMSG-encrypted using the X25519-derived shared secret.
```go
type Message struct {
	ID        string          `json:"id"`      // UUID v4
	Type      MessageType     `json:"type"`    // Determines payload interpretation
	From      string          `json:"from"`    // Sender node ID
	To        string          `json:"to"`      // Recipient node ID
	Timestamp time.Time       `json:"ts"`
	Payload   json.RawMessage `json:"payload"` // Type-specific JSON
	ReplyTo   string          `json:"replyTo,omitempty"` // For request-response correlation
}
```
### Message Types
| Category | Types |
|----------|-------|
| Connection | `handshake`, `handshake_ack`, `ping`, `pong`, `disconnect` |
| Operations | `get_stats`, `stats`, `start_miner`, `stop_miner`, `miner_ack` |
| Deployment | `deploy`, `deploy_ack` |
| Logs | `get_logs`, `logs` |
| Error | `error` (codes 1000--1005) |
## Node Roles
```go
const (
	RoleController NodeRole = "controller" // Orchestrates work distribution
	RoleWorker     NodeRole = "worker"     // Executes compute tasks
	RoleDual       NodeRole = "dual"       // Both controller and worker
)
```
## Architecture Layers
The stack has two distinct protocol layers:
1. **UEPS (low-level)** -- Binary TLV wire protocol with HMAC-SHA256 integrity, intent routing, and threat scoring. Operates beneath the mesh layer. See [ueps.md](ueps.md).
2. **Node mesh (high-level)** -- JSON-over-WebSocket with SMSG encryption. Handles identity, peer management, controller/worker operations, and deployment bundles.
The dispatcher bridges the two layers, routing verified UEPS packets to registered intent handlers whilst enforcing the threat circuit breaker.
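The TLV-plus-HMAC idea behind the UEPS layer can be sketched generically. The exact RFC-021 field widths and layout are not specified in this document, so the 1-byte type and 2-byte big-endian length below are assumptions for illustration only:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// appendTLV appends one type-length-value record: a 1-byte type, a 2-byte
// big-endian length, then the value bytes. Field widths are illustrative;
// RFC-021 defines the real layout.
func appendTLV(buf []byte, typ byte, value []byte) []byte {
	buf = append(buf, typ)
	buf = binary.BigEndian.AppendUint16(buf, uint16(len(value)))
	return append(buf, value...)
}

// seal appends an HMAC-SHA256 tag computed over the whole packet body.
func seal(key, packet []byte) []byte {
	mac := hmac.New(sha256.New, key)
	mac.Write(packet)
	return mac.Sum(packet) // packet || tag
}

func main() {
	pkt := appendTLV(nil, 0x01, []byte("intent:get_stats"))
	pkt = seal([]byte("session-key"), pkt)
	// 3-byte header + 16-byte value + 32-byte HMAC tag
	fmt.Println(len(pkt)) // 51
}
```

The receiver recomputes the tag over everything before the final 32 bytes and rejects the packet on mismatch, before any TLV field is interpreted.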

---
title: go-process
description: Process management with Core IPC integration for Go applications.
---
# go-process
`forge.lthn.ai/core/go-process` is a process management library for spawning,
monitoring, and controlling external processes, with real-time output
streaming via the Core ACTION (IPC) system. It integrates directly with the
[Core DI framework](https://forge.lthn.ai/core/go) as a first-class service.
## Features
- Spawn and manage external processes with full lifecycle tracking
- Real-time stdout/stderr streaming via Core IPC actions
- Ring buffer output capture (default 1 MB, configurable)
- Process pipeline runner with dependency graphs, sequential, and parallel modes
- Daemon mode with PID file locking, health check HTTP server, and graceful shutdown
- Daemon registry for tracking running instances across the system
- Lightweight `exec` sub-package for one-shot command execution with logging
- Thread-safe throughout; designed for concurrent use
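The bounded output capture works like a byte ring: once the buffer is full, the oldest bytes are overwritten so only the most recent output (1 MB by default) is retained. A minimal sketch of that idea -- not the go-process implementation:

```go
package main

import "fmt"

// ringBuffer keeps the last cap(buf) bytes written. Sketch of the bounded
// output-capture idea only, not the go-process implementation.
type ringBuffer struct {
	buf   []byte
	start int
	full  bool
}

func newRingBuffer(size int) *ringBuffer {
	return &ringBuffer{buf: make([]byte, 0, size)}
}

// Write implements io.Writer, overwriting the oldest bytes once full.
func (r *ringBuffer) Write(p []byte) (int, error) {
	for _, b := range p {
		if len(r.buf) < cap(r.buf) {
			r.buf = append(r.buf, b)
		} else {
			r.buf[r.start] = b
			r.start = (r.start + 1) % len(r.buf)
			r.full = true
		}
	}
	return len(p), nil
}

// Bytes returns the retained output in write order.
func (r *ringBuffer) Bytes() []byte {
	if !r.full {
		return append([]byte(nil), r.buf...)
	}
	return append(append([]byte(nil), r.buf[r.start:]...), r.buf[:r.start]...)
}

func main() {
	rb := newRingBuffer(8)
	fmt.Fprint(rb, "hello world") // 11 bytes into an 8-byte ring
	fmt.Println(string(rb.Bytes()))
}
```

Because `Write` never grows the buffer, a chatty child process cannot exhaust memory no matter how much it prints.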
## Quick Start
### Register with Core
```go
import (
	"context"
	"log"

	framework "forge.lthn.ai/core/go/pkg/core"
	"forge.lthn.ai/core/go-process"
)

// Create a Core instance with the process service
c, err := framework.New(
	framework.WithName("process", process.NewService(process.Options{})),
)
if err != nil {
	log.Fatal(err)
}

// Retrieve the typed service
svc, err := framework.ServiceFor[*process.Service](c, "process")
if err != nil {
	log.Fatal(err)
}
```
### Run a Command
```go
// Fire-and-forget (async)
proc, err := svc.Start(ctx, "go", "test", "./...")
if err != nil {
	return err
}
<-proc.Done()
fmt.Println(proc.Output())

// Synchronous convenience
output, err := svc.Run(ctx, "echo", "hello world")
```
### Listen for Events
Process lifecycle events are broadcast through Core's ACTION system:
```go
c.RegisterAction(func(c *framework.Core, msg framework.Message) error {
	switch m := msg.(type) {
	case process.ActionProcessStarted:
		fmt.Printf("Started: %s (PID %d)\n", m.Command, m.PID)
	case process.ActionProcessOutput:
		fmt.Print(m.Line)
	case process.ActionProcessExited:
		fmt.Printf("Exit code: %d (%s)\n", m.ExitCode, m.Duration)
	case process.ActionProcessKilled:
		fmt.Printf("Killed with %s\n", m.Signal)
	}
	return nil
})
```
### Global Convenience API
For applications that only need a single process service, a global singleton
is available:
```go
// Initialise once at startup
process.Init(coreInstance)
// Then use package-level functions anywhere
proc, _ := process.Start(ctx, "ls", "-la")
output, _ := process.Run(ctx, "date")
procs := process.List()
running := process.Running()
```
## Daemon mode
go-process also manages *this process* as a long-running service. Where
`Process` manages child processes, `Daemon` manages the current process's own
lifecycle -- PID file locking, health endpoints, signal handling, and graceful
shutdown.
These types were extracted from `core/cli` to give any Go service daemon
capabilities without depending on the full CLI framework.
### PID file
`PIDFile` enforces single-instance execution. It writes the current PID on
`Acquire()`, detects stale lock files, and cleans up on `Release()`.
```go
pf := process.NewPIDFile("/var/run/myapp.pid")
if err := pf.Acquire(); err != nil {
	log.Fatal("another instance is running")
}
defer pf.Release()
```
### Health server
`HealthServer` provides HTTP `/health` and `/ready` endpoints. Custom health
checks can be added and the ready state toggled independently.
```go
hs := process.NewHealthServer("127.0.0.1:9000")
hs.AddCheck(func() error { return db.Ping() })
hs.Start()
defer hs.Stop(ctx)
hs.SetReady(true)
```
### Daemon orchestration
`Daemon` combines PID file, health server, and signal handling into a single
struct. It listens for `SIGTERM`/`SIGINT` and calls registered shutdown hooks.
```go
d := process.NewDaemon(process.DaemonOptions{
	PIDFile:         "/var/run/myapp.pid",
	HealthAddr:      "127.0.0.1:9000",
	ShutdownTimeout: 30 * time.Second,
})
d.Start()
d.SetReady(true)
d.Run(ctx) // blocks until signal
```
### Daemon registry
The `Registry` tracks all running daemons across the system via JSON files
in `~/.core/daemons/`. When a `Daemon` is configured with a `Registry`, it
auto-registers on start and auto-unregisters on stop.
```go
reg := process.DefaultRegistry()

// Manual registration
reg.Register(process.DaemonEntry{
	Code: "my-app", Daemon: "serve", PID: os.Getpid(),
	Health: "127.0.0.1:9000", Project: "/path/to/project",
})

// List all live daemons (stale entries are pruned automatically)
entries, _ := reg.List()

// Auto-registration via Daemon
d := process.NewDaemon(process.DaemonOptions{
	Registry: reg,
	RegistryEntry: process.DaemonEntry{
		Code: "my-app", Daemon: "serve",
	},
})
```
The registry is consumed by `core start/stop/list` CLI commands for
project-level daemon management.
## Package layout
| Path | Description |
|------|-------------|
| `*.go` (root) | Process service, types, actions, runner, daemon, health, PID file, registry |
| `exec/` | Lightweight command wrapper with fluent API and structured logging |
Key files:
| File | Purpose |
|------|---------|
| `daemon.go` | `Daemon`, `DaemonOptions`, `Mode`, `DetectMode()` |
| `pidfile.go` | `PIDFile` (acquire, release, stale detection) |
| `health.go` | `HealthServer` with `/health` and `/ready` endpoints |
| `registry.go` | `Registry`, `DaemonEntry`, `DefaultRegistry()` |
## Module information
| Field | Value |
|-------|-------|
| Module path | `forge.lthn.ai/core/go-process` |
| Go version | 1.26.0 |
| Licence | EUPL-1.2 |
## Dependencies
| Module | Purpose |
|--------|---------|
| `forge.lthn.ai/core/go` | Core DI framework (`ServiceRuntime`, `Core.ACTION`, lifecycle interfaces) |
| `github.com/stretchr/testify` | Test assertions (test-only) |
The package has no other runtime dependencies beyond the Go standard library
and the Core framework.

---
title: go-rag
description: Retrieval-Augmented Generation library for Go — document chunking, Ollama embeddings, Qdrant vector storage, and formatted context retrieval for LLM prompt injection.
---
# go-rag
`forge.lthn.ai/core/go-rag` is a Retrieval-Augmented Generation library for Go. It handles the full RAG pipeline: splitting documents into chunks, generating embeddings via Ollama, storing and searching vectors in Qdrant (gRPC), applying keyword boosting, and formatting results for human display or LLM prompt injection.
The library is built around two core interfaces -- `Embedder` and `VectorStore` -- that decouple business logic from service implementations. You can swap backends, inject mocks for testing, or run the full pipeline against live services with the same API.
**Module**: `forge.lthn.ai/core/go-rag`
**Go version**: 1.26
**Licence**: EUPL-1.2
## Quick Start
```go
import "forge.lthn.ai/core/go-rag"
// Ingest a directory of Markdown files into a Qdrant collection
err := rag.IngestDirectory(ctx, "/path/to/docs", "my-collection", false)
// Query for relevant context (XML format, suitable for LLM prompt injection)
context, err := rag.QueryDocsContext(ctx, "how does rate limiting work?", "my-collection", 5)
// For long-lived processes, construct clients once and use the *With variants
qdrantClient, _ := rag.NewQdrantClient(rag.DefaultQdrantConfig())
ollamaClient, _ := rag.NewOllamaClient(rag.DefaultOllamaConfig())
results, err := rag.QueryWith(ctx, qdrantClient, ollamaClient, "question", "collection", 5)
```
The convenience wrappers (`IngestDirectory`, `QueryDocs`, etc.) create new connections on each call, which is fine for CLI usage. For server processes or loops, use the `*With` variants with pre-constructed clients to avoid per-call connection overhead.
## Package Layout
| File | Purpose |
|------|---------|
| `embedder.go` | `Embedder` interface -- `Embed`, `EmbedBatch`, `EmbedDimension` |
| `vectorstore.go` | `VectorStore` interface -- collection management, upsert, search |
| `chunk.go` | Markdown chunking with three-level splitting (sections, paragraphs, sentences) and configurable overlap |
| `ollama.go` | `OllamaClient` -- implements `Embedder` via the Ollama HTTP API |
| `qdrant.go` | `QdrantClient` -- implements `VectorStore` via the Qdrant gRPC API |
| `ingest.go` | Ingestion pipeline -- walk directory, chunk files, embed, batch upsert |
| `query.go` | Query pipeline -- embed query, vector search, threshold filter, format results |
| `keyword.go` | Keyword boosting post-filter for re-ranking search results |
| `collections.go` | Package-level collection management helpers |
| `helpers.go` | Convenience wrappers -- `*With` variants and default-client functions |
| `cmd/rag/` | CLI subcommands (`ingest`, `query`, `collections`) for the `core` binary |
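The overlap idea behind the chunker can be sketched at the paragraph level: when a chunk fills up, the next chunk starts with the previous chunk's last paragraph so no boundary context is lost. This is an illustration only -- go-rag's chunker also splits on sections and sentences:

```go
package main

import (
	"fmt"
	"strings"
)

// chunkParagraphs groups paragraphs into chunks of at most maxLen bytes,
// carrying the previous paragraph over as overlap between chunks.
// Sketch of the overlap idea only, not go-rag's chunker.
func chunkParagraphs(doc string, maxLen int) []string {
	paras := strings.Split(doc, "\n\n")
	var chunks []string
	var cur []string
	curLen := 0
	for _, p := range paras {
		if curLen+len(p) > maxLen && len(cur) > 0 {
			chunks = append(chunks, strings.Join(cur, "\n\n"))
			// Overlap: start the next chunk with the last paragraph.
			cur = []string{cur[len(cur)-1]}
			curLen = len(cur[0])
		}
		cur = append(cur, p)
		curLen += len(p)
	}
	if len(cur) > 0 {
		chunks = append(chunks, strings.Join(cur, "\n\n"))
	}
	return chunks
}

func main() {
	doc := "Alpha paragraph.\n\nBeta paragraph.\n\nGamma paragraph."
	for i, c := range chunkParagraphs(doc, 35) {
		fmt.Printf("chunk %d: %q\n", i, c)
	}
}
```

The overlap costs some duplicate storage but means a query matching text near a chunk boundary still retrieves the surrounding context.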
## CLI Commands
The package provides CLI subcommands mounted under `core ai rag`:
```bash
# Ingest a directory of Markdown files
core ai rag ingest /path/to/docs --collection my-docs --recreate
# Query the vector database
core ai rag query "how does the module system work?" --top 10 --format context
# List and manage collections
core ai rag collections --stats
core ai rag collections --delete old-collection
```
All commands accept `--qdrant-host`, `--qdrant-port`, `--ollama-host`, `--ollama-port`, and `--model` flags, with defaults overridable via environment variables (`QDRANT_HOST`, `QDRANT_PORT`, `OLLAMA_HOST`, `OLLAMA_PORT`, `EMBEDDING_MODEL`).
## Dependencies
| Dependency | Role |
|------------|------|
| `forge.lthn.ai/core/go-log` | Structured error wrapping (`log.E`) |
| `forge.lthn.ai/core/go-i18n` | Internationalised CLI strings |
| `forge.lthn.ai/core/cli` | CLI framework (cobra-based commands) |
| `github.com/ollama/ollama` | Ollama HTTP client for embedding generation |
| `github.com/qdrant/go-client` | Qdrant gRPC client for vector storage and search |
| `github.com/stretchr/testify` | Test assertions (test-only) |
Transitive dependencies include `google.golang.org/grpc`, `google.golang.org/protobuf`, and `github.com/google/uuid`.
## Service Defaults
| Service | Host | Port | Protocol |
|---------|------|------|----------|
| Qdrant | localhost | 6334 | gRPC |
| Ollama | localhost | 11434 | HTTP |
The default embedding model is `nomic-embed-text` (768 dimensions). Other supported models include `mxbai-embed-large` (1024 dimensions) and `all-minilm` (384 dimensions).
## Further Reading
- [Architecture](architecture.md) -- interfaces, chunking strategy, ingestion pipeline, query pipeline, keyword boosting, performance characteristics
- [Development](development.md) -- prerequisites, build commands, test patterns, coding standards, contribution guidelines

---
title: go-ratelimit
description: Provider-agnostic sliding window rate limiter for LLM API calls, with YAML and SQLite persistence backends.
---
# go-ratelimit
**Module**: `forge.lthn.ai/core/go-ratelimit`
**Licence**: EUPL-1.2
**Go version**: 1.26+
go-ratelimit enforces requests-per-minute (RPM), tokens-per-minute (TPM), and
requests-per-day (RPD) quotas on a per-model basis using an in-memory sliding
window. It ships with default quota profiles for Gemini, OpenAI, Anthropic, and
a local inference provider. State persists across process restarts via YAML
(single-process) or SQLite with WAL mode (multi-process). A YAML-to-SQLite
migration helper is included.
## Quick Start
```go
import "forge.lthn.ai/core/go-ratelimit"

// Create a limiter with Gemini defaults (YAML backend).
rl, err := ratelimit.New()
if err != nil {
	log.Fatal(err)
}

// Check capacity before sending.
if rl.CanSend("gemini-2.0-flash", 1500) {
	// Make the API call...
	rl.RecordUsage("gemini-2.0-flash", 1000, 500) // promptTokens, outputTokens
}

// Persist state to disk for recovery across restarts.
if err := rl.Persist(); err != nil {
	log.Printf("persist failed: %v", err)
}
```
### Multi-provider configuration
```go
rl, err := ratelimit.NewWithConfig(ratelimit.Config{
	Providers: []ratelimit.Provider{
		ratelimit.ProviderGemini,
		ratelimit.ProviderAnthropic,
	},
	Quotas: map[string]ratelimit.ModelQuota{
		// Override a specific model's limits.
		"gemini-3-pro-preview": {MaxRPM: 50, MaxTPM: 500000, MaxRPD: 200},
		// Add a custom model not in any profile.
		"llama-3.3-70b": {MaxRPM: 5, MaxTPM: 50000, MaxRPD: 0},
	},
})
```
### SQLite backend (multi-process safe)
```go
rl, err := ratelimit.NewWithSQLite("~/.core/ratelimits.db")
if err != nil {
	log.Fatal(err)
}
defer rl.Close()

// Load persisted state.
if err := rl.Load(); err != nil {
	log.Fatal(err)
}

// Use exactly as the YAML backend -- CanSend, RecordUsage, Persist, etc.
```
### Blocking until capacity is available
```go
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

if err := rl.WaitForCapacity(ctx, "claude-opus-4", 2000); err != nil {
	log.Printf("timed out waiting for capacity: %v", err)
	return
}
// Capacity is available; proceed with the API call.
```
## Package Layout
The module is a single package with no sub-packages.
| File | Purpose |
|------|---------|
| `ratelimit.go` | Core types (`RateLimiter`, `ModelQuota`, `Config`, `Provider`), sliding window logic, provider profiles, YAML persistence, `CountTokens` helper |
| `sqlite.go` | SQLite persistence backend (`sqliteStore`), schema creation, load/save operations |
| `ratelimit_test.go` | Tests for core logic, provider profiles, concurrency, and benchmarks |
| `sqlite_test.go` | Tests for SQLite backend, migration, and error recovery |
| `error_test.go` | Tests for SQLite and YAML error paths |
| `iter_test.go` | Tests for `Models()` and `Iter()` iterators, plus `CountTokens` edge cases |
## Dependencies
| Dependency | Purpose | Category |
|------------|---------|----------|
| `gopkg.in/yaml.v3` | YAML serialisation for the legacy persistence backend | Direct |
| `modernc.org/sqlite` | Pure Go SQLite driver (no CGO required) | Direct |
| `github.com/stretchr/testify` | Test assertions (`assert`, `require`) | Test only |
All indirect dependencies are pulled in by `modernc.org/sqlite`. No C toolchain
or system SQLite library is needed.
## Further Reading
- [Architecture](architecture.md) -- sliding window algorithm, provider quotas, YAML and SQLite backends, concurrency model
- [Development](development.md) -- build commands, test patterns, coding standards, commit conventions
- [History](history.md) -- completed phases with commit hashes, known limitations

docs/go/packages/go-scm.md
---
title: go-scm
description: SCM integration, AgentCI automation, and data collection for the Lethean ecosystem.
---
# go-scm
`go-scm` provides source control management integration for the Lethean ecosystem. It wraps the Forgejo and Gitea APIs behind ergonomic Go clients, runs an automated PR pipeline for AI agent workflows, collects data from external sources, and manages multi-repo workspaces via a declarative registry.
**Module path:** `forge.lthn.ai/core/go-scm`
**Go version:** 1.26
**Licence:** EUPL-1.2
## Quick Start
### Forgejo API Client
```go
import "forge.lthn.ai/core/go-scm/forge"

// Create a client from config file / env / flags
client, err := forge.NewFromConfig("", "")

// List open issues
issues, err := client.ListIssues("core", "go-scm", forge.ListIssuesOpts{
	State: "open",
})

// List repos in an organisation (paginated iterator)
for repo, err := range client.ListOrgReposIter("core") {
	fmt.Println(repo.Name)
}
```
### Multi-Repo Git Status
```go
import "forge.lthn.ai/core/go-scm/git"

statuses := git.Status(ctx, git.StatusOptions{
	Paths: []string{"/home/dev/core/go-scm", "/home/dev/core/go-ai"},
	Names: map[string]string{"/home/dev/core/go-scm": "go-scm"},
})
for _, s := range statuses {
	if s.IsDirty() {
		fmt.Printf("%s: %d modified, %d untracked\n", s.Name, s.Modified, s.Untracked)
	}
}
```
### AgentCI Poll-Dispatch Loop
```go
import (
	"forge.lthn.ai/core/go-scm/jobrunner"
	"forge.lthn.ai/core/go-scm/jobrunner/forgejo"
	"forge.lthn.ai/core/go-scm/jobrunner/handlers"
)

source := forgejo.New(forgejo.Config{Repos: []string{"core/go-scm"}}, forgeClient)
poller := jobrunner.NewPoller(jobrunner.PollerConfig{
	Sources: []jobrunner.JobSource{source},
	Handlers: []jobrunner.JobHandler{
		handlers.NewDispatchHandler(forgeClient, forgeURL, token, spinner),
		handlers.NewTickParentHandler(forgeClient),
		handlers.NewEnableAutoMergeHandler(forgeClient),
	},
	PollInterval: 60 * time.Second,
})
poller.Run(ctx)
```
### Data Collection
```go
import "forge.lthn.ai/core/go-scm/collect"

cfg := collect.NewConfig("/tmp/collected")
excavator := &collect.Excavator{
	Collectors: []collect.Collector{
		&collect.GitHubCollector{Org: "lethean-io"},
		&collect.MarketCollector{CoinID: "lethean", Historical: true},
		&collect.PapersCollector{Source: "all", Query: "cryptography VPN"},
	},
	Resume: true,
}
result, err := excavator.Run(ctx, cfg)
```
## Package Layout
| Package | Import Path | Description |
|---------|-------------|-------------|
| `forge` | `go-scm/forge` | Forgejo API client -- repos, issues, PRs, labels, webhooks, organisations, PR metadata |
| `gitea` | `go-scm/gitea` | Gitea API client -- repos, issues, PRs, mirroring, PR metadata |
| `git` | `go-scm/git` | Multi-repo git operations -- parallel status checks, push, pull; Core DI service |
| `jobrunner` | `go-scm/jobrunner` | AgentCI pipeline engine -- signal types, poller loop, JSONL audit journal |
| `jobrunner/forgejo` | `go-scm/jobrunner/forgejo` | Forgejo job source -- polls epic issues for unchecked children, builds signals |
| `jobrunner/handlers` | `go-scm/jobrunner/handlers` | Pipeline handlers -- dispatch, completion, auto-merge, publish-draft, dismiss-reviews, fix-command, tick-parent |
| `agentci` | `go-scm/agentci` | Clotho Protocol orchestrator -- agent config, SSH security helpers, dual-run verification |
| `collect` | `go-scm/collect` | Data collection framework -- collector interface, rate limiting, state tracking, event dispatch |
| `manifest` | `go-scm/manifest` | Application manifest -- YAML parsing, ed25519 signing and verification |
| `marketplace` | `go-scm/marketplace` | Module marketplace -- catalogue index, search, git-based installer with signature verification |
| `plugin` | `go-scm/plugin` | CLI plugin system -- plugin interface, JSON registry, loader, GitHub-based installer |
| `repos` | `go-scm/repos` | Workspace management -- `repos.yaml` registry, topological sorting, work config, git state, KB config |
| `cmd/forge` | `go-scm/cmd/forge` | CLI commands for the `core forge` subcommand |
| `cmd/gitea` | `go-scm/cmd/gitea` | CLI commands for the `core gitea` subcommand |
| `cmd/collect` | `go-scm/cmd/collect` | CLI commands for data collection |
## Dependencies
### Direct
| Module | Purpose |
|--------|---------|
| `codeberg.org/mvdkleijn/forgejo-sdk/forgejo/v2` | Forgejo API SDK |
| `code.gitea.io/sdk/gitea` | Gitea API SDK |
| `forge.lthn.ai/core/cli` | CLI framework (Cobra, TUI) |
| `forge.lthn.ai/core/go-config` | Layered config (`~/.core/config.yaml`) |
| `forge.lthn.ai/core/go-io` | Filesystem abstraction (Medium, Sandbox, Store) |
| `forge.lthn.ai/core/go-log` | Structured logging and contextual error helper |
| `forge.lthn.ai/core/go-i18n` | Internationalisation |
| `github.com/stretchr/testify` | Test assertions |
| `golang.org/x/net` | HTML parsing for collectors |
| `gopkg.in/yaml.v3` | YAML parsing for manifests and registries |
### Indirect
The module transitively pulls in `forge.lthn.ai/core/go` (DI framework) via `go-config`, plus `spf13/viper`, `spf13/cobra`, Charmbracelet TUI libraries, and Go standard library extensions.
## Configuration
Authentication for both Forgejo and Gitea is resolved through a three-tier priority chain:
1. **Config file** -- `~/.core/config.yaml` keys `forge.url`, `forge.token` (or `gitea.*`)
2. **Environment variables** -- `FORGE_URL`, `FORGE_TOKEN` (or `GITEA_URL`, `GITEA_TOKEN`)
3. **CLI flags** -- `--url`, `--token` (highest priority)
Set credentials once:
```bash
core forge config --url https://forge.lthn.ai --token <your-token>
core gitea config --url https://gitea.snider.dev --token <your-token>
```
## Further Reading
- [Architecture](architecture.md) -- internal design, key types, data flow
- [Development Guide](development.md) -- building, testing, contributing

---
title: go-session
description: Claude Code JSONL transcript parser, analytics engine, and HTML timeline renderer for Go.
---
# go-session
`go-session` parses Claude Code JSONL session transcripts into structured event arrays, computes per-tool analytics, renders self-contained HTML timelines with client-side search, and generates VHS tape scripts for MP4 video output. It has no external runtime dependencies -- stdlib only.
**Module path:** `forge.lthn.ai/core/go-session`
**Go version:** 1.26
**Licence:** EUPL-1.2
## Quick Start
```go
import "forge.lthn.ai/core/go-session"

// Parse a single session file
sess, stats, err := session.ParseTranscript("/path/to/session.jsonl")

// Or parse from any io.Reader (streaming, in-memory, HTTP body, etc.)
sess, stats, err = session.ParseTranscriptReader(reader, "my-session-id")

// Compute analytics
analytics := session.Analyse(sess)
fmt.Println(session.FormatAnalytics(analytics))

// Render an interactive HTML timeline
err = session.RenderHTML(sess, "timeline.html")

// Search across all sessions in a directory
results, err := session.Search("~/.claude/projects/my-project", "git commit")

// List sessions (newest first)
sessions, err := session.ListSessions("~/.claude/projects/my-project")

// Prune old sessions
deleted, err := session.PruneSessions("~/.claude/projects/my-project", 30*24*time.Hour)
```
## Package Layout
The entire package lives in a single Go package (`session`) with five source files:
| File | Purpose |
|------|---------|
| `parser.go` | Core types (`Event`, `Session`, `ParseStats`), JSONL parsing (`ParseTranscript`, `ParseTranscriptReader`), session listing (`ListSessions`, `ListSessionsSeq`), pruning (`PruneSessions`), fetching (`FetchSession`), tool input extraction |
| `analytics.go` | `SessionAnalytics` type, `Analyse` (pure computation), `FormatAnalytics` (CLI-friendly text output) |
| `html.go` | `RenderHTML` -- self-contained dark-theme HTML timeline with collapsible panels, search, and XSS protection |
| `video.go` | `RenderMP4` -- VHS tape script generation and invocation for MP4 video output |
| `search.go` | `Search` and `SearchSeq` -- case-insensitive cross-session search over tool call inputs and outputs |
Test files mirror the source files (`parser_test.go`, `analytics_test.go`, `html_test.go`, `video_test.go`, `search_test.go`) plus `bench_test.go` for benchmarks.
## Dependencies
| Dependency | Scope | Purpose |
|------------|-------|---------|
| Go standard library | Runtime | All parsing, HTML rendering, file I/O, JSON decoding |
| `github.com/stretchr/testify` | Test only | Assertions and requirements in test files |
| `vhs` (charmbracelet) | Optional external binary | Required only by `RenderMP4` for MP4 video generation |
The package has **zero runtime dependencies** beyond the Go standard library. `testify` is fetched automatically by `go test` and is never imported outside test files.
## Supported Tool Types
The parser recognises the following Claude Code tool types and formats their input for human readability:
| Tool | Input format | Example |
|------|-------------|---------|
| Bash | `command # description` | `ls -la # list files` |
| Read | `file_path` | `/tmp/main.go` |
| Edit | `file_path (edit)` | `/tmp/main.go (edit)` |
| Write | `file_path (N bytes)` | `/tmp/out.txt (42 bytes)` |
| Grep | `/pattern/ in path` | `/TODO/ in /src` |
| Glob | `pattern` | `**/*.go` |
| Task | `[subagent_type] description` | `[research] Code review` |
| Any other (MCP tools, etc.) | Sorted top-level JSON keys | `body, repo, title` |
Unknown tools (including MCP tools like `mcp__forge__create_issue`) are handled gracefully by extracting and sorting the JSON field names from the raw input.
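That fallback needs nothing beyond the standard library. A minimal sketch of the idea (`sortedKeys` is an illustrative name, not the package's internal function):

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
	"strings"
)

// sortedKeys extracts the top-level field names from a raw tool input
// and returns them sorted and comma-joined, mirroring the parser's
// fallback for unknown tools.
func sortedKeys(raw string) string {
	var m map[string]json.RawMessage
	if err := json.Unmarshal([]byte(raw), &m); err != nil {
		return ""
	}
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return strings.Join(keys, ", ")
}

func main() {
	// An MCP tool the parser has never seen:
	input := `{"repo":"core/go-session","title":"Fix parser","body":"Details"}`
	fmt.Println(sortedKeys(input)) // body, repo, title
}
```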
## Iterator Support
Several functions offer both slice-returning and iterator-based variants, using Go's `iter.Seq` type:
| Slice variant | Iterator variant | Description |
|---------------|-----------------|-------------|
| `ListSessions()` | `ListSessionsSeq()` | Enumerate sessions in a directory |
| `Search()` | `SearchSeq()` | Search across sessions |
| -- | `Session.EventsSeq()` | Iterate over events in a session |
The iterator variants avoid allocating the full result slice upfront and support early termination via `break` or `return` in `range` loops.
## Further Reading
- [Architecture](architecture.md) -- JSONL format, parsing pipeline, event model, analytics, HTML rendering, XSS protection, data flow
- [Development Guide](development.md) -- Prerequisites, build commands, test patterns, coding standards, how to add new tool types
- [Project History](history.md) -- Completed phases, known limitations, future considerations

---
title: go-store
description: Group-namespaced SQLite key-value store with TTL expiry, namespace isolation, quota enforcement, and reactive event hooks.
---
# go-store
`go-store` is a group-namespaced key-value store backed by SQLite. It provides persistent or in-memory storage with optional TTL expiry, namespace isolation for multi-tenant use, quota enforcement, and a reactive event system for observing mutations.
The package has a single runtime dependency -- a pure-Go SQLite driver (`modernc.org/sqlite`). No CGO is required. It compiles and runs on all platforms that Go supports.
**Module path:** `forge.lthn.ai/core/go-store`
**Go version:** 1.26+
**Licence:** EUPL-1.2
## Quick Start
```go
package main
import (
"fmt"
"time"
"forge.lthn.ai/core/go-store"
)
func main() {
// Open a store. Use ":memory:" for ephemeral data or a file path for persistence.
st, err := store.New("/tmp/app.db")
if err != nil {
panic(err)
}
defer st.Close()
// Basic CRUD
st.Set("config", "theme", "dark")
val, _ := st.Get("config", "theme")
fmt.Println(val) // "dark"
// TTL expiry -- key disappears after the duration elapses
st.SetWithTTL("session", "token", "abc123", 24*time.Hour)
// Fetch all keys in a group
all, _ := st.GetAll("config")
fmt.Println(all) // map[theme:dark]
// Template rendering from stored values
st.Set("mail", "host", "smtp.example.com")
st.Set("mail", "port", "587")
out, _ := st.Render(`{{ .host }}:{{ .port }}`, "mail")
fmt.Println(out) // "smtp.example.com:587"
// Namespace isolation for multi-tenant use
sc, _ := store.NewScoped(st, "tenant-42")
sc.Set("prefs", "locale", "en-GB")
// Stored internally as group "tenant-42:prefs", key "locale"
// Quota enforcement
quota := store.QuotaConfig{MaxKeys: 100, MaxGroups: 5}
sq, _ := store.NewScopedWithQuota(st, "tenant-99", quota)
err = sq.Set("g", "k", "v") // returns store.ErrQuotaExceeded if limits are hit
// Watch for mutations via a buffered channel
w := st.Watch("config", "*")
defer st.Unwatch(w)
go func() {
for e := range w.Ch {
fmt.Printf("event: %s %s/%s\n", e.Type, e.Group, e.Key)
}
}()
// Or register a synchronous callback
unreg := st.OnChange(func(e store.Event) {
fmt.Printf("changed: %s\n", e.Key)
})
defer unreg()
}
```
## Package Layout
The entire package lives in a single Go package (`package store`) with three source files:
| File | Purpose |
|------|---------|
| `store.go` | Core `Store` type, CRUD operations (`Get`, `Set`, `SetWithTTL`, `Delete`, `DeleteGroup`), bulk queries (`GetAll`, `All`, `Count`, `CountAll`, `Groups`, `GroupsSeq`), string splitting helpers (`GetSplit`, `GetFields`), template rendering (`Render`), TTL expiry, background purge goroutine |
| `events.go` | `EventType` constants, `Event` struct, `Watcher` type, `Watch`/`Unwatch` subscription management, `OnChange` callback registration, internal `notify` dispatch |
| `scope.go` | `ScopedStore` wrapper for namespace isolation, `QuotaConfig` struct, `NewScoped`/`NewScopedWithQuota` constructors, quota enforcement logic |
Tests are organised in corresponding files:
| File | Covers |
|------|--------|
| `store_test.go` | CRUD, TTL, concurrency, edge cases, persistence, WAL verification |
| `events_test.go` | Watch/Unwatch, OnChange, event dispatch, wildcard matching, buffer overflow |
| `scope_test.go` | Namespace isolation, quota enforcement, cross-namespace behaviour |
| `coverage_test.go` | Defensive error paths (scan errors, schema conflicts, database corruption) |
| `bench_test.go` | Performance benchmarks for all major operations |
## Dependencies
**Runtime:**
| Module | Purpose |
|--------|---------|
| `modernc.org/sqlite` | Pure-Go SQLite driver (no CGO). Registered as a `database/sql` driver. |
**Test only:**
| Module | Purpose |
|--------|---------|
| `github.com/stretchr/testify` | Assertion helpers (`assert`, `require`) for tests. |
There are no other direct dependencies. The package uses only the Go standard library (`database/sql`, `context`, `sync`, `time`, `text/template`, `iter`, `errors`, `fmt`, `strings`, `regexp`, `slices`, `sync/atomic`) beyond the SQLite driver.
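`Render` builds on `text/template`: the group's key/value map becomes the template data. A stdlib-only sketch of that idea (`renderGroup` is a hypothetical stand-in, not the real method):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderGroup executes a template against a group's key/value map,
// mirroring what store.Render does with the values it loads from SQLite.
func renderGroup(tmpl string, vals map[string]string) (string, error) {
	t, err := template.New("render").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, vals); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderGroup(`{{ .host }}:{{ .port }}`, map[string]string{
		"host": "smtp.example.com",
		"port": "587",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // smtp.example.com:587
}
```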
## Key Types
- **`Store`** -- the central type. Holds a `*sql.DB`, manages the background purge goroutine, and maintains the watcher/callback registry.
- **`ScopedStore`** -- wraps a `*Store` with an auto-prefixed namespace. Provides the same API surface with group names transparently prefixed.
- **`QuotaConfig`** -- configures per-namespace limits on total keys and distinct groups.
- **`Event`** -- describes a single store mutation (type, group, key, value, timestamp).
- **`Watcher`** -- a channel-based subscription to store events, created by `Watch`.
- **`KV`** -- a simple key-value pair struct, used by the `All` iterator.
## Sentinel Errors
- **`ErrNotFound`** -- returned by `Get` when the requested key does not exist or has expired.
- **`ErrQuotaExceeded`** -- returned by `ScopedStore.Set`/`SetWithTTL` when a namespace quota limit is reached.
## Further Reading
- [Architecture](architecture.md) -- storage layer internals, TTL model, event system, concurrency design
- [Development Guide](development.md) -- building, testing, benchmarks, contribution workflow

---
title: go-webview
description: Chrome DevTools Protocol client for browser automation, testing, and scraping in Go.
---
# go-webview
`go-webview` is a Go package that provides browser automation via the Chrome DevTools Protocol (CDP). It connects to an externally managed Chrome or Chromium instance running with `--remote-debugging-port=9222` and exposes a high-level API for navigation, DOM queries, input simulation, screenshot capture, console monitoring, and JavaScript evaluation.
The package does not launch Chrome itself. The caller is responsible for starting the browser process before constructing a `Webview`.
**Module path:** `forge.lthn.ai/core/go-webview`
**Licence:** EUPL-1.2
**Go version:** 1.26+
**Dependencies:** `github.com/gorilla/websocket v1.5.3`
## Quick Start
Start Chrome with the remote debugging port enabled:
```bash
# macOS
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
--remote-debugging-port=9222
# Linux
google-chrome --remote-debugging-port=9222
# Headless (suitable for CI)
google-chrome --headless=new --remote-debugging-port=9222 --no-sandbox --disable-gpu
```
Then use the package in Go:
```go
import (
	"log"

	"forge.lthn.ai/core/go-webview"
)

// Connect to Chrome
wv, err := webview.New(webview.WithDebugURL("http://localhost:9222"))
if err != nil {
	log.Fatal(err)
}
defer wv.Close()

// Navigate and interact
if err := wv.Navigate("https://example.com"); err != nil {
	log.Fatal(err)
}
if err := wv.Click("#submit-button"); err != nil {
	log.Fatal(err)
}
```
### Fluent Action Sequences
Chain multiple browser actions together with `ActionSequence`:
```go
err := webview.NewActionSequence().
Navigate("https://example.com").
WaitForSelector("#login-form").
Type("#email", "user@example.com").
Type("#password", "secret").
Click("#submit").
Execute(ctx, wv)
```
### Console Monitoring
Capture and filter browser console output:
```go
cw := webview.NewConsoleWatcher(wv)
cw.AddFilter(webview.ConsoleFilter{Type: "error"})
// ... perform browser actions ...
if cw.HasErrors() {
for _, msg := range cw.Errors() {
log.Printf("JS error: %s at %s:%d", msg.Text, msg.URL, msg.Line)
}
}
```
### Screenshots
Capture the current page as PNG:
```go
png, err := wv.Screenshot()
if err != nil {
log.Fatal(err)
}
os.WriteFile("screenshot.png", png, 0644)
```
### Angular Applications
First-class support for Angular single-page applications:
```go
ah := webview.NewAngularHelper(wv)
// Wait for Angular to stabilise
if err := ah.WaitForAngular(); err != nil {
log.Fatal(err)
}
// Navigate using Angular Router
if err := ah.NavigateByRouter("/dashboard"); err != nil {
log.Fatal(err)
}
// Inspect component state (debug mode only)
value, err := ah.GetComponentProperty("app-widget", "title")
```
## Package Layout
| File | Responsibility |
|------|----------------|
| `webview.go` | `Webview` struct, public API (navigate, click, type, screenshot, JS evaluation, DOM queries) |
| `cdp.go` | `CDPClient` -- WebSocket transport, CDP message framing, event dispatch, tab management |
| `actions.go` | `Action` interface, 19 concrete action types, `ActionSequence` fluent builder |
| `console.go` | `ConsoleWatcher`, `ExceptionWatcher`, console log formatting |
| `angular.go` | `AngularHelper` -- Zone.js stability, router navigation, component introspection, ngModel |
| `webview_test.go` | Unit tests for structs, options, and action building |
## Configuration Options
| Option | Default | Description |
|--------|---------|-------------|
| `WithDebugURL(url)` | *(required)* | Chrome DevTools HTTP debug endpoint, e.g. `http://localhost:9222` |
| `WithTimeout(d)` | 30 seconds | Default timeout for all browser operations |
| `WithConsoleLimit(n)` | 1000 | Maximum number of console messages retained in memory |
## Further Documentation
- [Architecture](architecture.md) -- internals, data flow, CDP protocol, type reference
- [Development Guide](development.md) -- build, test, contribute, coding standards
- [Project History](history.md) -- extraction origin, completed phases, known limitations

---
title: go-ws
description: WebSocket hub for real-time streaming in Go, with channel pub/sub, token authentication, reconnecting clients, and a Redis bridge for multi-instance coordination.
---
# go-ws
`go-ws` is a WebSocket hub library for Go. It implements centralised connection management using the hub pattern, named channel pub/sub, optional token-based authentication at upgrade time, a client-side reconnecting wrapper with exponential backoff, and a Redis pub/sub bridge that coordinates broadcasts across multiple hub instances.
| | |
|---|---|
| **Module** | `forge.lthn.ai/core/go-ws` |
| **Go version** | 1.26+ |
| **Licence** | EUPL-1.2 |
| **Repository** | `ssh://git@forge.lthn.ai:2223/core/go-ws.git` |
## Quick Start
```go
package main
import (
"context"
"net/http"
"forge.lthn.ai/core/go-ws"
)
func main() {
hub := ws.NewHub()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
go hub.Run(ctx)
http.HandleFunc("/ws", hub.Handler())
http.ListenAndServe(":8080", nil)
}
```
Once running, clients connect via WebSocket and can subscribe to named channels. The server pushes messages to all connected clients or to subscribers of a specific channel.
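The hub pattern itself is small: a registry of subscriber channels keyed by channel name, with fan-out on publish. A stdlib-only sketch (the real `Hub` manages WebSocket connections and read/write pumps, not bare channels):

```go
package main

import (
	"fmt"
	"sync"
)

// miniHub fans messages out to every subscriber of a named channel.
type miniHub struct {
	mu   sync.Mutex
	subs map[string][]chan string
}

func newMiniHub() *miniHub {
	return &miniHub{subs: make(map[string][]chan string)}
}

func (h *miniHub) subscribe(channel string) chan string {
	ch := make(chan string, 8) // buffered, like a per-client send queue
	h.mu.Lock()
	h.subs[channel] = append(h.subs[channel], ch)
	h.mu.Unlock()
	return ch
}

func (h *miniHub) publish(channel, msg string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	for _, ch := range h.subs[channel] {
		select {
		case ch <- msg: // deliver
		default: // slow subscriber: drop rather than block the hub
		}
	}
}

func main() {
	hub := newMiniHub()
	sub := hub.subscribe("process:build-42")
	hub.publish("process:build-42", "Compiling main.go...")
	fmt.Println(<-sub)
}
```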
### Sending Messages
```go
// Broadcast to every connected client.
hub.SendEvent("deploy:started", map[string]any{"env": "production"})
// Send process output to subscribers of "process:build-42".
hub.SendProcessOutput("build-42", "Compiling main.go...")
// Send a process status change.
hub.SendProcessStatus("build-42", "exited", 0)
```
### Adding Authentication
```go
auth := ws.NewAPIKeyAuth(map[string]string{
"secret-key-1": "user-alice",
"secret-key-2": "user-bob",
})
hub := ws.NewHubWithConfig(ws.HubConfig{
Authenticator: auth,
OnAuthFailure: func(r *http.Request, result ws.AuthResult) {
log.Printf("rejected connection from %s: %v", r.RemoteAddr, result.Error)
},
})
go hub.Run(ctx)
```
Clients connect with `Authorization: Bearer <key>`. Without a valid key, the upgrade is rejected with HTTP 401. When no `Authenticator` is set, all connections are accepted.
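The upgrade-time check boils down to extracting the bearer key and looking it up before accepting the WebSocket upgrade. A hedged sketch of that first step (the actual `Authenticator` interface carries more context than a bare header value):

```go
package main

import (
	"fmt"
	"strings"
)

// bearerKey pulls the API key out of an Authorization header value,
// returning false when the header is absent or not a Bearer scheme.
func bearerKey(header string) (string, bool) {
	const prefix = "Bearer "
	if !strings.HasPrefix(header, prefix) {
		return "", false
	}
	return strings.TrimPrefix(header, prefix), true
}

func main() {
	key, ok := bearerKey("Bearer secret-key-1")
	fmt.Println(key, ok) // secret-key-1 true
}
```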
### Redis Bridge for Multi-Instance Deployments
```go
bridge, err := ws.NewRedisBridge(hub, ws.RedisConfig{
Addr: "localhost:6379",
Prefix: "ws",
})
if err != nil {
log.Fatal(err)
}
if err := bridge.Start(ctx); err != nil {
log.Fatal(err)
}
defer bridge.Stop()
// Messages published via the bridge reach clients on all instances.
bridge.PublishBroadcast(ws.Message{Type: ws.TypeEvent, Data: "hello from instance A"})
bridge.PublishToChannel("process:build-42", ws.Message{
Type: ws.TypeProcessOutput,
Data: "output line",
})
```
## Package Layout
The entire library lives in a single Go package (`ws`). There are no sub-packages.
| File | Purpose |
|---|---|
| `ws.go` | Hub, Client, Message, ReconnectingClient, connection pumps, channel subscription |
| `auth.go` | Authenticator interface, AuthResult, APIKeyAuthenticator, BearerTokenAuth, QueryTokenAuth |
| `errors.go` | Sentinel authentication errors |
| `redis.go` | RedisBridge, RedisConfig, envelope pattern for loop prevention |
| `ws_test.go` | Hub lifecycle, broadcast, channel, subscription, and integration tests |
| `auth_test.go` | Authentication unit and integration tests |
| `redis_test.go` | Redis bridge integration tests (skipped when Redis is unavailable) |
| `ws_bench_test.go` | 9 benchmarks covering broadcast, channel send, subscribe/unsubscribe, fanout, and end-to-end WebSocket round-trips |
## Dependencies
| Module | Version | Role |
|---|---|---|
| `github.com/gorilla/websocket` | v1.5.3 | WebSocket server and client implementation |
| `github.com/redis/go-redis/v9` | v9.18.0 | Redis pub/sub bridge (runtime opt-in) |
| `github.com/stretchr/testify` | v1.11.1 | Test assertions (test-only) |
The Redis dependency is a compile-time import but a runtime opt-in. Applications that never create a `RedisBridge` incur no Redis connections. There are no CGO requirements; the module builds cleanly on Linux, macOS, and Windows.
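The "envelope pattern for loop prevention" mentioned under `redis.go` works by tagging every published message with the origin instance's ID; each bridge drops messages it published itself when they come back via Redis. A sketch under that assumption (names are illustrative, not the package's types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// envelope wraps a hub message for Redis transit. Origin identifies the
// publishing instance so it can skip its own messages on the way back in.
type envelope struct {
	Origin  string `json:"origin"`
	Payload string `json:"payload"`
}

// shouldDeliver reports whether a bridge with ID self should forward a
// received envelope to its local hub.
func shouldDeliver(e envelope, self string) bool {
	return e.Origin != self
}

func main() {
	b, _ := json.Marshal(envelope{Origin: "instance-a", Payload: "hello"})
	var e envelope
	_ = json.Unmarshal(b, &e)
	fmt.Println(shouldDeliver(e, "instance-a")) // false: skip our own broadcast
	fmt.Println(shouldDeliver(e, "instance-b")) // true: deliver on other instances
}
```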
## Further Reading
- [Architecture](architecture.md) -- Hub pattern, channels, authentication, Redis bridge, concurrency model
- [Development Guide](development.md) -- Building, testing, coding standards, contribution workflow

# Go Packages
The Core Go ecosystem is a collection of focused packages under `forge.lthn.ai/core/`.
| Package | Description |
|---------|-------------|
| [go-devops](go-devops.md) | Build automation, Ansible, release pipeline, infrastructure APIs |
| [go-ai](go-ai.md) | MCP hub — 49 tools for file ops, RAG, inference, browser automation |
| [go-ml](go-ml.md) | ML inference backends, scoring engine, agent orchestrator |
| [go-mlx](go-mlx.md) | Apple Metal GPU inference via mlx-c CGO bindings |
| [go-inference](go-inference.md) | Shared interface contract for text generation backends |
| [go-i18n](go-i18n.md) | Grammar engine — forward composition, reversal, GrammarImprint |
| [go-scm](go-scm.md) | SCM integration, AgentCI dispatch, Clotho Protocol |
| [go-html](go-html.md) | HLCRF DOM compositor with grammar pipeline and WASM |
| [go-crypt](go-crypt.md) | Cryptographic primitives, OpenPGP auth, trust policy engine |
| [go-blockchain](go-blockchain.md) | Pure Go CryptoNote blockchain implementation |
## Dependency Graph
```mermaid
graph TD
go-inference --> go-mlx
go-inference --> go-ml
go-ml --> go-ai
go-i18n --> go-html
go-devops --> CLI
go-ai --> CLI
go-scm --> CLI
go-crypt --> go-scm
```
## Installation
All packages use Go modules:
```bash
go get forge.lthn.ai/core/go-ai@latest
```
For private forge access:
```bash
export GOPRIVATE=forge.lthn.ai/*
```

---
title: GUI
description: IPC-based desktop GUI framework abstracting Wails v3 through typed services.
---
# GUI
**Module**: `forge.lthn.ai/core/gui`
**Language**: Go 1.25
**Licence**: EUPL-1.2
CoreGUI is an abstraction layer over Wails v3 — a "display server" that provides a stable API contract for desktop applications. Apps never import Wails directly; CoreGUI defines Platform interfaces that insulate all Wails types behind adapter boundaries. If Wails breaks, it is fixed in one place.
## Architecture
CoreGUI follows a three-layer stack:
```
IPC Bus (core/go ACTION / QUERY / PERFORM)
|
Service (core.ServiceRuntime + business logic)
|
Platform Interface (Wails v3 adapter, injected at startup)
```
Each feature area is a `core.Service` registered with the DI container. Services communicate via typed IPC messages — queries return data, tasks mutate state, and actions are fire-and-forget broadcasts. No service imports another directly; the display orchestrator bridges IPC events to a WebSocket pub/sub channel for TypeScript frontends.
```
pkg/display (orchestrator)
+-- imports pkg/window, pkg/systray, pkg/menu (message types only)
+-- imports pkg/clipboard, pkg/dialog, pkg/notification
+-- imports pkg/screen, pkg/environment, pkg/dock, pkg/lifecycle
+-- imports pkg/keybinding, pkg/contextmenu, pkg/browser
+-- imports pkg/webview, pkg/mcp
+-- imports core/go (DI, IPC) + go-config
```
No circular dependencies. Sub-packages do not import each other or the orchestrator.
## Packages
| Package | Description |
|---------|-------------|
| `pkg/display` | Orchestrator — owns Wails app, config, WSEventManager bridge |
| `pkg/window` | Window lifecycle, tiling, snapping, layouts, state persistence |
| `pkg/screen` | Screen enumeration, primary detection, point-to-screen queries |
| `pkg/clipboard` | Clipboard read/write (text) |
| `pkg/dialog` | File open/save, directory select, message dialogs |
| `pkg/notification` | Native system notifications with dialog fallback |
| `pkg/systray` | System tray icon, tooltip, dynamic menus, panel window |
| `pkg/menu` | Application menu builder (structure only, handlers injected) |
| `pkg/keybinding` | Global keyboard shortcuts with accelerator syntax |
| `pkg/contextmenu` | Right-click context menu registration and management |
| `pkg/dock` | macOS dock icon visibility, badge (taskbar badge on Windows) |
| `pkg/lifecycle` | Application lifecycle events (startup, shutdown, theme change) |
| `pkg/browser` | Open URLs and files in the system default browser |
| `pkg/environment` | OS/platform info, dark mode detection, accent colour, theme events |
| `pkg/webview` | CDP-based WebView interaction — JS eval, DOM queries, screenshots |
| `pkg/mcp` | MCP display subsystem exposing ~74 tools across all packages |
## Service Registration
Each package exposes a `Register(platform)` factory. The orchestrator creates Wails adapters and passes them to each service:
```go
wailsApp := application.New(application.Options{...})
core.New(
core.WithService(display.Register(wailsApp)),
core.WithService(window.Register(window.NewWailsPlatform(wailsApp))),
core.WithService(systray.Register(systray.NewWailsPlatform(wailsApp))),
core.WithService(menu.Register(menu.NewWailsPlatform(wailsApp))),
core.WithService(clipboard.Register(clipPlatform)),
core.WithService(screen.Register(screenPlatform)),
// ... remaining services
core.WithServiceLock(),
)
```
Display registers first (owns config via `go-config`). Sub-services query their config section during `OnStartup`. Shutdown runs in reverse order.
## Platform Insulation
Each sub-package defines a `Platform` interface — the adapter contract. Wails types never leak past this boundary:
```go
// pkg/window/platform.go
type Platform interface {
CreateWindow(opts PlatformWindowOptions) PlatformWindow
GetWindows() []PlatformWindow
}
```
Wails adapter implementations live alongside each package (e.g. `pkg/window/wails.go`). Mock implementations enable testing without a Wails runtime.
## IPC Message Pattern
Services define typed message structs in a `messages.go` file:
- **Query** — read-only, returns data (e.g. `QueryWindowList`, `QueryTheme`, `QueryAll`)
- **Task** — side-effects, returns result (e.g. `TaskOpenWindow`, `TaskSetText`, `TaskSend`)
- **Action** — fire-and-forget broadcast (e.g. `ActionWindowOpened`, `ActionThemeChanged`)
The display orchestrator's `HandleIPCEvents` converts IPC actions to WebSocket events for TypeScript apps. Inbound WebSocket messages are translated to IPC tasks/queries, with request ID correlation for responses.
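Request ID correlation, matching an inbound WebSocket response to the query that caused it, can be sketched with a pending map of reply channels (illustrative only; the orchestrator's actual types differ):

```go
package main

import (
	"fmt"
	"sync"
)

// correlator hands out request IDs and routes responses back to the
// goroutine waiting on them.
type correlator struct {
	mu      sync.Mutex
	nextID  int
	pending map[int]chan string
}

func newCorrelator() *correlator {
	return &correlator{pending: make(map[int]chan string)}
}

// begin registers a new request and returns its ID plus a reply channel.
func (c *correlator) begin() (int, chan string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.nextID++
	ch := make(chan string, 1)
	c.pending[c.nextID] = ch
	return c.nextID, ch
}

// resolve delivers a response to the waiter for id, if one exists.
func (c *correlator) resolve(id int, payload string) bool {
	c.mu.Lock()
	ch, ok := c.pending[id]
	delete(c.pending, id)
	c.mu.Unlock()
	if ok {
		ch <- payload
	}
	return ok
}

func main() {
	c := newCorrelator()
	id, reply := c.begin()
	c.resolve(id, `{"windows":[]}`)
	fmt.Println(<-reply)
}
```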
## Config
Configuration lives at `~/.core/gui/config.yaml`, loaded via `go-config`. Top-level keys map to service names:
```yaml
window:
state_file: window_state.json
default_width: 1024
default_height: 768
systray:
icon: apptray.png
tooltip: "Core GUI"
menu:
show_dev_tools: true
```
The display orchestrator is the single writer to disk. Sub-services read via `QueryConfig` and save via `TaskSaveConfig`.
## MCP Integration
`pkg/mcp` is an MCP subsystem that translates tool calls into IPC messages across all packages. It structurally satisfies `core/mcp`'s `Subsystem` interface (no import required):
```go
guiSub := guimcp.New(coreInstance)
mcpSvc, _ := coremcp.New(coremcp.WithSubsystem(guiSub))
```
Tool categories include window management, layout control, screen queries, clipboard, dialogs, notifications, tray, environment, keybinding, context menus, dock, lifecycle, browser, and full WebView interaction (eval, click, type, navigate, screenshot, DOM queries).
## Repository
- **Source**: [forge.lthn.ai/core/gui](https://forge.lthn.ai/core/gui)

---
title: core.help
description: Documentation for the Core CLI, Go packages, PHP modules, and MCP tools
---
# Core Platform
Build, deploy, and manage Go and PHP applications with a unified toolkit.
---
| | |
|---|---|
| **[Core GO →](go/index.md)** | **[Core PHP →](php/index.md)** |
| DI framework, service lifecycle, and message-passing bus for Go. | Modular monolith for Laravel with event-driven loading and multi-tenancy. |
| **[CLI →](cli/index.md)** | **[Go Packages →](go/packages/index.md)** |
| Unified `core` command for Go/PHP dev, multi-repo management, builds. | AI, ML, DevOps, crypto, i18n, blockchain, and more. |
| **[Deploy →](deploy/index.md)** | **[Publish →](publish/index.md)** |
| Docker, PHP, and LinuxKit deployment targets with templates. | Release to GitHub, Docker Hub, Homebrew, Scoop, AUR, npm. |
## Quick Start
=== "Go"
```bash
# Install the Core CLI
go install forge.lthn.ai/core/cli/cmd/core@latest
# Check your environment
core doctor
# Run tests, format, lint
core go test
core go fmt
core go lint
# Build your project
core build
```
=== "PHP"
```bash
# Install the framework
composer require lthn/php
# Create a module
php artisan make:mod Commerce
# Start dev environment
core php dev
# Run tests
core php test
```
=== "Multi-Repo"
```bash
# Health check across all repos
core dev health
# Full workflow: status, commit, push
core dev work
# Just show status
core dev work --status
```
## Architecture
The Core platform spans two ecosystems:
**Go** provides the CLI toolchain and infrastructure services — build system, release pipeline, multi-repo management, LinuxKit VMs, and AI/ML integration. The `core` binary is the single entry point.
**PHP** provides the application framework — a Laravel-based modular monolith with event-driven module loading, automatic multi-tenancy, and packages for admin, API, commerce, content, MCP, and developer portals.
Both are connected through the CLI (`core go`, `core php`, `core build`, `core dev`) and share deployment pipelines (`core ci`, `core deploy`).
## Licence
EUPL-1.2 — [European Union Public Licence](https://joinup.ec.europa.eu/collection/eupl/eupl-text-eupl-12)

---
title: Community & Links
description: Find Core on GitHub, Forgejo, Hugging Face, and other platforms
---
# Community & Links
## Source Code
- **[Forgejo](https://forge.lthn.ai/core)** — Primary source, CI/CD, and issue tracking
- **[GitHub](https://github.com/LetheanNetwork)** — Public mirror
- **[GitLab](https://gitlab.com/lthn)** — Mirror
## AI & Models
- **[Hugging Face](https://huggingface.co/lthn)** — LEM models and training datasets
## Packages
- **[Go Modules](https://forge.lthn.ai/core)** — `forge.lthn.ai/core/*`
- **[Packagist](https://packagist.org/packages/lthn/)** — PHP packages via Composer (`lthn/*`)
- **[Docker Hub](https://hub.docker.com/u/lthn)** — Container images
## Social
- **[Twitter / X](https://twitter.com/LetheanNetwork)** — @LetheanNetwork
## Community
- **[Discord](https://discord.gg/CwZtD69wTg)** — Community chat
- **[BugSETI](https://bugseti.app)** — Bug bounty and testing platform
## Documentation
- **[core.help](https://core.help)** — This site

---
title: Architecture
description: Deep dives into the Core PHP architecture — module system, lifecycle events, lazy loading, and multi-tenancy
---
# Architecture
Deep dives into how the Core PHP framework works under the hood.

---
title: Features
description: Built-in features — actions, scheduled actions, tenancy, search, SEO, CDN, media, activity logging, studio, and seeders
---
# Features
Built-in capabilities available to every Core PHP module.

# Scheduled Actions
Declare schedules directly on Action classes using PHP attributes. No manual `routes/console.php` entries needed — the framework discovers, persists, and executes them automatically.
## Overview
Scheduled Actions combine the [Actions pattern](actions.md) with PHP 8.1 attributes to create a database-backed scheduling system. Actions declare their default schedule via `#[Scheduled]`, a sync command persists them to a `scheduled_actions` table, and a service provider wires them into Laravel's scheduler at runtime.
```
artisan schedule:sync artisan schedule:run
│ │
ScheduledActionScanner ScheduleServiceProvider
│ │
Discovers #[Scheduled] Reads scheduled_actions
attributes via reflection table, wires into Schedule
│ │
Upserts scheduled_actions Calls Action::run() at
table rows configured frequency
```
## Basic Usage
Add the `#[Scheduled]` attribute to any Action class:
```php
<?php
declare(strict_types=1);
namespace Mod\Social\Actions;
use Core\Actions\Action;
use Core\Actions\Scheduled;
#[Scheduled(frequency: 'dailyAt:09:00', timezone: 'Europe/London')]
class PublishDiscordDigest
{
use Action;
public function handle(): void
{
// Gather yesterday's commits, summarise, post to Discord
}
}
```
No Boot registration needed. No `routes/console.php` entry. The scanner discovers it, `schedule:sync` persists it, and the scheduler runs it.
## The `#[Scheduled]` Attribute
```php
#[Attribute(Attribute::TARGET_CLASS)]
class Scheduled
{
public function __construct(
public string $frequency,
public ?string $timezone = null,
public bool $withoutOverlapping = true,
public bool $runInBackground = true,
) {}
}
```
### Frequency Strings
The `frequency` string maps directly to Laravel Schedule methods. Arguments are colon-separated, with multiple arguments comma-separated:
| Frequency String | Laravel Equivalent |
|---|---|
| `everyMinute` | `->everyMinute()` |
| `hourly` | `->hourly()` |
| `dailyAt:09:00` | `->dailyAt('09:00')` |
| `weeklyOn:1,09:00` | `->weeklyOn(1, '09:00')` |
| `monthlyOn:1,00:00` | `->monthlyOn(1, '00:00')` |
| `cron:*/5 * * * *` | `->cron('*/5 * * * *')` |
Numeric arguments are automatically cast to integers, so `weeklyOn:1,09:00` correctly passes `(int) 1` and `'09:00'`.
## Syncing Schedules
The `schedule:sync` command scans for `#[Scheduled]` attributes and persists them to the database:
```bash
php artisan schedule:sync
# Schedule sync complete: 3 added, 1 disabled, 12 unchanged.
```
### Behaviour
- **New classes** are inserted with their attribute defaults
- **Existing rows** are preserved (manual edits to frequency are not overwritten)
- **Removed classes** are disabled (`is_enabled = false`), not deleted
- **Idempotent** — safe to run on every deploy
Run this command as part of your deployment pipeline, after migrations.
### Scan Paths
By default, the scanner checks:
- `app/Core`, `app/Mod`, `app/Website` (application code)
- `src/Core`, `src/Mod` (framework code)
Override with the `core.scheduled_action_paths` config key:
```php
// config/core.php
'scheduled_action_paths' => [
app_path('Core'),
app_path('Mod'),
],
```
## The `ScheduledAction` Model
Each discovered action is persisted as a `ScheduledAction` row:
| Column | Type | Description |
|---|---|---|
| `action_class` | `string` (unique) | Fully qualified class name |
| `frequency` | `string` | Schedule frequency string |
| `timezone` | `string` (nullable) | Timezone override |
| `without_overlapping` | `boolean` | Prevent concurrent runs |
| `run_in_background` | `boolean` | Run in background process |
| `is_enabled` | `boolean` | Toggle on/off |
| `last_run_at` | `timestamp` (nullable) | Last execution time |
| `next_run_at` | `timestamp` (nullable) | Computed next run |
### Querying
```php
use Core\Actions\ScheduledAction;

// All enabled actions
$active = ScheduledAction::enabled()->get();

// Check last run
$action = ScheduledAction::where('action_class', MyAction::class)->first();
echo $action->last_run_at?->diffForHumans(); // "2 hours ago"

// Parse frequency
$action->frequencyMethod(); // 'dailyAt'
$action->frequencyArgs();   // ['09:00']
```
## Runtime Execution
The `ScheduleServiceProvider` boots in console context and wires all enabled rows into Laravel's scheduler. It validates each action before registering:
- **Namespace allowlist** — only classes in `App\`, `Core\`, or `Mod\` namespaces are accepted
- **Action trait check** — the class must use the `Core\Actions\Action` trait
- **Frequency allowlist** — only recognised Laravel Schedule methods are permitted
After each run, `last_run_at` is updated automatically.
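As a rough sketch of the namespace allowlist, assuming a simple prefix check (the `isAllowedNamespace` helper is hypothetical; the provider's real validation may differ in detail):

```php
<?php

// Hypothetical prefix-based namespace allowlist, mirroring the rule above:
// only App\, Core\, and Mod\ classes may be scheduled.
function isAllowedNamespace(string $class): bool
{
    foreach (['App\\', 'Core\\', 'Mod\\'] as $prefix) {
        if (str_starts_with($class, $prefix)) {
            return true;
        }
    }

    return false;
}
```

Validating before registration means a stale or tampered `action_class` value in the database cannot pull arbitrary vendor code into the scheduler.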
## Admin Control
The `scheduled_actions` table is designed for admin visibility. You can:
- **Disable** an action by setting `is_enabled = false` — it will not be re-enabled by subsequent syncs
- **Change frequency** by editing the `frequency` column — manual edits are preserved across syncs
- **Monitor** via `last_run_at` — see when each action last executed
## Migration Strategy
- Existing `routes/console.php` commands remain untouched
- New scheduled work uses `#[Scheduled]` actions
- Existing commands can be migrated to actions gradually at natural touch points
## Examples
### Every-minute health check
```php
#[Scheduled(frequency: 'everyMinute', withoutOverlapping: true)]
class CheckServiceHealth
{
    use Action;

    public function handle(): void
    {
        // Ping upstream services, alert on failure
    }
}
```
### Weekly report with timezone
```php
#[Scheduled(frequency: 'weeklyOn:1,09:00', timezone: 'Europe/London')]
class SendWeeklyReport
{
    use Action;

    public function handle(): void
    {
        // Compile and email weekly metrics
    }
}
```
### Cron expression
```php
#[Scheduled(frequency: 'cron:0 */6 * * *')]
class SyncExternalData
{
    use Action;

    public function handle(): void
    {
        // Pull data from external API every 6 hours
    }
}
```
## Testing
```php
use Core\Actions\Scheduled;
use Core\Actions\ScheduledAction;
use Core\Actions\ScheduledActionScanner;

it('discovers scheduled actions', function () {
    $scanner = new ScheduledActionScanner();
    $results = $scanner->scan([app_path('Mod')]);

    expect($results)->not->toBeEmpty();
    expect(array_values($results)[0])->toBeInstanceOf(Scheduled::class);
});

it('syncs scheduled actions to database', function () {
    $this->artisan('schedule:sync')->assertSuccessful();

    expect(ScheduledAction::enabled()->count())->toBeGreaterThan(0);
});
```
## Learn More
- [Actions Pattern](actions.md)
- [Module System](/php/framework/modules)
- [Lifecycle Events](/php/framework/events)

# Studio Multimedia Pipeline
Studio is a CorePHP module that orchestrates video remixing, transcription, voice synthesis, and image generation by dispatching GPU work to remote services. It separates creative decisions (LEM/Ollama) from mechanical execution (ffmpeg, Whisper, TTS, ComfyUI).
## Architecture
Studio is a job orchestrator, not a renderer. All GPU-intensive work runs on remote Docker services accessed over HTTP.
```
Studio Module (CorePHP)
├── Livewire UI (asset browser, remix form, voice, thumbnails)
├── Artisan Commands (CLI)
└── API Routes (/api/studio/*)

Actions (CatalogueAsset, GenerateManifest, RenderManifest, etc.)

Redis Job Queue
├── Ollama (LEM) ─────── Creative decisions, scripts, manifests
├── Whisper ───────────── Speech-to-text transcription
├── Kokoro TTS ────────── Voiceover generation
├── ffmpeg Worker ─────── Video rendering from manifests
└── ComfyUI ──────────── Image generation, thumbnails
```
### Smart/Dumb Separation
LEM produces JSON manifests (the creative layer). ffmpeg and GPU services consume them mechanically (the execution layer). Neither side knows about the other's internals — the manifest format is the contract.
## Module Structure
The Studio module lives at `app/Mod/Studio/` and follows standard CorePHP patterns:
```
app/Mod/Studio/
├── Boot.php # Lifecycle events (API, Console, Web)
├── Actions/
│ ├── CatalogueAsset.php # Ingest files, extract metadata
│ ├── TranscribeAsset.php # Send to Whisper, store transcript
│ ├── GenerateManifest.php # Brief + library → LEM → manifest JSON
│ ├── RenderManifest.php # Dispatch manifest to ffmpeg worker
│ ├── SynthesiseSpeech.php # Text → TTS → audio file
│ ├── GenerateVoiceover.php # Script → voiced audio for remix
│ ├── GenerateImage.php # Prompt → ComfyUI → image
│ ├── GenerateThumbnail.php # Asset → thumbnail image
│ └── BatchRemix.php # Queue multiple remix jobs
├── Console/
│ ├── Catalogue.php # studio:catalogue — batch ingest
│ ├── Transcribe.php # studio:transcribe — batch transcription
│ ├── Remix.php # studio:remix — brief in, video out
│ ├── Voice.php # studio:voice — text-to-speech
│ ├── Thumbnail.php # studio:thumbnail — generate thumbnails
│ └── BatchRemixCommand.php # studio:batch-remix — queue batch jobs
├── Controllers/Api/
│ ├── AssetController.php # GET/POST /api/studio/assets
│ ├── RemixController.php # POST /api/studio/remix
│ ├── VoiceController.php # POST /api/studio/voice
│ └── ImageController.php # POST /api/studio/images/thumbnail
├── Models/
│ ├── StudioAsset.php # Multimedia asset with metadata
│ └── StudioJob.php # Job tracking (status, manifest, output)
├── Livewire/
│ ├── AssetBrowserPage.php # Browse/search/tag assets
│ ├── RemixPage.php # Remix form + job status
│ ├── VoicePage.php # Voice synthesis interface
│ └── ThumbnailPage.php # Thumbnail generator
└── Routes/
    ├── api.php               # REST API endpoints
    └── web.php               # Livewire page routes
```
## Asset Cataloguing
Assets are multimedia files (video, image, audio) tracked in the `studio_assets` table with metadata including duration, resolution, tags, and transcripts.
### Ingesting Assets
```php
use Mod\Studio\Actions\CatalogueAsset;
// From an uploaded file
$asset = CatalogueAsset::run($uploadedFile, ['summer', 'beach']);
// From an existing storage path
$asset = CatalogueAsset::run('studio/raw/clip-001.mp4', ['interview']);
```
Only `video/*`, `image/*`, and `audio/*` MIME types are accepted.
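That allowlist reduces to a prefix check on the MIME type; a minimal sketch with a hypothetical `isSupportedMime` helper:

```php
<?php

// Hypothetical sketch of the MIME allowlist described above:
// only video/*, image/*, and audio/* are catalogued.
function isSupportedMime(string $mime): bool
{
    return (bool) preg_match('#^(video|image|audio)/#', $mime);
}
```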
### CLI Batch Ingest
```bash
php artisan studio:catalogue /path/to/media --tags=summer,promo
```
### Querying Assets
```php
use Mod\Studio\Models\StudioAsset;
// By type
$videos = StudioAsset::videos()->get();
$images = StudioAsset::images()->get();
$audio = StudioAsset::audio()->get();
// By tag
$summer = StudioAsset::tagged('summer')->get();
```
## Transcription
Transcription sends assets to a Whisper service and stores the returned text and detected language.
```php
use Mod\Studio\Actions\TranscribeAsset;
$asset = TranscribeAsset::run($asset);
echo $asset->transcript; // "Hello and welcome..."
echo $asset->transcript_language; // "en"
```
The action handles missing files and API failures gracefully — it returns the asset unchanged without throwing.
### CLI Batch Transcription
```bash
php artisan studio:transcribe
```
## Manifest-Driven Remixing
The remix pipeline has two stages: manifest generation (creative) and rendering (mechanical).
### Generating Manifests
```php
use Mod\Studio\Actions\GenerateManifest;
$job = GenerateManifest::run(
    brief: 'Create a 15-second upbeat TikTok from the summer footage',
    template: 'tiktok-15s',
);
// $job->manifest contains the JSON manifest
```
The action collects all video assets from the library, sends them as context to Ollama along with the brief, and parses the returned JSON manifest.
### Manifest Format
```json
{
  "clips": [
    {"asset_id": 42, "start_ms": 3200, "end_ms": 8100, "effects": ["fade_in"]},
    {"asset_id": 17, "start_ms": 0, "end_ms": 5500, "effects": ["crossfade"]}
  ],
  "audio": {"track": "original"},
  "voiceover": {"script": "Summer vibes only", "voice": "default", "volume": 0.8},
  "overlays": [
    {"type": "image", "asset_id": 5, "at": 0.5, "duration": 3.0, "position": "bottom-right", "opacity": 0.8}
  ]
}
```
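Because the manifest is the whole contract between the creative and execution layers, a consumer can derive facts such as output length from it alone. A sketch with a hypothetical `manifestDurationMs` helper (it ignores transition overlap for simplicity):

```php
<?php

// Hypothetical: total output length implied by a decoded manifest,
// summing each clip's trimmed span. Crossfade overlap is ignored here.
function manifestDurationMs(array $manifest): int
{
    return array_sum(array_map(
        fn (array $clip) => $clip['end_ms'] - $clip['start_ms'],
        $manifest['clips'] ?? []
    ));
}
```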
### Rendering
```php
use Mod\Studio\Actions\RenderManifest;
$job = RenderManifest::run($job);
```
This dispatches the manifest to the ffmpeg worker service, which renders the video and calls back when complete.
### CLI Remix
```bash
php artisan studio:remix "Create a relaxing travel montage" --template=tiktok-30s
```
## Voice & TTS
```php
use Mod\Studio\Actions\SynthesiseSpeech;

$audio = SynthesiseSpeech::run(
    text: 'Welcome to our channel',
    voice: 'default',
);
```
### CLI
```bash
php artisan studio:voice "Welcome to our channel" --voice=default
```
## Image Generation
Thumbnails and image overlays use ComfyUI:
```php
use Mod\Studio\Actions\GenerateThumbnail;
$thumbnail = GenerateThumbnail::run($asset);
```
### CLI
```bash
php artisan studio:thumbnail --asset=42
```
## API Endpoints
| Method | Endpoint | Description |
|---|---|---|
| `GET` | `/api/studio/assets` | List assets |
| `GET` | `/api/studio/assets/{id}` | Show asset details |
| `POST` | `/api/studio/assets` | Upload/catalogue asset |
| `POST` | `/api/studio/remix` | Submit remix brief |
| `GET` | `/api/studio/remix/{id}` | Poll job status |
| `POST` | `/api/studio/remix/{id}/callback` | Worker completion callback |
| `POST` | `/api/studio/voice` | Submit voice synthesis |
| `GET` | `/api/studio/voice/{id}` | Poll voice job status |
| `POST` | `/api/studio/images/thumbnail` | Generate thumbnail |
## GPU Services
All GPU services run as Docker containers, accessed over HTTP. Configuration is in `config/studio.php`:
| Service | Default Endpoint | Purpose |
|---|---|---|
| Ollama | `http://studio-ollama:11434` | Creative decisions via LEM |
| Whisper | `http://studio-whisper:9100` | Speech-to-text |
| Kokoro TTS | `http://studio-tts:9200` | Text-to-speech |
| ffmpeg Worker | `http://studio-worker:9300` | Video rendering |
| ComfyUI | `http://studio-comfyui:8188` | Image generation |
## Configuration
```php
// config/studio.php
return [
    'ollama' => [
        'url' => env('STUDIO_OLLAMA_URL', 'http://studio-ollama:11434'),
        'model' => env('STUDIO_OLLAMA_MODEL', 'lem-4b'),
        'timeout' => 60,
    ],
    'whisper' => [
        'url' => env('STUDIO_WHISPER_URL', 'http://studio-whisper:9100'),
        'model' => 'large-v3-turbo',
        'timeout' => 120,
    ],
    'worker' => [
        'url' => env('STUDIO_WORKER_URL', 'http://studio-worker:9300'),
        'timeout' => 300,
    ],
    'storage' => [
        'disk' => 'local',
        'assets_path' => 'studio/assets',
    ],
    'templates' => [
        'tiktok-15s' => ['duration' => 15, 'width' => 1080, 'height' => 1920, 'fps' => 30],
        'tiktok-30s' => ['duration' => 30, 'width' => 1080, 'height' => 1920, 'fps' => 30],
        'youtube-60s' => ['duration' => 60, 'width' => 1920, 'height' => 1080, 'fps' => 30],
    ],
];
```
## Livewire UI
Studio provides four Livewire page components:
- **Asset Browser** — browse, search, and tag multimedia assets
- **Remix Page** — enter a creative brief, select template, view job progress
- **Voice Page** — text-to-speech interface
- **Thumbnail Page** — generate thumbnails from assets
Components are registered via the module's Boot class and available under `mod.studio.livewire.*`.
## Testing
All actions are testable with `Http::fake()`:
```php
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Storage;
use Mod\Studio\Actions\TranscribeAsset;
use Mod\Studio\Models\StudioAsset;

it('transcribes an asset via Whisper', function () {
    Storage::fake('local');
    Storage::disk('local')->put('studio/test.mp4', 'fake-video');

    Http::fake([
        '*/transcribe' => Http::response([
            'text' => 'Hello world',
            'language' => 'en',
        ]),
    ]);

    $asset = StudioAsset::factory()->create(['path' => 'studio/test.mp4']);
    $result = TranscribeAsset::run($asset);

    expect($result->transcript)->toBe('Hello world');
    expect($result->transcript_language)->toBe('en');
});
```
## Learn More
- [Actions Pattern](actions.md)
- [Lifecycle Events](/php/framework/events)

---
title: Framework
description: Core PHP framework internals — modules, events, contracts, testing, and security
---
# Framework
The Core PHP framework internals that power every module and package.

---
title: Guides
description: Learn Core PHP — installation, configuration, framework internals, and built-in features
---
# Guides
Everything you need to get started and work effectively with Core PHP.
