docs: update documentation from implemented plans

Add new pages: scheduled-actions, studio, plug, uptelligence.
Update: go-blockchain, go-devops, go-process, mcp, lint, docs engine.
Update nav and indexes.

Co-Authored-By: Virgil <virgil@lethean.io>
This commit is contained in:
2026-03-14 08:09:17 +00:00
parent 0ee4c15ee2
commit 05305d9870
13 changed files with 1591 additions and 149 deletions


@@ -27,22 +27,66 @@ go-blockchain -- Go reimplementation of the Zano-fork protocol
The Lethean mainnet launched on **2026-02-12** with genesis timestamp `1770897600` (12:00 UTC). The chain runs a hybrid PoW/PoS consensus with 120-second block targets.
## Binary
The repo produces a standalone `core-chain` binary via `cmd/core-chain/main.go`.
It uses `cli.Main()` and `cli.WithCommands()` from the Core CLI framework,
keeping the potentially heavy CGo dependencies out of the main `core` binary.
```bash
# Build
core build # uses .core/build.yaml
go build -o ./bin/core-chain ./cmd/core-chain
# TUI block explorer
core-chain chain explorer
# Headless P2P sync
core-chain chain sync
# Sync as a background daemon
core-chain chain sync --daemon
# Stop a running sync daemon
core-chain chain sync --stop
```
Persistent flags on the `chain` parent command:
| Flag | Default | Description |
|------|---------|-------------|
| `--data-dir` | `~/.lethean/chain` | Blockchain data directory |
| `--seed` | `seeds.lthn.io:36942` | Seed peer address |
| `--testnet` | `false` | Use testnet parameters |
The sync subcommand supports daemon mode via go-process, with PID file
locking at `{data-dir}/sync.pid` and automatic registration in the daemon
registry (`~/.core/daemons/`).
## Package structure
```
go-blockchain/
cmd/
core-chain/ Standalone binary entry point (cli.Main)
commands.go AddChainCommands() registration + shared helpers
cmd_explorer.go TUI block explorer subcommand
cmd_sync.go Headless sync subcommand with daemon support
sync_service.go Extracted P2P sync loop
config/ Chain parameters (mainnet/testnet), hardfork schedule
types/ Core data types: Hash, PublicKey, Address, Block, Transaction
wire/ Binary serialisation (consensus-critical, bit-identical to C++)
crypto/ CGo bridge to libcryptonote (ring sigs, BP+, Zarcanum, stealth)
difficulty/ PoW + PoS difficulty adjustment (LWMA variant)
consensus/ Three-layer block/transaction validation
chain/ Blockchain storage, block/tx validation, mempool
p2p/ Levin TCP protocol, peer discovery, handshake
rpc/ Daemon and wallet JSON-RPC client
wallet/ Key management, output scanning, tx construction
mining/ Solo PoW miner (RandomX nonce grinding)
tui/ Terminal dashboard (bubbletea + lipgloss)
.core/
build.yaml Build system config (targets: darwin/arm64, linux/amd64)
```
## Design Principles
@@ -123,7 +167,20 @@ The project follows a 9-phase development plan. See the [wiki Development Phases
| 7 | Consensus Rules | Complete |
| 8 | Mining | Complete |
## Dependencies
| Module | Purpose |
|--------|---------|
| `forge.lthn.ai/core/cli` | CLI framework (`cli.Main`, cobra, bubbletea TUI) |
| `forge.lthn.ai/core/go` | DI container and service lifecycle |
| `forge.lthn.ai/core/go-process` | Daemon lifecycle, PID file, registry (sync daemon mode) |
| `forge.lthn.ai/core/go-store` | SQLite storage backend for chain data |
| `forge.lthn.ai/core/go-p2p` | Levin protocol implementation |
| `forge.lthn.ai/core/go-crypt` | Cryptographic utilities |
| `golang.org/x/crypto` | SSH, additional crypto primitives |
| `github.com/stretchr/testify` | Test assertions (test-only) |
## Further reading
- [Architecture](architecture.md) -- Package dependencies, CGo boundary, data structures
- [Cryptography](cryptography.md) -- Crypto primitives, hashing, signatures, proofs


@@ -1,56 +1,60 @@
---
title: go-devops
description: Multi-repo development workflows, deployment, and release snapshot generation for the Lethean ecosystem.
---
# go-devops
`forge.lthn.ai/core/go-devops` provides multi-repo development workflow
commands (`core dev`), deployment orchestration, documentation sync, and
release snapshot generation (`core.json`).
**Module**: `forge.lthn.ai/core/go-devops`
**Go**: 1.26
**Licence**: EUPL-1.2
## Decomposition
go-devops was originally a 31K LOC monolith covering builds, releases,
infrastructure, Ansible, containers, and code quality. It has since been
decomposed into focused, independently versioned packages:
| Extracted package | Former location | What moved |
|-------------------|-----------------|------------|
| [go-build](go-build.md) | `build/`, `release/`, `sdk/` | Cross-compilation, code signing, release publishing, SDK generation |
| [go-infra](go-infra.md) | `infra/` | Hetzner Cloud/Robot, CloudNS provider APIs, `infra.yaml` config |
| [go-ansible](go-ansible.md) | `ansible/` | Pure Go Ansible playbook engine (41 modules, SSH) |
| [go-container](go-container.md) | `container/`, `devops/` | LinuxKit VM management, dev environments, image sources |
The `devkit/` package (cyclomatic complexity, coverage, vulnerability scanning)
was merged into `core/lint`.
After decomposition, go-devops retains multi-repo orchestration, deployment,
documentation sync, and manifest snapshot generation.
## What it does
| Area | Summary |
|------|---------|
| **Multi-repo workflows** | Status, commit, push, pull across all repos in a `repos.yaml` workspace |
| **GitHub integration** | Issue listing, PR review status, CI workflow checks |
| **Documentation sync** | Collect docs from multi-repo workspaces into a central location |
| **Deployment** | Coolify PaaS integration |
| **Release snapshots** | Generate `core.json` from `.core/manifest.yaml` for marketplace indexing |
| **Setup** | Repository and CI bootstrapping |
## Package layout
```
go-devops/
├── cmd/ CLI command registrations
│ ├── dev/ Multi-repo workflow commands (work, health, commit, push, pull)
│ ├── docs/ Documentation sync and listing
│ ├── deploy/ Coolify deployment commands
│ ├── setup/ Repository and CI bootstrapping
│ └── gitcmd/ Git helpers
├── deploy/ Deployment integrations (Coolify PaaS)
└── snapshot/ Frozen release manifest generation (core.json)
```
## CLI commands
@@ -58,14 +62,6 @@ go-devops/
go-devops registers commands into the `core` CLI binary (built from `forge.lthn.ai/core/cli`). Key commands:
```bash
# Multi-repo development
core dev health # Quick summary across all repos
core dev work # Combined status, commit, push workflow
@@ -92,55 +88,31 @@ core setup repo # Generate .core/ configuration for a repo
core setup ci # Bootstrap CI configuration
```
## Release snapshots
The `snapshot` package generates a frozen `core.json` manifest from
`.core/manifest.yaml`, embedding the git commit SHA, tag, and build
timestamp. This file is consumed by the marketplace for self-describing
package listings.
```json
{
  "schema": 1,
  "code": "photo-browser",
  "name": "Photo Browser",
  "version": "0.1.0",
  "commit": "a1b2c3d4...",
  "tag": "v0.1.0",
  "built": "2026-03-09T15:00:00Z",
  "daemons": { ... },
  "modules": [ ... ]
}
```
## Further reading
- [go-build](go-build.md) -- Build system, release pipeline, SDK generation
- [go-infra](go-infra.md) -- Infrastructure provider APIs
- [go-ansible](go-ansible.md) -- Pure Go Ansible playbook engine
- [go-container](go-container.md) -- LinuxKit VM management
- [Doc Sync](sync.md) -- Documentation sync across multi-repo workspaces


@@ -98,14 +98,105 @@ procs := process.List()
running := process.Running()
```
## Daemon mode
go-process also manages *this process* as a long-running service. Where
`Process` manages child processes, `Daemon` manages the current process's own
lifecycle -- PID file locking, health endpoints, signal handling, and graceful
shutdown.
These types were extracted from `core/cli` to give any Go service daemon
capabilities without depending on the full CLI framework.
### PID file
`PIDFile` enforces single-instance execution. It writes the current PID on
`Acquire()`, detects stale lock files, and cleans up on `Release()`.
```go
pf := process.NewPIDFile("/var/run/myapp.pid")
if err := pf.Acquire(); err != nil {
log.Fatal("another instance is running")
}
defer pf.Release()
```
### Health server
`HealthServer` provides HTTP `/health` and `/ready` endpoints. Custom health
checks can be added and the ready state toggled independently.
```go
hs := process.NewHealthServer("127.0.0.1:9000")
hs.AddCheck(func() error { return db.Ping() })
hs.Start()
defer hs.Stop(ctx)
hs.SetReady(true)
```
### Daemon orchestration
`Daemon` combines PID file, health server, and signal handling into a single
struct. It listens for `SIGTERM`/`SIGINT` and calls registered shutdown hooks.
```go
d := process.NewDaemon(process.DaemonOptions{
    PIDFile:         "/var/run/myapp.pid",
    HealthAddr:      "127.0.0.1:9000",
    ShutdownTimeout: 30 * time.Second,
})
d.Start()
d.SetReady(true)
d.Run(ctx) // blocks until signal
```
### Daemon registry
The `Registry` tracks all running daemons across the system via JSON files
in `~/.core/daemons/`. When a `Daemon` is configured with a `Registry`, it
auto-registers on start and auto-unregisters on stop.
```go
reg := process.DefaultRegistry()
// Manual registration
reg.Register(process.DaemonEntry{
    Code: "my-app", Daemon: "serve", PID: os.Getpid(),
    Health: "127.0.0.1:9000", Project: "/path/to/project",
})
// List all live daemons (stale entries are pruned automatically)
entries, _ := reg.List()
// Auto-registration via Daemon
d := process.NewDaemon(process.DaemonOptions{
    Registry: reg,
    RegistryEntry: process.DaemonEntry{
        Code: "my-app", Daemon: "serve",
    },
})
```
The registry is consumed by `core start/stop/list` CLI commands for
project-level daemon management.
## Package layout
| Path | Description |
|------|-------------|
| `*.go` (root) | Process service, types, actions, runner, daemon, health, PID file, registry |
| `exec/` | Lightweight command wrapper with fluent API and structured logging |
Key files:
| File | Purpose |
|------|---------|
| `daemon.go` | `Daemon`, `DaemonOptions`, `Mode`, `DetectMode()` |
| `pidfile.go` | `PIDFile` (acquire, release, stale detection) |
| `health.go` | `HealthServer` with `/health` and `/ready` endpoints |
| `registry.go` | `Registry`, `DaemonEntry`, `DefaultRegistry()` |
## Module information
| Field | Value |
|-------|-------|


@@ -1,6 +1,6 @@
---
title: Features
description: Built-in features — actions, scheduled actions, tenancy, search, SEO, CDN, media, activity logging, studio, and seeders
---
# Features


@@ -0,0 +1,243 @@
# Scheduled Actions
Declare schedules directly on Action classes using PHP attributes. No manual `routes/console.php` entries needed — the framework discovers, persists, and executes them automatically.
## Overview
Scheduled Actions combine the [Actions pattern](actions.md) with PHP 8.1 attributes to create a database-backed scheduling system. Actions declare their default schedule via `#[Scheduled]`, a sync command persists them to a `scheduled_actions` table, and a service provider wires them into Laravel's scheduler at runtime.
```
artisan schedule:sync              artisan schedule:run
          │                                  │
ScheduledActionScanner             ScheduleServiceProvider
          │                                  │
Discovers #[Scheduled]             Reads scheduled_actions
attributes via reflection          table, wires into Schedule
          │                                  │
Upserts scheduled_actions          Calls Action::run() at
table rows                         configured frequency
```
## Basic Usage
Add the `#[Scheduled]` attribute to any Action class:
```php
<?php
declare(strict_types=1);
namespace Mod\Social\Actions;
use Core\Actions\Action;
use Core\Actions\Scheduled;
#[Scheduled(frequency: 'dailyAt:09:00', timezone: 'Europe/London')]
class PublishDiscordDigest
{
    use Action;

    public function handle(): void
    {
        // Gather yesterday's commits, summarise, post to Discord
    }
}
```
No Boot registration needed. No `routes/console.php` entry. The scanner discovers it, `schedule:sync` persists it, and the scheduler runs it.
## The `#[Scheduled]` Attribute
```php
#[Attribute(Attribute::TARGET_CLASS)]
class Scheduled
{
    public function __construct(
        public string $frequency,
        public ?string $timezone = null,
        public bool $withoutOverlapping = true,
        public bool $runInBackground = true,
    ) {}
}
```
### Frequency Strings
The `frequency` string maps directly to Laravel Schedule methods. Arguments are colon-separated, with multiple arguments comma-separated:
| Frequency String | Laravel Equivalent |
|---|---|
| `everyMinute` | `->everyMinute()` |
| `hourly` | `->hourly()` |
| `dailyAt:09:00` | `->dailyAt('09:00')` |
| `weeklyOn:1,09:00` | `->weeklyOn(1, '09:00')` |
| `monthlyOn:1,00:00` | `->monthlyOn(1, '00:00')` |
| `cron:*/5 * * * *` | `->cron('*/5 * * * *')` |
Numeric arguments are automatically cast to integers, so `weeklyOn:1,09:00` correctly passes `(int) 1` and `'09:00'`.
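The parsing rules above can be sketched as a small helper in plain PHP. This is an illustrative function, not the framework's actual parser; it only demonstrates the documented colon/comma splitting and integer casting:

```php
<?php
declare(strict_types=1);

// Hypothetical helper illustrating the documented rules: the method
// name and its arguments are colon-separated, multiple arguments are
// comma-separated, and purely numeric arguments are cast to int.
// The cron method keeps its expression as a single argument.
function parseFrequency(string $frequency): array
{
    [$method, $argString] = array_pad(explode(':', $frequency, 2), 2, null);

    if ($argString === null) {
        return [$method, []];
    }

    $args = $method === 'cron' ? [$argString] : explode(',', $argString);

    $args = array_map(
        fn (string $arg) => ctype_digit($arg) ? (int) $arg : $arg,
        $args,
    );

    return [$method, $args];
}

// parseFrequency('weeklyOn:1,09:00') yields ['weeklyOn', [1, '09:00']]
```

Special-casing `cron` keeps expressions such as `0 0 1,15 * *` intact instead of splitting them on the comma.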
## Syncing Schedules
The `schedule:sync` command scans for `#[Scheduled]` attributes and persists them to the database:
```bash
php artisan schedule:sync
# Schedule sync complete: 3 added, 1 disabled, 12 unchanged.
```
### Behaviour
- **New classes** are inserted with their attribute defaults
- **Existing rows** are preserved (manual edits to frequency are not overwritten)
- **Removed classes** are disabled (`is_enabled = false`), not deleted
- **Idempotent** — safe to run on every deploy
Run this command as part of your deployment pipeline, after migrations.
### Scan Paths
By default, the scanner checks:
- `app/Core`, `app/Mod`, `app/Website` (application code)
- `src/Core`, `src/Mod` (framework code)
Override with the `core.scheduled_action_paths` config key:
```php
// config/core.php
'scheduled_action_paths' => [
    app_path('Core'),
    app_path('Mod'),
],
```
## The `ScheduledAction` Model
Each discovered action is persisted as a `ScheduledAction` row:
| Column | Type | Description |
|---|---|---|
| `action_class` | `string` (unique) | Fully qualified class name |
| `frequency` | `string` | Schedule frequency string |
| `timezone` | `string` (nullable) | Timezone override |
| `without_overlapping` | `boolean` | Prevent concurrent runs |
| `run_in_background` | `boolean` | Run in background process |
| `is_enabled` | `boolean` | Toggle on/off |
| `last_run_at` | `timestamp` (nullable) | Last execution time |
| `next_run_at` | `timestamp` (nullable) | Computed next run |
### Querying
```php
use Core\Actions\ScheduledAction;
// All enabled actions
$active = ScheduledAction::enabled()->get();
// Check last run
$action = ScheduledAction::where('action_class', MyAction::class)->first();
echo $action->last_run_at?->diffForHumans(); // "2 hours ago"
// Parse frequency
$action->frequencyMethod(); // 'dailyAt'
$action->frequencyArgs(); // ['09:00']
```
## Runtime Execution
The `ScheduleServiceProvider` boots in console context and wires all enabled rows into Laravel's scheduler. It validates each action before registering:
- **Namespace allowlist** — only classes in `App\`, `Core\`, or `Mod\` namespaces are accepted
- **Action trait check** — the class must use the `Core\Actions\Action` trait
- **Frequency allowlist** — only recognised Laravel Schedule methods are permitted
After each run, `last_run_at` is updated automatically.
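Conceptually, the wiring step reduces to the sketch below. This is illustrative only, assuming Laravel's `Schedule` API; the real logic, including the validation above, lives in `ScheduleServiceProvider`:

```php
use Core\Actions\ScheduledAction;
use Illuminate\Console\Scheduling\Schedule;

// Sketch: register each enabled row against Laravel's scheduler.
$schedule = app(Schedule::class);

foreach (ScheduledAction::enabled()->get() as $row) {
    $action = $row->action_class;

    $event = $schedule
        ->call(fn () => $action::run())
        ->name($action);

    // Apply the persisted frequency, e.g. ->dailyAt('09:00').
    $event->{$row->frequencyMethod()}(...$row->frequencyArgs());

    if ($row->timezone) {
        $event->timezone($row->timezone);
    }
    if ($row->without_overlapping) {
        $event->withoutOverlapping();
    }
}
```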
## Admin Control
The `scheduled_actions` table is designed for admin visibility. You can:
- **Disable** an action by setting `is_enabled = false` — it will not be re-enabled by subsequent syncs
- **Change frequency** by editing the `frequency` column — manual edits are preserved across syncs
- **Monitor** via `last_run_at` — see when each action last executed
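For example, pausing a single action from a console or admin context uses the documented model and column directly (`PublishDiscordDigest` is the example action from above):

```php
use Core\Actions\ScheduledAction;
use Mod\Social\Actions\PublishDiscordDigest;

// Pause the digest; subsequent schedule:sync runs will not re-enable it.
ScheduledAction::where('action_class', PublishDiscordDigest::class)
    ->update(['is_enabled' => false]);
```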
## Migration Strategy
- Existing `routes/console.php` commands remain untouched
- New scheduled work uses `#[Scheduled]` actions
- Existing commands can be migrated to actions gradually at natural touch points
## Examples
### Every-minute health check
```php
#[Scheduled(frequency: 'everyMinute', withoutOverlapping: true)]
class CheckServiceHealth
{
    use Action;

    public function handle(): void
    {
        // Ping upstream services, alert on failure
    }
}
```
### Weekly report with timezone
```php
#[Scheduled(frequency: 'weeklyOn:1,09:00', timezone: 'Europe/London')]
class SendWeeklyReport
{
    use Action;

    public function handle(): void
    {
        // Compile and email weekly metrics
    }
}
```
### Cron expression
```php
#[Scheduled(frequency: 'cron:0 */6 * * *')]
class SyncExternalData
{
    use Action;

    public function handle(): void
    {
        // Pull data from external API every 6 hours
    }
}
```
## Testing
```php
use Core\Actions\Scheduled;
use Core\Actions\ScheduledAction;
use Core\Actions\ScheduledActionScanner;
it('discovers scheduled actions', function () {
    $scanner = new ScheduledActionScanner();
    $results = $scanner->scan([app_path('Mod')]);

    expect($results)->not->toBeEmpty();
    expect(array_values($results)[0])->toBeInstanceOf(Scheduled::class);
});

it('syncs scheduled actions to database', function () {
    $this->artisan('schedule:sync')->assertSuccessful();

    expect(ScheduledAction::enabled()->count())->toBeGreaterThan(0);
});
```
## Learn More
- [Actions Pattern](actions.md)
- [Module System](/php/framework/modules)
- [Lifecycle Events](/php/framework/events)

docs/php/features/studio.md (new file, 314 lines)

@@ -0,0 +1,314 @@
# Studio Multimedia Pipeline
Studio is a CorePHP module that orchestrates video remixing, transcription, voice synthesis, and image generation by dispatching GPU work to remote services. It separates creative decisions (LEM/Ollama) from mechanical execution (ffmpeg, Whisper, TTS, ComfyUI).
## Architecture
Studio is a job orchestrator, not a renderer. All GPU-intensive work runs on remote Docker services accessed over HTTP.
```
Studio Module (CorePHP)
├── Livewire UI (asset browser, remix form, voice, thumbnails)
├── Artisan Commands (CLI)
└── API Routes (/api/studio/*)
            │
            ▼
Actions (CatalogueAsset, GenerateManifest, RenderManifest, etc.)
            │
            ▼
Redis Job Queue
├── Ollama (LEM) ─────── Creative decisions, scripts, manifests
├── Whisper ──────────── Speech-to-text transcription
├── Kokoro TTS ───────── Voiceover generation
├── ffmpeg Worker ────── Video rendering from manifests
└── ComfyUI ──────────── Image generation, thumbnails
```
### Smart/Dumb Separation
LEM produces JSON manifests (the creative layer). ffmpeg and GPU services consume them mechanically (the execution layer). Neither side knows about the other's internals — the manifest format is the contract.
## Module Structure
The Studio module lives at `app/Mod/Studio/` and follows standard CorePHP patterns:
```
app/Mod/Studio/
├── Boot.php # Lifecycle events (API, Console, Web)
├── Actions/
│ ├── CatalogueAsset.php # Ingest files, extract metadata
│ ├── TranscribeAsset.php # Send to Whisper, store transcript
│ ├── GenerateManifest.php # Brief + library → LEM → manifest JSON
│ ├── RenderManifest.php # Dispatch manifest to ffmpeg worker
│ ├── SynthesiseSpeech.php # Text → TTS → audio file
│ ├── GenerateVoiceover.php # Script → voiced audio for remix
│ ├── GenerateImage.php # Prompt → ComfyUI → image
│ ├── GenerateThumbnail.php # Asset → thumbnail image
│ └── BatchRemix.php # Queue multiple remix jobs
├── Console/
│ ├── Catalogue.php # studio:catalogue — batch ingest
│ ├── Transcribe.php # studio:transcribe — batch transcription
│ ├── Remix.php # studio:remix — brief in, video out
│ ├── Voice.php # studio:voice — text-to-speech
│ ├── Thumbnail.php # studio:thumbnail — generate thumbnails
│ └── BatchRemixCommand.php # studio:batch-remix — queue batch jobs
├── Controllers/Api/
│ ├── AssetController.php # GET/POST /api/studio/assets
│ ├── RemixController.php # POST /api/studio/remix
│ ├── VoiceController.php # POST /api/studio/voice
│ └── ImageController.php # POST /api/studio/images/thumbnail
├── Models/
│ ├── StudioAsset.php # Multimedia asset with metadata
│ └── StudioJob.php # Job tracking (status, manifest, output)
├── Livewire/
│ ├── AssetBrowserPage.php # Browse/search/tag assets
│ ├── RemixPage.php # Remix form + job status
│ ├── VoicePage.php # Voice synthesis interface
│ └── ThumbnailPage.php # Thumbnail generator
└── Routes/
├── api.php # REST API endpoints
└── web.php # Livewire page routes
```
## Asset Cataloguing
Assets are multimedia files (video, image, audio) tracked in the `studio_assets` table with metadata including duration, resolution, tags, and transcripts.
### Ingesting Assets
```php
use Mod\Studio\Actions\CatalogueAsset;
// From an uploaded file
$asset = CatalogueAsset::run($uploadedFile, ['summer', 'beach']);
// From an existing storage path
$asset = CatalogueAsset::run('studio/raw/clip-001.mp4', ['interview']);
```
Only `video/*`, `image/*`, and `audio/*` MIME types are accepted.
### CLI Batch Ingest
```bash
php artisan studio:catalogue /path/to/media --tags=summer,promo
```
### Querying Assets
```php
use Mod\Studio\Models\StudioAsset;
// By type
$videos = StudioAsset::videos()->get();
$images = StudioAsset::images()->get();
$audio = StudioAsset::audio()->get();
// By tag
$summer = StudioAsset::tagged('summer')->get();
```
## Transcription
Transcription sends assets to a Whisper service and stores the returned text and detected language.
```php
use Mod\Studio\Actions\TranscribeAsset;
$asset = TranscribeAsset::run($asset);
echo $asset->transcript; // "Hello and welcome..."
echo $asset->transcript_language; // "en"
```
The action handles missing files and API failures gracefully — it returns the asset unchanged without throwing.
### CLI Batch Transcription
```bash
php artisan studio:transcribe
```
## Manifest-Driven Remixing
The remix pipeline has two stages: manifest generation (creative) and rendering (mechanical).
### Generating Manifests
```php
use Mod\Studio\Actions\GenerateManifest;
$job = GenerateManifest::run(
    brief: 'Create a 15-second upbeat TikTok from the summer footage',
    template: 'tiktok-15s',
);
// $job->manifest contains the JSON manifest
```
The action collects all video assets from the library, sends them as context to Ollama along with the brief, and parses the returned JSON manifest.
### Manifest Format
```json
{
  "clips": [
    {"asset_id": 42, "start_ms": 3200, "end_ms": 8100, "effects": ["fade_in"]},
    {"asset_id": 17, "start_ms": 0, "end_ms": 5500, "effects": ["crossfade"]}
  ],
  "audio": {"track": "original"},
  "voiceover": {"script": "Summer vibes only", "voice": "default", "volume": 0.8},
  "overlays": [
    {"type": "image", "asset_id": 5, "at": 0.5, "duration": 3.0, "position": "bottom-right", "opacity": 0.8}
  ]
}
```
### Rendering
```php
use Mod\Studio\Actions\RenderManifest;
$job = RenderManifest::run($job);
```
This dispatches the manifest to the ffmpeg worker service, which renders the video and calls back when complete.
### CLI Remix
```bash
php artisan studio:remix "Create a relaxing travel montage" --template=tiktok-30s
```
## Voice & TTS
```php
use Mod\Studio\Actions\SynthesiseSpeech;
$audio = SynthesiseSpeech::run(
    text: 'Welcome to our channel',
    voice: 'default',
);
```
### CLI
```bash
php artisan studio:voice "Welcome to our channel" --voice=default
```
## Image Generation
Thumbnails and image overlays use ComfyUI:
```php
use Mod\Studio\Actions\GenerateThumbnail;
$thumbnail = GenerateThumbnail::run($asset);
```
### CLI
```bash
php artisan studio:thumbnail --asset=42
```
## API Endpoints
| Method | Endpoint | Description |
|---|---|---|
| `GET` | `/api/studio/assets` | List assets |
| `GET` | `/api/studio/assets/{id}` | Show asset details |
| `POST` | `/api/studio/assets` | Upload/catalogue asset |
| `POST` | `/api/studio/remix` | Submit remix brief |
| `GET` | `/api/studio/remix/{id}` | Poll job status |
| `POST` | `/api/studio/remix/{id}/callback` | Worker completion callback |
| `POST` | `/api/studio/voice` | Submit voice synthesis |
| `GET` | `/api/studio/voice/{id}` | Poll voice job status |
| `POST` | `/api/studio/images/thumbnail` | Generate thumbnail |
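As a sketch of the remix flow over HTTP (the host and the JSON field names are assumptions inferred from `GenerateManifest`'s parameters, not a documented payload):

```bash
# Submit a brief; the response is assumed to include a job id.
curl -X POST https://studio.example.test/api/studio/remix \
  -H 'Content-Type: application/json' \
  -d '{"brief": "Create a 15-second upbeat TikTok from the summer footage", "template": "tiktok-15s"}'

# Poll the job until the ffmpeg worker's callback marks it complete.
curl https://studio.example.test/api/studio/remix/42
```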
## GPU Services
All GPU services run as Docker containers, accessed over HTTP. Configuration is in `config/studio.php`:
| Service | Default Endpoint | Purpose |
|---|---|---|
| Ollama | `http://studio-ollama:11434` | Creative decisions via LEM |
| Whisper | `http://studio-whisper:9100` | Speech-to-text |
| Kokoro TTS | `http://studio-tts:9200` | Text-to-speech |
| ffmpeg Worker | `http://studio-worker:9300` | Video rendering |
| ComfyUI | `http://studio-comfyui:8188` | Image generation |
## Configuration
```php
// config/studio.php
return [
    'ollama' => [
        'url' => env('STUDIO_OLLAMA_URL', 'http://studio-ollama:11434'),
        'model' => env('STUDIO_OLLAMA_MODEL', 'lem-4b'),
        'timeout' => 60,
    ],
    'whisper' => [
        'url' => env('STUDIO_WHISPER_URL', 'http://studio-whisper:9100'),
        'model' => 'large-v3-turbo',
        'timeout' => 120,
    ],
    'worker' => [
        'url' => env('STUDIO_WORKER_URL', 'http://studio-worker:9300'),
        'timeout' => 300,
    ],
    'storage' => [
        'disk' => 'local',
        'assets_path' => 'studio/assets',
    ],
    'templates' => [
        'tiktok-15s' => ['duration' => 15, 'width' => 1080, 'height' => 1920, 'fps' => 30],
        'tiktok-30s' => ['duration' => 30, 'width' => 1080, 'height' => 1920, 'fps' => 30],
        'youtube-60s' => ['duration' => 60, 'width' => 1920, 'height' => 1080, 'fps' => 30],
    ],
];
```
## Livewire UI
Studio provides four Livewire page components:
- **Asset Browser** — browse, search, and tag multimedia assets
- **Remix Page** — enter a creative brief, select template, view job progress
- **Voice Page** — text-to-speech interface
- **Thumbnail Page** — generate thumbnails from assets
Components are registered via the module's Boot class and available under `mod.studio.livewire.*`.
## Testing
All actions are testable with `Http::fake()`:
```php
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Storage;
use Mod\Studio\Actions\TranscribeAsset;
use Mod\Studio\Models\StudioAsset;

it('transcribes an asset via Whisper', function () {
    Storage::fake('local');
    Storage::disk('local')->put('studio/test.mp4', 'fake-video');

    Http::fake([
        '*/transcribe' => Http::response([
            'text' => 'Hello world',
            'language' => 'en',
        ]),
    ]);

    $asset = StudioAsset::factory()->create(['path' => 'studio/test.mp4']);
    $result = TranscribeAsset::run($asset);

    expect($result->transcript)->toBe('Hello world');
    expect($result->transcript_language)->toBe('en');
});
```
## Learn More
- [Actions Pattern](actions.md)
- [Lifecycle Events](/php/framework/events)


@@ -63,6 +63,18 @@ const packages = [
slug: 'developer',
description: 'Admin developer tools for debugging, monitoring, and server management',
icon: '🛠️'
},
{
name: 'Plug',
slug: 'plug',
description: 'Unified integrations for social media, messaging, CDN, storage, and stock media services',
icon: '🔗'
},
{
name: 'Uptelligence',
slug: 'uptelligence',
description: 'Vendor dependency monitoring, upstream release tracking, and update management',
icon: '📡'
}
]

docs/php/packages/plug.md Normal file
@ -0,0 +1,286 @@
# Plug Packages
The Plug system provides a unified interface for integrating with external platforms — social media, messaging, content publishing, CDN, storage, and stock media services. It is split into a framework layer (contracts, registry, shared traits) in `core/php` and nine domain-specific packages.
## Architecture
```
core/php (framework)
├── src/Plug/Contract/* ← 8 shared interfaces
├── src/Plug/Registry.php ← provider registry
├── src/Plug/Response.php ← standardised operation response
├── src/Plug/Concern/* ← shared traits (UsesHttp, ManagesTokens, BuildsResponse)
├── src/Plug/Enum/Status.php ← OK, UNAUTHORIZED, RATE_LIMITED, etc.
└── src/Plug/Boot.php ← registry singleton registration
core/php-plug-social ─┐
core/php-plug-web3 ─┤
core/php-plug-content ─┤
core/php-plug-chat ─┤
core/php-plug-business ─┤ all depend on core/php
core/php-plug-cdn ─┤
core/php-plug-storage ─┤
core/php-plug-stock ─┤
core/php-plug-altum ─┘
```
All provider namespaces are under `Core\Plug\*`, matching the framework convention.
## Shared Contracts
Eight interfaces in `Core\Plug\Contract\` define the capabilities a provider can implement:
| Contract | Description |
|---|---|
| `Authenticable` | OAuth/token-based authentication |
| `Postable` | Create posts or content |
| `Readable` | Read posts, profiles, feeds |
| `Deletable` | Delete posts or content |
| `Commentable` | Read/write comments |
| `Listable` | List pages, boards, groups |
| `MediaUploadable` | Upload images, videos, media |
| `Refreshable` | Refresh OAuth tokens |
## Registry
The `Registry` class manages provider discovery and capability checking:
```php
use Core\Plug\Registry;
$registry = app(Registry::class);
// Check if a provider exists
$registry->has('twitter'); // true
// Check capabilities
$registry->supports('twitter', 'post'); // true
$registry->supports('twitter', 'comment'); // true
// Get an operation class
$postClass = $registry->operation('twitter', 'post');
// Returns: Core\Plug\Social\Twitter\Post
// Browse by category
$social = $registry->byCategory('Social'); // Collection of identifiers
// Find providers with a capability
$postable = $registry->withCapability('post'); // All providers that support posting
```
### Programmatic Registration
Packages self-register their providers via `Registry::register()`:
```php
$registry->register(
identifier: 'twitter',
category: 'Social',
name: 'Twitter',
namespace: 'Core\Plug\Social\Twitter',
);
```
## Shared Traits
### `UsesHttp`
Provides a pre-configured HTTP client with JSON acceptance:
```php
use Core\Plug\Concern\UsesHttp;
class Post
{
use UsesHttp;
public function create(array $data): Response
{
$response = $this->http()
->withToken($this->accessToken)
->post('https://api.twitter.com/2/tweets', $data);
// ...
}
}
```
### `ManagesTokens`
OAuth token storage and refresh logic.
### `BuildsResponse`
Fluent builder for `Core\Plug\Response` objects with status, data, and error handling.
## Standardised Response
All Plug operations return a `Core\Plug\Response`:
```php
use Core\Plug\Response;
use Core\Plug\Enum\Status;
$response = new Response(
status: Status::OK,
data: ['id' => '123456', 'url' => 'https://twitter.com/...'],
);
$response->successful(); // true
$response->data(); // ['id' => '123456', ...]
$response->status(); // Status::OK
```
Status values: `OK`, `UNAUTHORIZED`, `RATE_LIMITED`, `NOT_FOUND`, `ERROR`, `VALIDATION_ERROR`.
## Packages
### Social (`core/php-plug-social`)
8 social media providers for posting, reading, and managing content.
| Provider | Operations |
|---|---|
| Twitter | Auth, Post, Delete, Media, Read |
| Meta (Facebook/Instagram) | Auth, Post, Delete, Media, Pages, Read |
| LinkedIn | Auth, Post, Delete, Media, Pages, Read |
| Pinterest | Auth, Post, Delete, Media, Boards, Read |
| Reddit | Auth, Post, Delete, Media, Read, Subreddits |
| TikTok | Auth, Post, Read |
| VK | Auth, Post, Delete, Media, Groups, Read |
| YouTube | Auth, Post, Delete, Comment, Read |
### Web3 (`core/php-plug-web3`)
6 decentralised/federated platform providers.
| Provider | Operations |
|---|---|
| Bluesky | Auth, Post, Delete, Read |
| Farcaster | Auth, Post, Read |
| Lemmy | Auth, Post, Delete, Comment, Read |
| Mastodon | Auth, Post, Delete, Media, Read |
| Nostr | Auth, Post, Read |
| Threads | Auth, Post, Read |
### Content (`core/php-plug-content`)
4 content publishing platform providers.
| Provider | Operations |
|---|---|
| Dev.to | Auth, Post, Read |
| Hashnode | Auth, Post, Read |
| Medium | Auth, Post, Read |
| WordPress | Auth, Post, Delete, Read |
### Chat (`core/php-plug-chat`)
3 messaging platform providers.
| Provider | Operations |
|---|---|
| Discord | Auth, Post |
| Slack | Auth, Post |
| Telegram | Auth, Post |
### Business (`core/php-plug-business`)
| Provider | Operations |
|---|---|
| Google My Business | Auth, Post, Read |
### CDN (`core/php-plug-cdn`)
CDN management with domain-specific contracts (`Core\Plug\Cdn\Contract\Purgeable`, `HasStats`).
| Provider | Operations |
|---|---|
| Bunny CDN | Purge, Stats |
### Storage (`core/php-plug-storage`)
Object storage with domain-specific contracts (`Core\Plug\Storage\Contract\Browseable`, `Uploadable`, `Downloadable`, `Deletable`).
| Provider | Operations |
|---|---|
| Bunny Storage | Browse, Delete, Download, Upload, VBucket |
### Stock (`core/php-plug-stock`)
Stock media integrations.
| Provider | Operations |
|---|---|
| Unsplash | Search, Photo, Collection, Download |
### AltumCode (`core/php-plug-altum`)
Integration with AltumCode SaaS products (66analytics, 66biolinks, 66pusher, 66socialproof).
| Component | Description |
|---|---|
| `AltumClient` | HTTP client for AltumCode APIs |
| `AltumManager` | Multi-product management |
| `AltumServiceProvider` | Service registration |
| `AltumWebhookVerifier` | Webhook signature verification |
## Adding a Provider
Each provider is a set of operation classes in a subdirectory:
```php
<?php
declare(strict_types=1);
namespace Core\Plug\Social\Twitter;
use Core\Plug\Concern\BuildsResponse;
use Core\Plug\Concern\UsesHttp;
use Core\Plug\Contract\Postable;
use Core\Plug\Response;
class Post implements Postable
{
use BuildsResponse, UsesHttp;
public function create(string $accessToken, array $data): Response
{
$response = $this->http()
->withToken($accessToken)
->post('https://api.twitter.com/2/tweets', [
'text' => $data['text'],
]);
if (! $response->successful()) {
return $this->error($response->status(), $response->body());
}
return $this->ok($response->json());
}
}
```
## Composer Setup
Each plug package requires `core/php` and uses VCS repositories on Forge:
```json
{
"require": {
"core/php-plug-social": "dev-main"
},
"repositories": [
{
"type": "vcs",
"url": "ssh://git@forge.lthn.ai:2223/core/php-plug-social.git"
}
]
}
```
## Learn More
- [Actions Pattern](/php/features/actions)
- [API Package](/php/packages/api/)

@ -0,0 +1,125 @@
# Uptelligence
Uptelligence is a vendor dependency monitoring package that tracks upstream releases, analyses diffs, generates upgrade plans, and sends digest notifications. It supports GitHub, Gitea, and AltumCode platforms.
## Vendor Update Checking
The `VendorUpdateCheckerService` checks all active vendors for new releases by querying their upstream sources.
### Supported Platforms
| Platform | Source Type | Check Method |
|---|---|---|
| GitHub | OSS repos | GitHub Releases API |
| Gitea/Forgejo | OSS repos | Gitea Releases API |
| AltumCode | Licensed products | Public `info.php` endpoints |
| AltumCode | Plugins | `dev.altumcode.com/plugins-versions` |
### Running Checks
```bash
# Check all active vendors
php artisan uptelligence:check-updates
# Output:
# Product Deployed Latest Status
# ──────────────────────────────────────────────
# 66analytics 65.0.0 66.0.0 UPDATE AVAILABLE
# 66biolinks 65.0.0 66.0.0 UPDATE AVAILABLE
# 66pusher 65.0.0 65.0.0 ✓ current
# laravel/framework 12.0.1 12.1.0 UPDATE AVAILABLE
```
### AltumCode Version Detection
AltumCode products expose public version endpoints that require no authentication:
| Endpoint | Returns |
|---|---|
| `https://66analytics.com/info.php` | `{"latest_release_version": "66.0.0", ...}` |
| `https://66biolinks.com/info.php` | Same format |
| `https://66pusher.com/info.php` | Same format |
| `https://66socialproof.com/info.php` | Same format |
| `https://dev.altumcode.com/plugins-versions` | All plugin versions in one response |
Plugin versions are cached in memory during a check run to avoid redundant HTTP calls.
### Syncing Deployed Versions
The `SyncAltumVersionsCommand` reads actual deployed versions from source files on disk:
```bash
# Show what would change
php artisan uptelligence:sync-altum-versions --dry-run
# Sync from default path
php artisan uptelligence:sync-altum-versions
# Sync from custom path
php artisan uptelligence:sync-altum-versions --path=/path/to/saas/services
```
The command reads:
- **Product versions** from `PRODUCT_VERSION` defines in each product's `app/init.php`
- **Plugin versions** from `'version'` entries in each plugin's `config.php`
Output is a table showing old version, new version, and status (UPDATED, current, or SKIPPED).
## Vendor Model
Vendors are tracked in the `uptelligence_vendors` table:
```php
use Core\Mod\Uptelligence\Models\Vendor;
// Source types
Vendor::SOURCE_OSS; // Open source (GitHub/Gitea)
Vendor::SOURCE_LICENSED; // Licensed products (AltumCode)
Vendor::SOURCE_PLUGIN; // Plugins (AltumCode)
// Platform types
Vendor::PLATFORM_ALTUM; // AltumCode products/plugins
// Query active vendors
$active = Vendor::active()->get();
```
## Services
| Service | Description |
|---|---|
| `VendorUpdateCheckerService` | Checks upstream sources for new releases |
| `DiffAnalyzerService` | Analyses differences between versions |
| `AIAnalyzerService` | AI-powered analysis of changes |
| `IssueGeneratorService` | Creates issues for available updates |
| `UpstreamPlanGeneratorService` | Generates upgrade plans |
| `UptelligenceDigestService` | Compiles and sends update digests |
| `VendorStorageService` | Manages downloaded vendor files |
| `WebhookReceiverService` | Receives webhook notifications |
| `AssetTrackerService` | Tracks vendor assets |
## Commands
| Command | Description |
|---|---|
| `uptelligence:check-updates` | Check all vendors for new releases |
| `uptelligence:sync-altum-versions` | Sync deployed AltumCode versions from source |
| `uptelligence:sync-forge` | Sync vendors from Forge repositories |
| `uptelligence:analyze` | Run AI analysis on pending updates |
| `uptelligence:issues` | Generate issues for available updates |
| `uptelligence:send-digests` | Send update digest notifications |
## AltumCode Vendor Seeder
Seed the vendors table with all AltumCode products and plugins:
```bash
php artisan db:seed --class="Core\Mod\Uptelligence\Database\Seeders\AltumCodeVendorSeeder"
```
This creates 4 product entries and 13 plugin entries. The seeder is idempotent — it uses `updateOrCreate` so it can be run repeatedly without creating duplicates.
## Learn More
- [Developer Package](/php/packages/developer/)

@ -1,23 +1,23 @@
# core docs
Documentation management across repositories.
Documentation management and help engine for the Core ecosystem.
## Usage
The `core docs` command collects documentation from across repositories. The `core/docs` Go module also contains `pkg/help`, a full-text search engine and static site generator that powers [core.help](https://core.help).
## Commands
```bash
core docs <command> [flags]
```
## Commands
| Command | Description |
|---------|-------------|
| `list` | List documentation across repos |
| `sync` | Sync documentation to output directory |
| `list` | List documentation coverage across repos |
| `sync` | Sync documentation to an output directory |
## docs list
Show documentation coverage across all repos.
Show documentation coverage across all repos in the workspace.
```bash
core docs list [flags]
@ -27,16 +27,16 @@ core docs list [flags]
| Flag | Description |
|------|-------------|
| `--registry` | Path to repos.yaml |
| `--registry` | Path to `repos.yaml` |
### Output
```
Repo README CLAUDE CHANGELOG docs/
──────────────────────────────────────────────────────────────────────
core ✓ ✓ — 12 files
core-php ✓ ✓ ✓ 8 files
core-images ✓ — — —
----------------------------------------------------------------------
core yes yes -- 12 files
core-php yes yes yes 8 files
core-images yes -- -- --
Coverage: 3 with docs, 0 without
```
@ -53,25 +53,20 @@ core docs sync [flags]
| Flag | Description |
|------|-------------|
| `--registry` | Path to repos.yaml |
| `--output` | Output directory (default: ./docs-build) |
| `--registry` | Path to `repos.yaml` |
| `--output` | Output directory (default: `./docs-build`) |
| `--dry-run` | Show what would be synced |
### Output Structure
### What Gets Synced
```
docs-build/
└── packages/
├── core/
│ ├── index.md # from README.md
│ ├── claude.md # from CLAUDE.md
│ ├── changelog.md # from CHANGELOG.md
│ ├── build.md # from docs/build.md
│ └── ...
└── core-php/
├── index.md
└── ...
```
For each repo, the following files are collected:
| Source | Destination |
|--------|-------------|
| `README.md` | `index.md` |
| `CLAUDE.md` | `claude.md` |
| `CHANGELOG.md` | `changelog.md` |
| `docs/*.md` | `*.md` |
### Example
@ -86,25 +81,204 @@ core docs sync
core docs sync --output ./site/content
```
## What Gets Synced
---
For each repo, the following files are collected:
## Help Engine (`pkg/help`)
| Source | Destination |
|--------|-------------|
| `README.md` | `index.md` |
| `CLAUDE.md` | `claude.md` |
| `CHANGELOG.md` | `changelog.md` |
| `docs/*.md` | `*.md` |
The `core/docs` repository is a Go module (`forge.lthn.ai/core/docs`) containing `pkg/help`, a documentation engine that provides:
## Integration with core.help
- **Full-text search** with stemming, fuzzy matching, and phrase queries
- **Static site generation** for deployment to core.help
- **HTTP server** for in-app documentation (embedded in binaries)
- **HLCRF layout** via `go-html` for semantic HTML rendering
The synced docs are used to build https://core.help:
### Module Path
1. Run `core docs sync --output ../core-php/docs/packages`
2. VitePress builds the combined documentation
3. Deploy to core.help
```
forge.lthn.ai/core/docs
```
### Architecture
```
core/docs/
pkg/help/
catalog.go Topic store with search indexing
search.go Inverted index, stemming, fuzzy matching
parser.go YAML frontmatter + section extraction
render.go Goldmark Markdown to HTML
stemmer.go Porter-style word stemmer
server.go HTTP server (HTML + JSON API)
generate.go Static site generator
layout.go go-html HLCRF page compositor
ingest.go CLI help text to Topic conversion
templates.go Topic grouping helpers
docs/ MkDocs content (this documentation)
zensical.toml MkDocs Material configuration
go.mod
```
### Topics
A `Topic` is the fundamental unit of documentation:
```go
type Topic struct {
ID string // URL-safe slug (e.g. "rate-limiting")
Title string // Display title
Path string // Source file path
Content string // Markdown body
Sections []Section // Extracted headings with IDs
Tags []string // Categorisation tags
Related []string // Links to related topic IDs
Order int // Sort order
}
```
Topics are parsed from Markdown files with optional YAML frontmatter:
```markdown
---
title: Rate Limiting
tags: [api, security]
related: [authentication]
order: 5
---
## Overview
Rate limiting controls request throughput...
```
Headings are automatically extracted as sections with URL-safe IDs (e.g. `"Rate Limiting"` becomes `rate-limiting`).
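A minimal sketch of that heading-to-ID conversion (a hypothetical `slugify`, not the actual `pkg/help` code):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// slugify converts a heading such as "Rate Limiting" into a URL-safe
// section ID like "rate-limiting". Hypothetical sketch, not the
// actual pkg/help implementation.
func slugify(heading string) string {
	var b strings.Builder
	pendingHyphen := false
	for _, r := range strings.ToLower(heading) {
		if unicode.IsLetter(r) || unicode.IsDigit(r) {
			// Collapse any run of separators into a single hyphen.
			if pendingHyphen && b.Len() > 0 {
				b.WriteByte('-')
			}
			pendingHyphen = false
			b.WriteRune(r)
		} else {
			pendingHyphen = true
		}
	}
	return b.String()
}

func main() {
	fmt.Println(slugify("Rate Limiting")) // rate-limiting
	fmt.Println(slugify("HTTP/2 & QUIC")) // http-2-quic
}
```

Leading and trailing separators are dropped, so the generated IDs are stable fragment anchors.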
### Search
The search engine builds an inverted index from topic content and supports:
- **Keyword search**: Single or multi-word queries
- **Phrase search**: Quoted strings (e.g. `"rate limit"`)
- **Fuzzy matching**: Levenshtein distance (max 2) for typo tolerance
- **Prefix matching**: Partial word completion
- **Stemming**: Porter-style stemming for morphological variants
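The typo tolerance above rests on edit distance; a sketch using plain dynamic-programming Levenshtein with the max-2 threshold (illustrative only, `search.go` may implement it differently):

```go
package main

import "fmt"

func min3(a, b, c int) int {
	m := a
	if b < m {
		m = b
	}
	if c < m {
		m = c
	}
	return m
}

// levenshtein is a two-row dynamic-programming edit distance.
// The search engine treats a distance of at most 2 as a fuzzy
// match; this sketch is illustrative, not the pkg/help source.
func levenshtein(a, b string) int {
	ra, rb := []rune(a), []rune(b)
	prev := make([]int, len(rb)+1)
	curr := make([]int, len(rb)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i <= len(ra); i++ {
		curr[0] = i
		for j := 1; j <= len(rb); j++ {
			cost := 1
			if ra[i-1] == rb[j-1] {
				cost = 0
			}
			curr[j] = min3(curr[j-1]+1, prev[j]+1, prev[j-1]+cost)
		}
		prev, curr = curr, prev
	}
	return prev[len(rb)]
}

func fuzzyMatch(query, word string) bool {
	return levenshtein(query, word) <= 2
}

func main() {
	fmt.Println(fuzzyMatch("limitng", "limiting")) // true: one missing letter
	fmt.Println(fuzzyMatch("auth", "storage"))     // false: too far apart
}
```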
Relevance scoring weights:
| Signal | Weight |
|--------|--------|
| Title match | 10x |
| Phrase match | 8x |
| Section title match | 5x |
| Tag match | 3x |
| All words present | 2x |
| Exact word | 1x |
| Stemmed word | 0.7x |
| Prefix match | 0.5x |
| Fuzzy match | 0.3x |
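One way the weights in this table could combine, assuming the "all words present" 2x acts as a multiplier on the summed per-signal score (an assumption; the source may treat it additively):

```go
package main

import "fmt"

// Signals counts the per-topic matches that feed the relevance
// weights above; the struct and its combination are a hypothetical
// simplification of search.go.
type Signals struct {
	TitleMatches   int
	PhraseMatches  int
	SectionMatches int
	TagMatches     int
	AllWords       bool
	ExactWords     int
	StemmedWords   int
	PrefixWords    int
	FuzzyWords     int
}

func score(s Signals) float64 {
	total := 10.0*float64(s.TitleMatches) +
		8.0*float64(s.PhraseMatches) +
		5.0*float64(s.SectionMatches) +
		3.0*float64(s.TagMatches) +
		1.0*float64(s.ExactWords) +
		0.7*float64(s.StemmedWords) +
		0.5*float64(s.PrefixWords) +
		0.3*float64(s.FuzzyWords)
	if s.AllWords {
		total *= 2 // assumed: "all words present" doubles the score
	}
	return total
}

func main() {
	// A title hit plus two exact word hits, all query words present.
	fmt.Println(score(Signals{TitleMatches: 1, ExactWords: 2, AllWords: true})) // 24
}
```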
### Catalog
The `Catalog` manages a collection of topics with automatic search indexing:
```go
// Load from a directory of Markdown files
catalog, err := help.LoadContentDir("./content")
// Or build programmatically
catalog := help.DefaultCatalog()
catalog.Add(&help.Topic{ID: "my-topic", Title: "My Topic", Content: "..."})
// Search
results := catalog.Search("rate limit")
for _, r := range results {
fmt.Printf("%s (score: %.1f)\n", r.Topic.Title, r.Score)
}
// Get by ID
topic, err := catalog.Get("rate-limiting")
```
### HTTP Server
`pkg/help` provides an HTTP server with both HTML and JSON API endpoints:
```go
server := help.NewServer(catalog, ":8080")
server.ListenAndServe()
```
#### HTML Routes
| Route | Description |
|-------|-------------|
| `GET /` | Topic listing grouped by tags |
| `GET /topics/{id}` | Single topic page with table of contents |
| `GET /search?q=...` | Search results page |
#### JSON API Routes
| Route | Description |
|-------|-------------|
| `GET /api/topics` | All topics as JSON |
| `GET /api/topics/{id}` | Single topic with sections |
| `GET /api/search?q=...` | Search results with scores and snippets |
### Static Site Generation
The `Generate` function writes a complete static site:
```go
err := help.Generate(catalog, "./dist")
```
Output structure:
```
dist/
index.html Topic listing
search.html Client-side search (inline JS)
search-index.json JSON index for client-side search
404.html Not found page
topics/
rate-limiting.html One page per topic
authentication.html
...
```
The static site includes client-side JavaScript that fetches `search-index.json` and performs search without a server. All CSS is inlined -- no external stylesheets are needed.
### HLCRF Layout
Page rendering uses `go-html`'s HLCRF (Header, Left, Content, Right, Footer) compositor for semantic HTML:
| Slot | Element | Content |
|------|---------|---------|
| H | `<header role="banner">` | Nav bar with branding and search |
| L | `<aside role="complementary">` | Topic tree grouped by tag (topic pages only) |
| C | `<main role="main">` | Rendered Markdown with section anchors |
| F | `<footer role="contentinfo">` | Licence and source link |
The layout uses a dark theme with all CSS inlined. Pages are responsive -- the sidebar collapses on narrow viewports.
### Dual-Mode Serving
The help engine supports two deployment modes:
| Mode | Use Case | Details |
|------|----------|---------|
| **In-app** | CoreGUI webview, `core help serve` | `NewServer()` serves from `//go:embed` content, no network |
| **Public** | core.help | Static HTML from `Generate()`, deployed to CDN |
Both modes produce identical URLs and fragment anchors, so deep links work in either context.
## Dependencies
| Module | Purpose |
|--------|---------|
| `forge.lthn.ai/core/go-html` | HLCRF layout compositor |
| `github.com/yuin/goldmark` | Markdown to HTML rendering |
| `gopkg.in/yaml.v3` | YAML frontmatter parsing |
## See Also
- [Configuration](../../configuration.md) -- Project configuration
- [core/lint](../../lint/) -- Pattern catalog and QA toolkit

@ -118,7 +118,159 @@ Three built-in YAML catalogs ship with the module:
| `go-correctness.yaml` | 7 | Unsynchronised goroutines, silent error swallowing, panics in library code, file deletion |
| `go-modernise.yaml` | 5 | Replace legacy patterns with modern stdlib (`slices.Clone`, `slices.Sort`, `maps.Keys`, `errgroup`) |
Total: **18 rules** across 3 severity tiers (info, medium, high, critical). All rules target Go. The catalog is extensible -- add more YAML files to `catalog/` and they will be embedded automatically.
Total: **18 rules** across 4 severity levels. All rules currently target Go. The catalog is extensible -- add more YAML files to `catalog/` and they are embedded automatically at build time.
### Rule Schema
Each rule is defined in YAML with the following fields:
```yaml
- id: go-sec-001
title: "SQL wildcard injection in LIKE clauses"
severity: high # info, low, medium, high, critical
languages: [go]
tags: [security, injection]
pattern: 'LIKE\s+\?.*["%].*\+'
exclude_pattern: 'EscapeLike' # suppress if this also matches
fix: "Use parameterised LIKE with EscapeLike() helper"
found_in: [go-store] # repos where first discovered
example_bad: |
db.Query("SELECT * FROM users WHERE name LIKE ?", "%"+input+"%")
example_good: |
db.Query("SELECT * FROM users WHERE name LIKE ?", "%"+store.EscapeLike(input)+"%")
first_seen: "2026-03-09"
detection: regex # regex (only type currently supported)
auto_fixable: false
```
| Field | Required | Description |
|-------|----------|-------------|
| `id` | yes | Unique identifier (e.g. `go-sec-001`) |
| `title` | yes | Human-readable description |
| `severity` | yes | One of: `info`, `low`, `medium`, `high`, `critical` |
| `languages` | yes | Target languages (e.g. `[go]`, `[go, php]`) |
| `tags` | no | Categorisation tags (e.g. `security`, `concurrency`) |
| `pattern` | yes | Regex pattern to match against each line |
| `exclude_pattern` | no | Regex that suppresses the match if it also matches the line |
| `fix` | no | Recommended fix |
| `found_in` | no | Repos where the pattern was first discovered |
| `example_bad` | no | Code example that triggers the rule |
| `example_good` | no | Code example showing the correct approach |
| `first_seen` | no | Date the rule was added |
| `detection` | yes | Detection method (currently only `regex`) |
| `auto_fixable` | no | Whether automated fixing is supported |
### Rule Reference
#### Security Rules (`go-security.yaml`)
| ID | Title | Severity |
|----|-------|----------|
| `go-sec-001` | SQL wildcard injection in LIKE clauses | high |
| `go-sec-002` | Path traversal via `filepath.Join` | high |
| `go-sec-003` | XSS via unescaped HTML in `fmt.Sprintf` | high |
| `go-sec-004` | Non-constant-time authentication comparison | critical |
| `go-sec-005` | Log injection via string concatenation | medium |
| `go-sec-006` | Secrets leaked in log output | critical |
#### Correctness Rules (`go-correctness.yaml`)
| ID | Title | Severity |
|----|-------|----------|
| `go-cor-001` | Goroutine launched without synchronisation | medium |
| `go-cor-002` | `WaitGroup.Wait` without timeout or context | low |
| `go-cor-003` | Silent error swallowing with blank identifier | medium |
| `go-cor-004` | Panic in library code | high |
| `go-cor-005` | File deletion without path validation | high |
| `go-cor-006` | HTTP response error discarded | high |
| `go-cor-007` | Wrong signal type (`syscall.Signal` instead of `os.Signal`) | low |
#### Modernisation Rules (`go-modernise.yaml`)
| ID | Title | Severity |
|----|-------|----------|
| `go-mod-001` | Manual slice clone (use `slices.Clone`) | info |
| `go-mod-002` | Legacy sort functions (use `slices.Sort`) | info |
| `go-mod-003` | Manual reverse loop (use `slices.Reverse`) | info |
| `go-mod-004` | Manual WaitGroup Add+Done (use `errgroup`) | info |
| `go-mod-005` | Manual map key collection (use `maps.Keys`) | info |
### Adding New Rules
Create a new YAML file in `catalog/` or add entries to an existing file. Rules are validated at load time -- the regex patterns must compile, and all required fields must be present. New catalog files are automatically discovered and embedded on the next build.
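The load-time validation described above might look like this (a simplified `Rule` with assumed field names; `languages` is omitted for brevity):

```go
package main

import (
	"fmt"
	"regexp"
)

// Rule carries a subset of the schema fields; names are assumed
// and `languages` is omitted for brevity.
type Rule struct {
	ID        string
	Title     string
	Severity  string
	Detection string
	Pattern   string
}

// validate applies the two load-time checks: required fields must
// be present and the regex pattern must compile.
func (r Rule) validate() error {
	required := map[string]string{
		"id": r.ID, "title": r.Title, "severity": r.Severity,
		"detection": r.Detection, "pattern": r.Pattern,
	}
	for field, value := range required {
		if value == "" {
			return fmt.Errorf("rule %q: missing required field %q", r.ID, field)
		}
	}
	if _, err := regexp.Compile(r.Pattern); err != nil {
		return fmt.Errorf("rule %s: pattern does not compile: %w", r.ID, err)
	}
	return nil
}

func main() {
	bad := Rule{ID: "go-x-001", Title: "t", Severity: "high", Detection: "regex", Pattern: "(["}
	fmt.Println(bad.validate() != nil) // true: "([" is not a valid regex
}
```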
## Scanner
The scanner walks directory trees, detects file languages by extension, and matches rules against each line. Directories named `vendor`, `node_modules`, `.git`, `testdata`, and `.core` are excluded by default.
Supported file extensions:
| Extension | Language |
|-----------|----------|
| `.go` | go |
| `.php` | php |
| `.ts`, `.tsx` | ts |
| `.js`, `.jsx` | js |
| `.cpp`, `.cc`, `.c`, `.h` | cpp |
| `.py` | py |
### Matching Behaviour
For each file, the scanner:
1. Detects the language from the file extension
2. Filters rules to those targeting that language
3. Checks each line against each rule's `pattern` regex
4. Suppresses the match if the line also matches the rule's `exclude_pattern`
5. Records a `Finding` with rule ID, severity, file, line number, matched text, and fix
### Output Formats
Findings can be written in three formats:
- **text** (default): `file:line [severity] title (rule-id)`
- **json**: Pretty-printed JSON array
- **jsonl**: Newline-delimited JSON, one finding per line (compatible with `~/.core/ai/metrics/`)
## QA Commands
The `core qa` command group provides workflow-level quality assurance for both Go and PHP projects. It is registered as a CLI plugin via `cmd/qa/` and appears as `core qa` when the lint module is linked into the CLI binary.
### Go Commands
| Command | Description |
|---------|-------------|
| `core qa watch` | Monitor GitHub Actions after a push |
| `core qa review` | PR review status with actionable next steps |
| `core qa health` | Aggregate CI health across all repos |
| `core qa issues` | Intelligent issue triage |
| `core qa docblock` | Check Go docblock coverage |
### PHP Commands
All PHP commands auto-detect the project by checking for `composer.json`. They shell out to the relevant PHP tool (which must be installed via Composer).
| Command | Tool | Description |
|---------|------|-------------|
| `core qa fmt` | Laravel Pint | Format PHP code (`--fix` to apply, `--diff` to preview) |
| `core qa stan` | PHPStan/Larastan | Static analysis (`--level 0-9`, `--json`, `--sarif`) |
| `core qa psalm` | Psalm | Deep type-level analysis (`--fix`, `--baseline`) |
| `core qa audit` | Composer/npm | Audit dependencies for vulnerabilities |
| `core qa security` | Built-in | Check `.env` exposure, debug mode, filesystem permissions, HTTP headers |
| `core qa rector` | Rector | Automated refactoring (`--fix` to apply) |
| `core qa infection` | Infection | Mutation testing (`--min-msi`, `--threads`) |
| `core qa test` | Pest/PHPUnit | Run tests (`--parallel`, `--coverage`, `--filter`) |
### Project Detection
The `pkg/detect` package identifies project types by filesystem markers:
| Marker | Type |
|--------|------|
| `go.mod` | Go |
| `composer.json` | PHP |
`detect.DetectAll()` returns all detected types for a directory, enabling mixed Go+PHP projects to use the full QA suite.
## Dependencies

@ -1,12 +1,12 @@
---
title: Core MCP
description: Model Context Protocol server and tooling for AI agents -- Go binary + Laravel PHP package.
description: Model Context Protocol server and tooling for AI agents -- polyglot Go + PHP package.
---
# Core MCP
`forge.lthn.ai/core/mcp` provides a complete Model Context Protocol (MCP)
implementation spanning two languages:
implementation spanning two languages in a single polyglot repository:
- **Go** -- a standalone MCP server binary (`core-mcp`) with file operations,
ML inference, RAG, process management, webview automation, and WebSocket
@ -18,6 +18,18 @@ implementation spanning two languages:
Both halves speak the same protocol and can bridge to one another via REST or
WebSocket.
## Polyglot repository structure
The repo follows the same pattern as `core/agent` -- Go code lives at the root
(`go.mod`, `pkg/`, `cmd/`), PHP code lives under `src/php/`, and the root
`composer.json` publishes the PHP package as `lthn/mcp`. A `.gitattributes`
file excludes Go sources from Composer distributions.
The Go MCP server was originally part of `go-ai` and was extracted into its own
module to decouple MCP transport and tooling from the broader AI library. The
PHP package was previously published as `core/php-mcp` (now archived on Forge)
and is replaced by this unified repo.
## Quick start
### Go binary

@ -95,6 +95,7 @@ nav = [
{"Features" = [
"php/features/index.md",
"php/features/actions.md",
"php/features/scheduled-actions.md",
"php/features/tenancy.md",
"php/features/search.md",
"php/features/seo.md",
@ -102,6 +103,7 @@ nav = [
"php/features/media.md",
"php/features/activity.md",
"php/features/seeder-system.md",
"php/features/studio.md",
]},
]},
{"Reference" = [
@ -185,6 +187,8 @@ nav = [
"php/packages/developer/architecture.md",
"php/packages/developer/security.md",
]},
"php/packages/plug.md",
"php/packages/uptelligence.md",
]},
]},