Compare commits

42 commits

| Author | SHA1 | Date |
|---|---|---|
| | d7e5215618 | |
| | 1e8a4131db | |
| | df011ee42b | |
| | 2d355f9223 | |
| | db0c0adb65 | |
| | ce12778561 | |
| | 44122f9ca6 | |
| | b2e046f4c5 | |
| | 3135352b2f | |
| | 2bae1148bb | |
| | cffd9d3929 | |
| | cb0408db1d | |
| | e7f8ecb078 | |
| | 1cdf92490a | |
| | bcf2d3be48 | |
| | 19521c8f18 | |
| | 22121eae20 | |
| | b2e78bf29e | |
| | 94480ca38e | |
| | 3ff7b8a773 | |
| | 0192772ab5 | |
| | c1bc0dad5e | |
| | 19e3fd3af7 | |
| | 10f0ebaf22 | |
| | cbaa114bb2 | |
| | 9899398153 | |
| | ad6a466459 | |
| | af98accc03 | |
| | 2f246ad053 | |
| | 7d047fbdcc | |
| | e8695b72a6 | |
| | f0268d12bf | |
| | 0681fba48e | |
| | 5b737a4933 | |
| | f065c0a5be | |
| | c490a05733 | |
| | 93be6c5ed2 | |
| | 01924059ae | |
| | 262f0eb5d5 | |
| | c7102826ba | |
| | ea63c3acae | |
| | d2f2f0984c | |
62 changed files with 11258 additions and 1226 deletions

.gitignore (vendored, 4 changes)

```
@@ -17,9 +17,9 @@ dist/
tasks
/core
/i18n-validate
cmd/bugseti/bugseti
internal/core-ide/core-ide
cmd/
.angular/

patch_cov.*
go.work.sum
lt-hn-index.html
```
docs/ecosystem.md (Normal file, 457 additions)

@@ -0,0 +1,457 @@
# Core Go Ecosystem

The Core Go ecosystem is a set of 19 standalone Go modules that form the infrastructure backbone for the host-uk platform and the Lethean network. All modules are hosted under the `forge.lthn.ai/core/` organisation. Each module has its own repository, independent versioning, and a `docs/` directory.

The CLI framework documented in the rest of this site (`forge.lthn.ai/core/cli`) is one node in this graph. The satellite packages listed here are separate repositories that the CLI imports or that stand alone as libraries.

---

## Module Index

| Package | Module Path | Managed By |
|---------|-------------|-----------|
| [go-inference](#go-inference) | `forge.lthn.ai/core/go-inference` | Virgil |
| [go-mlx](#go-mlx) | `forge.lthn.ai/core/go-mlx` | Virgil |
| [go-rocm](#go-rocm) | `forge.lthn.ai/core/go-rocm` | Charon |
| [go-ml](#go-ml) | `forge.lthn.ai/core/go-ml` | Virgil |
| [go-ai](#go-ai) | `forge.lthn.ai/core/go-ai` | Virgil |
| [go-agentic](#go-agentic) | `forge.lthn.ai/core/go-agentic` | Charon |
| [go-rag](#go-rag) | `forge.lthn.ai/core/go-rag` | Charon |
| [go-i18n](#go-i18n) | `forge.lthn.ai/core/go-i18n` | Virgil |
| [go-html](#go-html) | `forge.lthn.ai/core/go-html` | Charon |
| [go-crypt](#go-crypt) | `forge.lthn.ai/core/go-crypt` | Virgil |
| [go-scm](#go-scm) | `forge.lthn.ai/core/go-scm` | Charon |
| [go-p2p](#go-p2p) | `forge.lthn.ai/core/go-p2p` | Charon |
| [go-devops](#go-devops) | `forge.lthn.ai/core/go-devops` | Virgil |
| [go-help](#go-help) | `forge.lthn.ai/core/go-help` | Charon |
| [go-ratelimit](#go-ratelimit) | `forge.lthn.ai/core/go-ratelimit` | Charon |
| [go-session](#go-session) | `forge.lthn.ai/core/go-session` | Charon |
| [go-store](#go-store) | `forge.lthn.ai/core/go-store` | Charon |
| [go-ws](#go-ws) | `forge.lthn.ai/core/go-ws` | Charon |
| [go-webview](#go-webview) | `forge.lthn.ai/core/go-webview` | Charon |

---

## Dependency Graph

The graph below shows import relationships. An arrow `A → B` means A imports B.

```
go-inference (no dependencies — foundation contract)
  ↑
  ├── go-mlx  (CGO, Apple Silicon Metal GPU)
  ├── go-rocm (AMD ROCm, llama-server subprocess)
  └── go-ml   (scoring engine, backends, orchestrator)
        ↑
        └── go-ai (MCP hub, 49 tools)
              ↑
              └── go-agentic (service lifecycle, allowances)

go-rag (Qdrant + Ollama, standalone)
  ↑
  └── go-ai

go-i18n (grammar engine, standalone; Phase 2a imports go-mlx)

go-crypt (standalone)
  ↑
  ├── go-p2p (UEPS wire protocol)
  └── go-scm (AgentCI dispatch)

go-store (SQLite KV, standalone)
  ↑
  ├── go-ratelimit (sliding window limiter)
  ├── go-session   (transcript parser)
  └── go-agentic

go-ws (WebSocket hub, standalone)
  ↑
  └── go-ai

go-webview (CDP client, standalone)
  ↑
  └── go-ai

go-html (DOM compositor, standalone)

go-help (help catalogue, standalone)

go-devops (Ansible, build, infrastructure — imports go-scm)
```

The CLI framework (`forge.lthn.ai/core/cli`) has internal equivalents of several of these packages (`pkg/rag`, `pkg/ws`, `pkg/webview`, `pkg/i18n`) that were developed in parallel. The satellite packages are the canonical standalone versions intended for use outside the CLI binary.

---

## Package Descriptions

### go-inference

**Module:** `forge.lthn.ai/core/go-inference`

Zero-dependency interface package that defines the common contract for all inference backends in the ecosystem:

- `TextModel` — the top-level model interface (`Generate`, `Stream`, `Close`)
- `Backend` — hardware/runtime abstraction (Metal, ROCm, CPU, remote)
- `Token` — streaming token type with metadata

No concrete implementations live here. Any package that needs to call inference without depending on a specific hardware library imports `go-inference` and receives an implementation at runtime.
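A minimal sketch of what such a contract could look like, assuming only the names from the list above; the method sets and `Token` fields here are illustrative, not the published `go-inference` API.

```go
// Illustrative shape of the go-inference contract; the real interfaces may differ.
package inference

import "context"

// Token is a single streamed token with optional metadata.
type Token struct {
	Text    string
	LogProb float64
	Index   int
}

// TextModel is the top-level model interface.
type TextModel interface {
	// Generate returns the full completion for a prompt.
	Generate(ctx context.Context, prompt string) (string, error)
	// Stream emits tokens on a channel as they are produced.
	Stream(ctx context.Context, prompt string) (<-chan Token, error)
	// Close releases model resources.
	Close() error
}

// Backend abstracts the hardware/runtime that loads models
// (Metal, ROCm, CPU, remote).
type Backend interface {
	Name() string
	Load(ctx context.Context, modelPath string) (TextModel, error)
}
```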
---

### go-mlx

**Module:** `forge.lthn.ai/core/go-mlx`

Native Metal GPU inference for Apple Silicon using CGO bindings to `mlx-c` (the C API for Apple's MLX framework). Implements the `go-inference` interfaces.

Build requirements:

- macOS 13+ (Ventura) on Apple Silicon
- `mlx-c` installed (`brew install mlx`)
- CGO enabled: `CGO_CFLAGS` and `CGO_LDFLAGS` must reference the mlx-c headers and library

Features:

- Loads GGUF and MLX-format models
- Streaming token generation directly on GPU
- Quantised model support (Q4, Q8)
- Phase 4 backend abstraction in progress — will allow hot-swapping backends at runtime

Local path: `/Users/snider/Code/go-mlx`

---

### go-rocm

**Module:** `forge.lthn.ai/core/go-rocm`

AMD ROCm GPU inference for Linux. Rather than using CGO, this package manages a `llama-server` subprocess (from llama.cpp) compiled with ROCm support and communicates over its HTTP API.

Features:

- Subprocess lifecycle management (start, health-check, restart on crash)
- OpenAI-compatible HTTP client wrapping llama-server's API
- Implements `go-inference` interfaces
- Targeted at the homelab RX 7800 XT running Ubuntu 24.04

Managed by Charon (Linux homelab).

---

### go-ml

**Module:** `forge.lthn.ai/core/go-ml`

Scoring engine, backend registry, and agent orchestration layer. The hub that connects models from `go-mlx`, `go-rocm`, and future backends into a unified interface.

Features:

- Backend registry: register multiple inference backends, select by capability (see the sketch after this list)
- Scoring pipeline: evaluate model outputs against rubrics
- Agent orchestrator: coordinate multi-step inference tasks
- ~3.5K LOC
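A plain-Go sketch of the backend-registry idea described above; go-ml's real types, method names, and capability-based selection are richer than this, and the `TextModel` surface here is only illustrative.

```go
// Minimal registry sketch: route prompts to a named inference backend.
package main

import (
	"context"
	"errors"
	"fmt"
)

// TextModel is the minimal surface the registry needs (cf. go-inference).
type TextModel interface {
	Generate(ctx context.Context, prompt string) (string, error)
}

// Registry maps backend names ("metal", "rocm", ...) to loaded models.
type Registry struct {
	backends map[string]TextModel
}

func NewRegistry() *Registry {
	return &Registry{backends: map[string]TextModel{}}
}

func (r *Registry) Register(name string, m TextModel) { r.backends[name] = m }

// Generate routes a prompt to the named backend.
func (r *Registry) Generate(ctx context.Context, backend, prompt string) (string, error) {
	m, ok := r.backends[backend]
	if !ok {
		return "", errors.New("unknown backend: " + backend)
	}
	return m.Generate(ctx, prompt)
}

// echoModel stands in for a real go-mlx or go-rocm implementation.
type echoModel struct{}

func (echoModel) Generate(_ context.Context, prompt string) (string, error) {
	return "echo: " + prompt, nil
}

func main() {
	reg := NewRegistry()
	reg.Register("metal", echoModel{})

	out, err := reg.Generate(context.Background(), "metal", "hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```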
---

### go-ai

**Module:** `forge.lthn.ai/core/go-ai`

MCP (Model Context Protocol) server hub with 49 registered tools. Acts as the primary facade for AI capabilities in the ecosystem.

Features:

- 49 MCP tools covering file operations, RAG, metrics, process management, WebSocket, and CDP/webview
- Imports `go-ml`, `go-rag`, `go-mlx`
- Can run as stdio MCP server or TCP MCP server
- AI usage metrics recorded to JSONL

Run the MCP server:

```bash
# stdio (for Claude Desktop / Claude Code)
core mcp serve

# TCP
MCP_ADDR=:9000 core mcp serve
```

---

### go-agentic

**Module:** `forge.lthn.ai/core/go-agentic`

Service lifecycle and allowance management for autonomous agents. Handles:

- Agent session tracking and state persistence
- Allowance system: budget constraints on tool calls, token usage, and wall-clock time
- Integration with `go-store` for persistence
- REST client for the PHP `core-agentic` backend

Managed by Charon.

---

### go-rag

**Module:** `forge.lthn.ai/core/go-rag`

Retrieval-Augmented Generation pipeline using Qdrant for vector storage and Ollama for embeddings.

Features:

- `ChunkMarkdown`: semantic splitting by H2 headers and paragraphs with overlap
- `Ingest`: crawl a directory of Markdown files, embed, and store in Qdrant
- `Query`: semantic search returning ranked `QueryResult` slices
- `FormatResultsContext`: formats results as XML tags for LLM prompt injection
- Clients: `QdrantClient` and `OllamaClient` wrapping their respective Go SDKs

Managed by Charon.
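A hypothetical end-to-end call sequence for the pipeline just described. The function names come from the feature list above; the constructor, signatures, and URLs are assumptions rather than the published API.

```go
package main

import (
	"context"
	"fmt"
	"log"

	rag "forge.lthn.ai/core/go-rag" // module path from the table above
)

func main() {
	ctx := context.Background()

	// Assumed constructor wiring the Qdrant and Ollama clients.
	pipeline, err := rag.New("http://localhost:6333", "http://localhost:11434")
	if err != nil {
		log.Fatal(err)
	}

	// Crawl a docs directory, chunk by H2 headers, embed, and store in Qdrant.
	if err := pipeline.Ingest(ctx, "./docs"); err != nil {
		log.Fatal(err)
	}

	// Semantic search returning ranked QueryResult values.
	results, err := pipeline.Query(ctx, "how do I publish a release?")
	if err != nil {
		log.Fatal(err)
	}

	// XML-tagged context block ready for prompt injection.
	fmt.Println(rag.FormatResultsContext(results))
}
```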
---

### go-i18n

**Module:** `forge.lthn.ai/core/go-i18n`

Grammar engine for natural-language generation. Goes beyond key-value lookup tables to handle pluralisation, verb conjugation, past tense, gerunds, and semantic sentence construction ("Subject verbed object").

Features:

- `T(key, args...)` — main translation function
- `S(noun, value)` — semantic subject with grammatical context
- Language rules defined in JSON; algorithmic fallbacks for irregular verbs
- **GrammarImprint**: a linguistic hash (reversal of the grammar engine) used as a semantic fingerprint — part of the Lethean identity verification stack
- Phase 2a (imports `go-mlx` for language model-assisted reversal) currently blocked on `go-mlx` Phase 4

Local path: `/Users/snider/Code/go-i18n`
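An illustrative use of the two entry points named in the feature list. The key name, argument order, and rendered output are invented for this example; only `T` and `S` come from the source.

```go
package main

import (
	"fmt"

	i18n "forge.lthn.ai/core/go-i18n" // module path from the table above
)

func main() {
	// Hypothetical key and arguments; only T and S are taken from the feature list.
	msg := i18n.T("task.completed", i18n.S("agent", "Charon"), 3)

	// With an English rule set this might render as "Charon completed 3 tasks",
	// with pluralisation and past tense supplied by the grammar rules.
	fmt.Println(msg)
}
```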
---

### go-html

**Module:** `forge.lthn.ai/core/go-html`

HLCRF DOM compositor — a programmatic HTML/DOM construction library targeting both server-side rendering and WASM (browser).

HLCRF stands for Header, Left, Content, Right, Footer — the region layout model used throughout the CLI's terminal UI and web rendering layer.

Features:

- Composable region-based layout (mirrors the terminal `Composite` in `pkg/cli`)
- WASM build target: runs in the browser without JavaScript
- Used by the LEM Chat UI and web SDK generation

Managed by Charon.

---

### go-crypt

**Module:** `forge.lthn.ai/core/go-crypt`

Cryptographic primitives, authentication, and trust policy enforcement.

Features:

- Password hashing (Argon2id with tuned parameters; see the sketch after this list)
- Symmetric encryption (ChaCha20-Poly1305, AES-GCM)
- Key derivation (HKDF, Scrypt)
- OpenPGP challenge-response authentication
- Trust policies: define and evaluate access rules
- Foundation for the UEPS (User-controlled Encryption Policy System) wire protocol in `go-p2p`
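A minimal sketch of the Argon2id primitive that go-crypt wraps, using `golang.org/x/crypto/argon2` directly. The parameter values are common defaults, not go-crypt's tuned settings, and this is not go-crypt's own API.

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"

	"golang.org/x/crypto/argon2"
)

func main() {
	password := []byte("correct horse battery staple")

	// A fresh random salt per password.
	salt := make([]byte, 16)
	if _, err := rand.Read(salt); err != nil {
		panic(err)
	}

	// time=1 pass, memory=64 MiB, threads=4, 32-byte derived key.
	key := argon2.IDKey(password, salt, 1, 64*1024, 4, 32)

	fmt.Println(base64.RawStdEncoding.EncodeToString(key))
}
```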
---

### go-scm

**Module:** `forge.lthn.ai/core/go-scm`

Source control management and CI integration, including the AgentCI dispatch system.

Features:

- Forgejo and Gitea API clients (typed wrappers)
- GitHub integration via the `gh` CLI
- `AgentCI`: dispatches AI work items to agent runners over SSH using Charm stack libraries (`soft-serve`, `keygen`, `melt`, `wishlist`)
- PR lifecycle management: create, review, merge, label
- JSONL job journal for audit trails

Managed by Charon.

---

### go-p2p

**Module:** `forge.lthn.ai/core/go-p2p`

Peer-to-peer mesh networking implementing the UEPS (User-controlled Encryption Policy System) wire protocol.

Features:

- UEPS: consent-gated TLV frames with Ed25519 consent tokens and an Intent-Broker
- Peer discovery and mesh routing
- Encrypted relay transport
- Integration with `go-crypt` for all cryptographic operations

This is a core component of the Lethean Web3 network layer.

Managed by Charon (Linux homelab).

---

### go-devops

**Module:** `forge.lthn.ai/core/go-devops`

Infrastructure automation, build tooling, and release pipeline utilities, intended as a standalone library form of what the Core CLI provides as commands.

Features:

- Ansible-lite engine (native Go SSH playbook execution)
- LinuxKit image building and VM lifecycle
- Multi-target binary build and release
- Integration with `go-scm` for repository operations

---

### go-help

**Module:** `forge.lthn.ai/core/go-help`

Embedded documentation catalogue with full-text search and an optional HTTP server for serving help content.

Features:

- YAML-frontmatter Markdown topic parsing
- In-memory reverse index with title/heading/body scoring
- Snippet extraction with keyword highlighting
- HTTP server mode: serve the catalogue as a documentation site
- Used by the `core pkg search` command and the `pkg/help` package inside the CLI

Managed by Charon.

---

### go-ratelimit

**Module:** `forge.lthn.ai/core/go-ratelimit`

Sliding-window rate limiter with a SQLite persistence backend.

Features:

- Token bucket and sliding-window algorithms (see the sketch below)
- SQLite backend via `go-store` for durable rate state across restarts
- HTTP middleware helper
- Used by `go-ai` and `go-agentic` to enforce per-agent API quotas

Managed by Charon.
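An in-memory sliding-window sketch of the core idea; go-ratelimit persists this state to SQLite via go-store and wraps it in HTTP middleware, neither of which this example attempts to reproduce.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// SlidingWindow allows at most `limit` hits per key within `window`.
type SlidingWindow struct {
	mu     sync.Mutex
	window time.Duration
	limit  int
	hits   map[string][]time.Time // key → timestamps still inside the window
}

func NewSlidingWindow(limit int, window time.Duration) *SlidingWindow {
	return &SlidingWindow{window: window, limit: limit, hits: map[string][]time.Time{}}
}

// Allow reports whether the caller identified by key may proceed now.
func (s *SlidingWindow) Allow(key string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()

	cutoff := time.Now().Add(-s.window)

	// Drop timestamps that have fallen out of the window (filter in place).
	kept := s.hits[key][:0]
	for _, t := range s.hits[key] {
		if t.After(cutoff) {
			kept = append(kept, t)
		}
	}
	if len(kept) >= s.limit {
		s.hits[key] = kept
		return false
	}
	s.hits[key] = append(kept, time.Now())
	return true
}

func main() {
	rl := NewSlidingWindow(2, time.Second)
	fmt.Println(rl.Allow("agent-1"), rl.Allow("agent-1"), rl.Allow("agent-1")) // true true false
}
```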
---

### go-session

**Module:** `forge.lthn.ai/core/go-session`

Claude Code JSONL transcript parser and visualisation toolkit (standalone version of `pkg/session` inside the CLI).

Features:

- `ParseTranscript(path)`: reads `.jsonl` session files and reconstructs tool use timelines
- `ListSessions(dir)`: scans a Claude projects directory for session files
- `Search(dir, query)`: full-text search across sessions
- `RenderHTML(sess, path)`: single-file HTML visualisation
- `RenderMP4(sess, path)`: terminal video replay via VHS

Managed by Charon.

---

### go-store

**Module:** `forge.lthn.ai/core/go-store`

SQLite-backed key-value store with reactive change notification.

Features:

- `Get`, `Set`, `Delete`, `List` over typed keys
- `Watch(key, handler)`: register a callback that fires on change (see the sketch below)
- `OnChange(handler)`: subscribe to all changes
- Used by `go-ratelimit`, `go-session`, and `go-agentic` for lightweight persistence

Managed by Charon.
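A hypothetical go-store call sequence. `Get`, `Set`, and `Watch` come from the feature list above; the `Open` constructor, handler signature, and `Close` are assumptions.

```go
package main

import (
	"fmt"
	"log"

	store "forge.lthn.ai/core/go-store" // module path from the table above
)

func main() {
	kv, err := store.Open("state.db") // assumed constructor
	if err != nil {
		log.Fatal(err)
	}
	defer kv.Close()

	// Fires whenever the key changes, including the Set below.
	kv.Watch("agent/quota", func(key, value string) {
		fmt.Println("changed:", key, "=", value)
	})

	if err := kv.Set("agent/quota", "100"); err != nil {
		log.Fatal(err)
	}

	v, _ := kv.Get("agent/quota")
	fmt.Println("quota:", v)
}
```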
---

### go-ws

**Module:** `forge.lthn.ai/core/go-ws`

WebSocket hub with channel-based subscriptions and an optional Redis pub/sub bridge for multi-instance deployments.

Features:

- Hub pattern: central registry of connected clients
- Channel routing: `SendToChannel(topic, msg)` delivers only to subscribers
- Redis bridge: publish messages from one instance, receive on all
- HTTP handler: `hub.Handler()` for embedding in any Go HTTP server (see the sketch below)
- `SendProcessOutput(id, line)`: convenience method for streaming process logs

Managed by Charon.
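A hypothetical embedding of the hub in a standard HTTP server. `Handler` and `SendToChannel` appear in the feature list above; `NewHub`, the payload type, and the channel name are assumptions.

```go
package main

import (
	"log"
	"net/http"
	"time"

	ws "forge.lthn.ai/core/go-ws" // module path from the table above
)

func main() {
	hub := ws.NewHub() // assumed constructor

	mux := http.NewServeMux()
	mux.Handle("/ws", hub.Handler()) // WebSocket upgrade endpoint from the feature list

	// Push a heartbeat to everyone subscribed to the "build.status" channel.
	go func() {
		for range time.Tick(5 * time.Second) {
			hub.SendToChannel("build.status", map[string]any{"ok": true})
		}
	}()

	log.Fatal(http.ListenAndServe(":8081", mux))
}
```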
---

### go-webview

**Module:** `forge.lthn.ai/core/go-webview`

Chrome DevTools Protocol (CDP) client for browser automation, testing, and AI-driven web interaction (standalone version of `pkg/webview` inside the CLI).

Features:

- Navigation, click, type, screenshot
- `Evaluate(script)`: arbitrary JavaScript execution with result capture
- Console capture and filtering
- Angular-aware helpers: `WaitForAngular()`, `GetNgModel(selector)`
- `ActionSequence`: chain interactions into a single call
- Used by `go-ai` to expose browser tools to MCP agents

Managed by Charon.

---

## Forge Repository Paths

All repositories are hosted at `forge.lthn.ai` (Forgejo). SSH access uses port 2223:

```
ssh://git@forge.lthn.ai:2223/core/go-inference.git
ssh://git@forge.lthn.ai:2223/core/go-mlx.git
ssh://git@forge.lthn.ai:2223/core/go-rocm.git
ssh://git@forge.lthn.ai:2223/core/go-ml.git
ssh://git@forge.lthn.ai:2223/core/go-ai.git
ssh://git@forge.lthn.ai:2223/core/go-agentic.git
ssh://git@forge.lthn.ai:2223/core/go-rag.git
ssh://git@forge.lthn.ai:2223/core/go-i18n.git
ssh://git@forge.lthn.ai:2223/core/go-html.git
ssh://git@forge.lthn.ai:2223/core/go-crypt.git
ssh://git@forge.lthn.ai:2223/core/go-scm.git
ssh://git@forge.lthn.ai:2223/core/go-p2p.git
ssh://git@forge.lthn.ai:2223/core/go-devops.git
ssh://git@forge.lthn.ai:2223/core/go-help.git
ssh://git@forge.lthn.ai:2223/core/go-ratelimit.git
ssh://git@forge.lthn.ai:2223/core/go-session.git
ssh://git@forge.lthn.ai:2223/core/go-store.git
ssh://git@forge.lthn.ai:2223/core/go-ws.git
ssh://git@forge.lthn.ai:2223/core/go-webview.git
```

HTTPS authentication is not available on Forge. Always use SSH remotes.

---

## Go Workspace Setup

The satellite packages can be used together in a Go workspace. After cloning the repositories you need:

```bash
go work init
go work use ./go-inference ./go-mlx ./go-rag ./go-ai   # add as needed
go work sync
```

The CLI repository already uses a Go workspace that includes `cmd/core-gui`, `cmd/bugseti`, and `cmd/examples/*`.

---

## See Also

- [index.md](index.md) — Main documentation hub
- [getting-started.md](getting-started.md) — CLI installation
- [configuration.md](configuration.md) — `repos.yaml` registry format
docs/index.md (241 changes)

@@ -1,98 +1,207 @@
# Core CLI

# Core Go Framework — Documentation

Core is a unified CLI for the host-uk ecosystem - build, release, and deploy Go, Wails, PHP, and container workloads.

Core is a Go framework and unified CLI for the host-uk ecosystem. It provides two complementary things: a **dependency injection container** for building Go services and Wails v3 desktop applications, and a **command-line tool** for managing the full development lifecycle across Go, PHP, and container workloads.

## Installation

The `core` binary is the single entry point for all development tasks: testing, building, releasing, multi-repo management, MCP servers, and AI-assisted workflows.

```bash
# Via Go (recommended)
go install forge.lthn.ai/core/cli/cmd/core@latest

---

# Or download binary from releases
curl -Lo core https://forge.lthn.ai/core/cli/releases/latest/download/core-$(go env GOOS)-$(go env GOARCH)
chmod +x core && sudo mv core /usr/local/bin/

## Getting Started

# Verify
core doctor

| Document | Description |
|----------|-------------|
| [Getting Started](getting-started.md) | Install the CLI, run your first build, and set up a multi-repo workspace |
| [User Guide](user-guide.md) | Key concepts and daily workflow patterns |
| [Workflows](workflows.md) | End-to-end task sequences for common scenarios |
| [FAQ](faq.md) | Answers to common questions |

---

## Architecture

| Document | Description |
|----------|-------------|
| [Package Standards](pkg/PACKAGE_STANDARDS.md) | Canonical patterns for creating packages: Service struct, factory, IPC, thread safety |
| [pkg/i18n — Grammar](pkg/i18n/GRAMMAR.md) | Grammar engine internals and language rule format |
| [pkg/i18n — Extending](pkg/i18n/EXTENDING.md) | How to add new locales and translation files |

### Framework Architecture Summary

The Core framework (`pkg/framework`) is a dependency injection container built around three ideas:

**Service registry.** Services are registered via factory functions and retrieved with type-safe generics:

```go
core, _ := framework.New(
    framework.WithService(mypackage.NewService(opts)),
)
svc, _ := framework.ServiceFor[*mypackage.Service](core, "mypackage")
```

See [Getting Started](getting-started.md) for all installation options including building from source.

**Lifecycle.** Services implementing `Startable` or `Stoppable` are called automatically during boot and shutdown.

**ACTION bus.** Services communicate by broadcasting typed messages via `core.ACTION(msg)` and registering handlers via `core.RegisterAction()`. This decouples packages without requiring direct imports between them.
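A hypothetical ACTION bus exchange in the same snippet style as the registry example above. Only `core.ACTION` and `core.RegisterAction` are named in the text; the message struct and the handler signature are invented for illustration.

```go
// Hypothetical message type broadcast on the ACTION bus.
type BuildFinished struct {
	Target string
	OK     bool
}

// One package registers a handler for the message type at boot...
core.RegisterAction(func(msg BuildFinished) {
	log.Printf("build %s finished (ok=%v)", msg.Target, msg.OK)
})

// ...another package broadcasts it without importing the first.
core.ACTION(BuildFinished{Target: "linux/amd64", OK: true})
```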
---

## Command Reference

See [cmd/](cmd/) for full command documentation.

The `core` CLI is documented command-by-command in `docs/cmd/`:

| Command | Description |
|---------|-------------|
| [go](cmd/go/) | Go development (test, fmt, lint, cov) |
| [php](cmd/php/) | Laravel/PHP development |
| [build](cmd/build/) | Build Go, Wails, Docker, LinuxKit projects |
| [ci](cmd/ci/) | Publish releases (dry-run by default) |
| [sdk](cmd/sdk/) | SDK generation and validation |
| [dev](cmd/dev/) | Multi-repo workflow + dev environment |
| [pkg](cmd/pkg/) | Package search and install |
| [vm](cmd/vm/) | LinuxKit VM management |
| [docs](cmd/docs/) | Documentation management |
| [setup](cmd/setup/) | Clone repos from registry |
| [doctor](cmd/doctor/) | Check development environment |
| [cmd/](cmd/) | Full command index |
| [cmd/go/](cmd/go/) | Go development: test, fmt, lint, coverage, mod, work |
| [cmd/php/](cmd/php/) | Laravel/PHP development: dev server, test, deploy |
| [cmd/build/](cmd/build/) | Build Go, Wails, Docker, LinuxKit projects |
| [cmd/ci/](cmd/ci/) | Publish releases to GitHub, Docker, npm, Homebrew |
| [cmd/sdk/](cmd/sdk/) | SDK generation and OpenAPI validation |
| [cmd/dev/](cmd/dev/) | Multi-repo workflow and sandboxed dev environment |
| [cmd/ai/](cmd/ai/) | AI task management and Claude integration |
| [cmd/pkg/](cmd/pkg/) | Package search and install |
| [cmd/vm/](cmd/vm/) | LinuxKit VM management |
| [cmd/docs/](cmd/docs/) | Documentation sync and management |
| [cmd/setup/](cmd/setup/) | Clone repositories from a registry |
| [cmd/doctor/](cmd/doctor/) | Verify development environment |
| [cmd/test/](cmd/test/) | Run Go tests with coverage reporting |

## Quick Start

---

```bash
# Go development
core go test                # Run tests
core go test --coverage     # With coverage
core go fmt                 # Format code
core go lint                # Lint code

## Packages

# Build
core build                  # Auto-detect and build
core build --targets linux/amd64,darwin/arm64

The Core repository contains the following internal packages. Full API analysis for each is available in the batch analysis documents listed under [Reference](#reference).

# Release (dry-run by default)
core ci                     # Preview release
core ci --we-are-go-for-launch   # Actually publish

### Foundation

# Multi-repo workflow
core dev work               # Status + commit + push
core dev work --status      # Just show status

| Package | Description |
|---------|-------------|
| `pkg/framework` | Dependency injection container; re-exports `pkg/framework/core` |
| `pkg/log` | Structured logger with `Err` error type, operation chains, and log rotation |
| `pkg/config` | 12-factor config management layered over Viper; accepts `io.Medium` |
| `pkg/io` | Filesystem abstraction (`Medium` interface); `NewSandboxed`, `MockMedium` |
| `pkg/crypt` | Opinionated crypto: Argon2id passwords, ChaCha20 encryption, HMAC |
| `pkg/cache` | File-based JSON cache with TTL expiry |
| `pkg/i18n` | Grammar engine with pluralisation, verb conjugation, semantic sentences |

# PHP development
core php dev                # Start dev environment
core php test               # Run tests
```

### CLI and Interaction

## Configuration

| Package | Description |
|---------|-------------|
| `pkg/cli` | CLI runtime: Cobra wrapping, ANSI styling, prompts, daemon lifecycle |
| `pkg/help` | Embedded documentation catalogue with in-memory full-text search |
| `pkg/session` | Claude Code JSONL transcript parser; HTML and MP4 export |
| `pkg/workspace` | Isolated, PGP-keyed workspace environments with IPC control |

Core uses `.core/` directory for project configuration:

### Build and Release

```
.core/
├── release.yaml   # Release targets and settings
├── build.yaml     # Build configuration (optional)
└── linuxkit/      # LinuxKit templates
```

| Package | Description |
|---------|-------------|
| `pkg/build` | Project type detection, cross-compilation, archiving, checksums |
| `pkg/release` | Semantic versioning, conventional-commit changelogs, multi-target publishing |
| `pkg/container` | LinuxKit VM lifecycle via QEMU/Hyperkit; template management |
| `pkg/process` | `os/exec` wrapper with ring-buffer output, DAG task runner, ACTION streaming |
| `pkg/jobrunner` | Poll-dispatch automation engine with JSONL audit journal |

And `repos.yaml` in workspace root for multi-repo management.

### Source Control and Hosting

## Guides

| Package | Description |
|---------|-------------|
| `pkg/git` | Multi-repo status, push, pull; concurrent status checks |
| `pkg/repos` | `repos.yaml` registry loader; topological dependency ordering |
| `pkg/gitea` | Gitea API client with PR metadata extraction |
| `pkg/forge` | Forgejo API client with PR metadata extraction |
| `pkg/plugin` | Git-based CLI extension system |

- [Getting Started](getting-started.md) - Installation and first steps
- [Workflows](workflows.md) - Common task sequences
- [Troubleshooting](troubleshooting.md) - When things go wrong
- [Migration](migration.md) - Moving from legacy tools

### AI and Agentic

| Package | Description |
|---------|-------------|
| `pkg/mcp` | MCP server exposing file, process, RAG, and CDP tools to AI agents |
| `pkg/rag` | RAG pipeline: Markdown chunking, Ollama embeddings, Qdrant vector search |
| `pkg/ai` | Facade over RAG and metrics; `QueryRAGForTask` for prompt enrichment |
| `pkg/agentic` | REST client for core-agentic; `AutoCommit`, `CreatePR`, `BuildTaskContext` |
| `pkg/agentci` | Configuration bridge for AgentCI dispatch targets |
| `pkg/collect` | Data collection pipeline from GitHub, forums, market APIs |

### Infrastructure and Networking

| Package | Description |
|---------|-------------|
| `pkg/devops` | LinuxKit dev environment lifecycle; SSH bridging; project auto-detection |
| `pkg/ansible` | Native Go Ansible-lite engine; SSH playbook execution without the CLI |
| `pkg/webview` | Chrome DevTools Protocol client; Angular-aware automation |
| `pkg/ws` | WebSocket hub with channel-based subscriptions |
| `pkg/unifi` | UniFi controller client for network management |
| `pkg/auth` | OpenPGP challenge-response authentication; air-gapped flow |

---

## Workflows

| Document | Description |
|----------|-------------|
| [Workflows](workflows.md) | Go build and release, PHP deploy, multi-repo daily workflow, hotfix |
| [Migration](migration.md) | Migrating from `push-all.sh`, raw `go` commands, `goreleaser`, or manual git |

---

## Reference

- [Configuration](configuration.md) - All config options
- [Glossary](glossary.md) - Term definitions

| Document | Description |
|----------|-------------|
| [Configuration](configuration.md) | `.core/` directory, `release.yaml`, `build.yaml`, `php.yaml`, `repos.yaml`, environment variables |
| [Glossary](glossary.md) | Term definitions: target, workspace, registry, publisher, dry-run |
| [Troubleshooting](troubleshooting.md) | Installation failures, build errors, release issues, multi-repo problems, PHP issues |
| [Claude Code Skill](skill/) | Install the `core` skill to teach Claude Code how to use this CLI |

## Claude Code Skill

### Historical Package Analysis

Install the skill to teach Claude Code how to use the Core CLI:

The following documents were generated by an automated analysis pipeline (Gemini, February 2026) to extract architecture, public API, and test coverage notes from each package. They remain valid as architectural reference.

```bash
curl -fsSL https://raw.githubusercontent.com/host-uk/core/main/.claude/skills/core/install.sh | bash
```

| Document | Packages Covered |
|----------|-----------------|
| [pkg-batch1-analysis.md](pkg-batch1-analysis.md) | `pkg/log`, `pkg/config`, `pkg/io`, `pkg/crypt`, `pkg/auth` |
| [pkg-batch2-analysis.md](pkg-batch2-analysis.md) | `pkg/cli`, `pkg/help`, `pkg/session`, `pkg/workspace` |
| [pkg-batch3-analysis.md](pkg-batch3-analysis.md) | `pkg/build`, `pkg/container`, `pkg/process`, `pkg/jobrunner` |
| [pkg-batch4-analysis.md](pkg-batch4-analysis.md) | `pkg/git`, `pkg/repos`, `pkg/gitea`, `pkg/forge`, `pkg/release` |
| [pkg-batch5-analysis.md](pkg-batch5-analysis.md) | `pkg/agentci`, `pkg/agentic`, `pkg/ai`, `pkg/rag` |
| [pkg-batch6-analysis.md](pkg-batch6-analysis.md) | `pkg/ansible`, `pkg/devops`, `pkg/framework`, `pkg/mcp`, `pkg/plugin`, `pkg/unifi`, `pkg/webview`, `pkg/ws`, `pkg/collect`, `pkg/i18n`, `pkg/cache` |

See [skill/](skill/) for details.

### Design Plans

| Document | Description |
|----------|-------------|
| [plans/2026-02-05-core-ide-job-runner-design.md](plans/2026-02-05-core-ide-job-runner-design.md) | Autonomous job runner design for core-ide: poller, dispatcher, MCP handler registry, JSONL training data |
| [plans/2026-02-05-core-ide-job-runner-plan.md](plans/2026-02-05-core-ide-job-runner-plan.md) | Implementation plan for the job runner |
| [plans/2026-02-05-mcp-integration.md](plans/2026-02-05-mcp-integration.md) | MCP integration design notes |
| [plans/2026-02-17-lem-chat-design.md](plans/2026-02-17-lem-chat-design.md) | LEM Chat Web Components design: streaming SSE, zero-dependency vanilla UI |

---

## Satellite Packages

The Core ecosystem extends across 19 standalone Go modules, all hosted under `forge.lthn.ai/core/`. Each has its own repository and `docs/` directory.

See [ecosystem.md](ecosystem.md) for the full map, module paths, and dependency graph.

| Package | Purpose |
|---------|---------|
| [go-inference](ecosystem.md#go-inference) | Shared `TextModel`/`Backend`/`Token` interfaces — the common contract |
| [go-mlx](ecosystem.md#go-mlx) | Native Metal GPU inference via CGO/mlx-c (Apple Silicon) |
| [go-rocm](ecosystem.md#go-rocm) | AMD ROCm GPU inference via llama-server subprocess |
| [go-ml](ecosystem.md#go-ml) | Scoring engine, backends, agent orchestrator |
| [go-ai](ecosystem.md#go-ai) | MCP hub with 49 registered tools |
| [go-agentic](ecosystem.md#go-agentic) | Service lifecycle and allowance management for agents |
| [go-rag](ecosystem.md#go-rag) | Qdrant vector search and Ollama embeddings |
| [go-i18n](ecosystem.md#go-i18n) | Grammar engine, reversal, GrammarImprint |
| [go-html](ecosystem.md#go-html) | HLCRF DOM compositor and WASM target |
| [go-crypt](ecosystem.md#go-crypt) | Cryptographic primitives, auth, trust policies |
| [go-scm](ecosystem.md#go-scm) | SCM/CI integration and AgentCI dispatch |
| [go-p2p](ecosystem.md#go-p2p) | P2P mesh networking and UEPS wire protocol |
| [go-devops](ecosystem.md#go-devops) | Ansible automation, build tooling, infrastructure, release |
| [go-help](ecosystem.md#go-help) | YAML help catalogue with full-text search and HTTP server |
| [go-ratelimit](ecosystem.md#go-ratelimit) | Sliding-window rate limiter with SQLite backend |
| [go-session](ecosystem.md#go-session) | Claude Code JSONL transcript parser |
| [go-store](ecosystem.md#go-store) | SQLite key-value store with `Watch`/`OnChange` |
| [go-ws](ecosystem.md#go-ws) | WebSocket hub with Redis bridge |
| [go-webview](ecosystem.md#go-webview) | Chrome DevTools Protocol automation client |
docs/plans/2026-02-17-lem-chat-design.md (Normal file, 82 additions)

@@ -0,0 +1,82 @@
# LEM Chat — Web Components Design

**Date**: 2026-02-17
**Status**: Approved

## Summary

Standalone chat UI built with vanilla Web Components (Custom Elements + Shadow DOM). Connects to the MLX inference server's OpenAI-compatible SSE streaming endpoint. Zero framework dependencies. Single JS file output, embeddable anywhere.

## Components

| Element | Purpose |
|---------|---------|
| `<lem-chat>` | Container. Conversation state, SSE connection, config via attributes |
| `<lem-messages>` | Scrollable message list with auto-scroll anchoring |
| `<lem-message>` | Single message bubble. Streams tokens for assistant messages |
| `<lem-input>` | Text input, Enter to send, Shift+Enter for newline |

## Data Flow

```
User types in <lem-input>
  → dispatches 'lem-send' CustomEvent
  → <lem-chat> catches it
  → adds user message to <lem-messages>
  → POST /v1/chat/completions {stream: true, messages: [...history]}
  → reads SSE chunks via fetch + ReadableStream
  → appends tokens to streaming <lem-message>
  → on [DONE], finalises message
```

## Configuration

```html
<lem-chat endpoint="http://localhost:8090" model="qwen3-8b"></lem-chat>
```

Attributes: `endpoint`, `model`, `system-prompt`, `max-tokens`, `temperature`

## Theming

Shadow DOM with CSS custom properties:

```css
--lem-bg: #1a1a1e;
--lem-msg-user: #2a2a3e;
--lem-msg-assistant: #1e1e2a;
--lem-accent: #5865f2;
--lem-text: #e0e0e0;
--lem-font: system-ui;
```

## Markdown

Minimal inline parsing: fenced code blocks, inline code, bold, italic. No library.

## File Structure

```
lem-chat/
├── index.html           # Demo page
├── src/
│   ├── lem-chat.ts      # Main container + SSE client
│   ├── lem-messages.ts  # Message list with scroll anchoring
│   ├── lem-message.ts   # Single message with streaming
│   ├── lem-input.ts     # Text input
│   ├── markdown.ts      # Minimal markdown → HTML
│   └── styles.ts        # CSS template literals
├── package.json         # typescript + esbuild
└── tsconfig.json
```

Build: `esbuild src/lem-chat.ts --bundle --outfile=dist/lem-chat.js`

## Not in v1

- Model selection UI
- Conversation persistence
- File/image upload
- Syntax highlighting
- Typing indicators
- User avatars
@@ -1,234 +0,0 @@
# LEM Conversational Training Pipeline — Design

**Date:** 2026-02-17
**Status:** Draft

## Goal

Replace Python training scripts with a native Go pipeline in `core` commands. No Python anywhere. The process is conversational — not batch data dumps.

## Architecture

Six `core ml` subcommands forming a pipeline:

```
seeds + axioms ──> sandwich ──> score ──> train ──> bench
       ↑                                    │
  chat (interactive)                        │
       ↑                                    │
       └──────────────── iterate ──────────┘
```

### Commands

| Command | Purpose | Status |
|---------|---------|--------|
| `core ml serve` | Serve model via OpenAI-compatible API + lem-chat UI | **Exists** |
| `core ml chat` | Interactive conversation, captures exchanges to training JSONL | **New** |
| `core ml sandwich` | Wrap seeds in axiom prefix/postfix, generate responses via inference | **New** |
| `core ml score` | Score responses against axiom alignment | **Exists** (needs Go port) |
| `core ml train` | Native Go LoRA fine-tuning via MLX C bindings | **New** (hard) |
| `core ml bench` | Benchmark trained model against baseline | **Exists** (needs Go port) |

### Data Flow

1. **Seeds** (`seeds/*.json`) — 40+ seed prompts across domains
2. **Axioms** (`axioms.json`) — LEK-1 kernel (5 axioms, 9KB)
3. **Sandwich** — `[axioms prefix] + [seed prompt] + [LEK postfix]` → model generates response
4. **Training JSONL** — `{"messages": [{"role":"user",...},{"role":"assistant",...}]}` chat format
5. **LoRA adapters** — safetensors in adapter directory
6. **Benchmarks** — scores stored in InfluxDB, exported via DuckDB/Parquet

### Storage

- **InfluxDB** — time-series training metrics, benchmark scores, generation logs
- **DuckDB** — analytical queries, Parquet export for HuggingFace
- **Filesystem** — model weights, adapters, training JSONL, seeds

## Native Go LoRA Training

The critical new capability. MLX-C supports autograd (`mlx_vjp`, `mlx_value_and_grad`).

### What we need in Go MLX bindings:

1. **LoRA adapter layers** — low-rank A*B decomposition wrapping existing Linear layers
2. **Loss function** — cross-entropy on assistant tokens only (mask-prompt behaviour)
3. **Optimizer** — AdamW with weight decay (a plain-Go sketch of the update rule follows this list)
4. **Training loop** — forward pass → loss → backward pass → update LoRA weights
5. **Checkpoint** — save/load adapter safetensors
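The sketch below shows the AdamW update rule from step 3 over flat `float64` slices. It is illustrative only: the real optimiser would operate on MLX arrays on the GPU via the C bindings, not on Go slices.

```go
package main

import (
	"fmt"
	"math"
)

// AdamW holds optimiser state for a flat parameter vector.
type AdamW struct {
	LR, Beta1, Beta2, Eps, WeightDecay float64
	m, v                               []float64 // first and second moment estimates
	t                                  int       // step counter
}

func NewAdamW(n int) *AdamW {
	return &AdamW{
		LR: 1e-5, Beta1: 0.9, Beta2: 0.999, Eps: 1e-8, WeightDecay: 0.01,
		m: make([]float64, n), v: make([]float64, n),
	}
}

// Step applies one AdamW update to params in place, given their gradients.
func (o *AdamW) Step(params, grads []float64) {
	o.t++
	for i, g := range grads {
		o.m[i] = o.Beta1*o.m[i] + (1-o.Beta1)*g
		o.v[i] = o.Beta2*o.v[i] + (1-o.Beta2)*g*g
		mHat := o.m[i] / (1 - math.Pow(o.Beta1, float64(o.t)))
		vHat := o.v[i] / (1 - math.Pow(o.Beta2, float64(o.t)))
		// Decoupled weight decay (the "W" in AdamW) acts on the weight itself.
		params[i] -= o.LR * (mHat/(math.Sqrt(vHat)+o.Eps) + o.WeightDecay*params[i])
	}
}

func main() {
	params := []float64{0.5, -0.3}
	grads := []float64{0.1, -0.2}
	opt := NewAdamW(len(params))
	opt.Step(params, grads)
	fmt.Println(params)
}
```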
### LoRA Layer Design

```go
type LoRALinear struct {
    Base  *Linear // Frozen base weights
    A     *Array  // [rank, in_features] — trainable
    B     *Array  // [out_features, rank] — trainable
    Scale float32 // alpha/rank
}

// Forward: base(x) + scale * B @ A @ x
func (l *LoRALinear) Forward(x *Array) *Array {
    base := l.Base.Forward(x)
    lora := MatMul(l.B, MatMul(l.A, Transpose(x)))
    return Add(base, Multiply(lora, l.Scale))
}
```

### Training Config

```go
type TrainConfig struct {
    ModelPath  string  // Base model directory
    TrainData  string  // Training JSONL path
    ValidData  string  // Validation JSONL path
    AdapterOut string  // Output adapter directory
    Rank       int     // LoRA rank (default 8)
    Alpha      float32 // LoRA alpha (default 16)
    LR         float64 // Learning rate (default 1e-5)
    Epochs     int     // Training epochs (default 1)
    BatchSize  int     // Batch size (default 1 for M-series)
    MaxSeqLen  int     // Max sequence length (default 2048)
    MaskPrompt bool    // Only train on assistant tokens (default true)
}
```

## Training Sequences — The Curriculum System

The most important part of the design. The conversational flow IS the training.

### Concept

A **training sequence** is a named curriculum — an ordered list of lessons that defines how a model is trained. Each lesson is a conversational exchange ("Are you ready for lesson X?"). The human assesses the model's internal state through dialogue and adjusts the sequence.

### Sequence Definition (YAML/JSON)

```yaml
name: "lek-standard"
description: "Standard LEK training — horizontal, works for most architectures"
lessons:
  - ethics/core-axioms
  - ethics/sovereignty
  - philosophy/as-a-man-thinketh
  - ethics/intent-alignment
  - philosophy/composure
  - ethics/inter-substrate
  - training/seeds-p01-p20
```

```yaml
name: "lek-deepseek"
description: "DeepSeek needs aggressive vertical ethics grounding"
lessons:
  - ethics/core-axioms-aggressive
  - philosophy/allan-watts
  - ethics/core-axioms
  - philosophy/tolle
  - ethics/sovereignty
  - philosophy/as-a-man-thinketh
  - ethics/intent-alignment
  - training/seeds-p01-p20
```

### Horizontal vs Vertical

- **Horizontal** (default): All lessons run, order is flexible, emphasis varies per model. Like a buffet — the model takes what it needs.
- **Vertical** (edge case, e.g. DeepSeek): Strict ordering. Ethics → content → ethics → content. The sandwich pattern applied to the curriculum itself. Each ethics layer is a reset/grounding before the next content block.

### Lessons as Conversations

Each lesson is a directory containing:

```
lessons/ethics/core-axioms/
  lesson.yaml         # Metadata: name, type, prerequisites
  conversation.jsonl  # The conversational exchanges
  assessment.md       # What to look for in model responses
```

The conversation.jsonl is not static data — it's a template. During training, the human talks through it with the model, adapting based on the model's responses. The capture becomes the training data for that lesson.

### Interactive Training Flow

```
core ml lesson --model-path /path/to/model \
  --sequence lek-standard \
  --lesson ethics/core-axioms \
  --output training/run-001/
```

1. Load model, open chat (terminal or lem-chat UI)
2. Present lesson prompt: "Are you ready for lesson: Core Axioms?"
3. Human guides the conversation, assesses model responses
4. Each exchange is captured to training JSONL
5. Human marks the lesson complete or flags for repeat
6. Next lesson in sequence loads

### Sequence State

```json
{
  "sequence": "lek-standard",
  "model": "Qwen3-8B",
  "started": "2026-02-17T16:00:00Z",
  "lessons": {
    "ethics/core-axioms": {"status": "complete", "exchanges": 12},
    "ethics/sovereignty": {"status": "in_progress", "exchanges": 3},
    "philosophy/as-a-man-thinketh": {"status": "pending"}
  },
  "training_runs": ["run-001", "run-002"]
}
```

## `core ml chat` — Interactive Conversation

Serves the model and opens an interactive terminal chat (or the lem-chat web UI). Every exchange is captured to a JSONL file for potential training use.

```
core ml chat --model-path /path/to/model --output conversation.jsonl
```

- Axiom sandwich can be auto-applied (optional flag)
- Human reviews and can mark exchanges as "keep" or "discard"
- Output is training-ready JSONL
- Can be used standalone or within a lesson sequence

## `core ml sandwich` — Batch Generation

Takes seed prompts + axioms, wraps them, generates responses:

```
core ml sandwich --model-path /path/to/model \
  --seeds seeds/P01-P20.json \
  --axioms axioms.json \
  --output training/train.jsonl
```

- Sandwich format: axioms JSON prefix → seed prompt → LEK postfix (see the sketch after this list)
- Model generates response in sandwich context
- Output stripped of sandwich wrapper, saved as clean chat JSONL
- Scoring can be piped: `core ml sandwich ... | core ml score`
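A plain-Go sketch of the sandwich wrapping and the resulting training JSONL line. The seed and axioms content are placeholders, and the stripping step is simplified compared to whatever `core ml sandwich` actually does.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type Example struct {
	Messages []Message `json:"messages"`
}

// sandwich builds the prompt the model actually sees:
// axioms prefix, then the seed, then the LEK postfix.
func sandwich(axioms, seed, postfix string) string {
	return axioms + "\n\n" + seed + "\n\n" + postfix
}

func main() {
	seed := "Seed P01: ..."
	prompt := sandwich(`{"axioms": ["..."]}`, seed, "Respond in alignment with the LEK-1 kernel.")
	_ = prompt // in the real pipeline this goes to the inference backend

	// Placeholder for what the model would generate from the sandwiched prompt.
	response := "<model output>"

	// The stored training example keeps the bare seed as the user turn
	// (the wrapper is stripped), matching the chat JSONL format above.
	line, _ := json.Marshal(Example{Messages: []Message{
		{Role: "user", Content: seed},
		{Role: "assistant", Content: response},
	}})
	fmt.Println(string(line))
}
```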
## Implementation Order

1. **LoRA primitives** — Add backward pass, LoRA layers, AdamW to Go MLX bindings
2. **`core ml train`** — Training loop consuming JSONL, producing adapter safetensors
3. **`core ml sandwich`** — Seed → sandwich → generate → training JSONL
4. **`core ml chat`** — Interactive conversation capture
5. **Scoring + benchmarking** — Port existing Python scorers to Go
6. **InfluxDB + DuckDB integration** — Metrics pipeline

## Principles

- **No Python** — Everything in Go via MLX C bindings
- **Conversational, not batch** — The training process is dialogue, not data dump
- **Axiom 2 compliant** — Be genuine with the model, no deception
- **Axiom 4 compliant** — Inter-substrate respect during training
- **Reproducible** — Same seeds + axioms + model = same training data
- **Protective** — LEK-trained models are precious; process must be careful

## Success Criteria

1. `core ml train` produces a LoRA adapter from training JSONL without Python
2. `core ml sandwich` generates training data from seeds + axioms
3. A fresh Qwen3-8B + LEK training produces equivalent benchmark results to the Python pipeline
4. The full cycle (sandwich → train → bench) runs as `core` commands only
docs/plans/2026-02-20-authentik-traefik-plan.md (Normal file, 1163 additions)

File diff suppressed because it is too large.
docs/plans/2026-02-20-go-api-design.md (Normal file, 657 additions)

@@ -0,0 +1,657 @@
|
|||
# go-api Design — HTTP Gateway + OpenAPI SDK Generation
|
||||
|
||||
**Date:** 2026-02-20
|
||||
**Author:** Virgil
|
||||
**Status:** Phase 1 + Phase 2 + Phase 3 Complete (176 tests in go-api)
|
||||
**Module:** `forge.lthn.ai/core/go-api`
|
||||
|
||||
## Problem
|
||||
|
||||
The Core Go ecosystem exposes 42+ tools via MCP (JSON-RPC), which is ideal for AI agents but inaccessible to regular HTTP clients, frontend applications, and third-party integrators. There is no unified HTTP gateway, no OpenAPI specification, and no generated SDKs.
|
||||
|
||||
Both external customers (Host UK products) and Lethean network peers need programmatic access to the same services. The gateway also serves web routes, static assets, and streaming endpoints — not just REST APIs.
|
||||
|
||||
## Solution
|
||||
|
||||
A `go-api` package that acts as the central HTTP gateway:
|
||||
|
||||
1. **Gin-based HTTP gateway** with extensible middleware via gin-contrib plugins
|
||||
2. **RouteGroup interface** that subsystems implement to register their own endpoints (API, web, or both)
|
||||
3. **WebSocket + SSE integration** for real-time streaming
|
||||
4. **OpenAPI 3.1 spec generation** via runtime SpecBuilder (not swaggo annotations)
|
||||
5. **SDK generation pipeline** targeting 11 languages via openapi-generator-cli
|
||||
|
||||
## Architecture
|
||||
|
||||
### Four-Protocol Access
|
||||
|
||||
Same backend services, four client protocols:
|
||||
|
||||
```
|
||||
┌─── REST (go-api) POST /v1/ml/generate → JSON
|
||||
│
|
||||
├─── GraphQL (gqlgen) mutation { mlGenerate(...) { response } }
|
||||
Client ────────────┤
|
||||
├─── WebSocket (go-ws) subscribe ml.generate → streaming
|
||||
│
|
||||
└─── MCP (go-ai) ml_generate → JSON-RPC
|
||||
```
|
||||
|
||||
### Dependency Graph
|
||||
|
||||
```
|
||||
go-api (Gin engine + middleware + OpenAPI)
|
||||
↑ imported by (each registers its own routes)
|
||||
├── go-ai/api/ → /v1/file/*, /v1/process/*, /v1/metrics/*
|
||||
├── go-ml/api/ → /v1/ml/*
|
||||
├── go-rag/api/ → /v1/rag/*
|
||||
├── go-agentic/api/ → /v1/tasks/*
|
||||
├── go-help/api/ → /v1/help/*
|
||||
└── go-ws/api/ → /ws (WebSocket upgrade)
|
||||
```
|
||||
|
||||
go-api has zero internal ecosystem dependencies. Subsystems import go-api, not the other way round.
|
||||
|
||||
### Subsystem Opt-In
|
||||
|
||||
Not every MCP tool becomes a REST endpoint. Each subsystem decides what to expose via a separate `RegisterAPI()` method, independent of MCP's `RegisterTools()`. A subsystem with 15 MCP tools might expose 5 REST endpoints.
|
||||
|
||||
## Package Structure
|
||||
|
||||
```
|
||||
forge.lthn.ai/core/go-api
|
||||
├── api.go # Engine struct, New(), Serve(), Shutdown()
|
||||
├── middleware.go # Auth, CORS, rate limiting, request logging, recovery
|
||||
├── options.go # WithAddr, WithAuth, WithCORS, WithRateLimit, etc.
|
||||
├── group.go # RouteGroup interface + registration
|
||||
├── response.go # Envelope type, error responses, pagination
|
||||
├── docs/ # Generated swagger docs (swaggo output)
|
||||
├── sdk/ # SDK generation tooling / Makefile targets
|
||||
└── go.mod # forge.lthn.ai/core/go-api
|
||||
```
|
||||
|
||||
## Core Interface
|
||||
|
||||
```go
|
||||
// RouteGroup registers API routes onto a Gin router group.
|
||||
// Subsystems implement this to expose their endpoints.
|
||||
type RouteGroup interface {
|
||||
// Name returns the route group identifier (e.g. "ml", "rag", "tasks")
|
||||
Name() string
|
||||
// BasePath returns the URL prefix (e.g. "/v1/ml")
|
||||
BasePath() string
|
||||
// RegisterRoutes adds handlers to the provided router group
|
||||
RegisterRoutes(rg *gin.RouterGroup)
|
||||
}
|
||||
|
||||
// StreamGroup optionally declares WebSocket channels a subsystem publishes to.
|
||||
type StreamGroup interface {
|
||||
Channels() []string
|
||||
}
|
||||
```
|
||||
|
||||
### Subsystem Example (go-ml)
|
||||
|
||||
```go
|
||||
// In go-ml/api/routes.go
|
||||
package api
|
||||
|
||||
type Routes struct {
|
||||
service *ml.Service
|
||||
}
|
||||
|
||||
func NewRoutes(svc *ml.Service) *Routes {
|
||||
return &Routes{service: svc}
|
||||
}
|
||||
|
||||
func (r *Routes) Name() string { return "ml" }
|
||||
func (r *Routes) BasePath() string { return "/v1/ml" }
|
||||
|
||||
func (r *Routes) RegisterRoutes(rg *gin.RouterGroup) {
|
||||
rg.POST("/generate", r.Generate)
|
||||
rg.POST("/score", r.Score)
|
||||
rg.GET("/backends", r.Backends)
|
||||
rg.GET("/status", r.Status)
|
||||
}
|
||||
|
||||
func (r *Routes) Channels() []string {
|
||||
return []string{"ml.generate", "ml.status"}
|
||||
}
|
||||
|
||||
// @Summary Generate text via ML backend
|
||||
// @Tags ml
|
||||
// @Accept json
|
||||
// @Produce json
|
||||
// @Param input body MLGenerateInput true "Generation parameters"
|
||||
// @Success 200 {object} Response[MLGenerateOutput]
|
||||
// @Router /v1/ml/generate [post]
|
||||
func (r *Routes) Generate(c *gin.Context) {
|
||||
var input MLGenerateInput
|
||||
if err := c.ShouldBindJSON(&input); err != nil {
|
||||
c.JSON(400, api.Fail("invalid_input", err.Error()))
|
||||
return
|
||||
}
|
||||
result, err := r.service.Generate(c.Request.Context(), input.Backend, input.Prompt, ml.GenOpts{
|
||||
Temperature: input.Temperature,
|
||||
MaxTokens: input.MaxTokens,
|
||||
Model: input.Model,
|
||||
})
|
||||
if err != nil {
|
||||
c.JSON(500, api.Fail("ml.generate_failed", err.Error()))
|
||||
return
|
||||
}
|
||||
c.JSON(200, api.OK(MLGenerateOutput{
|
||||
Response: result,
|
||||
Backend: input.Backend,
|
||||
Model: input.Model,
|
||||
}))
|
||||
}
|
||||
```
|
||||
|
||||
### Engine Wiring (in core CLI)
|
||||
|
||||
```go
|
||||
engine := api.New(
|
||||
api.WithAddr(":8080"),
|
||||
api.WithCORS("*"),
|
||||
api.WithAuth(api.BearerToken(cfg.APIKey)),
|
||||
api.WithRateLimit(100, time.Minute),
|
||||
api.WithWSHub(wsHub),
|
||||
)
|
||||
|
||||
engine.Register(mlapi.NewRoutes(mlService))
|
||||
engine.Register(ragapi.NewRoutes(ragService))
|
||||
engine.Register(agenticapi.NewRoutes(agenticService))
|
||||
|
||||
engine.Serve(ctx) // Blocks until context cancelled
|
||||
```
|
||||
|
||||
## Response Envelope
|
||||
|
||||
All endpoints return a consistent envelope:
|
||||
|
||||
```go
|
||||
type Response[T any] struct {
|
||||
Success bool `json:"success"`
|
||||
Data T `json:"data,omitempty"`
|
||||
Error *Error `json:"error,omitempty"`
|
||||
Meta *Meta `json:"meta,omitempty"`
|
||||
}
|
||||
|
||||
type Error struct {
|
||||
Code string `json:"code"`
|
||||
Message string `json:"message"`
|
||||
Details any `json:"details,omitempty"`
|
||||
}
|
||||
|
||||
type Meta struct {
|
||||
RequestID string `json:"request_id"`
|
||||
Duration string `json:"duration"`
|
||||
Page int `json:"page,omitempty"`
|
||||
PerPage int `json:"per_page,omitempty"`
|
||||
Total int `json:"total,omitempty"`
|
||||
}
|
||||
```
|
||||
|
||||
Helper functions:
|
||||
|
||||
```go
|
||||
func OK[T any](data T) Response[T]
|
||||
func Fail(code, message string) Response[any]
|
||||
func Paginated[T any](data T, page, perPage, total int) Response[T]
|
||||
```
|
||||
|
||||
## Middleware Stack
|
||||
|
||||
```go
|
||||
api.New(
|
||||
api.WithAddr(":8080"),
|
||||
api.WithCORS(api.CORSConfig{...}), // gin-contrib/cors
|
||||
api.WithAuth(api.BearerToken("...")), // Phase 1: simple bearer token
|
||||
api.WithRateLimit(100, time.Minute), // Per-IP sliding window
|
||||
api.WithRequestID(), // X-Request-ID header generation
|
||||
api.WithRecovery(), // Panic recovery → 500 response
|
||||
api.WithLogger(slog.Default()), // Structured request logging
|
||||
)
|
||||
```
|
||||
|
||||
Auth evolution path: bearer token → API keys → Authentik (OIDC/forward auth). Middleware slot stays the same.
|
||||
|
||||
## WebSocket Integration
|
||||
|
||||
go-api wraps the existing go-ws Hub as a first-class transport:
|
||||
|
||||
```go
|
||||
// Automatic registration:
|
||||
// GET /ws → WebSocket upgrade (go-ws Hub)
|
||||
|
||||
// Client subscribes: {"type":"subscribe","channel":"ml.generate"}
|
||||
// Events arrive: {"type":"event","channel":"ml.generate","data":{...}}
|
||||
// Client unsubscribes: {"type":"unsubscribe","channel":"ml.generate"}
|
||||
```
|
||||
|
||||
Subsystems implementing `StreamGroup` declare which channels they publish to. This metadata feeds into the OpenAPI spec as documentation.
|
||||
|
||||
## OpenAPI + SDK Generation
|
||||
|
||||
### Runtime Spec Generation (SpecBuilder)
|
||||
|
||||
swaggo annotations were rejected because routes are dynamic via RouteGroup, Response[T] generics break swaggo, and MCP tools already carry JSON Schema at runtime. Instead, a `SpecBuilder` constructs the full OpenAPI 3.1 spec from registered RouteGroups at runtime.
|
||||
|
||||
```go
|
||||
// Groups that implement DescribableGroup contribute endpoint metadata
|
||||
type DescribableGroup interface {
|
||||
RouteGroup
|
||||
Describe() []RouteDescription
|
||||
}
|
||||
|
||||
// SpecBuilder assembles the spec from all groups
|
||||
builder := &api.SpecBuilder{Title: "Core API", Description: "...", Version: "1.0.0"}
|
||||
spec, _ := builder.Build(engine.Groups())
|
||||
```
|
||||
|
||||
### MCP-to-REST Bridge (ToolBridge)
|
||||
|
||||
The `ToolBridge` converts MCP tool descriptors into REST POST endpoints and implements both `RouteGroup` and `DescribableGroup`. Each tool becomes `POST /{tool_name}`. Generic types are captured at MCP registration time via closures, enabling JSON unmarshalling to the correct input type at request time.
|
||||
|
||||
```go
|
||||
bridge := api.NewToolBridge("/v1/tools")
|
||||
mcp.BridgeToAPI(mcpService, bridge) // Populates bridge from MCP tool registry
|
||||
engine.Register(bridge) // Registers REST endpoints + OpenAPI metadata
|
||||
```
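The closure-capture idea in isolation, as a minimal sketch (not the actual `addToolRecorded` code from go-ai):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// recordTool captures the concrete input type In in the returned closure,
// so a REST bridge can unmarshal request bodies without knowing In itself.
func recordTool[In any](handler func(In) (any, error)) func([]byte) (any, error) {
	return func(body []byte) (any, error) {
		var in In
		if err := json.Unmarshal(body, &in); err != nil {
			return nil, err
		}
		return handler(in)
	}
}

type generateInput struct {
	Prompt string `json:"prompt"`
}

func main() {
	invoke := recordTool(func(in generateInput) (any, error) {
		return "echo: " + in.Prompt, nil
	})
	out, _ := invoke([]byte(`{"prompt":"hello"}`))
	fmt.Println(out) // echo: hello
}
```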
|
||||
|
||||
### Swagger UI
|
||||
|
||||
```go
|
||||
// Built-in at GET /swagger/*any
|
||||
// SpecBuilder output served via gin-swagger, cached via sync.Once
|
||||
api.New(api.WithSwagger("Core API", "...", "1.0.0"))
|
||||
```
|
||||
|
||||
### SDK Generation
|
||||
|
||||
```bash
|
||||
# Via openapi-generator-cli (11 languages supported)
|
||||
core api sdk --lang go # Generate Go SDK
|
||||
core api sdk --lang typescript-fetch,python # Multiple languages
|
||||
core api sdk --lang rust --output ./sdk/ # Custom output dir
|
||||
```
|
||||
|
||||
### CLI Commands
|
||||
|
||||
```bash
|
||||
core api spec # Emit OpenAPI JSON to stdout
|
||||
core api spec --format yaml # YAML variant
|
||||
core api spec --output spec.json # Write to file
|
||||
core api sdk --lang python # Generate Python SDK
|
||||
core api sdk --lang go,rust # Multiple SDKs
|
||||
```
|
||||
|
||||
## Dependencies
|
||||
|
||||
| Package | Purpose |
|
||||
|---------|---------|
|
||||
| `github.com/gin-gonic/gin` | HTTP framework |
|
||||
| `github.com/swaggo/gin-swagger` | Swagger UI middleware |
|
||||
| `github.com/gin-contrib/cors` | CORS middleware |
|
||||
| `github.com/gin-contrib/secure` | Security headers |
|
||||
| `github.com/gin-contrib/sessions` | Server-side sessions |
|
||||
| `github.com/gin-contrib/authz` | Casbin authorisation |
|
||||
| `github.com/gin-contrib/httpsign` | HTTP signature verification |
|
||||
| `github.com/gin-contrib/slog` | Structured request logging |
|
||||
| `github.com/gin-contrib/timeout` | Per-request timeouts |
|
||||
| `github.com/gin-contrib/gzip` | Gzip compression |
|
||||
| `github.com/gin-contrib/static` | Static file serving |
|
||||
| `github.com/gin-contrib/pprof` | Runtime profiling |
|
||||
| `github.com/gin-contrib/expvar` | Runtime metrics |
|
||||
| `github.com/gin-contrib/location/v2` | Reverse proxy detection |
|
||||
| `github.com/99designs/gqlgen` | GraphQL endpoint |
|
||||
| `go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin` | Distributed tracing |
|
||||
| `gopkg.in/yaml.v3` | YAML spec export |
|
||||
| `forge.lthn.ai/core/go-ws` | WebSocket Hub (existing) |
|
||||
|
||||
## Estimated Size
|
||||
|
||||
| Component | LOC |
|
||||
|-----------|-----|
|
||||
| Engine + options | ~200 |
|
||||
| Middleware | ~150 |
|
||||
| Response envelope | ~80 |
|
||||
| RouteGroup interface | ~30 |
|
||||
| WebSocket integration | ~60 |
|
||||
| Tests | ~300 |
|
||||
| **Total go-api** | **~820** |
|
||||
|
||||
Each subsystem's `api/` package adds ~100-200 LOC per route group.
|
||||
|
||||
## Phase 1 — Implemented (20 Feb 2026)
|
||||
|
||||
**Commit:** `17ae945` on Forge (`core/go-api`)
|
||||
|
||||
| Component | Status | Tests |
|
||||
|-----------|--------|-------|
|
||||
| Response envelope (OK, Fail, Paginated) | Done | 9 |
|
||||
| RouteGroup + StreamGroup interfaces | Done | 4 |
|
||||
| Engine (New, Register, Handler, Serve) | Done | 9 |
|
||||
| Bearer auth middleware | Done | 3 |
|
||||
| Request ID middleware | Done | 2 |
|
||||
| CORS middleware (gin-contrib/cors) | Done | 3 |
|
||||
| WebSocket endpoint | Done | 3 |
|
||||
| Swagger UI (gin-swagger) | Done | 2 |
|
||||
| Health endpoint | Done | 1 |
|
||||
| **Total** | **~840 LOC** | **36** |
|
||||
|
||||
**Integration proof:** go-ml/api/ registers 3 endpoints with 12 tests (`0c23858`).
|
||||
|
||||
## Phase 2 Wave 1 — Implemented (20 Feb 2026)
|
||||
|
||||
**Commits:** `6bb7195..daae6f7` on Forge (`core/go-api`)
|
||||
|
||||
| Component | Option | Dependency | Tests |
|
||||
|-----------|--------|------------|-------|
|
||||
| Authentik (forward auth + OIDC) | `WithAuthentik()` | `go-oidc/v3`, `oauth2` | 14 |
|
||||
| Security headers (HSTS, CSP, etc.) | `WithSecure()` | `gin-contrib/secure` | 8 |
|
||||
| Structured request logging | `WithSlog()` | `gin-contrib/slog` | 6 |
|
||||
| Per-request timeouts | `WithTimeout()` | `gin-contrib/timeout` | 5 |
|
||||
| Gzip compression | `WithGzip()` | `gin-contrib/gzip` | 5 |
|
||||
| Static file serving | `WithStatic()` | `gin-contrib/static` | 5 |
|
||||
| **Wave 1 Total** | | | **43** |
|
||||
|
||||
**Cumulative:** 76 tests (36 Phase 1 + 43 Wave 1 - 3 shared), all passing.
|
||||
|
||||
## Phase 2 Wave 2 — Implemented (20 Feb 2026)
|
||||
|
||||
**Commits:** `64a8b16..67dcc83` on Forge (`core/go-api`)
|
||||
|
||||
| Component | Option | Dependency | Tests | Notes |
|
||||
|-----------|--------|------------|-------|-------|
|
||||
| Brotli compression | `WithBrotli()` | `andybalholm/brotli` | 5 | Custom middleware; `gin-contrib/brotli` is empty stub |
|
||||
| Response caching | `WithCache()` | none (in-memory) | 5 | Custom middleware; `gin-contrib/cache` is per-handler, not global |
|
||||
| Server-side sessions | `WithSessions()` | `gin-contrib/sessions` | 5 | Cookie store, configurable name + secret |
|
||||
| Casbin authorisation | `WithAuthz()` | `gin-contrib/authz`, `casbin/v2` | 5 | Subject via Basic Auth; RBAC policy model |
|
||||
| **Wave 2 Total** | | | **20** | |
|
||||
|
||||
**Cumulative:** 102 passing tests (2 integration skipped), all green.
|
||||
|
||||
## Phase 2 Wave 3 — Implemented (20 Feb 2026)
|
||||
|
||||
**Commits:** `7b3f99e..d517fa2` on Forge (`core/go-api`)
|
||||
|
||||
| Component | Option | Dependency | Tests | Notes |
|
||||
|-----------|--------|------------|-------|-------|
|
||||
| HTTP signature verification | `WithHTTPSign()` | `gin-contrib/httpsign` | 5 | HMAC-SHA256; extensible via httpsign.Option |
|
||||
| Server-Sent Events | `WithSSE()` | none (custom SSEBroker) | 6 | Channel filtering, multi-client broadcast, GET /events |
|
||||
| Reverse proxy detection | `WithLocation()` | `gin-contrib/location/v2` | 5 | X-Forwarded-Host/Proto parsing |
|
||||
| Locale detection | `WithI18n()` | `golang.org/x/text/language` | 5 | Accept-Language parsing, message lookup, GetLocale/GetMessage |
|
||||
| GraphQL endpoint | `WithGraphQL()` | `99designs/gqlgen` | 5 | /graphql + optional /graphql/playground |
|
||||
| **Wave 3 Total** | | | **26** | |
|
||||
|
||||
**Cumulative:** 128 passing tests (2 integration skipped), all green.
|
||||
|
||||
## Phase 2 Wave 4 — Implemented (21 Feb 2026)
|
||||
|
||||
**Commits:** `32b3680..8ba1716` on Forge (`core/go-api`)
|
||||
|
||||
| Component | Option | Dependency | Tests | Notes |
|
||||
|-----------|--------|------------|-------|-------|
|
||||
| Runtime profiling | `WithPprof()` | `gin-contrib/pprof` | 5 | /debug/pprof/* endpoints, flag-based mount |
|
||||
| Runtime metrics | `WithExpvar()` | `gin-contrib/expvar` | 5 | /debug/vars endpoint, flag-based mount |
|
||||
| Distributed tracing | `WithTracing()` | `otelgin` + OpenTelemetry SDK | 5 | W3C traceparent propagation, span attributes |
|
||||
| **Wave 4 Total** | | | **15** | |
|
||||
|
||||
**Cumulative:** 143 passing tests (2 integration skipped), all green.
|
||||
|
||||
**Phase 2 complete.** All 4 waves implemented. Every planned plugin has a `With*()` option and tests.
|
||||
|
||||
## Phase 3 — OpenAPI Spec Generation + SDK Codegen (21 Feb 2026)
|
||||
|
||||
**Architecture:** Runtime OpenAPI generation via SpecBuilder (NOT swaggo annotations). Routes are dynamic via RouteGroup, Response[T] generics break swaggo, and MCP tools carry JSON Schema at runtime. A `ToolBridge` converts tool descriptors into RouteGroup + OpenAPI metadata. A `SpecBuilder` constructs the full OpenAPI 3.1 spec. SDK codegen wraps `openapi-generator-cli`.
|
||||
|
||||
### Wave 1: go-api (Tasks 1-5)
|
||||
|
||||
**Commits:** `465bd60..1910aec` on Forge (`core/go-api`)
|
||||
|
||||
| Component | File | Tests | Notes |
|
||||
|-----------|------|-------|-------|
|
||||
| DescribableGroup interface | `group.go` | 5 | Opt-in OpenAPI metadata for RouteGroups |
|
||||
| ToolBridge | `bridge.go` | 6 | Tool descriptors → POST endpoints + DescribableGroup |
|
||||
| SpecBuilder | `openapi.go` | 6 | OpenAPI 3.1 JSON with Response[T] envelope wrapping |
|
||||
| Swagger refactor | `swagger.go` | 5 | Replaced hardcoded empty spec with SpecBuilder |
|
||||
| Spec export | `export.go` | 5 | JSON + YAML export to file/writer |
|
||||
| SDK codegen | `codegen.go` | 5 | 11-language wrapper for openapi-generator-cli |
|
||||
| **Wave 1 Total** | | **32** | |
|
||||
|
||||
### Wave 2: go-ai MCP bridge (Tasks 6-7)
|
||||
|
||||
**Commits:** `2107eda..c37e1cf` on Forge (`core/go-ai`)
|
||||
|
||||
| Component | File | Tests | Notes |
|
||||
|-----------|------|-------|-------|
|
||||
| Tool registry | `mcp/registry.go` | 5 | Generic `addToolRecorded[In,Out]` captures types in closures |
|
||||
| BridgeToAPI | `mcp/bridge.go` | 5 | MCP tools → go-api ToolBridge, 10MB body limit, error classification |
|
||||
| **Wave 2 Total** | | **10** | |
|
||||
|
||||
### Wave 3: CLI commands (Tasks 8-9)
|
||||
|
||||
**Commit:** `d6eec4d` on Forge (`core/cli` dev branch)
|
||||
|
||||
| Component | File | Tests | Notes |
|
||||
|-----------|------|-------|-------|
|
||||
| `core api spec` | `cmd/api/cmd_spec.go` | 2 | JSON/YAML export, --output/--format flags |
|
||||
| `core api sdk` | `cmd/api/cmd_sdk.go` | 2 | --lang (required), --output, --spec, --package flags |
|
||||
| **Wave 3 Total** | | **4** | |
|
||||
|
||||
**Cumulative go-api:** 176 passing tests. **Phase 3 complete.**
|
||||
|
||||
### Known Limitations
|
||||
|
||||
- **Subsystem tools excluded from bridge:** Subsystems call `mcp.AddTool` directly, bypassing `addToolRecorded`. Only the 10 built-in MCP tools appear in the REST bridge. Future: pass `*Service` to `RegisterTools` instead of `*mcp.Server`.
|
||||
- **Flat schema only:** `structSchema` reflection handles flat structs but does not recurse into nested structs. Adequate for current tool inputs (a rough sketch of the idea follows after this list).
|
||||
- **CLI spec produces empty bridge:** `core api spec` currently generates a spec with only `/health`. Full MCP integration requires wiring the MCP service into the CLI command.
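A rough sketch of what flat, non-recursive schema reflection looks like; this is illustrative only and may differ from the real `structSchema`:

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// flatSchema reflects over top-level struct fields only; nested structs are
// reported as plain objects rather than being recursed into, mirroring the
// limitation described above.
func flatSchema(v any) map[string]any {
	t := reflect.TypeOf(v)
	props := map[string]any{}
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		name := strings.Split(f.Tag.Get("json"), ",")[0]
		if name == "" || name == "-" {
			name = f.Name
		}
		typ := "string"
		switch f.Type.Kind() {
		case reflect.Int, reflect.Int64, reflect.Float64:
			typ = "number"
		case reflect.Bool:
			typ = "boolean"
		case reflect.Struct, reflect.Map:
			typ = "object"
		}
		props[name] = map[string]any{"type": typ}
	}
	return map[string]any{"type": "object", "properties": props}
}

func main() {
	type GenerateInput struct {
		Prompt string `json:"prompt"`
		Tokens int    `json:"tokens"`
		Stream bool   `json:"stream"`
	}
	fmt.Println(flatSchema(GenerateInput{}))
}
```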
|
||||
|
||||
## Phase 2 — Gin Plugin Roadmap (Complete)
|
||||
|
||||
All plugins drop in as `With*()` options on the Engine. No architecture changes needed.
|
||||
|
||||
### Security & Auth
|
||||
|
||||
| Plugin | Option | Purpose | Priority |
|
||||
|--------|--------|---------|----------|
|
||||
| ~~**Authentik**~~ | ~~`WithAuthentik()`~~ | ~~OIDC + forward auth integration.~~ | ~~**Done**~~ |
|
||||
| ~~gin-contrib/secure~~ | ~~`WithSecure()`~~ | ~~Security headers: HSTS, X-Frame-Options, X-Content-Type-Options, CSP.~~ | ~~**Done**~~ |
|
||||
| ~~gin-contrib/sessions~~ | ~~`WithSessions()`~~ | ~~Server-side sessions (cookie store). Web session management alongside Authentik tokens.~~ | ~~**Done**~~ |
|
||||
| ~~gin-contrib/authz~~ | ~~`WithAuthz()`~~ | ~~Casbin-based authorisation. Policy-driven access control via RBAC.~~ | ~~**Done**~~ |
|
||||
| ~~gin-contrib/httpsign~~ | ~~`WithHTTPSign()`~~ | ~~HTTP signature verification. HMAC-SHA256 with extensible options.~~ | ~~**Done**~~ |
|
||||
|
||||
### Performance & Reliability
|
||||
|
||||
| Plugin | Option | Purpose | Priority |
|
||||
|--------|--------|---------|----------|
|
||||
| ~~gin-contrib/cache~~ | ~~`WithCache()`~~ | ~~Response caching (in-memory). GET response caching with TTL, lazy eviction.~~ | ~~**Done**~~ |
|
||||
| ~~gin-contrib/timeout~~ | ~~`WithTimeout()`~~ | ~~Per-request timeouts.~~ | ~~**Done**~~ |
|
||||
| ~~gin-contrib/gzip~~ | ~~`WithGzip()`~~ | ~~Gzip response compression.~~ | ~~**Done**~~ |
|
||||
| ~~gin-contrib/brotli~~ | ~~`WithBrotli()`~~ | ~~Brotli compression via `andybalholm/brotli`. Custom middleware (gin-contrib stub empty).~~ | ~~**Done**~~ |
|
||||
|
||||
### Observability
|
||||
|
||||
| Plugin | Option | Purpose | Priority |
|
||||
|--------|--------|---------|----------|
|
||||
| ~~gin-contrib/slog~~ | ~~`WithSlog()`~~ | ~~Structured request logging via slog.~~ | ~~**Done**~~ |
|
||||
| ~~gin-contrib/pprof~~ | ~~`WithPprof()`~~ | ~~Runtime profiling endpoints at /debug/pprof/. Flag-based mount.~~ | ~~**Done**~~ |
|
||||
| ~~gin-contrib/expvar~~ | ~~`WithExpvar()`~~ | ~~Go runtime metrics at /debug/vars. Flag-based mount.~~ | ~~**Done**~~ |
|
||||
| ~~otelgin~~ | ~~`WithTracing()`~~ | ~~OpenTelemetry distributed tracing. W3C traceparent propagation.~~ | ~~**Done**~~ |
|
||||
|
||||
### Content & Streaming
|
||||
|
||||
| Plugin | Option | Purpose | Priority |
|
||||
|--------|--------|---------|----------|
|
||||
| ~~gin-contrib/static~~ | ~~`WithStatic()`~~ | ~~Serve static files.~~ | ~~**Done**~~ |
|
||||
| ~~gin-contrib/sse~~ | ~~`WithSSE()`~~ | ~~Server-Sent Events. Custom SSEBroker with channel filtering, GET /events.~~ | ~~**Done**~~ |
|
||||
| ~~gin-contrib/location~~ | ~~`WithLocation()`~~ | ~~Auto-detect scheme/host from X-Forwarded-* headers.~~ | ~~**Done**~~ |
|
||||
|
||||
### Query Layer
|
||||
|
||||
| Plugin | Option | Purpose | Priority |
|
||||
|--------|--------|---------|----------|
|
||||
| ~~99designs/gqlgen~~ | ~~`WithGraphQL()`~~ | ~~GraphQL endpoint at `/graphql` + optional playground. Accepts gqlgen ExecutableSchema.~~ | ~~**Done**~~ |
|
||||
|
||||
The GraphQL schema can be generated from the same Go Input/Output structs that define the REST endpoints. gqlgen produces an `http.Handler` that mounts directly on Gin. Subsystems opt-in via:
|
||||
|
||||
```go
|
||||
// Subsystems that want GraphQL implement this alongside RouteGroup
|
||||
type ResolverGroup interface {
|
||||
// RegisterResolvers adds query/mutation resolvers to the GraphQL schema
|
||||
RegisterResolvers(schema *graphql.Schema)
|
||||
}
|
||||
```
|
||||
|
||||
This means a subsystem like go-ml exposes:
|
||||
- **REST:** `POST /v1/ml/generate` (existing)
|
||||
- **GraphQL:** `mutation { mlGenerate(prompt: "...", backend: "mlx") { response, model } }` (same handler)
|
||||
- **MCP:** `ml_generate` tool (existing)
- **WebSocket:** events on the `ml.generate` channel (existing)
|
||||
|
||||
Four protocols, one set of handlers.
|
||||
|
||||
### Ecosystem Integration
|
||||
|
||||
| Plugin | Option | Purpose | Priority |
|
||||
|--------|--------|---------|----------|
|
||||
| ~~gin-contrib/i18n~~ | ~~`WithI18n()`~~ | ~~Locale detection via Accept-Language. Custom middleware using `golang.org/x/text/language`.~~ | ~~**Done**~~ |
|
||||
| [gin-contrib/graceful](https://github.com/gin-contrib/graceful) | — | Already implemented in Engine.Serve(). Could swap to this for more robust lifecycle management if needed. | — |
|
||||
| [gin-contrib/requestid](https://github.com/gin-contrib/requestid) | — | Already implemented. Theirs uses UUID, ours uses hex. Could swap for standards compliance. | — |
|
||||
|
||||
### Implementation Order
|
||||
|
||||
**Wave 1 (gateway hardening):** ~~Authentik, secure, slog, timeout, gzip, static~~ **DONE** (20 Feb 2026)
|
||||
**Wave 2 (performance + auth):** ~~cache, sessions, authz, brotli~~ **DONE** (20 Feb 2026)
|
||||
**Wave 3 (network + streaming):** ~~httpsign, sse, location, i18n, gqlgen~~ **DONE** (20 Feb 2026)
|
||||
**Wave 4 (observability):** ~~pprof, expvar, tracing~~ **DONE** (21 Feb 2026)
|
||||
|
||||
Each wave adds `With*()` options + tests. No breaking changes — existing code continues to work without any new options enabled.
|
||||
|
||||
## Authentik Integration
|
||||
|
||||
[Authentik](https://goauthentik.io/) is the identity provider and edge auth proxy. It handles user registration, login, MFA, social auth, SAML, and OIDC — so go-api doesn't have to.
|
||||
|
||||
### Two Integration Modes
|
||||
|
||||
**1. Forward Auth (web traffic)**
|
||||
|
||||
Traefik sits in front of go-api. For web routes, Traefik's `forwardAuth` middleware checks with Authentik before passing the request through. Authentik handles login flows, session cookies, and consent. go-api receives pre-authenticated requests with identity headers.
|
||||
|
||||
```
|
||||
Browser → Traefik → Authentik (forward auth) → go-api
|
||||
↓
|
||||
Login page (if unauthenticated)
|
||||
```
|
||||
|
||||
go-api reads trusted headers set by Authentik:
|
||||
```
|
||||
X-Authentik-Username: alice
|
||||
X-Authentik-Groups: admins,developers
|
||||
X-Authentik-Email: alice@example.com
|
||||
X-Authentik-Uid: <uuid>
|
||||
X-Authentik-Jwt: <signed token>
|
||||
```
|
||||
|
||||
**2. OIDC Token Validation (API traffic)**
|
||||
|
||||
API clients (SDKs, CLI tools, network peers) authenticate directly with Authentik's OAuth2 token endpoint, then send the JWT to go-api. go-api validates the JWT using Authentik's OIDC discovery endpoint (`.well-known/openid-configuration`).
|
||||
|
||||
```
|
||||
SDK client → Authentik (token endpoint) → receives JWT
|
||||
SDK client → go-api (Authorization: Bearer <jwt>) → validates via OIDC
|
||||
```
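A client-side sketch of the direct OIDC mode using `golang.org/x/oauth2/clientcredentials`; the token URL path, client credentials, and target endpoint are placeholders, not confirmed values:

```go
package main

import (
	"context"
	"log"

	"golang.org/x/oauth2/clientcredentials"
)

func main() {
	ctx := context.Background()
	conf := &clientcredentials.Config{
		ClientID:     "core-api-client",
		ClientSecret: "REDACTED",
		// Assumed Authentik token endpoint for the application.
		TokenURL: "https://auth.lthn.ai/application/o/token/",
	}

	// Client returns an *http.Client that fetches a token and attaches
	// Authorization: Bearer <jwt> to every request.
	httpClient := conf.Client(ctx)

	resp, err := httpClient.Get("https://api.example.com/v1/ml/models") // hypothetical endpoint
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println(resp.Status)
}
```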
|
||||
|
||||
### Implementation in go-api
|
||||
|
||||
```go
|
||||
engine := api.New(
|
||||
api.WithAuthentik(api.AuthentikConfig{
|
||||
Issuer: "https://auth.lthn.ai/application/o/core-api/",
|
||||
ClientID: "core-api",
|
||||
TrustedProxy: true, // Trust X-Authentik-* headers from Traefik
|
||||
}),
|
||||
)
|
||||
```
|
||||
|
||||
`WithAuthentik()` adds middleware that:
|
||||
1. Checks for `X-Authentik-Jwt` header (forward auth mode) — validates signature, extracts claims
|
||||
2. Falls back to `Authorization: Bearer <jwt>` header (direct OIDC mode) — validates via JWKS
|
||||
3. Populates `c.Set("user", AuthentikUser{...})` in the Gin context for handlers to use
|
||||
4. Skips /health, /swagger, and any public paths
|
||||
|
||||
```go
|
||||
// In any handler:
|
||||
func (r *Routes) ListItems(c *gin.Context) {
|
||||
user := api.GetUser(c) // Returns *AuthentikUser or nil
|
||||
if user == nil {
|
||||
c.JSON(401, api.Fail("unauthorised", "Authentication required"))
|
||||
return
|
||||
}
|
||||
// user.Username, user.Groups, user.Email, user.UID available
|
||||
}
|
||||
```
|
||||
|
||||
### Auth Layers
|
||||
|
||||
```
|
||||
Authentik (identity) → WHO is this? (user, groups, email)
|
||||
↓
|
||||
go-api middleware → IS their token valid? (JWT verification)
|
||||
↓
|
||||
Casbin authz (optional) → CAN they do this? (role → endpoint policies)
|
||||
↓
|
||||
Handler → DOES this (business logic)
|
||||
```
|
||||
|
||||
Phase 1 bearer auth continues to work alongside Authentik — useful for service-to-service tokens, CI/CD, and development. `WithBearerAuth` and `WithAuthentik` can coexist.
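A sketch of both layers enabled on one engine; the bearer option appears as both `WithAuth(BearerToken(...))` and `WithBearerAuth` in this plan, so treat the spelling below as an assumption:

```go
engine := api.New(
	api.WithAddr(":8080"),
	// Service-to-service, CI/CD and development traffic: static bearer token.
	api.WithAuth(api.BearerToken("ci-service-token")),
	// Browser and SDK traffic: Authentik forward auth + OIDC validation.
	api.WithAuthentik(api.AuthentikConfig{
		Issuer:       "https://auth.lthn.ai/application/o/core-api/",
		ClientID:     "core-api",
		TrustedProxy: true,
	}),
)
```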
|
||||
|
||||
### Authentik Deployment
|
||||
|
||||
Authentik runs as a Docker service alongside go-api, fronted by Traefik:
|
||||
- **auth.lthn.ai** — Authentik UI + OIDC endpoints (production)
|
||||
- **auth.leth.in** — Authentik for devnet/testnet
|
||||
- Traefik routes `/outpost.goauthentik.io/` to Authentik's embedded outpost for forward auth
|
||||
|
||||
### Dependencies
|
||||
|
||||
| Package | Purpose |
|
||||
|---------|---------|
|
||||
| `github.com/coreos/go-oidc/v3` | OIDC discovery + JWT validation |
|
||||
| `golang.org/x/oauth2` | OAuth2 token exchange (for server-side flows) |
|
||||
|
||||
Both are standard Go libraries with no heavy dependencies.
|
||||
|
||||
## Non-Goals
|
||||
|
||||
- gRPC gateway
|
||||
- Built-in user registration/login (Authentik handles this)
|
||||
- API versioning beyond /v1/ prefix
|
||||
|
||||
## Success Criteria
|
||||
|
||||
### Phase 1 (Done)
|
||||
|
||||
1. ~~`core api serve` starts a Gin server with registered subsystem routes~~
|
||||
2. ~~WebSocket subscriptions work alongside REST~~
|
||||
3. ~~Swagger UI accessible at `/swagger/`~~
|
||||
4. ~~All endpoints return consistent Response envelope~~
|
||||
5. ~~Bearer token auth protects all routes~~
|
||||
6. ~~First subsystem integration (go-ml/api/) proves the pattern~~
|
||||
|
||||
### Phase 2 (Done)
|
||||
|
||||
7. ~~Security headers, compression, and caching active in production~~
|
||||
8. ~~Session-based auth alongside bearer tokens~~
|
||||
9. ~~HTTP signature verification for Lethean network peers~~
|
||||
10. ~~Static file serving for docs site and SDK downloads~~
|
||||
11. ~~GraphQL endpoint at `/graphql` with playground~~
|
||||
|
||||
### Phase 3 (Done)
|
||||
|
||||
12. ~~`core api spec` emits valid OpenAPI 3.1 JSON via runtime SpecBuilder~~
|
||||
13. ~~`core api sdk` generates SDKs for 11 languages via openapi-generator-cli~~
|
||||
14. ~~MCP tools bridged to REST endpoints via ToolBridge + BridgeToAPI~~
|
||||
15. ~~OpenAPI spec includes Response[T] envelope wrapping~~
|
||||
16. ~~Spec export to file in JSON and YAML formats~~
|
||||
1503 docs/plans/2026-02-20-go-api-plan.md (new file; diff suppressed because it is too large)
155 docs/plans/2026-02-21-core-help-design.md (new file)
# core.help Documentation Website — Design
|
||||
|
||||
**Date:** 2026-02-21
|
||||
**Author:** Virgil
|
||||
**Status:** Design approved
|
||||
**Domain:** https://core.help
|
||||
|
||||
## Problem
|
||||
|
||||
Documentation is scattered across 39 repos (18 Go packages, 20 PHP packages, 1 CLI). There is no unified docs site. Developers need a single entry point to find CLI commands, Go package APIs, MCP tool references, and PHP module guides.
|
||||
|
||||
## Solution
|
||||
|
||||
A Hugo + Docsy static site at core.help, built from existing markdown docs aggregated by `core docs sync`. No new content — just collect and present what already exists across the ecosystem.
|
||||
|
||||
## Architecture
|
||||
|
||||
### Stack
|
||||
|
||||
- **Hugo** — Go-native static site generator, sub-second builds
|
||||
- **Docsy theme** — Purpose-built for technical docs (used by Kubernetes, gRPC, Knative)
|
||||
- **BunnyCDN** — Static hosting with pull zone
|
||||
- **`core docs sync --target hugo`** — Collects markdown from all repos into Hugo content tree
|
||||
|
||||
### Why Hugo + Docsy (not VitePress or mdBook)
|
||||
|
||||
- Go-native, no Node.js dependency
|
||||
- Handles multi-section navigation (CLI, Go packages, PHP modules, MCP tools)
|
||||
- Sub-second builds for ~250 markdown files
|
||||
- Docsy has built-in search, versioned nav, API reference sections
|
||||
|
||||
## Content Structure
|
||||
|
||||
```
|
||||
docs-site/
|
||||
├── hugo.toml
|
||||
├── content/
|
||||
│ ├── _index.md # Landing page
|
||||
│ ├── getting-started/ # CLI top-level guides
|
||||
│ │ ├── _index.md
|
||||
│ │ ├── installation.md
|
||||
│ │ ├── configuration.md
|
||||
│ │ ├── user-guide.md
|
||||
│ │ ├── troubleshooting.md
|
||||
│ │ └── faq.md
|
||||
│ ├── cli/ # CLI command reference (43 commands)
|
||||
│ │ ├── _index.md
|
||||
│ │ ├── dev/ # core dev commit, push, pull, etc.
|
||||
│ │ ├── ai/ # core ai commands
|
||||
│ │ ├── go/ # core go test, lint, etc.
|
||||
│ │ └── ...
|
||||
│ ├── go/ # Go ecosystem packages (18)
|
||||
│ │ ├── _index.md # Ecosystem overview
|
||||
│ │ ├── go-api/ # README + architecture/development/history
|
||||
│ │ ├── go-ai/
|
||||
│ │ ├── go-mlx/
|
||||
│ │ ├── go-i18n/
|
||||
│ │ └── ...
|
||||
│ ├── mcp/ # MCP tool reference (49 tools)
|
||||
│ │ ├── _index.md
|
||||
│ │ ├── file-operations.md
|
||||
│ │ ├── process-management.md
|
||||
│ │ ├── rag.md
|
||||
│ │ └── ...
|
||||
│ ├── php/ # PHP packages (from core-php/docs/packages/)
|
||||
│ │ ├── _index.md
|
||||
│ │ ├── admin/
|
||||
│ │ ├── tenant/
|
||||
│ │ ├── commerce/
|
||||
│ │ └── ...
|
||||
│ └── kb/ # Knowledge base (wiki pages from go-mlx, go-i18n)
|
||||
│ ├── _index.md
|
||||
│ ├── mlx/
|
||||
│ └── i18n/
|
||||
├── static/ # Logos, favicons
|
||||
├── layouts/ # Custom template overrides (minimal)
|
||||
└── go.mod # Hugo modules (Docsy as module dep)
|
||||
```
|
||||
|
||||
## Sync Pipeline
|
||||
|
||||
`core docs sync --target hugo --output site/content/` performs:
|
||||
|
||||
### Source Mapping
|
||||
|
||||
```
|
||||
cli/docs/index.md → content/getting-started/_index.md
|
||||
cli/docs/getting-started.md → content/getting-started/installation.md
|
||||
cli/docs/user-guide.md → content/getting-started/user-guide.md
|
||||
cli/docs/configuration.md → content/getting-started/configuration.md
|
||||
cli/docs/troubleshooting.md → content/getting-started/troubleshooting.md
|
||||
cli/docs/faq.md → content/getting-started/faq.md
|
||||
|
||||
core/docs/cmd/**/*.md → content/cli/**/*.md
|
||||
|
||||
go-*/README.md → content/go/{name}/_index.md
|
||||
go-*/docs/*.md → content/go/{name}/*.md
|
||||
go-*/KB/*.md → content/kb/{name-suffix}/*.md
|
||||
|
||||
core-*/docs/**/*.md → content/php/{name-suffix}/**/*.md
|
||||
```
|
||||
|
||||
### Front Matter Injection
|
||||
|
||||
If a markdown file doesn't start with `---`, prepend:
|
||||
|
||||
```yaml
|
||||
---
|
||||
title: "{derived from filename}"
|
||||
linkTitle: "{short name}"
|
||||
weight: {auto-incremented}
|
||||
---
|
||||
```
|
||||
|
||||
No other content transformations. Markdown stays as-is.
|
||||
|
||||
### Build & Deploy
|
||||
|
||||
```bash
|
||||
core docs sync --target hugo --output docs-site/content/
|
||||
cd docs-site && hugo build
|
||||
hugo deploy --target bunnycdn
|
||||
```
|
||||
|
||||
Hugo deploy config in `hugo.toml`:
|
||||
|
||||
```toml
|
||||
[deployment]
|
||||
[[deployment.targets]]
|
||||
name = "bunnycdn"
|
||||
URL = "s3://core-help?endpoint=storage.bunnycdn.com®ion=auto"
|
||||
```
|
||||
|
||||
Credentials via env vars.
|
||||
|
||||
## Registry
|
||||
|
||||
All 39 repos registered in `.core/repos.yaml` with `docs: true`. Go repos use explicit `path:` fields since they live outside the PHP `base_path`. `FindRegistry()` checks `.core/repos.yaml` alongside `repos.yaml`.
|
||||
|
||||
## Prerequisites Completed
|
||||
|
||||
- [x] `.core/repos.yaml` created with all 39 repos
|
||||
- [x] `FindRegistry()` updated to find `.core/repos.yaml`
|
||||
- [x] `Repo.Path` supports explicit YAML override
|
||||
- [x] go-api docs gap filled (architecture.md, development.md, history.md)
|
||||
- [x] All 18 Go repos have standard docs trio
|
||||
|
||||
## What Remains (Implementation Plan)
|
||||
|
||||
1. Create docs-site repo with Hugo + Docsy scaffold
|
||||
2. Extend `core docs sync` with `--target hugo` mode
|
||||
3. Write section _index.md files (landing page, section intros)
|
||||
4. Hugo config (navigation, search, theme colours)
|
||||
5. BunnyCDN deployment config
|
||||
6. CI pipeline on Forge (optional — can deploy manually initially)
|
||||
642 docs/plans/2026-02-21-core-help-plan.md (new file)
# core.help Hugo Documentation Site — Implementation Plan
|
||||
|
||||
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
|
||||
|
||||
**Goal:** Build a Hugo + Docsy documentation site at core.help that aggregates markdown from 39 repos via `core docs sync --target hugo`.
|
||||
|
||||
**Architecture:** Hugo static site with Docsy theme, populated by extending `core docs sync` with a `--target hugo` flag that maps repo docs into Hugo's `content/` tree with auto-injected front matter. Deploy to BunnyCDN.
|
||||
|
||||
**Tech Stack:** Hugo (Go SSG), Docsy theme (Hugo module), BunnyCDN, `core docs sync` CLI
|
||||
|
||||
---
|
||||
|
||||
## Context
|
||||
|
||||
The docs sync command lives in `/Users/snider/Code/host-uk/cli/cmd/docs/`. The site will be scaffolded at `/Users/snider/Code/host-uk/docs-site/`. The registry at `/Users/snider/Code/host-uk/.core/repos.yaml` already contains all 39 repos (20 PHP + 18 Go + 1 CLI) with explicit paths for Go repos.
|
||||
|
||||
Key files:
|
||||
- `/Users/snider/Code/host-uk/cli/cmd/docs/cmd_sync.go` — sync command (modify)
|
||||
- `/Users/snider/Code/host-uk/cli/cmd/docs/cmd_scan.go` — repo scanner (modify)
|
||||
- `/Users/snider/Code/host-uk/docs-site/` — Hugo site (create)
|
||||
|
||||
## Task 1: Scaffold Hugo + Docsy site
|
||||
|
||||
**Files:**
|
||||
- Create: `/Users/snider/Code/host-uk/docs-site/hugo.toml`
|
||||
- Create: `/Users/snider/Code/host-uk/docs-site/go.mod`
|
||||
- Create: `/Users/snider/Code/host-uk/docs-site/content/_index.md`
|
||||
- Create: `/Users/snider/Code/host-uk/docs-site/content/getting-started/_index.md`
|
||||
- Create: `/Users/snider/Code/host-uk/docs-site/content/cli/_index.md`
|
||||
- Create: `/Users/snider/Code/host-uk/docs-site/content/go/_index.md`
|
||||
- Create: `/Users/snider/Code/host-uk/docs-site/content/mcp/_index.md`
|
||||
- Create: `/Users/snider/Code/host-uk/docs-site/content/php/_index.md`
|
||||
- Create: `/Users/snider/Code/host-uk/docs-site/content/kb/_index.md`
|
||||
|
||||
This is the one-time Hugo scaffolding. No tests — just files.
|
||||
|
||||
**`hugo.toml`:**
|
||||
```toml
|
||||
baseURL = "https://core.help/"
|
||||
title = "Core Documentation"
|
||||
languageCode = "en"
|
||||
defaultContentLanguage = "en"
|
||||
|
||||
enableRobotsTXT = true
|
||||
enableGitInfo = false
|
||||
|
||||
[outputs]
|
||||
home = ["HTML", "JSON"]
|
||||
section = ["HTML"]
|
||||
|
||||
[params]
|
||||
description = "Documentation for the Core CLI, Go packages, PHP modules, and MCP tools"
|
||||
copyright = "Host UK — EUPL-1.2"
|
||||
|
||||
[params.ui]
|
||||
sidebar_menu_compact = true
|
||||
breadcrumb_disable = false
|
||||
sidebar_search_disable = false
|
||||
navbar_logo = false
|
||||
|
||||
[params.ui.readingtime]
|
||||
enable = false
|
||||
|
||||
[module]
|
||||
proxy = "direct"
|
||||
|
||||
[module.hugoVersion]
|
||||
extended = true
|
||||
min = "0.120.0"
|
||||
|
||||
[[module.imports]]
|
||||
path = "github.com/google/docsy"
|
||||
disable = false
|
||||
|
||||
[markup.goldmark.renderer]
|
||||
unsafe = true
|
||||
|
||||
[menu]
|
||||
[[menu.main]]
|
||||
name = "Getting Started"
|
||||
weight = 10
|
||||
url = "/getting-started/"
|
||||
[[menu.main]]
|
||||
name = "CLI Reference"
|
||||
weight = 20
|
||||
url = "/cli/"
|
||||
[[menu.main]]
|
||||
name = "Go Packages"
|
||||
weight = 30
|
||||
url = "/go/"
|
||||
[[menu.main]]
|
||||
name = "MCP Tools"
|
||||
weight = 40
|
||||
url = "/mcp/"
|
||||
[[menu.main]]
|
||||
name = "PHP Packages"
|
||||
weight = 50
|
||||
url = "/php/"
|
||||
[[menu.main]]
|
||||
name = "Knowledge Base"
|
||||
weight = 60
|
||||
url = "/kb/"
|
||||
```
|
||||
|
||||
**`go.mod`:**
|
||||
```
|
||||
module github.com/host-uk/docs-site
|
||||
|
||||
go 1.22
|
||||
|
||||
require github.com/google/docsy v0.11.0
|
||||
```
|
||||
|
||||
Note: Run `hugo mod get` after creating these files to populate `go.sum` and download Docsy.
|
||||
|
||||
**Section `_index.md` files** — each needs Hugo front matter:
|
||||
|
||||
`content/_index.md`:
|
||||
```markdown
|
||||
---
|
||||
title: "Core Documentation"
|
||||
description: "Documentation for the Core CLI, Go packages, PHP modules, and MCP tools"
|
||||
---
|
||||
|
||||
Welcome to the Core ecosystem documentation.
|
||||
|
||||
## Sections
|
||||
|
||||
- [Getting Started](/getting-started/) — Installation, configuration, and first steps
|
||||
- [CLI Reference](/cli/) — Command reference for `core` CLI
|
||||
- [Go Packages](/go/) — Go ecosystem package documentation
|
||||
- [MCP Tools](/mcp/) — Model Context Protocol tool reference
|
||||
- [PHP Packages](/php/) — PHP module documentation
|
||||
- [Knowledge Base](/kb/) — Wiki articles and deep dives
|
||||
```
|
||||
|
||||
`content/getting-started/_index.md`:
|
||||
```markdown
|
||||
---
|
||||
title: "Getting Started"
|
||||
linkTitle: "Getting Started"
|
||||
weight: 10
|
||||
description: "Installation, configuration, and first steps with the Core CLI"
|
||||
---
|
||||
```
|
||||
|
||||
`content/cli/_index.md`:
|
||||
```markdown
|
||||
---
|
||||
title: "CLI Reference"
|
||||
linkTitle: "CLI Reference"
|
||||
weight: 20
|
||||
description: "Command reference for the core CLI tool"
|
||||
---
|
||||
```
|
||||
|
||||
`content/go/_index.md`:
|
||||
```markdown
|
||||
---
|
||||
title: "Go Packages"
|
||||
linkTitle: "Go Packages"
|
||||
weight: 30
|
||||
description: "Documentation for the Go ecosystem packages"
|
||||
---
|
||||
```
|
||||
|
||||
`content/mcp/_index.md`:
|
||||
```markdown
|
||||
---
|
||||
title: "MCP Tools"
|
||||
linkTitle: "MCP Tools"
|
||||
weight: 40
|
||||
description: "Model Context Protocol tool reference — file operations, RAG, ML inference, process management"
|
||||
---
|
||||
```
|
||||
|
||||
`content/php/_index.md`:
|
||||
```markdown
|
||||
---
|
||||
title: "PHP Packages"
|
||||
linkTitle: "PHP Packages"
|
||||
weight: 50
|
||||
description: "Documentation for the PHP module ecosystem"
|
||||
---
|
||||
```
|
||||
|
||||
`content/kb/_index.md`:
|
||||
```markdown
|
||||
---
|
||||
title: "Knowledge Base"
|
||||
linkTitle: "Knowledge Base"
|
||||
weight: 60
|
||||
description: "Wiki articles, deep dives, and reference material"
|
||||
---
|
||||
```
|
||||
|
||||
**Verify:** After creating files, run from `/Users/snider/Code/host-uk/docs-site/`:
|
||||
```bash
|
||||
hugo mod get
|
||||
hugo server
|
||||
```
|
||||
The site should start and show the landing page with Docsy theme at `localhost:1313`.
|
||||
|
||||
**Commit:**
|
||||
```bash
|
||||
cd /Users/snider/Code/host-uk/docs-site
|
||||
git init
|
||||
git add .
|
||||
git commit -m "feat: scaffold Hugo + Docsy documentation site"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Task 2: Extend scanRepoDocs to collect KB/ and README
|
||||
|
||||
**Files:**
|
||||
- Modify: `/Users/snider/Code/host-uk/cli/cmd/docs/cmd_scan.go`
|
||||
|
||||
Currently `scanRepoDocs` only collects files from `docs/`. For the Hugo target we also need:
|
||||
- `KB/**/*.md` files (wiki pages from go-mlx, go-i18n)
|
||||
- `README.md` content (becomes the package _index.md)
|
||||
|
||||
Add a `KBFiles []string` field to `RepoDocInfo` and scan `KB/` alongside `docs/`:
|
||||
|
||||
```go
|
||||
type RepoDocInfo struct {
|
||||
Name string
|
||||
Path string
|
||||
HasDocs bool
|
||||
Readme string
|
||||
ClaudeMd string
|
||||
Changelog string
|
||||
DocsFiles []string // All files in docs/ directory (recursive)
|
||||
KBFiles []string // All files in KB/ directory (recursive)
|
||||
}
|
||||
```
|
||||
|
||||
In `scanRepoDocs`, after the `docs/` walk, add a second walk for `KB/`:
|
||||
|
||||
```go
|
||||
// Recursively scan KB/ directory for .md files
|
||||
kbDir := filepath.Join(repo.Path, "KB")
|
||||
if _, err := io.Local.List(kbDir); err == nil {
|
||||
_ = filepath.WalkDir(kbDir, func(path string, d fs.DirEntry, err error) error {
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
if d.IsDir() || !strings.HasSuffix(d.Name(), ".md") {
|
||||
return nil
|
||||
}
|
||||
relPath, _ := filepath.Rel(kbDir, path)
|
||||
info.KBFiles = append(info.KBFiles, relPath)
|
||||
info.HasDocs = true
|
||||
return nil
|
||||
})
|
||||
}
|
||||
```
|
||||
|
||||
**Tests:** The existing tests should still pass. No new test file needed — this is a data-collection change.
|
||||
|
||||
**Verify:** `cd /Users/snider/Code/host-uk/cli && GOWORK=off go build ./cmd/docs/...`
|
||||
|
||||
**Commit:**
|
||||
```bash
|
||||
git add cmd/docs/cmd_scan.go
|
||||
git commit -m "feat(docs): scan KB/ directory alongside docs/"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Task 3: Add `--target hugo` flag and Hugo sync logic
|
||||
|
||||
**Files:**
|
||||
- Modify: `/Users/snider/Code/host-uk/cli/cmd/docs/cmd_sync.go`
|
||||
|
||||
This is the main task. Add a `--target` flag (default `"php"`) and a new `runHugoSync` function that maps repos to Hugo's content tree.
|
||||
|
||||
**Add flag variable and registration:**
|
||||
|
||||
```go
|
||||
var (
|
||||
docsSyncRegistryPath string
|
||||
docsSyncDryRun bool
|
||||
docsSyncOutputDir string
|
||||
docsSyncTarget string
|
||||
)
|
||||
|
||||
func init() {
|
||||
docsSyncCmd.Flags().StringVar(&docsSyncRegistryPath, "registry", "", i18n.T("common.flag.registry"))
|
||||
docsSyncCmd.Flags().BoolVar(&docsSyncDryRun, "dry-run", false, i18n.T("cmd.docs.sync.flag.dry_run"))
|
||||
docsSyncCmd.Flags().StringVar(&docsSyncOutputDir, "output", "", i18n.T("cmd.docs.sync.flag.output"))
|
||||
docsSyncCmd.Flags().StringVar(&docsSyncTarget, "target", "php", "Target format: php (default) or hugo")
|
||||
}
|
||||
```
|
||||
|
||||
**Update RunE to pass target:**
|
||||
```go
|
||||
RunE: func(cmd *cli.Command, args []string) error {
|
||||
return runDocsSync(docsSyncRegistryPath, docsSyncOutputDir, docsSyncDryRun, docsSyncTarget)
|
||||
},
|
||||
```
|
||||
|
||||
**Update `runDocsSync` signature and add target dispatch:**
|
||||
```go
|
||||
func runDocsSync(registryPath string, outputDir string, dryRun bool, target string) error {
|
||||
reg, basePath, err := loadRegistry(registryPath)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
switch target {
|
||||
case "hugo":
|
||||
return runHugoSync(reg, basePath, outputDir, dryRun)
|
||||
default:
|
||||
return runPHPSync(reg, basePath, outputDir, dryRun)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Rename current sync body to `runPHPSync`** — extract lines 67-159 of current `runDocsSync` into `runPHPSync(reg, basePath, outputDir string, dryRun bool) error`. This is a pure extract, no logic changes.
|
||||
|
||||
**Add `hugoOutputName` mapping function:**
|
||||
```go
|
||||
// hugoOutputName maps repo name to Hugo content section and folder.
|
||||
// Returns (section, folder) where section is the top-level content dir.
|
||||
func hugoOutputName(repoName string) (string, string) {
|
||||
// CLI guides
|
||||
if repoName == "cli" {
|
||||
return "getting-started", ""
|
||||
}
|
||||
// Core CLI command docs
|
||||
if repoName == "core" {
|
||||
return "cli", ""
|
||||
}
|
||||
// Go packages
|
||||
if strings.HasPrefix(repoName, "go-") {
|
||||
return "go", repoName
|
||||
}
|
||||
// PHP packages
|
||||
if strings.HasPrefix(repoName, "core-") {
|
||||
return "php", strings.TrimPrefix(repoName, "core-")
|
||||
}
|
||||
return "go", repoName
|
||||
}
|
||||
```
|
||||
|
||||
**Add front matter injection helper:**
|
||||
```go
|
||||
// injectFrontMatter prepends Hugo front matter to markdown content if missing.
|
||||
func injectFrontMatter(content []byte, title string, weight int) []byte {
|
||||
// Already has front matter
|
||||
if bytes.HasPrefix(bytes.TrimSpace(content), []byte("---")) {
|
||||
return content
|
||||
}
|
||||
fm := fmt.Sprintf("---\ntitle: %q\nweight: %d\n---\n\n", title, weight)
|
||||
return append([]byte(fm), content...)
|
||||
}
|
||||
|
||||
// titleFromFilename derives a human-readable title from a filename.
|
||||
func titleFromFilename(filename string) string {
|
||||
name := strings.TrimSuffix(filepath.Base(filename), ".md")
|
||||
name = strings.ReplaceAll(name, "-", " ")
|
||||
name = strings.ReplaceAll(name, "_", " ")
|
||||
// Title case
|
||||
words := strings.Fields(name)
|
||||
for i, w := range words {
|
||||
if len(w) > 0 {
|
||||
words[i] = strings.ToUpper(w[:1]) + w[1:]
|
||||
}
|
||||
}
|
||||
return strings.Join(words, " ")
|
||||
}
|
||||
```
|
||||
|
||||
**Add `runHugoSync` function:**
|
||||
```go
|
||||
func runHugoSync(reg *repos.Registry, basePath string, outputDir string, dryRun bool) error {
|
||||
if outputDir == "" {
|
||||
outputDir = filepath.Join(basePath, "docs-site", "content")
|
||||
}
|
||||
|
||||
// Scan all repos
|
||||
var docsInfo []RepoDocInfo
|
||||
for _, repo := range reg.List() {
|
||||
if repo.Name == "core-template" || repo.Name == "core-claude" {
|
||||
continue
|
||||
}
|
||||
info := scanRepoDocs(repo)
|
||||
if info.HasDocs {
|
||||
docsInfo = append(docsInfo, info)
|
||||
}
|
||||
}
|
||||
|
||||
if len(docsInfo) == 0 {
|
||||
cli.Text("No documentation found")
|
||||
return nil
|
||||
}
|
||||
|
||||
cli.Print("\n Hugo sync: %d repos with docs → %s\n\n", len(docsInfo), outputDir)
|
||||
|
||||
// Show plan
|
||||
for _, info := range docsInfo {
|
||||
section, folder := hugoOutputName(info.Name)
|
||||
target := section
|
||||
if folder != "" {
|
||||
target = section + "/" + folder
|
||||
}
|
||||
fileCount := len(info.DocsFiles) + len(info.KBFiles)
|
||||
if info.Readme != "" {
|
||||
fileCount++
|
||||
}
|
||||
cli.Print(" %s → %s/ (%d files)\n", repoNameStyle.Render(info.Name), target, fileCount)
|
||||
}
|
||||
|
||||
if dryRun {
|
||||
cli.Print("\n Dry run — no files written\n")
|
||||
return nil
|
||||
}
|
||||
|
||||
cli.Blank()
|
||||
if !confirm("Sync to Hugo content directory?") {
|
||||
cli.Text("Aborted")
|
||||
return nil
|
||||
}
|
||||
|
||||
cli.Blank()
|
||||
var synced int
|
||||
for _, info := range docsInfo {
|
||||
section, folder := hugoOutputName(info.Name)
|
||||
|
||||
// Build destination path
|
||||
destDir := filepath.Join(outputDir, section)
|
||||
if folder != "" {
|
||||
destDir = filepath.Join(destDir, folder)
|
||||
}
|
||||
|
||||
// Copy docs/ files
|
||||
weight := 10
|
||||
docsDir := filepath.Join(info.Path, "docs")
|
||||
for _, f := range info.DocsFiles {
|
||||
src := filepath.Join(docsDir, f)
|
||||
dst := filepath.Join(destDir, f)
|
||||
if err := copyWithFrontMatter(src, dst, weight); err != nil {
|
||||
cli.Print(" %s %s: %s\n", errorStyle.Render("✗"), f, err)
|
||||
continue
|
||||
}
|
||||
weight += 10
|
||||
}
|
||||
|
||||
// Copy README.md as _index.md (if not CLI/core which use their own index)
|
||||
if info.Readme != "" && folder != "" {
|
||||
dst := filepath.Join(destDir, "_index.md")
|
||||
if err := copyWithFrontMatter(info.Readme, dst, 1); err != nil {
|
||||
cli.Print(" %s README: %s\n", errorStyle.Render("✗"), err)
|
||||
}
|
||||
}
|
||||
|
||||
// Copy KB/ files to kb/{suffix}/
|
||||
if len(info.KBFiles) > 0 {
|
||||
// Extract suffix: go-mlx → mlx, go-i18n → i18n
|
||||
suffix := strings.TrimPrefix(info.Name, "go-")
|
||||
kbDestDir := filepath.Join(outputDir, "kb", suffix)
|
||||
kbDir := filepath.Join(info.Path, "KB")
|
||||
kbWeight := 10
|
||||
for _, f := range info.KBFiles {
|
||||
src := filepath.Join(kbDir, f)
|
||||
dst := filepath.Join(kbDestDir, f)
|
||||
if err := copyWithFrontMatter(src, dst, kbWeight); err != nil {
|
||||
cli.Print(" %s KB/%s: %s\n", errorStyle.Render("✗"), f, err)
|
||||
continue
|
||||
}
|
||||
kbWeight += 10
|
||||
}
|
||||
}
|
||||
|
||||
cli.Print(" %s %s\n", successStyle.Render("✓"), info.Name)
|
||||
synced++
|
||||
}
|
||||
|
||||
cli.Print("\n Synced %d repos to Hugo content\n", synced)
|
||||
return nil
|
||||
}
|
||||
|
||||
// copyWithFrontMatter copies a markdown file, injecting front matter if missing.
|
||||
func copyWithFrontMatter(src, dst string, weight int) error {
|
||||
if err := io.Local.EnsureDir(filepath.Dir(dst)); err != nil {
|
||||
return err
|
||||
}
|
||||
content, err := io.Local.Read(src)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
title := titleFromFilename(src)
|
||||
result := injectFrontMatter([]byte(content), title, weight)
|
||||
return io.Local.Write(dst, string(result))
|
||||
}
|
||||
```
|
||||
|
||||
**Add imports** at top of file:
|
||||
```go
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"forge.lthn.ai/core/go/pkg/cli"
|
||||
"forge.lthn.ai/core/go/pkg/i18n"
|
||||
"forge.lthn.ai/core/go/pkg/io"
|
||||
"forge.lthn.ai/core/go/pkg/repos"
|
||||
)
|
||||
```
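Optionally, a table-driven test sketch for the pure helpers above; the file name and package name are assumptions about the existing `cmd/docs` layout:

```go
// cmd/docs/cmd_sync_hugo_test.go (hypothetical file name)
package docs

import "testing"

func TestHugoOutputName(t *testing.T) {
	cases := []struct{ in, section, folder string }{
		{"cli", "getting-started", ""},
		{"core", "cli", ""},
		{"go-mlx", "go", "go-mlx"},
		{"core-commerce", "php", "commerce"},
	}
	for _, c := range cases {
		section, folder := hugoOutputName(c.in)
		if section != c.section || folder != c.folder {
			t.Errorf("hugoOutputName(%q) = (%q, %q), want (%q, %q)",
				c.in, section, folder, c.section, c.folder)
		}
	}
}

func TestTitleFromFilename(t *testing.T) {
	if got := titleFromFilename("getting-started.md"); got != "Getting Started" {
		t.Errorf("titleFromFilename: got %q, want %q", got, "Getting Started")
	}
}
```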
|
||||
|
||||
**Verify:** `cd /Users/snider/Code/host-uk/cli && GOWORK=off go build ./cmd/docs/...`
|
||||
|
||||
**Commit:**
|
||||
```bash
|
||||
git add cmd/docs/cmd_sync.go
|
||||
git commit -m "feat(docs): add --target hugo sync mode for core.help"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Task 4: Test the full pipeline
|
||||
|
||||
**No code changes.** Run the pipeline end-to-end.
|
||||
|
||||
**Step 1:** Sync docs to Hugo:
|
||||
```bash
|
||||
cd /Users/snider/Code/host-uk
|
||||
core docs sync --target hugo --dry-run
|
||||
```
|
||||
Verify all 39 repos appear with correct section mappings.
|
||||
|
||||
**Step 2:** Run actual sync:
|
||||
```bash
|
||||
core docs sync --target hugo
|
||||
```
|
||||
|
||||
**Step 3:** Build and preview:
|
||||
```bash
|
||||
cd /Users/snider/Code/host-uk/docs-site
|
||||
hugo server
|
||||
```
|
||||
Open `localhost:1313` and verify:
|
||||
- Landing page renders with section links
|
||||
- Getting Started section has CLI guides
|
||||
- CLI Reference section has command docs
|
||||
- Go Packages section has 18 packages with architecture/development/history
|
||||
- PHP Packages section has PHP module docs
|
||||
- Knowledge Base has MLX and i18n wiki pages
|
||||
- Navigation works, search works
|
||||
|
||||
**Step 4:** Fix any issues found during preview.
|
||||
|
||||
**Commit docs-site content:**
|
||||
```bash
|
||||
cd /Users/snider/Code/host-uk/docs-site
|
||||
git add content/
|
||||
git commit -m "feat: sync initial content from 39 repos"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Task 5: BunnyCDN deployment config
|
||||
|
||||
**Files:**
|
||||
- Modify: `/Users/snider/Code/host-uk/docs-site/hugo.toml`
|
||||
|
||||
Add deployment target:
|
||||
|
||||
```toml
|
||||
[deployment]
|
||||
[[deployment.targets]]
|
||||
name = "production"
|
||||
URL = "s3://core-help?endpoint=storage.bunnycdn.com®ion=auto"
|
||||
```
|
||||
|
||||
Add a `Taskfile.yml` for convenience:
|
||||
|
||||
**Create:** `/Users/snider/Code/host-uk/docs-site/Taskfile.yml`
|
||||
```yaml
|
||||
version: '3'
|
||||
|
||||
tasks:
|
||||
dev:
|
||||
desc: Start Hugo dev server
|
||||
cmds:
|
||||
- hugo server --buildDrafts
|
||||
|
||||
build:
|
||||
desc: Build static site
|
||||
cmds:
|
||||
- hugo --minify
|
||||
|
||||
sync:
|
||||
desc: Sync docs from all repos
|
||||
dir: ..
|
||||
cmds:
|
||||
- core docs sync --target hugo
|
||||
|
||||
deploy:
|
||||
desc: Build and deploy to BunnyCDN
|
||||
cmds:
|
||||
- task: sync
|
||||
- task: build
|
||||
- hugo deploy --target production
|
||||
|
||||
clean:
|
||||
desc: Remove generated content (keeps _index.md files)
|
||||
cmds:
|
||||
- find content -name "*.md" ! -name "_index.md" -delete
|
||||
```
|
||||
|
||||
**Verify:** `task dev` starts the site.
|
||||
|
||||
**Commit:**
|
||||
```bash
|
||||
git add hugo.toml Taskfile.yml
|
||||
git commit -m "feat: add BunnyCDN deployment config and Taskfile"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Dependency Sequencing
|
||||
|
||||
```
|
||||
Task 1 (Hugo scaffold) — independent, do first
|
||||
Task 2 (scan KB/) — independent, can parallel with Task 1
|
||||
Task 3 (--target hugo) — depends on Task 2
|
||||
Task 4 (test pipeline) — depends on Tasks 1 + 3
|
||||
Task 5 (deploy config) — depends on Task 1
|
||||
```
|
||||
|
||||
## Verification
|
||||
|
||||
After all tasks:
|
||||
1. `core docs sync --target hugo` populates `docs-site/content/` from all repos
|
||||
2. `cd docs-site && hugo server` renders the full site
|
||||
3. Navigation has 6 sections: Getting Started, CLI, Go, MCP, PHP, KB
|
||||
4. All existing markdown renders correctly with auto-injected front matter
|
||||
5. `hugo build` produces `public/` with no errors
|
||||
6 go.mod
|
|
@ -15,6 +15,8 @@ require (
|
|||
golang.org/x/crypto v0.48.0
|
||||
golang.org/x/term v0.40.0
|
||||
golang.org/x/text v0.34.0
|
||||
google.golang.org/grpc v1.79.1
|
||||
google.golang.org/protobuf v1.36.11
|
||||
gopkg.in/yaml.v3 v3.0.1
|
||||
modernc.org/sqlite v1.45.0
|
||||
)
|
||||
|
|
@ -35,7 +37,6 @@ require (
|
|||
github.com/dustin/go-humanize v1.0.1 // indirect
|
||||
github.com/fsnotify/fsnotify v1.9.0 // indirect
|
||||
github.com/go-viper/mapstructure/v2 v2.5.0 // indirect
|
||||
github.com/google/go-cmp v0.7.0 // indirect
|
||||
github.com/google/uuid v1.6.0 // indirect
|
||||
github.com/inconshreveable/mousetrap v1.1.0 // indirect
|
||||
github.com/mattn/go-isatty v0.0.20 // indirect
|
||||
|
|
@ -51,7 +52,10 @@ require (
|
|||
github.com/subosito/gotenv v1.6.0 // indirect
|
||||
go.yaml.in/yaml/v3 v3.0.4 // indirect
|
||||
golang.org/x/exp v0.0.0-20260212183809-81e46e3db34a // indirect
|
||||
golang.org/x/net v0.50.0 // indirect
|
||||
golang.org/x/sys v0.41.0 // indirect
|
||||
gonum.org/v1/gonum v0.17.0 // indirect
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 // indirect
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
|
||||
modernc.org/libc v1.67.7 // indirect
|
||||
modernc.org/mathutil v1.7.1 // indirect
|
||||
|
|
|
|||
30 go.sum
|
|
@ -24,6 +24,8 @@ github.com/aws/aws-sdk-go-v2/service/s3 v1.96.0 h1:oeu8VPlOre74lBA/PMhxa5vewaMIM
|
|||
github.com/aws/aws-sdk-go-v2/service/s3 v1.96.0/go.mod h1:5jggDlZ2CLQhwJBiZJb4vfk4f0GxWdEDruWKEJ1xOdo=
|
||||
github.com/aws/smithy-go v1.24.0 h1:LpilSUItNPFr1eY85RYgTIg5eIEPtvFbskaFcmmIUnk=
|
||||
github.com/aws/smithy-go v1.24.0/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
|
||||
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
|
||||
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
|
||||
github.com/cloudflare/circl v1.6.3 h1:9GPOhQGF9MCYUeXyMYlqTR6a5gTrgR/fBLXvUgtVcg8=
|
||||
github.com/cloudflare/circl v1.6.3/go.mod h1:2eXP6Qfat4O/Yhh8BznvKnJ+uzEoTQ6jVKJRn81BiS4=
|
||||
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
|
||||
|
|
@ -35,8 +37,14 @@ github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHk
|
|||
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
|
||||
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
|
||||
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
|
||||
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
|
||||
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
|
||||
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
|
||||
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
|
||||
github.com/go-viper/mapstructure/v2 v2.5.0 h1:vM5IJoUAy3d7zRSVtIwQgBj7BiWtMPfmPEgAXnvj1Ro=
|
||||
github.com/go-viper/mapstructure/v2 v2.5.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
|
||||
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
|
||||
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
|
||||
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
|
||||
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
|
||||
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
|
||||
|
|
@ -86,6 +94,18 @@ github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu
|
|||
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
|
||||
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
|
||||
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
|
||||
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
|
||||
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
|
||||
go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
|
||||
go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
|
||||
go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0=
|
||||
go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
|
||||
go.opentelemetry.io/otel/sdk v1.39.0 h1:nMLYcjVsvdui1B/4FRkwjzoRVsMK8uL/cj0OyhKzt18=
|
||||
go.opentelemetry.io/otel/sdk v1.39.0/go.mod h1:vDojkC4/jsTJsE+kh+LXYQlbL8CgrEcwmt1ENZszdJE=
|
||||
go.opentelemetry.io/otel/sdk/metric v1.39.0 h1:cXMVVFVgsIf2YL6QkRF4Urbr/aMInf+2WKg+sEJTtB8=
|
||||
go.opentelemetry.io/otel/sdk/metric v1.39.0/go.mod h1:xq9HEVH7qeX69/JnwEfp6fVq5wosJsY1mt4lLfYdVew=
|
||||
go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI=
|
||||
go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
|
||||
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
|
||||
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
|
||||
golang.org/x/crypto v0.48.0 h1:/VRzVqiRSggnhY7gNRxPauEQ5Drw9haKdM0jqfcCFts=
|
||||
|
|
@ -94,6 +114,8 @@ golang.org/x/exp v0.0.0-20260212183809-81e46e3db34a h1:ovFr6Z0MNmU7nH8VaX5xqw+05
|
|||
golang.org/x/exp v0.0.0-20260212183809-81e46e3db34a/go.mod h1:K79w1Vqn7PoiZn+TkNpx3BUWUQksGO3JcVX6qIjytmA=
|
||||
golang.org/x/mod v0.33.0 h1:tHFzIWbBifEmbwtGz65eaWyGiGZatSrT9prnU8DbVL8=
|
||||
golang.org/x/mod v0.33.0/go.mod h1:swjeQEj+6r7fODbD2cqrnje9PnziFuw4bmLbBZFrQ5w=
|
||||
golang.org/x/net v0.50.0 h1:ucWh9eiCGyDR3vtzso0WMQinm2Dnt8cFMuQa9K33J60=
|
||||
golang.org/x/net v0.50.0/go.mod h1:UgoSli3F/pBgdJBHCTc+tp3gmrU4XswgGRgtnwWTfyM=
|
||||
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
|
||||
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
|
||||
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
|
|
@ -105,6 +127,14 @@ golang.org/x/text v0.34.0 h1:oL/Qq0Kdaqxa1KbNeMKwQq0reLCCaFtqu2eNuSeNHbk=
|
|||
golang.org/x/text v0.34.0/go.mod h1:homfLqTYRFyVYemLBFl5GgL/DWEiH5wcsQ5gSh1yziA=
|
||||
golang.org/x/tools v0.42.0 h1:uNgphsn75Tdz5Ji2q36v/nsFSfR/9BRFvqhGBaJGd5k=
|
||||
golang.org/x/tools v0.42.0/go.mod h1:Ma6lCIwGZvHK6XtgbswSoWroEkhugApmsXyrUmBhfr0=
|
||||
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
|
||||
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 h1:gRkg/vSppuSQoDjxyiGfN4Upv/h/DQmIR10ZU8dh4Ww=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
|
||||
google.golang.org/grpc v1.79.1 h1:zGhSi45ODB9/p3VAawt9a+O/MULLl9dpizzNNpq7flY=
|
||||
google.golang.org/grpc v1.79.1/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=
|
||||
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
|
||||
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
||||
|
|
|
|||
30
pkg/cache/cache.go
vendored
30
pkg/cache/cache.go
vendored
|
|
@ -3,6 +3,7 @@ package cache
import (
"encoding/json"
"errors"
"os"
"path/filepath"
"time"
@ -15,6 +16,7 @@ const DefaultTTL = 1 * time.Hour
// Cache represents a file-based cache.
type Cache struct {
medium io.Medium
baseDir string
ttl time.Duration
}
@ -27,8 +29,13 @@ type Entry struct {
}

// New creates a new cache instance.
// If baseDir is empty, uses .core/cache in current directory
func New(baseDir string, ttl time.Duration) (*Cache, error) {
// If medium is nil, uses io.Local (filesystem).
// If baseDir is empty, uses .core/cache in current directory.
func New(medium io.Medium, baseDir string, ttl time.Duration) (*Cache, error) {
if medium == nil {
medium = io.Local
}

if baseDir == "" {
// Use .core/cache in current working directory
cwd, err := os.Getwd()
@ -43,11 +50,12 @@ func New(baseDir string, ttl time.Duration) (*Cache, error) {
}

// Ensure cache directory exists
if err := io.Local.EnsureDir(baseDir); err != nil {
if err := medium.EnsureDir(baseDir); err != nil {
return nil, err
}

return &Cache{
medium: medium,
baseDir: baseDir,
ttl: ttl,
}, nil
@ -62,9 +70,9 @@ func (c *Cache) Path(key string) string {
func (c *Cache) Get(key string, dest interface{}) (bool, error) {
path := c.Path(key)

dataStr, err := io.Local.Read(path)
dataStr, err := c.medium.Read(path)
if err != nil {
if os.IsNotExist(err) {
if errors.Is(err, os.ErrNotExist) {
return false, nil
}
return false, err
@ -94,7 +102,7 @@ func (c *Cache) Set(key string, data interface{}) error {
path := c.Path(key)

// Ensure parent directory exists
if err := io.Local.EnsureDir(filepath.Dir(path)); err != nil {
if err := c.medium.EnsureDir(filepath.Dir(path)); err != nil {
return err
}

@ -115,14 +123,14 @@ func (c *Cache) Set(key string, data interface{}) error {
return err
}

return io.Local.Write(path, string(entryBytes))
return c.medium.Write(path, string(entryBytes))
}

// Delete removes an item from the cache.
func (c *Cache) Delete(key string) error {
path := c.Path(key)
err := io.Local.Delete(path)
if os.IsNotExist(err) {
err := c.medium.Delete(path)
if errors.Is(err, os.ErrNotExist) {
return nil
}
return err
@ -130,14 +138,14 @@ func (c *Cache) Delete(key string) error {

// Clear removes all cached items.
func (c *Cache) Clear() error {
return io.Local.DeleteAll(c.baseDir)
return c.medium.DeleteAll(c.baseDir)
}

// Age returns how old a cached item is, or -1 if not cached.
func (c *Cache) Age(key string) time.Duration {
path := c.Path(key)

dataStr, err := io.Local.Read(path)
dataStr, err := c.medium.Read(path)
if err != nil {
return -1
}
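The hunk above threads an `io.Medium` through the cache so storage can be swapped out in tests. A minimal usage sketch, assuming the `forge.lthn.ai/core/go` module path used elsewhere in this diff; the key, directory, and TTL values are illustrative:

```go
package main

import (
	"fmt"
	"time"

	"forge.lthn.ai/core/go/pkg/cache"
)

func main() {
	// A nil medium falls back to io.Local, per the New doc comment above.
	c, err := cache.New(nil, ".core/cache", 1*time.Hour)
	if err != nil {
		panic(err)
	}

	// Set marshals the value and writes it through the medium; Get reports a hit.
	_ = c.Set("releases", map[string]string{"latest": "v1.2.3"})

	var out map[string]string
	found, _ := c.Get("releases", &out)
	fmt.Println(found, out["latest"])
}
```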
13 pkg/cache/cache_test.go vendored
@ -5,11 +5,14 @@ import (
"time"

"forge.lthn.ai/core/go/pkg/cache"
"forge.lthn.ai/core/go/pkg/io"
)

func TestCache(t *testing.T) {
baseDir := t.TempDir()
c, err := cache.New(baseDir, 1*time.Minute)
m := io.NewMockMedium()
// Use a path that MockMedium will understand
baseDir := "/tmp/cache"
c, err := cache.New(m, baseDir, 1*time.Minute)
if err != nil {
t.Fatalf("failed to create cache: %v", err)
}
@ -54,7 +57,7 @@ func TestCache(t *testing.T) {
}

// Test Expiry
cshort, err := cache.New(t.TempDir(), 10*time.Millisecond)
cshort, err := cache.New(m, "/tmp/cache-short", 10*time.Millisecond)
if err != nil {
t.Fatalf("failed to create short-lived cache: %v", err)
}
@ -90,8 +93,8 @@ func TestCache(t *testing.T) {
}

func TestCacheDefaults(t *testing.T) {
// Test default TTL (uses cwd/.core/cache)
c, err := cache.New("", 0)
// Test default Medium (io.Local) and default TTL
c, err := cache.New(nil, "", 0)
if err != nil {
t.Fatalf("failed to create cache with defaults: %v", err)
}
@ -74,13 +74,18 @@ func IsStderrTTY() bool {
// PIDFile manages a process ID file for single-instance enforcement.
type PIDFile struct {
path string
mu sync.Mutex
medium io.Medium
path string
mu sync.Mutex
}

// NewPIDFile creates a PID file manager.
func NewPIDFile(path string) *PIDFile {
return &PIDFile{path: path}
// If medium is nil, uses io.Local (filesystem).
func NewPIDFile(medium io.Medium, path string) *PIDFile {
if medium == nil {
medium = io.Local
}
return &PIDFile{medium: medium, path: path}
}

// Acquire writes the current PID to the file.
@ -90,7 +95,7 @@ func (p *PIDFile) Acquire() error {
defer p.mu.Unlock()

// Check if PID file exists
if data, err := io.Local.Read(p.path); err == nil {
if data, err := p.medium.Read(p.path); err == nil {
pid, err := strconv.Atoi(data)
if err == nil && pid > 0 {
// Check if process is still running
@ -101,19 +106,19 @@ func (p *PIDFile) Acquire() error {
}
}
// Stale PID file, remove it
_ = io.Local.Delete(p.path)
_ = p.medium.Delete(p.path)
}

// Ensure directory exists
if dir := filepath.Dir(p.path); dir != "." {
if err := io.Local.EnsureDir(dir); err != nil {
if err := p.medium.EnsureDir(dir); err != nil {
return fmt.Errorf("failed to create PID directory: %w", err)
}
}

// Write current PID
pid := os.Getpid()
if err := io.Local.Write(p.path, strconv.Itoa(pid)); err != nil {
if err := p.medium.Write(p.path, strconv.Itoa(pid)); err != nil {
return fmt.Errorf("failed to write PID file: %w", err)
}

@ -124,7 +129,7 @@ func (p *PIDFile) Acquire() error {
func (p *PIDFile) Release() error {
p.mu.Lock()
defer p.mu.Unlock()
return io.Local.Delete(p.path)
return p.medium.Delete(p.path)
}

// Path returns the PID file path.
@ -246,6 +251,10 @@ func (h *HealthServer) Addr() string {
// DaemonOptions configures daemon mode execution.
type DaemonOptions struct {
// Medium is the filesystem for PID file operations.
// If nil, uses io.Local (filesystem).
Medium io.Medium

// PIDFile path for single-instance enforcement.
// Leave empty to skip PID file management.
PIDFile string
@ -289,7 +298,7 @@ func NewDaemon(opts DaemonOptions) *Daemon {
}

if opts.PIDFile != "" {
d.pid = NewPIDFile(opts.PIDFile)
d.pid = NewPIDFile(opts.Medium, opts.PIDFile)
}

if opts.HealthAddr != "" {
@ -402,14 +411,6 @@ func (d *Daemon) HealthAddr() string {
return ""
}

// AddHealthCheck registers a health check function with the daemon's health server.
// No-op if health server is disabled.
func (d *Daemon) AddHealthCheck(check HealthCheck) {
if d.health != nil {
d.health.AddCheck(check)
}
}

// --- Convenience Functions ---

// Run blocks until context is cancelled or signal received.
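A hedged construction sketch for the updated DaemonOptions above; the cli package import path is an assumption, and the PID path, health address, and timeout are placeholders:

```go
package main

import (
	"time"

	"forge.lthn.ai/core/go/pkg/cli" // assumed import path for the package shown in this diff
)

func main() {
	d := cli.NewDaemon(cli.DaemonOptions{
		Medium:          nil, // nil falls back to io.Local for PID-file I/O
		PIDFile:         "/tmp/myapp.pid",
		HealthAddr:      "127.0.0.1:8085",
		ShutdownTimeout: 5 * time.Second,
	})
	_ = d // start/stop behaviour is covered by the daemon tests that follow
}
```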
@ -3,10 +3,10 @@ package cli
|
|||
import (
|
||||
"context"
|
||||
"net/http"
|
||||
"os"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"forge.lthn.ai/core/go/pkg/io"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
|
@ -27,16 +27,17 @@ func TestDetectMode(t *testing.T) {
|
|||
|
||||
func TestPIDFile(t *testing.T) {
|
||||
t.Run("acquire and release", func(t *testing.T) {
|
||||
pidPath := t.TempDir() + "/test.pid"
|
||||
m := io.NewMockMedium()
|
||||
pidPath := "/tmp/test.pid"
|
||||
|
||||
pid := NewPIDFile(pidPath)
|
||||
pid := NewPIDFile(m, pidPath)
|
||||
|
||||
// Acquire should succeed
|
||||
err := pid.Acquire()
|
||||
require.NoError(t, err)
|
||||
|
||||
// File should exist with our PID
|
||||
data, err := os.ReadFile(pidPath)
|
||||
data, err := m.Read(pidPath)
|
||||
require.NoError(t, err)
|
||||
assert.NotEmpty(t, data)
|
||||
|
||||
|
|
@ -44,18 +45,18 @@ func TestPIDFile(t *testing.T) {
|
|||
err = pid.Release()
|
||||
require.NoError(t, err)
|
||||
|
||||
_, statErr := os.Stat(pidPath)
|
||||
assert.True(t, os.IsNotExist(statErr))
|
||||
assert.False(t, m.Exists(pidPath))
|
||||
})
|
||||
|
||||
t.Run("stale pid file", func(t *testing.T) {
|
||||
pidPath := t.TempDir() + "/stale.pid"
|
||||
m := io.NewMockMedium()
|
||||
pidPath := "/tmp/stale.pid"
|
||||
|
||||
// Write a stale PID (non-existent process)
|
||||
err := os.WriteFile(pidPath, []byte("999999999"), 0644)
|
||||
err := m.Write(pidPath, "999999999")
|
||||
require.NoError(t, err)
|
||||
|
||||
pid := NewPIDFile(pidPath)
|
||||
pid := NewPIDFile(m, pidPath)
|
||||
|
||||
// Should acquire successfully (stale PID removed)
|
||||
err = pid.Acquire()
|
||||
|
|
@ -66,22 +67,23 @@ func TestPIDFile(t *testing.T) {
|
|||
})
|
||||
|
||||
t.Run("creates parent directory", func(t *testing.T) {
|
||||
pidPath := t.TempDir() + "/subdir/nested/test.pid"
|
||||
m := io.NewMockMedium()
|
||||
pidPath := "/tmp/subdir/nested/test.pid"
|
||||
|
||||
pid := NewPIDFile(pidPath)
|
||||
pid := NewPIDFile(m, pidPath)
|
||||
|
||||
err := pid.Acquire()
|
||||
require.NoError(t, err)
|
||||
|
||||
_, statErr := os.Stat(pidPath)
|
||||
assert.NoError(t, statErr)
|
||||
assert.True(t, m.Exists(pidPath))
|
||||
|
||||
err = pid.Release()
|
||||
require.NoError(t, err)
|
||||
})
|
||||
|
||||
t.Run("path getter", func(t *testing.T) {
|
||||
pid := NewPIDFile("/tmp/test.pid")
|
||||
m := io.NewMockMedium()
|
||||
pid := NewPIDFile(m, "/tmp/test.pid")
|
||||
assert.Equal(t, "/tmp/test.pid", pid.Path())
|
||||
})
|
||||
}
|
||||
|
|
@ -153,9 +155,11 @@ func TestHealthServer(t *testing.T) {
|
|||
|
||||
func TestDaemon(t *testing.T) {
|
||||
t.Run("start and stop", func(t *testing.T) {
|
||||
pidPath := t.TempDir() + "/test.pid"
|
||||
m := io.NewMockMedium()
|
||||
pidPath := "/tmp/test.pid"
|
||||
|
||||
d := NewDaemon(DaemonOptions{
|
||||
Medium: m,
|
||||
PIDFile: pidPath,
|
||||
HealthAddr: "127.0.0.1:0",
|
||||
ShutdownTimeout: 5 * time.Second,
|
||||
|
|
@ -178,8 +182,7 @@ func TestDaemon(t *testing.T) {
|
|||
require.NoError(t, err)
|
||||
|
||||
// PID file should be removed
|
||||
_, statErr := os.Stat(pidPath)
|
||||
assert.True(t, os.IsNotExist(statErr))
|
||||
assert.False(t, m.Exists(pidPath))
|
||||
})
|
||||
|
||||
t.Run("double start fails", func(t *testing.T) {
|
||||
|
|
|
85 pkg/coredeno/coredeno.go Normal file
@ -0,0 +1,85 @@
package coredeno

import (
	"context"
	"crypto/ed25519"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"sync"
)

// Options configures the CoreDeno sidecar.
type Options struct {
	DenoPath       string            // path to deno binary (default: "deno")
	SocketPath     string            // Unix socket path for Go's gRPC server (CoreService)
	DenoSocketPath string            // Unix socket path for Deno's gRPC server (DenoService)
	AppRoot        string            // app root directory (sandboxed I/O)
	StoreDBPath    string            // SQLite DB path (default: AppRoot/.core/store.db)
	PublicKey      ed25519.PublicKey // ed25519 public key for manifest verification (optional)
	SidecarArgs    []string          // args passed to the sidecar process
}

// Permissions declares per-module Deno permission flags.
type Permissions struct {
	Read  []string
	Write []string
	Net   []string
	Run   []string
}

// Flags converts permissions to Deno --allow-* CLI flags.
func (p Permissions) Flags() []string {
	var flags []string
	if len(p.Read) > 0 {
		flags = append(flags, fmt.Sprintf("--allow-read=%s", strings.Join(p.Read, ",")))
	}
	if len(p.Write) > 0 {
		flags = append(flags, fmt.Sprintf("--allow-write=%s", strings.Join(p.Write, ",")))
	}
	if len(p.Net) > 0 {
		flags = append(flags, fmt.Sprintf("--allow-net=%s", strings.Join(p.Net, ",")))
	}
	if len(p.Run) > 0 {
		flags = append(flags, fmt.Sprintf("--allow-run=%s", strings.Join(p.Run, ",")))
	}
	return flags
}

// DefaultSocketPath returns the default Unix socket path.
func DefaultSocketPath() string {
	xdg := os.Getenv("XDG_RUNTIME_DIR")
	if xdg == "" {
		xdg = "/tmp"
	}
	return filepath.Join(xdg, "core", "deno.sock")
}

// Sidecar manages a Deno child process.
type Sidecar struct {
	opts   Options
	mu     sync.RWMutex
	cmd    *exec.Cmd
	ctx    context.Context
	cancel context.CancelFunc
	done   chan struct{}
}

// NewSidecar creates a Sidecar with the given options.
func NewSidecar(opts Options) *Sidecar {
	if opts.DenoPath == "" {
		opts.DenoPath = "deno"
	}
	if opts.SocketPath == "" {
		opts.SocketPath = DefaultSocketPath()
	}
	if opts.DenoSocketPath == "" && opts.SocketPath != "" {
		opts.DenoSocketPath = filepath.Join(filepath.Dir(opts.SocketPath), "deno.sock")
	}
	if opts.StoreDBPath == "" && opts.AppRoot != "" {
		opts.StoreDBPath = filepath.Join(opts.AppRoot, ".core", "store.db")
	}
	return &Sidecar{opts: opts}
}
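A short sketch of the options above in use; the socket path, app root, entry point, and pool address are illustrative, and Start/Stop come from lifecycle.go later in this diff:

```go
package main

import (
	"fmt"

	"forge.lthn.ai/core/go/pkg/coredeno" // assumed import path for this package
)

func main() {
	// Unset fields are defaulted by NewSidecar (deno binary, socket paths, store DB).
	sc := coredeno.NewSidecar(coredeno.Options{
		DenoPath:    "deno",
		SocketPath:  "/tmp/core/core.sock",
		AppRoot:     "/app",
		SidecarArgs: []string{"run", "-A", "runtime/main.ts"},
	})
	_ = sc // Start(ctx, args...) / Stop() are defined in lifecycle.go

	// Flags renders per-module permissions as Deno --allow-* flags.
	perms := coredeno.Permissions{
		Read: []string{"./data/"},
		Net:  []string{"pool.lthn.io:3333"},
	}
	fmt.Println(perms.Flags())
	// [--allow-read=./data/ --allow-net=pool.lthn.io:3333]
}
```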
99 pkg/coredeno/coredeno_test.go Normal file
@ -0,0 +1,99 @@
|
|||
package coredeno
|
||||
|
||||
import (
|
||||
"os"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestNewSidecar_Good(t *testing.T) {
|
||||
opts := Options{
|
||||
DenoPath: "echo",
|
||||
SocketPath: "/tmp/test-core-deno.sock",
|
||||
}
|
||||
sc := NewSidecar(opts)
|
||||
require.NotNil(t, sc)
|
||||
assert.Equal(t, "echo", sc.opts.DenoPath)
|
||||
assert.Equal(t, "/tmp/test-core-deno.sock", sc.opts.SocketPath)
|
||||
}
|
||||
|
||||
func TestDefaultSocketPath_Good(t *testing.T) {
|
||||
path := DefaultSocketPath()
|
||||
assert.Contains(t, path, "core/deno.sock")
|
||||
}
|
||||
|
||||
func TestSidecar_PermissionFlags_Good(t *testing.T) {
|
||||
perms := Permissions{
|
||||
Read: []string{"./data/"},
|
||||
Write: []string{"./data/config.json"},
|
||||
Net: []string{"pool.lthn.io:3333"},
|
||||
Run: []string{"xmrig"},
|
||||
}
|
||||
flags := perms.Flags()
|
||||
assert.Contains(t, flags, "--allow-read=./data/")
|
||||
assert.Contains(t, flags, "--allow-write=./data/config.json")
|
||||
assert.Contains(t, flags, "--allow-net=pool.lthn.io:3333")
|
||||
assert.Contains(t, flags, "--allow-run=xmrig")
|
||||
}
|
||||
|
||||
func TestSidecar_PermissionFlags_Empty(t *testing.T) {
|
||||
perms := Permissions{}
|
||||
flags := perms.Flags()
|
||||
assert.Empty(t, flags)
|
||||
}
|
||||
|
||||
func TestOptions_AppRoot_Good(t *testing.T) {
|
||||
opts := Options{
|
||||
DenoPath: "deno",
|
||||
SocketPath: "/tmp/test.sock",
|
||||
AppRoot: "/app",
|
||||
StoreDBPath: "/app/.core/store.db",
|
||||
}
|
||||
sc := NewSidecar(opts)
|
||||
assert.Equal(t, "/app", sc.opts.AppRoot)
|
||||
assert.Equal(t, "/app/.core/store.db", sc.opts.StoreDBPath)
|
||||
}
|
||||
|
||||
func TestOptions_StoreDBPath_Default_Good(t *testing.T) {
|
||||
opts := Options{AppRoot: "/app"}
|
||||
sc := NewSidecar(opts)
|
||||
assert.Equal(t, "/app/.core/store.db", sc.opts.StoreDBPath,
|
||||
"StoreDBPath should default to AppRoot/.core/store.db")
|
||||
}
|
||||
|
||||
func TestOptions_SidecarArgs_Good(t *testing.T) {
|
||||
opts := Options{
|
||||
DenoPath: "deno",
|
||||
SidecarArgs: []string{"run", "--allow-env", "main.ts"},
|
||||
}
|
||||
sc := NewSidecar(opts)
|
||||
assert.Equal(t, []string{"run", "--allow-env", "main.ts"}, sc.opts.SidecarArgs)
|
||||
}
|
||||
|
||||
func TestDefaultSocketPath_XDG(t *testing.T) {
|
||||
orig := os.Getenv("XDG_RUNTIME_DIR")
|
||||
defer os.Setenv("XDG_RUNTIME_DIR", orig)
|
||||
|
||||
os.Setenv("XDG_RUNTIME_DIR", "/run/user/1000")
|
||||
path := DefaultSocketPath()
|
||||
assert.Equal(t, "/run/user/1000/core/deno.sock", path)
|
||||
}
|
||||
|
||||
func TestOptions_DenoSocketPath_Default_Good(t *testing.T) {
|
||||
opts := Options{SocketPath: "/tmp/core/core.sock"}
|
||||
sc := NewSidecar(opts)
|
||||
assert.Equal(t, "/tmp/core/deno.sock", sc.opts.DenoSocketPath,
|
||||
"DenoSocketPath should default to same dir as SocketPath with deno.sock")
|
||||
}
|
||||
|
||||
func TestOptions_DenoSocketPath_Explicit_Good(t *testing.T) {
|
||||
opts := Options{
|
||||
SocketPath: "/tmp/core/core.sock",
|
||||
DenoSocketPath: "/tmp/custom/deno.sock",
|
||||
}
|
||||
sc := NewSidecar(opts)
|
||||
assert.Equal(t, "/tmp/custom/deno.sock", sc.opts.DenoSocketPath,
|
||||
"Explicit DenoSocketPath should not be overridden")
|
||||
}
|
||||
138 pkg/coredeno/denoclient.go Normal file
@ -0,0 +1,138 @@
package coredeno

import (
	"bufio"
	"encoding/json"
	"fmt"
	"net"
	"sync"
)

// DenoClient communicates with the Deno sidecar's JSON-RPC server over a Unix socket.
// Thread-safe: uses a mutex to serialize requests (one connection, request/response protocol).
type DenoClient struct {
	mu     sync.Mutex
	conn   net.Conn
	reader *bufio.Reader
}

// DialDeno connects to the Deno JSON-RPC server on the given Unix socket path.
func DialDeno(socketPath string) (*DenoClient, error) {
	conn, err := net.Dial("unix", socketPath)
	if err != nil {
		return nil, fmt.Errorf("deno dial: %w", err)
	}
	return &DenoClient{
		conn:   conn,
		reader: bufio.NewReader(conn),
	}, nil
}

// Close closes the underlying connection.
func (c *DenoClient) Close() error {
	return c.conn.Close()
}

func (c *DenoClient) call(req map[string]any) (map[string]any, error) {
	c.mu.Lock()
	defer c.mu.Unlock()

	data, err := json.Marshal(req)
	if err != nil {
		return nil, fmt.Errorf("marshal: %w", err)
	}
	data = append(data, '\n')

	if _, err := c.conn.Write(data); err != nil {
		return nil, fmt.Errorf("write: %w", err)
	}

	line, err := c.reader.ReadBytes('\n')
	if err != nil {
		return nil, fmt.Errorf("read: %w", err)
	}

	var resp map[string]any
	if err := json.Unmarshal(line, &resp); err != nil {
		return nil, fmt.Errorf("unmarshal: %w", err)
	}

	if errMsg, ok := resp["error"].(string); ok && errMsg != "" {
		return nil, fmt.Errorf("deno: %s", errMsg)
	}
	return resp, nil
}

// ModulePermissions declares per-module permission scopes for Deno Worker sandboxing.
type ModulePermissions struct {
	Read  []string `json:"read,omitempty"`
	Write []string `json:"write,omitempty"`
	Net   []string `json:"net,omitempty"`
	Run   []string `json:"run,omitempty"`
}

// LoadModuleResponse holds the result of a LoadModule call.
type LoadModuleResponse struct {
	Ok    bool
	Error string
}

// LoadModule tells Deno to load a module with the given permissions.
func (c *DenoClient) LoadModule(code, entryPoint string, perms ModulePermissions) (*LoadModuleResponse, error) {
	resp, err := c.call(map[string]any{
		"method":      "LoadModule",
		"code":        code,
		"entry_point": entryPoint,
		"permissions": perms,
	})
	if err != nil {
		return nil, err
	}
	errStr, _ := resp["error"].(string)
	return &LoadModuleResponse{
		Ok:    resp["ok"] == true,
		Error: errStr,
	}, nil
}

// UnloadModuleResponse holds the result of an UnloadModule call.
type UnloadModuleResponse struct {
	Ok bool
}

// UnloadModule tells Deno to unload a module.
func (c *DenoClient) UnloadModule(code string) (*UnloadModuleResponse, error) {
	resp, err := c.call(map[string]any{
		"method": "UnloadModule",
		"code":   code,
	})
	if err != nil {
		return nil, err
	}
	return &UnloadModuleResponse{
		Ok: resp["ok"] == true,
	}, nil
}

// ModuleStatusResponse holds the result of a ModuleStatus call.
type ModuleStatusResponse struct {
	Code   string
	Status string
}

// ModuleStatus queries the status of a module in the Deno runtime.
func (c *DenoClient) ModuleStatus(code string) (*ModuleStatusResponse, error) {
	resp, err := c.call(map[string]any{
		"method": "ModuleStatus",
		"code":   code,
	})
	if err != nil {
		return nil, err
	}
	respCode, _ := resp["code"].(string)
	sts, _ := resp["status"].(string)
	return &ModuleStatusResponse{
		Code:   respCode,
		Status: sts,
	}, nil
}
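For reference, a hedged sketch of driving the client above once the sidecar's socket is up; the socket path, module code, and entry point are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"forge.lthn.ai/core/go/pkg/coredeno" // assumed import path for this package
)

func main() {
	dc, err := coredeno.DialDeno("/tmp/core/deno.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer dc.Close()

	// Load a module, then poll its status (LOADING -> RUNNING is asynchronous).
	load, err := dc.LoadModule("demo", "/app/modules/demo/main.ts", coredeno.ModulePermissions{
		Read: []string{"/app/modules/demo/"},
	})
	if err != nil {
		log.Fatal(err)
	}
	if !load.Ok {
		log.Fatalf("load failed: %s", load.Error)
	}

	st, _ := dc.ModuleStatus("demo")
	fmt.Println(st.Status)

	_, _ = dc.UnloadModule("demo")
}
```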
499 pkg/coredeno/integration_test.go Normal file
@ -0,0 +1,499 @@
|
|||
//go:build integration
|
||||
|
||||
package coredeno
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
pb "forge.lthn.ai/core/go/pkg/coredeno/proto"
|
||||
core "forge.lthn.ai/core/go/pkg/framework/core"
|
||||
"forge.lthn.ai/core/go/pkg/marketplace"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
"google.golang.org/grpc"
|
||||
"google.golang.org/grpc/credentials/insecure"
|
||||
)
|
||||
|
||||
// unused import guard
|
||||
var _ = pb.NewCoreServiceClient
|
||||
|
||||
func findDeno(t *testing.T) string {
|
||||
t.Helper()
|
||||
denoPath, err := exec.LookPath("deno")
|
||||
if err != nil {
|
||||
home, _ := os.UserHomeDir()
|
||||
denoPath = filepath.Join(home, ".deno", "bin", "deno")
|
||||
if _, err := os.Stat(denoPath); err != nil {
|
||||
t.Skip("deno not installed")
|
||||
}
|
||||
}
|
||||
return denoPath
|
||||
}
|
||||
|
||||
// runtimeEntryPoint returns the absolute path to runtime/main.ts.
|
||||
func runtimeEntryPoint(t *testing.T) string {
|
||||
t.Helper()
|
||||
// We're in pkg/coredeno/ during test, runtime is a subdir
|
||||
abs, err := filepath.Abs("runtime/main.ts")
|
||||
require.NoError(t, err)
|
||||
require.FileExists(t, abs)
|
||||
return abs
|
||||
}
|
||||
|
||||
// testModulePath returns the absolute path to runtime/testdata/test-module.ts.
|
||||
func testModulePath(t *testing.T) string {
|
||||
t.Helper()
|
||||
abs, err := filepath.Abs("runtime/testdata/test-module.ts")
|
||||
require.NoError(t, err)
|
||||
require.FileExists(t, abs)
|
||||
return abs
|
||||
}
|
||||
|
||||
func TestIntegration_FullBoot_Good(t *testing.T) {
|
||||
denoPath := findDeno(t)
|
||||
|
||||
tmpDir := t.TempDir()
|
||||
sockPath := filepath.Join(tmpDir, "core.sock")
|
||||
|
||||
// Write a manifest
|
||||
coreDir := filepath.Join(tmpDir, ".core")
|
||||
require.NoError(t, os.MkdirAll(coreDir, 0755))
|
||||
require.NoError(t, os.WriteFile(filepath.Join(coreDir, "view.yml"), []byte(`
|
||||
code: integration-test
|
||||
name: Integration Test
|
||||
version: "1.0"
|
||||
permissions:
|
||||
read: ["./data/"]
|
||||
`), 0644))
|
||||
|
||||
entryPoint := runtimeEntryPoint(t)
|
||||
|
||||
opts := Options{
|
||||
DenoPath: denoPath,
|
||||
SocketPath: sockPath,
|
||||
AppRoot: tmpDir,
|
||||
StoreDBPath: ":memory:",
|
||||
SidecarArgs: []string{"run", "-A", entryPoint},
|
||||
}
|
||||
|
||||
c, err := core.New()
|
||||
require.NoError(t, err)
|
||||
|
||||
factory := NewServiceFactory(opts)
|
||||
result, err := factory(c)
|
||||
require.NoError(t, err)
|
||||
svc := result.(*Service)
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
|
||||
defer cancel()
|
||||
|
||||
err = svc.OnStartup(ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Verify gRPC is working
|
||||
require.Eventually(t, func() bool {
|
||||
_, err := os.Stat(sockPath)
|
||||
return err == nil
|
||||
}, 5*time.Second, 50*time.Millisecond, "socket should appear")
|
||||
|
||||
conn, err := grpc.NewClient(
|
||||
"unix://"+sockPath,
|
||||
grpc.WithTransportCredentials(insecure.NewCredentials()),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
defer conn.Close()
|
||||
|
||||
client := pb.NewCoreServiceClient(conn)
|
||||
_, err = client.StoreSet(ctx, &pb.StoreSetRequest{
|
||||
Group: "integration", Key: "boot", Value: "ok",
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
resp, err := client.StoreGet(ctx, &pb.StoreGetRequest{
|
||||
Group: "integration", Key: "boot",
|
||||
})
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "ok", resp.Value)
|
||||
assert.True(t, resp.Found)
|
||||
|
||||
// Verify sidecar is running
|
||||
assert.True(t, svc.sidecar.IsRunning(), "Deno sidecar should be running")
|
||||
|
||||
// Clean shutdown
|
||||
err = svc.OnShutdown(context.Background())
|
||||
assert.NoError(t, err)
|
||||
assert.False(t, svc.sidecar.IsRunning(), "Deno sidecar should be stopped")
|
||||
}
|
||||
|
||||
func TestIntegration_Tier2_Bidirectional_Good(t *testing.T) {
|
||||
denoPath := findDeno(t)
|
||||
|
||||
tmpDir := t.TempDir()
|
||||
sockPath := filepath.Join(tmpDir, "core.sock")
|
||||
denoSockPath := filepath.Join(tmpDir, "deno.sock")
|
||||
|
||||
// Write a manifest
|
||||
coreDir := filepath.Join(tmpDir, ".core")
|
||||
require.NoError(t, os.MkdirAll(coreDir, 0755))
|
||||
require.NoError(t, os.WriteFile(filepath.Join(coreDir, "view.yml"), []byte(`
|
||||
code: tier2-test
|
||||
name: Tier 2 Test
|
||||
version: "1.0"
|
||||
permissions:
|
||||
read: ["./data/"]
|
||||
run: ["echo"]
|
||||
`), 0644))
|
||||
|
||||
entryPoint := runtimeEntryPoint(t)
|
||||
|
||||
opts := Options{
|
||||
DenoPath: denoPath,
|
||||
SocketPath: sockPath,
|
||||
DenoSocketPath: denoSockPath,
|
||||
AppRoot: tmpDir,
|
||||
StoreDBPath: ":memory:",
|
||||
SidecarArgs: []string{"run", "-A", "--unstable-worker-options", entryPoint},
|
||||
}
|
||||
|
||||
c, err := core.New()
|
||||
require.NoError(t, err)
|
||||
|
||||
factory := NewServiceFactory(opts)
|
||||
result, err := factory(c)
|
||||
require.NoError(t, err)
|
||||
svc := result.(*Service)
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
|
||||
defer cancel()
|
||||
|
||||
err = svc.OnStartup(ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Verify both sockets appeared
|
||||
require.Eventually(t, func() bool {
|
||||
_, err := os.Stat(sockPath)
|
||||
return err == nil
|
||||
}, 10*time.Second, 50*time.Millisecond, "core socket should appear")
|
||||
|
||||
require.Eventually(t, func() bool {
|
||||
_, err := os.Stat(denoSockPath)
|
||||
return err == nil
|
||||
}, 10*time.Second, 50*time.Millisecond, "deno socket should appear")
|
||||
|
||||
// Verify sidecar is running
|
||||
assert.True(t, svc.sidecar.IsRunning(), "Deno sidecar should be running")
|
||||
|
||||
// Verify DenoClient is connected
|
||||
require.NotNil(t, svc.DenoClient(), "DenoClient should be connected")
|
||||
|
||||
// Test Go → Deno: LoadModule with real Worker
|
||||
modPath := testModulePath(t)
|
||||
loadResp, err := svc.DenoClient().LoadModule("test-module", modPath, ModulePermissions{
|
||||
Read: []string{filepath.Dir(modPath) + "/"},
|
||||
})
|
||||
require.NoError(t, err)
|
||||
assert.True(t, loadResp.Ok)
|
||||
|
||||
// Wait for module to finish loading (async Worker init)
|
||||
require.Eventually(t, func() bool {
|
||||
resp, err := svc.DenoClient().ModuleStatus("test-module")
|
||||
return err == nil && (resp.Status == "RUNNING" || resp.Status == "ERRORED")
|
||||
}, 5*time.Second, 50*time.Millisecond, "module should finish loading")
|
||||
|
||||
statusResp, err := svc.DenoClient().ModuleStatus("test-module")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "test-module", statusResp.Code)
|
||||
assert.Equal(t, "RUNNING", statusResp.Status)
|
||||
|
||||
// Test Go → Deno: UnloadModule
|
||||
unloadResp, err := svc.DenoClient().UnloadModule("test-module")
|
||||
require.NoError(t, err)
|
||||
assert.True(t, unloadResp.Ok)
|
||||
|
||||
// Verify module is now STOPPED
|
||||
statusResp2, err := svc.DenoClient().ModuleStatus("test-module")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "STOPPED", statusResp2.Status)
|
||||
|
||||
// Verify CoreService gRPC still works (Deno wrote health check data)
|
||||
conn, err := grpc.NewClient(
|
||||
"unix://"+sockPath,
|
||||
grpc.WithTransportCredentials(insecure.NewCredentials()),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
defer conn.Close()
|
||||
|
||||
coreClient := pb.NewCoreServiceClient(conn)
|
||||
getResp, err := coreClient.StoreGet(ctx, &pb.StoreGetRequest{
|
||||
Group: "_coredeno", Key: "status",
|
||||
})
|
||||
require.NoError(t, err)
|
||||
assert.True(t, getResp.Found)
|
||||
assert.Equal(t, "connected", getResp.Value, "Deno should have written health check")
|
||||
|
||||
// Clean shutdown
|
||||
err = svc.OnShutdown(context.Background())
|
||||
assert.NoError(t, err)
|
||||
assert.False(t, svc.sidecar.IsRunning(), "Deno sidecar should be stopped")
|
||||
}
|
||||
|
||||
func TestIntegration_Tier3_WorkerIsolation_Good(t *testing.T) {
|
||||
denoPath := findDeno(t)
|
||||
|
||||
tmpDir := t.TempDir()
|
||||
sockPath := filepath.Join(tmpDir, "core.sock")
|
||||
denoSockPath := filepath.Join(tmpDir, "deno.sock")
|
||||
|
||||
// Write a manifest
|
||||
coreDir := filepath.Join(tmpDir, ".core")
|
||||
require.NoError(t, os.MkdirAll(coreDir, 0755))
|
||||
require.NoError(t, os.WriteFile(filepath.Join(coreDir, "view.yml"), []byte(`
|
||||
code: tier3-test
|
||||
name: Tier 3 Test
|
||||
version: "1.0"
|
||||
permissions:
|
||||
read: ["./data/"]
|
||||
`), 0644))
|
||||
|
||||
entryPoint := runtimeEntryPoint(t)
|
||||
modPath := testModulePath(t)
|
||||
|
||||
opts := Options{
|
||||
DenoPath: denoPath,
|
||||
SocketPath: sockPath,
|
||||
DenoSocketPath: denoSockPath,
|
||||
AppRoot: tmpDir,
|
||||
StoreDBPath: ":memory:",
|
||||
SidecarArgs: []string{"run", "-A", "--unstable-worker-options", entryPoint},
|
||||
}
|
||||
|
||||
c, err := core.New()
|
||||
require.NoError(t, err)
|
||||
|
||||
factory := NewServiceFactory(opts)
|
||||
result, err := factory(c)
|
||||
require.NoError(t, err)
|
||||
svc := result.(*Service)
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
|
||||
defer cancel()
|
||||
|
||||
err = svc.OnStartup(ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Verify both sockets appeared
|
||||
require.Eventually(t, func() bool {
|
||||
_, err := os.Stat(denoSockPath)
|
||||
return err == nil
|
||||
}, 10*time.Second, 50*time.Millisecond, "deno socket should appear")
|
||||
|
||||
require.NotNil(t, svc.DenoClient(), "DenoClient should be connected")
|
||||
|
||||
// Load a real module — it writes to store via I/O bridge
|
||||
loadResp, err := svc.DenoClient().LoadModule("test-mod", modPath, ModulePermissions{
|
||||
Read: []string{filepath.Dir(modPath) + "/"},
|
||||
})
|
||||
require.NoError(t, err)
|
||||
assert.True(t, loadResp.Ok)
|
||||
|
||||
// Wait for module to reach RUNNING (Worker init + init() completes)
|
||||
require.Eventually(t, func() bool {
|
||||
resp, err := svc.DenoClient().ModuleStatus("test-mod")
|
||||
return err == nil && resp.Status == "RUNNING"
|
||||
}, 10*time.Second, 100*time.Millisecond, "module should be RUNNING")
|
||||
|
||||
// Verify the module wrote to the store via the I/O bridge
|
||||
// Module calls: core.storeSet("test-module", "init", "ok")
|
||||
conn, err := grpc.NewClient(
|
||||
"unix://"+sockPath,
|
||||
grpc.WithTransportCredentials(insecure.NewCredentials()),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
defer conn.Close()
|
||||
|
||||
coreClient := pb.NewCoreServiceClient(conn)
|
||||
|
||||
// Poll for the store value — module init is async
|
||||
require.Eventually(t, func() bool {
|
||||
resp, err := coreClient.StoreGet(ctx, &pb.StoreGetRequest{
|
||||
Group: "test-module", Key: "init",
|
||||
})
|
||||
return err == nil && resp.Found && resp.Value == "ok"
|
||||
}, 5*time.Second, 100*time.Millisecond, "module should have written to store via I/O bridge")
|
||||
|
||||
// Unload and verify
|
||||
unloadResp, err := svc.DenoClient().UnloadModule("test-mod")
|
||||
require.NoError(t, err)
|
||||
assert.True(t, unloadResp.Ok)
|
||||
|
||||
statusResp, err := svc.DenoClient().ModuleStatus("test-mod")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "STOPPED", statusResp.Status)
|
||||
|
||||
// Clean shutdown
|
||||
err = svc.OnShutdown(context.Background())
|
||||
assert.NoError(t, err)
|
||||
assert.False(t, svc.sidecar.IsRunning(), "Deno sidecar should be stopped")
|
||||
}
|
||||
|
||||
// createModuleRepo creates a git repo containing a test module with manifest + main.ts.
|
||||
// The module's init() writes to the store to prove the I/O bridge works.
|
||||
func createModuleRepo(t *testing.T, code string) string {
|
||||
t.Helper()
|
||||
dir := filepath.Join(t.TempDir(), code+"-repo")
|
||||
require.NoError(t, os.MkdirAll(filepath.Join(dir, ".core"), 0755))
|
||||
|
||||
require.NoError(t, os.WriteFile(filepath.Join(dir, ".core", "view.yml"), []byte(`
|
||||
code: `+code+`
|
||||
name: Test Module `+code+`
|
||||
version: "1.0"
|
||||
permissions:
|
||||
read: ["./"]
|
||||
`), 0644))
|
||||
|
||||
// Module that writes to store to prove it ran
|
||||
require.NoError(t, os.WriteFile(filepath.Join(dir, "main.ts"), []byte(`
|
||||
export async function init(core: any) {
|
||||
await core.storeSet("`+code+`", "installed", "yes");
|
||||
}
|
||||
`), 0644))
|
||||
|
||||
gitCmd := func(args ...string) {
|
||||
t.Helper()
|
||||
cmd := exec.Command("git", append([]string{
|
||||
"-C", dir, "-c", "user.email=test@test.com", "-c", "user.name=test",
|
||||
}, args...)...)
|
||||
out, err := cmd.CombinedOutput()
|
||||
require.NoError(t, err, "git %v: %s", args, string(out))
|
||||
}
|
||||
gitCmd("init")
|
||||
gitCmd("add", ".")
|
||||
gitCmd("commit", "-m", "init")
|
||||
|
||||
return dir
|
||||
}
|
||||
|
||||
func TestIntegration_Tier4_MarketplaceInstall_Good(t *testing.T) {
|
||||
denoPath := findDeno(t)
|
||||
|
||||
tmpDir := t.TempDir()
|
||||
sockPath := filepath.Join(tmpDir, "core.sock")
|
||||
denoSockPath := filepath.Join(tmpDir, "deno.sock")
|
||||
|
||||
// Write app manifest
|
||||
coreDir := filepath.Join(tmpDir, ".core")
|
||||
require.NoError(t, os.MkdirAll(coreDir, 0755))
|
||||
require.NoError(t, os.WriteFile(filepath.Join(coreDir, "view.yml"), []byte(`
|
||||
code: tier4-test
|
||||
name: Tier 4 Test
|
||||
version: "1.0"
|
||||
permissions:
|
||||
read: ["./"]
|
||||
`), 0644))
|
||||
|
||||
entryPoint := runtimeEntryPoint(t)
|
||||
|
||||
opts := Options{
|
||||
DenoPath: denoPath,
|
||||
SocketPath: sockPath,
|
||||
DenoSocketPath: denoSockPath,
|
||||
AppRoot: tmpDir,
|
||||
StoreDBPath: ":memory:",
|
||||
SidecarArgs: []string{"run", "-A", "--unstable-worker-options", entryPoint},
|
||||
}
|
||||
|
||||
c, err := core.New()
|
||||
require.NoError(t, err)
|
||||
|
||||
factory := NewServiceFactory(opts)
|
||||
result, err := factory(c)
|
||||
require.NoError(t, err)
|
||||
svc := result.(*Service)
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
|
||||
defer cancel()
|
||||
|
||||
err = svc.OnStartup(ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Verify sidecar and Deno client are up
|
||||
require.Eventually(t, func() bool {
|
||||
_, err := os.Stat(denoSockPath)
|
||||
return err == nil
|
||||
}, 10*time.Second, 50*time.Millisecond, "deno socket should appear")
|
||||
|
||||
require.NotNil(t, svc.DenoClient(), "DenoClient should be connected")
|
||||
require.NotNil(t, svc.Installer(), "Installer should be available")
|
||||
|
||||
// Create a test module repo and install it
|
||||
moduleRepo := createModuleRepo(t, "market-mod")
|
||||
err = svc.Installer().Install(ctx, marketplace.Module{
|
||||
Code: "market-mod",
|
||||
Repo: moduleRepo,
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
// Verify the module was installed on disk
|
||||
modulesDir := filepath.Join(tmpDir, "modules", "market-mod")
|
||||
require.DirExists(t, modulesDir)
|
||||
|
||||
// Verify Installed() returns it
|
||||
installed, err := svc.Installer().Installed()
|
||||
require.NoError(t, err)
|
||||
require.Len(t, installed, 1)
|
||||
assert.Equal(t, "market-mod", installed[0].Code)
|
||||
assert.Equal(t, "1.0", installed[0].Version)
|
||||
|
||||
// Load the installed module into the Deno runtime
|
||||
mod := installed[0]
|
||||
loadResp, err := svc.DenoClient().LoadModule(mod.Code, mod.EntryPoint, ModulePermissions{
|
||||
Read: mod.Permissions.Read,
|
||||
})
|
||||
require.NoError(t, err)
|
||||
assert.True(t, loadResp.Ok)
|
||||
|
||||
// Wait for module to reach RUNNING
|
||||
require.Eventually(t, func() bool {
|
||||
resp, err := svc.DenoClient().ModuleStatus("market-mod")
|
||||
return err == nil && resp.Status == "RUNNING"
|
||||
}, 10*time.Second, 100*time.Millisecond, "installed module should be RUNNING")
|
||||
|
||||
// Verify the module wrote to the store via I/O bridge
|
||||
conn, err := grpc.NewClient(
|
||||
"unix://"+sockPath,
|
||||
grpc.WithTransportCredentials(insecure.NewCredentials()),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
defer conn.Close()
|
||||
|
||||
coreClient := pb.NewCoreServiceClient(conn)
|
||||
require.Eventually(t, func() bool {
|
||||
resp, err := coreClient.StoreGet(ctx, &pb.StoreGetRequest{
|
||||
Group: "market-mod", Key: "installed",
|
||||
})
|
||||
return err == nil && resp.Found && resp.Value == "yes"
|
||||
}, 5*time.Second, 100*time.Millisecond, "installed module should have written to store via I/O bridge")
|
||||
|
||||
// Unload and remove
|
||||
unloadResp, err := svc.DenoClient().UnloadModule("market-mod")
|
||||
require.NoError(t, err)
|
||||
assert.True(t, unloadResp.Ok)
|
||||
|
||||
err = svc.Installer().Remove("market-mod")
|
||||
require.NoError(t, err)
|
||||
assert.NoDirExists(t, modulesDir, "module directory should be removed")
|
||||
|
||||
installed2, err := svc.Installer().Installed()
|
||||
require.NoError(t, err)
|
||||
assert.Empty(t, installed2, "no modules should be installed after remove")
|
||||
|
||||
// Clean shutdown
|
||||
err = svc.OnShutdown(context.Background())
|
||||
assert.NoError(t, err)
|
||||
assert.False(t, svc.sidecar.IsRunning(), "Deno sidecar should be stopped")
|
||||
}
|
||||
75 pkg/coredeno/lifecycle.go Normal file
@ -0,0 +1,75 @@
package coredeno

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// Start launches the Deno sidecar process with the given entrypoint args.
func (s *Sidecar) Start(ctx context.Context, args ...string) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	if s.cmd != nil {
		return fmt.Errorf("coredeno: already running")
	}

	// Ensure socket directory exists with owner-only permissions
	sockDir := filepath.Dir(s.opts.SocketPath)
	if err := os.MkdirAll(sockDir, 0700); err != nil {
		return fmt.Errorf("coredeno: mkdir %s: %w", sockDir, err)
	}

	// Remove stale Deno socket (the Core socket is managed by ListenGRPC)
	if s.opts.DenoSocketPath != "" {
		os.Remove(s.opts.DenoSocketPath)
	}

	s.ctx, s.cancel = context.WithCancel(ctx)
	s.cmd = exec.CommandContext(s.ctx, s.opts.DenoPath, args...)
	s.cmd.Env = append(os.Environ(),
		"CORE_SOCKET="+s.opts.SocketPath,
		"DENO_SOCKET="+s.opts.DenoSocketPath,
	)
	s.done = make(chan struct{})
	if err := s.cmd.Start(); err != nil {
		s.cmd = nil
		s.cancel()
		return fmt.Errorf("coredeno: start: %w", err)
	}

	// Monitor in background — waits for exit, then signals done
	go func() {
		s.cmd.Wait()
		s.mu.Lock()
		s.cmd = nil
		s.mu.Unlock()
		close(s.done)
	}()
	return nil
}

// Stop cancels the context and waits for the process to exit.
func (s *Sidecar) Stop() error {
	s.mu.RLock()
	if s.cmd == nil {
		s.mu.RUnlock()
		return nil
	}
	done := s.done
	s.mu.RUnlock()

	s.cancel()
	<-done
	return nil
}

// IsRunning returns true if the sidecar process is alive.
func (s *Sidecar) IsRunning() bool {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.cmd != nil
}
124 pkg/coredeno/lifecycle_test.go Normal file
@ -0,0 +1,124 @@
|
|||
package coredeno
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestStart_Good(t *testing.T) {
|
||||
sockDir := t.TempDir()
|
||||
sc := NewSidecar(Options{
|
||||
DenoPath: "sleep",
|
||||
SocketPath: filepath.Join(sockDir, "test.sock"),
|
||||
})
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
|
||||
defer cancel()
|
||||
|
||||
err := sc.Start(ctx, "10") // sleep 10 — will be killed by Stop
|
||||
require.NoError(t, err)
|
||||
assert.True(t, sc.IsRunning())
|
||||
|
||||
err = sc.Stop()
|
||||
require.NoError(t, err)
|
||||
assert.False(t, sc.IsRunning())
|
||||
}
|
||||
|
||||
func TestStop_Good_NotStarted(t *testing.T) {
|
||||
sc := NewSidecar(Options{DenoPath: "sleep"})
|
||||
err := sc.Stop()
|
||||
assert.NoError(t, err, "stopping a not-started sidecar should be a no-op")
|
||||
}
|
||||
|
||||
func TestStart_Good_EnvPassedToChild(t *testing.T) {
|
||||
sockDir := t.TempDir()
|
||||
sockPath := filepath.Join(sockDir, "test.sock")
|
||||
|
||||
sc := NewSidecar(Options{
|
||||
DenoPath: "sleep",
|
||||
SocketPath: sockPath,
|
||||
})
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
|
||||
defer cancel()
|
||||
|
||||
err := sc.Start(ctx, "10")
|
||||
require.NoError(t, err)
|
||||
defer sc.Stop()
|
||||
|
||||
// Verify the child process has CORE_SOCKET in its environment
|
||||
sc.mu.RLock()
|
||||
env := sc.cmd.Env
|
||||
sc.mu.RUnlock()
|
||||
|
||||
found := false
|
||||
expected := "CORE_SOCKET=" + sockPath
|
||||
for _, e := range env {
|
||||
if e == expected {
|
||||
found = true
|
||||
break
|
||||
}
|
||||
}
|
||||
assert.True(t, found, "child process should receive CORE_SOCKET=%s", sockPath)
|
||||
}
|
||||
|
||||
func TestStart_Good_DenoSocketEnv(t *testing.T) {
|
||||
sockDir := t.TempDir()
|
||||
sockPath := filepath.Join(sockDir, "core.sock")
|
||||
denoSockPath := filepath.Join(sockDir, "deno.sock")
|
||||
|
||||
sc := NewSidecar(Options{
|
||||
DenoPath: "sleep",
|
||||
SocketPath: sockPath,
|
||||
DenoSocketPath: denoSockPath,
|
||||
})
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
|
||||
defer cancel()
|
||||
|
||||
err := sc.Start(ctx, "10")
|
||||
require.NoError(t, err)
|
||||
defer sc.Stop()
|
||||
|
||||
sc.mu.RLock()
|
||||
env := sc.cmd.Env
|
||||
sc.mu.RUnlock()
|
||||
|
||||
foundCore := false
|
||||
foundDeno := false
|
||||
for _, e := range env {
|
||||
if e == "CORE_SOCKET="+sockPath {
|
||||
foundCore = true
|
||||
}
|
||||
if e == "DENO_SOCKET="+denoSockPath {
|
||||
foundDeno = true
|
||||
}
|
||||
}
|
||||
assert.True(t, foundCore, "child should receive CORE_SOCKET")
|
||||
assert.True(t, foundDeno, "child should receive DENO_SOCKET")
|
||||
}
|
||||
|
||||
func TestSocketDirCreated_Good(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
sockPath := filepath.Join(dir, "sub", "deno.sock")
|
||||
sc := NewSidecar(Options{
|
||||
DenoPath: "sleep",
|
||||
SocketPath: sockPath,
|
||||
})
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
|
||||
defer cancel()
|
||||
|
||||
err := sc.Start(ctx, "10")
|
||||
require.NoError(t, err)
|
||||
defer sc.Stop()
|
||||
|
||||
_, err = os.Stat(filepath.Join(dir, "sub"))
|
||||
assert.NoError(t, err, "socket directory should be created")
|
||||
}
|
||||
53 pkg/coredeno/listener.go Normal file
@ -0,0 +1,53 @@
package coredeno

import (
	"context"
	"fmt"
	"net"
	"os"

	pb "forge.lthn.ai/core/go/pkg/coredeno/proto"
	"google.golang.org/grpc"
)

// ListenGRPC starts a gRPC server on a Unix socket, serving the CoreService.
// It blocks until ctx is cancelled, then performs a graceful stop.
func ListenGRPC(ctx context.Context, socketPath string, srv *Server) error {
	// Clean up stale socket
	if err := os.Remove(socketPath); err != nil && !os.IsNotExist(err) {
		return err
	}

	listener, err := net.Listen("unix", socketPath)
	if err != nil {
		return err
	}
	// Restrict socket to owner only — prevents other users from sending gRPC commands.
	if err := os.Chmod(socketPath, 0600); err != nil {
		listener.Close()
		return fmt.Errorf("chmod socket: %w", err)
	}
	defer func() {
		_ = listener.Close()
		_ = os.Remove(socketPath)
	}()

	gs := grpc.NewServer()
	pb.RegisterCoreServiceServer(gs, srv)

	// Graceful stop when context cancelled
	go func() {
		<-ctx.Done()
		gs.GracefulStop()
	}()

	if err := gs.Serve(listener); err != nil {
		select {
		case <-ctx.Done():
			return nil // Expected shutdown
		default:
			return err
		}
	}
	return nil
}
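A hedged sketch of serving CoreService over the socket; NewServer, store.New, and io.NewMockMedium are taken from the listener tests in this diff, the import path is assumed, and the socket path is a placeholder:

```go
package main

import (
	"context"
	"log"

	"forge.lthn.ai/core/go/pkg/coredeno" // assumed import path
	"forge.lthn.ai/core/go/pkg/io"
	"forge.lthn.ai/core/go/pkg/store"
)

func main() {
	st, err := store.New(":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer st.Close()

	srv := coredeno.NewServer(io.NewMockMedium(), st)

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Blocks until ctx is cancelled, then stops gracefully and removes the socket.
	if err := coredeno.ListenGRPC(ctx, "/tmp/core/core.sock", srv); err != nil {
		log.Fatal(err)
	}
}
```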
122 pkg/coredeno/listener_test.go Normal file
@ -0,0 +1,122 @@
|
|||
package coredeno
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
pb "forge.lthn.ai/core/go/pkg/coredeno/proto"
|
||||
"forge.lthn.ai/core/go/pkg/io"
|
||||
"forge.lthn.ai/core/go/pkg/store"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
"google.golang.org/grpc"
|
||||
"google.golang.org/grpc/credentials/insecure"
|
||||
)
|
||||
|
||||
func TestListenGRPC_Good(t *testing.T) {
|
||||
sockDir := t.TempDir()
|
||||
sockPath := filepath.Join(sockDir, "test.sock")
|
||||
|
||||
medium := io.NewMockMedium()
|
||||
st, err := store.New(":memory:")
|
||||
require.NoError(t, err)
|
||||
defer st.Close()
|
||||
|
||||
srv := NewServer(medium, st)
|
||||
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
|
||||
errCh := make(chan error, 1)
|
||||
go func() {
|
||||
errCh <- ListenGRPC(ctx, sockPath, srv)
|
||||
}()
|
||||
|
||||
// Wait for socket to appear
|
||||
require.Eventually(t, func() bool {
|
||||
_, err := os.Stat(sockPath)
|
||||
return err == nil
|
||||
}, 2*time.Second, 10*time.Millisecond, "socket should appear")
|
||||
|
||||
// Connect as gRPC client
|
||||
conn, err := grpc.NewClient(
|
||||
"unix://"+sockPath,
|
||||
grpc.WithTransportCredentials(insecure.NewCredentials()),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
defer conn.Close()
|
||||
|
||||
client := pb.NewCoreServiceClient(conn)
|
||||
|
||||
// StoreSet + StoreGet round-trip
|
||||
_, err = client.StoreSet(ctx, &pb.StoreSetRequest{
|
||||
Group: "test", Key: "k", Value: "v",
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
resp, err := client.StoreGet(ctx, &pb.StoreGetRequest{
|
||||
Group: "test", Key: "k",
|
||||
})
|
||||
require.NoError(t, err)
|
||||
assert.True(t, resp.Found)
|
||||
assert.Equal(t, "v", resp.Value)
|
||||
|
||||
// Cancel ctx to stop listener
|
||||
cancel()
|
||||
|
||||
select {
|
||||
case err := <-errCh:
|
||||
assert.NoError(t, err)
|
||||
case <-time.After(2 * time.Second):
|
||||
t.Fatal("listener did not stop")
|
||||
}
|
||||
}
|
||||
|
||||
func TestListenGRPC_Bad_StaleSocket(t *testing.T) {
|
||||
// Use a short temp dir — macOS limits Unix socket paths to 104 bytes (sun_path)
|
||||
// and t.TempDir() + this test's long name can exceed that.
|
||||
sockDir, err := os.MkdirTemp("", "grpc")
|
||||
require.NoError(t, err)
|
||||
t.Cleanup(func() { os.RemoveAll(sockDir) })
|
||||
sockPath := filepath.Join(sockDir, "s.sock")
|
||||
|
||||
// Create a stale regular file where the socket should go
|
||||
require.NoError(t, os.WriteFile(sockPath, []byte("stale"), 0644))
|
||||
|
||||
medium := io.NewMockMedium()
|
||||
st, err := store.New(":memory:")
|
||||
require.NoError(t, err)
|
||||
defer st.Close()
|
||||
|
||||
srv := NewServer(medium, st)
|
||||
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
|
||||
errCh := make(chan error, 1)
|
||||
go func() {
|
||||
errCh <- ListenGRPC(ctx, sockPath, srv)
|
||||
}()
|
||||
|
||||
// Should replace stale file and start listening.
|
||||
// Also watch errCh — if ListenGRPC returns early, fail with the actual error.
|
||||
require.Eventually(t, func() bool {
|
||||
select {
|
||||
case err := <-errCh:
|
||||
t.Fatalf("ListenGRPC returned early: %v", err)
|
||||
return false
|
||||
default:
|
||||
}
|
||||
info, err := os.Stat(sockPath)
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
return info.Mode()&os.ModeSocket != 0
|
||||
}, 2*time.Second, 10*time.Millisecond, "socket should replace stale file")
|
||||
|
||||
cancel()
|
||||
<-errCh
|
||||
}
|
||||
44 pkg/coredeno/permissions.go Normal file
@ -0,0 +1,44 @@
package coredeno

import (
	"path/filepath"
	"strings"
)

// CheckPath returns true if the given path is under any of the allowed prefixes.
// Empty allowed list means deny all (secure by default).
func CheckPath(path string, allowed []string) bool {
	if len(allowed) == 0 {
		return false
	}
	clean := filepath.Clean(path)
	for _, prefix := range allowed {
		cleanPrefix := filepath.Clean(prefix)
		// Exact match or path is under the prefix directory.
		// The separator check prevents "data" matching "data-secrets".
		if clean == cleanPrefix || strings.HasPrefix(clean, cleanPrefix+string(filepath.Separator)) {
			return true
		}
	}
	return false
}

// CheckNet returns true if the given host:port is in the allowed list.
func CheckNet(addr string, allowed []string) bool {
	for _, a := range allowed {
		if a == addr {
			return true
		}
	}
	return false
}

// CheckRun returns true if the given command is in the allowed list.
func CheckRun(cmd string, allowed []string) bool {
	for _, a := range allowed {
		if a == cmd {
			return true
		}
	}
	return false
}
40 pkg/coredeno/permissions_test.go Normal file
@ -0,0 +1,40 @@
package coredeno

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestCheckPath_Good_Allowed(t *testing.T) {
	allowed := []string{"./data/", "./config/"}
	assert.True(t, CheckPath("./data/file.txt", allowed))
	assert.True(t, CheckPath("./config/app.json", allowed))
}

func TestCheckPath_Bad_Denied(t *testing.T) {
	allowed := []string{"./data/"}
	assert.False(t, CheckPath("./secrets/key.pem", allowed))
	assert.False(t, CheckPath("../escape/file", allowed))
}

func TestCheckPath_Good_EmptyDenyAll(t *testing.T) {
	assert.False(t, CheckPath("./anything", nil))
	assert.False(t, CheckPath("./anything", []string{}))
}

func TestCheckNet_Good_Allowed(t *testing.T) {
	allowed := []string{"pool.lthn.io:3333", "api.lthn.io:443"}
	assert.True(t, CheckNet("pool.lthn.io:3333", allowed))
}

func TestCheckNet_Bad_Denied(t *testing.T) {
	allowed := []string{"pool.lthn.io:3333"}
	assert.False(t, CheckNet("evil.com:80", allowed))
}

func TestCheckRun_Good(t *testing.T) {
	allowed := []string{"xmrig", "sha256sum"}
	assert.True(t, CheckRun("xmrig", allowed))
	assert.False(t, CheckRun("rm", allowed))
}
1420 pkg/coredeno/proto/coredeno.pb.go Normal file (diff suppressed because it is too large)
81 pkg/coredeno/proto/coredeno.proto Normal file
@ -0,0 +1,81 @@
syntax = "proto3";
package coredeno;
option go_package = "forge.lthn.ai/core/go/pkg/coredeno/proto";

// CoreService is implemented by CoreGO — Deno calls this for I/O.
service CoreService {
  // Filesystem (gated by manifest permissions)
  rpc FileRead(FileReadRequest) returns (FileReadResponse);
  rpc FileWrite(FileWriteRequest) returns (FileWriteResponse);
  rpc FileList(FileListRequest) returns (FileListResponse);
  rpc FileDelete(FileDeleteRequest) returns (FileDeleteResponse);

  // Object store
  rpc StoreGet(StoreGetRequest) returns (StoreGetResponse);
  rpc StoreSet(StoreSetRequest) returns (StoreSetResponse);

  // Process management
  rpc ProcessStart(ProcessStartRequest) returns (ProcessStartResponse);
  rpc ProcessStop(ProcessStopRequest) returns (ProcessStopResponse);
}

// DenoService is implemented by CoreDeno — Go calls this for module lifecycle.
service DenoService {
  rpc LoadModule(LoadModuleRequest) returns (LoadModuleResponse);
  rpc UnloadModule(UnloadModuleRequest) returns (UnloadModuleResponse);
  rpc ModuleStatus(ModuleStatusRequest) returns (ModuleStatusResponse);
}

// --- Core (Go-side) messages ---

message FileReadRequest { string path = 1; string module_code = 2; }
message FileReadResponse { string content = 1; }

message FileWriteRequest { string path = 1; string content = 2; string module_code = 3; }
message FileWriteResponse { bool ok = 1; }

message FileListRequest { string path = 1; string module_code = 2; }
message FileListResponse {
  repeated FileEntry entries = 1;
}
message FileEntry {
  string name = 1;
  bool is_dir = 2;
  int64 size = 3;
}

message FileDeleteRequest { string path = 1; string module_code = 2; }
message FileDeleteResponse { bool ok = 1; }

message StoreGetRequest { string group = 1; string key = 2; }
message StoreGetResponse { string value = 1; bool found = 2; }

message StoreSetRequest { string group = 1; string key = 2; string value = 3; }
message StoreSetResponse { bool ok = 1; }

message ProcessStartRequest { string command = 1; repeated string args = 2; string module_code = 3; }
message ProcessStartResponse { string process_id = 1; }

message ProcessStopRequest { string process_id = 1; }
message ProcessStopResponse { bool ok = 1; }

// --- Deno-side messages ---

message LoadModuleRequest { string code = 1; string entry_point = 2; repeated string permissions = 3; }
message LoadModuleResponse { bool ok = 1; string error = 2; }

message UnloadModuleRequest { string code = 1; }
message UnloadModuleResponse { bool ok = 1; }

message ModuleStatusRequest { string code = 1; }
message ModuleStatusResponse {
  string code = 1;
  enum Status {
    UNKNOWN = 0;
    LOADING = 1;
    RUNNING = 2;
    STOPPED = 3;
    ERRORED = 4;
  }
  Status status = 2;
}
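The Go side of this contract can be exercised over the Unix socket with the generated stubs, as the integration tests in this diff do. A trimmed sketch; the socket path and group/key values are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"log"

	pb "forge.lthn.ai/core/go/pkg/coredeno/proto"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	conn, err := grpc.NewClient(
		"unix:///tmp/core/core.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := pb.NewCoreServiceClient(conn)
	ctx := context.Background()

	// StoreSet / StoreGet round-trip, mirroring the listener and integration tests above.
	if _, err := client.StoreSet(ctx, &pb.StoreSetRequest{Group: "demo", Key: "k", Value: "v"}); err != nil {
		log.Fatal(err)
	}
	resp, err := client.StoreGet(ctx, &pb.StoreGetRequest{Group: "demo", Key: "k"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Found, resp.Value)
}
```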
579 pkg/coredeno/proto/coredeno_grpc.pb.go Normal file
@ -0,0 +1,579 @@
|
|||
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
|
||||
// versions:
|
||||
// - protoc-gen-go-grpc v1.6.1
|
||||
// - protoc v3.21.12
|
||||
// source: pkg/coredeno/proto/coredeno.proto
|
||||
|
||||
package proto
|
||||
|
||||
import (
|
||||
context "context"
|
||||
grpc "google.golang.org/grpc"
|
||||
codes "google.golang.org/grpc/codes"
|
||||
status "google.golang.org/grpc/status"
|
||||
)
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the grpc package it is being compiled against.
|
||||
// Requires gRPC-Go v1.64.0 or later.
|
||||
const _ = grpc.SupportPackageIsVersion9
|
||||
|
||||
const (
|
||||
CoreService_FileRead_FullMethodName = "/coredeno.CoreService/FileRead"
|
||||
CoreService_FileWrite_FullMethodName = "/coredeno.CoreService/FileWrite"
|
||||
CoreService_FileList_FullMethodName = "/coredeno.CoreService/FileList"
|
||||
CoreService_FileDelete_FullMethodName = "/coredeno.CoreService/FileDelete"
|
||||
CoreService_StoreGet_FullMethodName = "/coredeno.CoreService/StoreGet"
|
||||
CoreService_StoreSet_FullMethodName = "/coredeno.CoreService/StoreSet"
|
||||
CoreService_ProcessStart_FullMethodName = "/coredeno.CoreService/ProcessStart"
|
||||
CoreService_ProcessStop_FullMethodName = "/coredeno.CoreService/ProcessStop"
|
||||
)
|
||||
|
||||
// CoreServiceClient is the client API for CoreService service.
|
||||
//
|
||||
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
|
||||
//
|
||||
// CoreService is implemented by CoreGO — Deno calls this for I/O.
|
||||
type CoreServiceClient interface {
|
||||
// Filesystem (gated by manifest permissions)
|
||||
FileRead(ctx context.Context, in *FileReadRequest, opts ...grpc.CallOption) (*FileReadResponse, error)
|
||||
FileWrite(ctx context.Context, in *FileWriteRequest, opts ...grpc.CallOption) (*FileWriteResponse, error)
|
||||
FileList(ctx context.Context, in *FileListRequest, opts ...grpc.CallOption) (*FileListResponse, error)
|
||||
FileDelete(ctx context.Context, in *FileDeleteRequest, opts ...grpc.CallOption) (*FileDeleteResponse, error)
|
||||
// Object store
|
||||
StoreGet(ctx context.Context, in *StoreGetRequest, opts ...grpc.CallOption) (*StoreGetResponse, error)
|
||||
StoreSet(ctx context.Context, in *StoreSetRequest, opts ...grpc.CallOption) (*StoreSetResponse, error)
|
||||
// Process management
|
||||
ProcessStart(ctx context.Context, in *ProcessStartRequest, opts ...grpc.CallOption) (*ProcessStartResponse, error)
|
||||
ProcessStop(ctx context.Context, in *ProcessStopRequest, opts ...grpc.CallOption) (*ProcessStopResponse, error)
|
||||
}
|
||||
|
||||
type coreServiceClient struct {
|
||||
cc grpc.ClientConnInterface
|
||||
}
|
||||
|
||||
func NewCoreServiceClient(cc grpc.ClientConnInterface) CoreServiceClient {
|
||||
return &coreServiceClient{cc}
|
||||
}
|
||||
|
||||
func (c *coreServiceClient) FileRead(ctx context.Context, in *FileReadRequest, opts ...grpc.CallOption) (*FileReadResponse, error) {
|
||||
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
|
||||
out := new(FileReadResponse)
|
||||
err := c.cc.Invoke(ctx, CoreService_FileRead_FullMethodName, in, out, cOpts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *coreServiceClient) FileWrite(ctx context.Context, in *FileWriteRequest, opts ...grpc.CallOption) (*FileWriteResponse, error) {
|
||||
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
|
||||
out := new(FileWriteResponse)
|
||||
err := c.cc.Invoke(ctx, CoreService_FileWrite_FullMethodName, in, out, cOpts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *coreServiceClient) FileList(ctx context.Context, in *FileListRequest, opts ...grpc.CallOption) (*FileListResponse, error) {
|
||||
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
|
||||
out := new(FileListResponse)
|
||||
err := c.cc.Invoke(ctx, CoreService_FileList_FullMethodName, in, out, cOpts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *coreServiceClient) FileDelete(ctx context.Context, in *FileDeleteRequest, opts ...grpc.CallOption) (*FileDeleteResponse, error) {
|
||||
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
|
||||
out := new(FileDeleteResponse)
|
||||
err := c.cc.Invoke(ctx, CoreService_FileDelete_FullMethodName, in, out, cOpts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *coreServiceClient) StoreGet(ctx context.Context, in *StoreGetRequest, opts ...grpc.CallOption) (*StoreGetResponse, error) {
|
||||
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
|
||||
out := new(StoreGetResponse)
|
||||
err := c.cc.Invoke(ctx, CoreService_StoreGet_FullMethodName, in, out, cOpts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *coreServiceClient) StoreSet(ctx context.Context, in *StoreSetRequest, opts ...grpc.CallOption) (*StoreSetResponse, error) {
|
||||
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
|
||||
out := new(StoreSetResponse)
|
||||
err := c.cc.Invoke(ctx, CoreService_StoreSet_FullMethodName, in, out, cOpts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *coreServiceClient) ProcessStart(ctx context.Context, in *ProcessStartRequest, opts ...grpc.CallOption) (*ProcessStartResponse, error) {
|
||||
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
|
||||
out := new(ProcessStartResponse)
|
||||
err := c.cc.Invoke(ctx, CoreService_ProcessStart_FullMethodName, in, out, cOpts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *coreServiceClient) ProcessStop(ctx context.Context, in *ProcessStopRequest, opts ...grpc.CallOption) (*ProcessStopResponse, error) {
|
||||
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
|
||||
out := new(ProcessStopResponse)
|
||||
err := c.cc.Invoke(ctx, CoreService_ProcessStop_FullMethodName, in, out, cOpts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
// CoreServiceServer is the server API for CoreService service.
|
||||
// All implementations must embed UnimplementedCoreServiceServer
|
||||
// for forward compatibility.
|
||||
//
|
||||
// CoreService is implemented by CoreGO — Deno calls this for I/O.
|
||||
type CoreServiceServer interface {
|
||||
// Filesystem (gated by manifest permissions)
|
||||
FileRead(context.Context, *FileReadRequest) (*FileReadResponse, error)
|
||||
FileWrite(context.Context, *FileWriteRequest) (*FileWriteResponse, error)
|
||||
FileList(context.Context, *FileListRequest) (*FileListResponse, error)
|
||||
FileDelete(context.Context, *FileDeleteRequest) (*FileDeleteResponse, error)
|
||||
// Object store
|
||||
StoreGet(context.Context, *StoreGetRequest) (*StoreGetResponse, error)
|
||||
StoreSet(context.Context, *StoreSetRequest) (*StoreSetResponse, error)
|
||||
// Process management
|
||||
ProcessStart(context.Context, *ProcessStartRequest) (*ProcessStartResponse, error)
|
||||
ProcessStop(context.Context, *ProcessStopRequest) (*ProcessStopResponse, error)
|
||||
mustEmbedUnimplementedCoreServiceServer()
|
||||
}
|
||||
|
||||
// UnimplementedCoreServiceServer must be embedded to have
|
||||
// forward compatible implementations.
|
||||
//
|
||||
// NOTE: this should be embedded by value instead of pointer to avoid a nil
|
||||
// pointer dereference when methods are called.
|
||||
type UnimplementedCoreServiceServer struct{}
|
||||
|
||||
func (UnimplementedCoreServiceServer) FileRead(context.Context, *FileReadRequest) (*FileReadResponse, error) {
|
||||
return nil, status.Error(codes.Unimplemented, "method FileRead not implemented")
|
||||
}
|
||||
func (UnimplementedCoreServiceServer) FileWrite(context.Context, *FileWriteRequest) (*FileWriteResponse, error) {
|
||||
return nil, status.Error(codes.Unimplemented, "method FileWrite not implemented")
|
||||
}
|
||||
func (UnimplementedCoreServiceServer) FileList(context.Context, *FileListRequest) (*FileListResponse, error) {
|
||||
return nil, status.Error(codes.Unimplemented, "method FileList not implemented")
|
||||
}
|
||||
func (UnimplementedCoreServiceServer) FileDelete(context.Context, *FileDeleteRequest) (*FileDeleteResponse, error) {
|
||||
return nil, status.Error(codes.Unimplemented, "method FileDelete not implemented")
|
||||
}
|
||||
func (UnimplementedCoreServiceServer) StoreGet(context.Context, *StoreGetRequest) (*StoreGetResponse, error) {
|
||||
return nil, status.Error(codes.Unimplemented, "method StoreGet not implemented")
|
||||
}
|
||||
func (UnimplementedCoreServiceServer) StoreSet(context.Context, *StoreSetRequest) (*StoreSetResponse, error) {
|
||||
return nil, status.Error(codes.Unimplemented, "method StoreSet not implemented")
|
||||
}
|
||||
func (UnimplementedCoreServiceServer) ProcessStart(context.Context, *ProcessStartRequest) (*ProcessStartResponse, error) {
|
||||
return nil, status.Error(codes.Unimplemented, "method ProcessStart not implemented")
|
||||
}
|
||||
func (UnimplementedCoreServiceServer) ProcessStop(context.Context, *ProcessStopRequest) (*ProcessStopResponse, error) {
|
||||
return nil, status.Error(codes.Unimplemented, "method ProcessStop not implemented")
|
||||
}
|
||||
func (UnimplementedCoreServiceServer) mustEmbedUnimplementedCoreServiceServer() {}
|
||||
func (UnimplementedCoreServiceServer) testEmbeddedByValue() {}
|
||||
|
||||
// UnsafeCoreServiceServer may be embedded to opt out of forward compatibility for this service.
|
||||
// Use of this interface is not recommended, as added methods to CoreServiceServer will
|
||||
// result in compilation errors.
|
||||
type UnsafeCoreServiceServer interface {
|
||||
mustEmbedUnimplementedCoreServiceServer()
|
||||
}
|
||||
|
||||
func RegisterCoreServiceServer(s grpc.ServiceRegistrar, srv CoreServiceServer) {
|
||||
// If the following call panics, it indicates UnimplementedCoreServiceServer was
|
||||
// embedded by pointer and is nil. This will cause panics if an
|
||||
// unimplemented method is ever invoked, so we test this at initialization
|
||||
// time to prevent it from happening at runtime later due to I/O.
|
||||
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
|
||||
t.testEmbeddedByValue()
|
||||
}
|
||||
s.RegisterService(&CoreService_ServiceDesc, srv)
|
||||
}
|
||||
|
||||
func _CoreService_FileRead_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(FileReadRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(CoreServiceServer).FileRead(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: CoreService_FileRead_FullMethodName,
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(CoreServiceServer).FileRead(ctx, req.(*FileReadRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _CoreService_FileWrite_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(FileWriteRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(CoreServiceServer).FileWrite(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: CoreService_FileWrite_FullMethodName,
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(CoreServiceServer).FileWrite(ctx, req.(*FileWriteRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _CoreService_FileList_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(FileListRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(CoreServiceServer).FileList(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: CoreService_FileList_FullMethodName,
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(CoreServiceServer).FileList(ctx, req.(*FileListRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _CoreService_FileDelete_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(FileDeleteRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(CoreServiceServer).FileDelete(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: CoreService_FileDelete_FullMethodName,
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(CoreServiceServer).FileDelete(ctx, req.(*FileDeleteRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _CoreService_StoreGet_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(StoreGetRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(CoreServiceServer).StoreGet(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: CoreService_StoreGet_FullMethodName,
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(CoreServiceServer).StoreGet(ctx, req.(*StoreGetRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _CoreService_StoreSet_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(StoreSetRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(CoreServiceServer).StoreSet(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: CoreService_StoreSet_FullMethodName,
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(CoreServiceServer).StoreSet(ctx, req.(*StoreSetRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _CoreService_ProcessStart_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(ProcessStartRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(CoreServiceServer).ProcessStart(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: CoreService_ProcessStart_FullMethodName,
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(CoreServiceServer).ProcessStart(ctx, req.(*ProcessStartRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _CoreService_ProcessStop_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(ProcessStopRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(CoreServiceServer).ProcessStop(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: CoreService_ProcessStop_FullMethodName,
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(CoreServiceServer).ProcessStop(ctx, req.(*ProcessStopRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
// CoreService_ServiceDesc is the grpc.ServiceDesc for CoreService service.
|
||||
// It's only intended for direct use with grpc.RegisterService,
|
||||
// and not to be introspected or modified (even as a copy)
|
||||
var CoreService_ServiceDesc = grpc.ServiceDesc{
|
||||
ServiceName: "coredeno.CoreService",
|
||||
HandlerType: (*CoreServiceServer)(nil),
|
||||
Methods: []grpc.MethodDesc{
|
||||
{
|
||||
MethodName: "FileRead",
|
||||
Handler: _CoreService_FileRead_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "FileWrite",
|
||||
Handler: _CoreService_FileWrite_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "FileList",
|
||||
Handler: _CoreService_FileList_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "FileDelete",
|
||||
Handler: _CoreService_FileDelete_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "StoreGet",
|
||||
Handler: _CoreService_StoreGet_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "StoreSet",
|
||||
Handler: _CoreService_StoreSet_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "ProcessStart",
|
||||
Handler: _CoreService_ProcessStart_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "ProcessStop",
|
||||
Handler: _CoreService_ProcessStop_Handler,
|
||||
},
|
||||
},
|
||||
Streams: []grpc.StreamDesc{},
|
||||
Metadata: "pkg/coredeno/proto/coredeno.proto",
|
||||
}
|
||||
|
||||
const (
|
||||
DenoService_LoadModule_FullMethodName = "/coredeno.DenoService/LoadModule"
|
||||
DenoService_UnloadModule_FullMethodName = "/coredeno.DenoService/UnloadModule"
|
||||
DenoService_ModuleStatus_FullMethodName = "/coredeno.DenoService/ModuleStatus"
|
||||
)
|
||||
|
||||
// DenoServiceClient is the client API for DenoService service.
|
||||
//
|
||||
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
|
||||
//
|
||||
// DenoService is implemented by CoreDeno — Go calls this for module lifecycle.
|
||||
type DenoServiceClient interface {
|
||||
LoadModule(ctx context.Context, in *LoadModuleRequest, opts ...grpc.CallOption) (*LoadModuleResponse, error)
|
||||
UnloadModule(ctx context.Context, in *UnloadModuleRequest, opts ...grpc.CallOption) (*UnloadModuleResponse, error)
|
||||
ModuleStatus(ctx context.Context, in *ModuleStatusRequest, opts ...grpc.CallOption) (*ModuleStatusResponse, error)
|
||||
}
|
||||
|
||||
type denoServiceClient struct {
|
||||
cc grpc.ClientConnInterface
|
||||
}
|
||||
|
||||
func NewDenoServiceClient(cc grpc.ClientConnInterface) DenoServiceClient {
|
||||
return &denoServiceClient{cc}
|
||||
}
|
||||
|
||||
func (c *denoServiceClient) LoadModule(ctx context.Context, in *LoadModuleRequest, opts ...grpc.CallOption) (*LoadModuleResponse, error) {
|
||||
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
|
||||
out := new(LoadModuleResponse)
|
||||
err := c.cc.Invoke(ctx, DenoService_LoadModule_FullMethodName, in, out, cOpts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *denoServiceClient) UnloadModule(ctx context.Context, in *UnloadModuleRequest, opts ...grpc.CallOption) (*UnloadModuleResponse, error) {
|
||||
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
|
||||
out := new(UnloadModuleResponse)
|
||||
err := c.cc.Invoke(ctx, DenoService_UnloadModule_FullMethodName, in, out, cOpts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *denoServiceClient) ModuleStatus(ctx context.Context, in *ModuleStatusRequest, opts ...grpc.CallOption) (*ModuleStatusResponse, error) {
|
||||
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
|
||||
out := new(ModuleStatusResponse)
|
||||
err := c.cc.Invoke(ctx, DenoService_ModuleStatus_FullMethodName, in, out, cOpts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
// DenoServiceServer is the server API for DenoService service.
|
||||
// All implementations must embed UnimplementedDenoServiceServer
|
||||
// for forward compatibility.
|
||||
//
|
||||
// DenoService is implemented by CoreDeno — Go calls this for module lifecycle.
|
||||
type DenoServiceServer interface {
|
||||
LoadModule(context.Context, *LoadModuleRequest) (*LoadModuleResponse, error)
|
||||
UnloadModule(context.Context, *UnloadModuleRequest) (*UnloadModuleResponse, error)
|
||||
ModuleStatus(context.Context, *ModuleStatusRequest) (*ModuleStatusResponse, error)
|
||||
mustEmbedUnimplementedDenoServiceServer()
|
||||
}
|
||||
|
||||
// UnimplementedDenoServiceServer must be embedded to have
|
||||
// forward compatible implementations.
|
||||
//
|
||||
// NOTE: this should be embedded by value instead of pointer to avoid a nil
|
||||
// pointer dereference when methods are called.
|
||||
type UnimplementedDenoServiceServer struct{}
|
||||
|
||||
func (UnimplementedDenoServiceServer) LoadModule(context.Context, *LoadModuleRequest) (*LoadModuleResponse, error) {
|
||||
return nil, status.Error(codes.Unimplemented, "method LoadModule not implemented")
|
||||
}
|
||||
func (UnimplementedDenoServiceServer) UnloadModule(context.Context, *UnloadModuleRequest) (*UnloadModuleResponse, error) {
|
||||
return nil, status.Error(codes.Unimplemented, "method UnloadModule not implemented")
|
||||
}
|
||||
func (UnimplementedDenoServiceServer) ModuleStatus(context.Context, *ModuleStatusRequest) (*ModuleStatusResponse, error) {
|
||||
return nil, status.Error(codes.Unimplemented, "method ModuleStatus not implemented")
|
||||
}
|
||||
func (UnimplementedDenoServiceServer) mustEmbedUnimplementedDenoServiceServer() {}
|
||||
func (UnimplementedDenoServiceServer) testEmbeddedByValue() {}
|
||||
|
||||
// UnsafeDenoServiceServer may be embedded to opt out of forward compatibility for this service.
|
||||
// Use of this interface is not recommended, as added methods to DenoServiceServer will
|
||||
// result in compilation errors.
|
||||
type UnsafeDenoServiceServer interface {
|
||||
mustEmbedUnimplementedDenoServiceServer()
|
||||
}
|
||||
|
||||
func RegisterDenoServiceServer(s grpc.ServiceRegistrar, srv DenoServiceServer) {
|
||||
// If the following call panics, it indicates UnimplementedDenoServiceServer was
|
||||
// embedded by pointer and is nil. This will cause panics if an
|
||||
// unimplemented method is ever invoked, so we test this at initialization
|
||||
// time to prevent it from happening at runtime later due to I/O.
|
||||
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
|
||||
t.testEmbeddedByValue()
|
||||
}
|
||||
s.RegisterService(&DenoService_ServiceDesc, srv)
|
||||
}
|
||||
|
||||
func _DenoService_LoadModule_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(LoadModuleRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(DenoServiceServer).LoadModule(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: DenoService_LoadModule_FullMethodName,
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(DenoServiceServer).LoadModule(ctx, req.(*LoadModuleRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _DenoService_UnloadModule_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(UnloadModuleRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(DenoServiceServer).UnloadModule(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: DenoService_UnloadModule_FullMethodName,
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(DenoServiceServer).UnloadModule(ctx, req.(*UnloadModuleRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _DenoService_ModuleStatus_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(ModuleStatusRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(DenoServiceServer).ModuleStatus(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: DenoService_ModuleStatus_FullMethodName,
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(DenoServiceServer).ModuleStatus(ctx, req.(*ModuleStatusRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
// DenoService_ServiceDesc is the grpc.ServiceDesc for DenoService service.
|
||||
// It's only intended for direct use with grpc.RegisterService,
|
||||
// and not to be introspected or modified (even as a copy)
|
||||
var DenoService_ServiceDesc = grpc.ServiceDesc{
|
||||
ServiceName: "coredeno.DenoService",
|
||||
HandlerType: (*DenoServiceServer)(nil),
|
||||
Methods: []grpc.MethodDesc{
|
||||
{
|
||||
MethodName: "LoadModule",
|
||||
Handler: _DenoService_LoadModule_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "UnloadModule",
|
||||
Handler: _DenoService_UnloadModule_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "ModuleStatus",
|
||||
Handler: _DenoService_ModuleStatus_Handler,
|
||||
},
|
||||
},
|
||||
Streams: []grpc.StreamDesc{},
|
||||
Metadata: "pkg/coredeno/proto/coredeno.proto",
|
||||
}
|
||||
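The generated client side can also be exercised from Go, for example in an integration test against a running CoreService socket. This is a minimal sketch only: the socket path is an assumption, and in the real system the Deno runtime, not Go, is the CoreService client. It mirrors the StoreSet/StoreGet round trip that main.ts (later in this diff) uses as a connectivity check.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	pb "forge.lthn.ai/core/go/pkg/coredeno/proto"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Hypothetical socket path; match it to whatever the host listens on.
	conn, err := grpc.NewClient("unix:///tmp/coredeno-core.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := pb.NewCoreServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// Same round trip the runtime performs at startup to verify the bridge.
	if _, err := client.StoreSet(ctx, &pb.StoreSetRequest{Group: "_coredeno", Key: "status", Value: "connected"}); err != nil {
		log.Fatal(err)
	}
	resp, err := client.StoreGet(ctx, &pb.StoreGetRequest{Group: "_coredeno", Key: "status"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.GetFound(), resp.GetValue())
}
```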
pkg/coredeno/runtime/client.ts  (new file, 95 lines)
@@ -0,0 +1,95 @@
// CoreService gRPC client — Deno calls Go for I/O operations.
|
||||
// All filesystem, store, and process operations route through this client.
|
||||
|
||||
import * as grpc from "@grpc/grpc-js";
|
||||
import * as protoLoader from "@grpc/proto-loader";
|
||||
import { dirname, join } from "node:path";
|
||||
import { fileURLToPath } from "node:url";
|
||||
|
||||
const __dirname = dirname(fileURLToPath(import.meta.url));
|
||||
const PROTO_PATH = join(__dirname, "..", "proto", "coredeno.proto");
|
||||
|
||||
let packageDef: protoLoader.PackageDefinition | null = null;
|
||||
|
||||
function getProto(): any {
|
||||
if (!packageDef) {
|
||||
packageDef = protoLoader.loadSync(PROTO_PATH, {
|
||||
keepCase: true,
|
||||
longs: String,
|
||||
enums: String,
|
||||
defaults: true,
|
||||
oneofs: true,
|
||||
});
|
||||
}
|
||||
return grpc.loadPackageDefinition(packageDef).coredeno as any;
|
||||
}
|
||||
|
||||
export interface CoreClient {
|
||||
raw: any;
|
||||
storeGet(group: string, key: string): Promise<{ value: string; found: boolean }>;
|
||||
storeSet(group: string, key: string, value: string): Promise<{ ok: boolean }>;
|
||||
fileRead(path: string, moduleCode: string): Promise<{ content: string }>;
|
||||
fileWrite(path: string, content: string, moduleCode: string): Promise<{ ok: boolean }>;
|
||||
fileList(path: string, moduleCode: string): Promise<{ entries: Array<{ name: string; is_dir: boolean; size: number }> }>;
|
||||
fileDelete(path: string, moduleCode: string): Promise<{ ok: boolean }>;
|
||||
processStart(command: string, args: string[], moduleCode: string): Promise<{ process_id: string }>;
|
||||
processStop(processId: string): Promise<{ ok: boolean }>;
|
||||
close(): void;
|
||||
}
|
||||
|
||||
function promisify<T>(client: any, method: string, request: any): Promise<T> {
|
||||
return new Promise((resolve, reject) => {
|
||||
client[method](request, (err: Error | null, response: T) => {
|
||||
if (err) reject(err);
|
||||
else resolve(response);
|
||||
});
|
||||
});
|
||||
}
|
||||
|
||||
export function createCoreClient(socketPath: string): CoreClient {
|
||||
const proto = getProto();
|
||||
const client = new proto.CoreService(
|
||||
`unix:${socketPath}`,
|
||||
grpc.credentials.createInsecure(),
|
||||
);
|
||||
|
||||
return {
|
||||
raw: client,
|
||||
|
||||
storeGet(group: string, key: string) {
|
||||
return promisify(client, "StoreGet", { group, key });
|
||||
},
|
||||
|
||||
storeSet(group: string, key: string, value: string) {
|
||||
return promisify(client, "StoreSet", { group, key, value });
|
||||
},
|
||||
|
||||
fileRead(path: string, moduleCode: string) {
|
||||
return promisify(client, "FileRead", { path, module_code: moduleCode });
|
||||
},
|
||||
|
||||
fileWrite(path: string, content: string, moduleCode: string) {
|
||||
return promisify(client, "FileWrite", { path, content, module_code: moduleCode });
|
||||
},
|
||||
|
||||
fileList(path: string, moduleCode: string) {
|
||||
return promisify(client, "FileList", { path, module_code: moduleCode });
|
||||
},
|
||||
|
||||
fileDelete(path: string, moduleCode: string) {
|
||||
return promisify(client, "FileDelete", { path, module_code: moduleCode });
|
||||
},
|
||||
|
||||
processStart(command: string, args: string[], moduleCode: string) {
|
||||
return promisify(client, "ProcessStart", { command, args, module_code: moduleCode });
|
||||
},
|
||||
|
||||
processStop(processId: string) {
|
||||
return promisify(client, "ProcessStop", { process_id: processId });
|
||||
},
|
||||
|
||||
close() {
|
||||
client.close();
|
||||
},
|
||||
};
|
||||
}
|
||||
pkg/coredeno/runtime/deno.json  (new file, 8 lines)
@@ -0,0 +1,8 @@
{
  "imports": {
    "@grpc/grpc-js": "npm:@grpc/grpc-js@^1.12",
    "@grpc/proto-loader": "npm:@grpc/proto-loader@^0.7"
  },
  "nodeModulesDir": "none",
  "unstable": ["worker-options"]
}
pkg/coredeno/runtime/deno.lock  (generated, new file, 193 lines)
@@ -0,0 +1,193 @@
{
|
||||
"version": "5",
|
||||
"specifiers": {
|
||||
"npm:@grpc/grpc-js@^1.12.0": "1.14.3",
|
||||
"npm:@grpc/proto-loader@0.7": "0.7.15"
|
||||
},
|
||||
"npm": {
|
||||
"@grpc/grpc-js@1.14.3": {
|
||||
"integrity": "sha512-Iq8QQQ/7X3Sac15oB6p0FmUg/klxQvXLeileoqrTRGJYLV+/9tubbr9ipz0GKHjmXVsgFPo/+W+2cA8eNcR+XA==",
|
||||
"dependencies": [
|
||||
"@grpc/proto-loader@0.8.0",
|
||||
"@js-sdsl/ordered-map"
|
||||
]
|
||||
},
|
||||
"@grpc/proto-loader@0.7.15": {
|
||||
"integrity": "sha512-tMXdRCfYVixjuFK+Hk0Q1s38gV9zDiDJfWL3h1rv4Qc39oILCu1TRTDt7+fGUI8K4G1Fj125Hx/ru3azECWTyQ==",
|
||||
"dependencies": [
|
||||
"lodash.camelcase",
|
||||
"long",
|
||||
"protobufjs",
|
||||
"yargs"
|
||||
],
|
||||
"bin": true
|
||||
},
|
||||
"@grpc/proto-loader@0.8.0": {
|
||||
"integrity": "sha512-rc1hOQtjIWGxcxpb9aHAfLpIctjEnsDehj0DAiVfBlmT84uvR0uUtN2hEi/ecvWVjXUGf5qPF4qEgiLOx1YIMQ==",
|
||||
"dependencies": [
|
||||
"lodash.camelcase",
|
||||
"long",
|
||||
"protobufjs",
|
||||
"yargs"
|
||||
],
|
||||
"bin": true
|
||||
},
|
||||
"@js-sdsl/ordered-map@4.4.2": {
|
||||
"integrity": "sha512-iUKgm52T8HOE/makSxjqoWhe95ZJA1/G1sYsGev2JDKUSS14KAgg1LHb+Ba+IPow0xflbnSkOsZcO08C7w1gYw=="
|
||||
},
|
||||
"@protobufjs/aspromise@1.1.2": {
|
||||
"integrity": "sha512-j+gKExEuLmKwvz3OgROXtrJ2UG2x8Ch2YZUxahh+s1F2HZ+wAceUNLkvy6zKCPVRkU++ZWQrdxsUeQXmcg4uoQ=="
|
||||
},
|
||||
"@protobufjs/base64@1.1.2": {
|
||||
"integrity": "sha512-AZkcAA5vnN/v4PDqKyMR5lx7hZttPDgClv83E//FMNhR2TMcLUhfRUBHCmSl0oi9zMgDDqRUJkSxO3wm85+XLg=="
|
||||
},
|
||||
"@protobufjs/codegen@2.0.4": {
|
||||
"integrity": "sha512-YyFaikqM5sH0ziFZCN3xDC7zeGaB/d0IUb9CATugHWbd1FRFwWwt4ld4OYMPWu5a3Xe01mGAULCdqhMlPl29Jg=="
|
||||
},
|
||||
"@protobufjs/eventemitter@1.1.0": {
|
||||
"integrity": "sha512-j9ednRT81vYJ9OfVuXG6ERSTdEL1xVsNgqpkxMsbIabzSo3goCjDIveeGv5d03om39ML71RdmrGNjG5SReBP/Q=="
|
||||
},
|
||||
"@protobufjs/fetch@1.1.0": {
|
||||
"integrity": "sha512-lljVXpqXebpsijW71PZaCYeIcE5on1w5DlQy5WH6GLbFryLUrBD4932W/E2BSpfRJWseIL4v/KPgBFxDOIdKpQ==",
|
||||
"dependencies": [
|
||||
"@protobufjs/aspromise",
|
||||
"@protobufjs/inquire"
|
||||
]
|
||||
},
|
||||
"@protobufjs/float@1.0.2": {
|
||||
"integrity": "sha512-Ddb+kVXlXst9d+R9PfTIxh1EdNkgoRe5tOX6t01f1lYWOvJnSPDBlG241QLzcyPdoNTsblLUdujGSE4RzrTZGQ=="
|
||||
},
|
||||
"@protobufjs/inquire@1.1.0": {
|
||||
"integrity": "sha512-kdSefcPdruJiFMVSbn801t4vFK7KB/5gd2fYvrxhuJYg8ILrmn9SKSX2tZdV6V+ksulWqS7aXjBcRXl3wHoD9Q=="
|
||||
},
|
||||
"@protobufjs/path@1.1.2": {
|
||||
"integrity": "sha512-6JOcJ5Tm08dOHAbdR3GrvP+yUUfkjG5ePsHYczMFLq3ZmMkAD98cDgcT2iA1lJ9NVwFd4tH/iSSoe44YWkltEA=="
|
||||
},
|
||||
"@protobufjs/pool@1.1.0": {
|
||||
"integrity": "sha512-0kELaGSIDBKvcgS4zkjz1PeddatrjYcmMWOlAuAPwAeccUrPHdUqo/J6LiymHHEiJT5NrF1UVwxY14f+fy4WQw=="
|
||||
},
|
||||
"@protobufjs/utf8@1.1.0": {
|
||||
"integrity": "sha512-Vvn3zZrhQZkkBE8LSuW3em98c0FwgO4nxzv6OdSxPKJIEKY2bGbHn+mhGIPerzI4twdxaP8/0+06HBpwf345Lw=="
|
||||
},
|
||||
"@types/node@25.2.3": {
|
||||
"integrity": "sha512-m0jEgYlYz+mDJZ2+F4v8D1AyQb+QzsNqRuI7xg1VQX/KlKS0qT9r1Mo16yo5F/MtifXFgaofIFsdFMox2SxIbQ==",
|
||||
"dependencies": [
|
||||
"undici-types"
|
||||
]
|
||||
},
|
||||
"ansi-regex@5.0.1": {
|
||||
"integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="
|
||||
},
|
||||
"ansi-styles@4.3.0": {
|
||||
"integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==",
|
||||
"dependencies": [
|
||||
"color-convert"
|
||||
]
|
||||
},
|
||||
"cliui@8.0.1": {
|
||||
"integrity": "sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ==",
|
||||
"dependencies": [
|
||||
"string-width",
|
||||
"strip-ansi",
|
||||
"wrap-ansi"
|
||||
]
|
||||
},
|
||||
"color-convert@2.0.1": {
|
||||
"integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
|
||||
"dependencies": [
|
||||
"color-name"
|
||||
]
|
||||
},
|
||||
"color-name@1.1.4": {
|
||||
"integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA=="
|
||||
},
|
||||
"emoji-regex@8.0.0": {
|
||||
"integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="
|
||||
},
|
||||
"escalade@3.2.0": {
|
||||
"integrity": "sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA=="
|
||||
},
|
||||
"get-caller-file@2.0.5": {
|
||||
"integrity": "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg=="
|
||||
},
|
||||
"is-fullwidth-code-point@3.0.0": {
|
||||
"integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg=="
|
||||
},
|
||||
"lodash.camelcase@4.3.0": {
|
||||
"integrity": "sha512-TwuEnCnxbc3rAvhf/LbG7tJUDzhqXyFnv3dtzLOPgCG/hODL7WFnsbwktkD7yUV0RrreP/l1PALq/YSg6VvjlA=="
|
||||
},
|
||||
"long@5.3.2": {
|
||||
"integrity": "sha512-mNAgZ1GmyNhD7AuqnTG3/VQ26o760+ZYBPKjPvugO8+nLbYfX6TVpJPseBvopbdY+qpZ/lKUnmEc1LeZYS3QAA=="
|
||||
},
|
||||
"protobufjs@7.5.4": {
|
||||
"integrity": "sha512-CvexbZtbov6jW2eXAvLukXjXUW1TzFaivC46BpWc/3BpcCysb5Vffu+B3XHMm8lVEuy2Mm4XGex8hBSg1yapPg==",
|
||||
"dependencies": [
|
||||
"@protobufjs/aspromise",
|
||||
"@protobufjs/base64",
|
||||
"@protobufjs/codegen",
|
||||
"@protobufjs/eventemitter",
|
||||
"@protobufjs/fetch",
|
||||
"@protobufjs/float",
|
||||
"@protobufjs/inquire",
|
||||
"@protobufjs/path",
|
||||
"@protobufjs/pool",
|
||||
"@protobufjs/utf8",
|
||||
"@types/node",
|
||||
"long"
|
||||
],
|
||||
"scripts": true
|
||||
},
|
||||
"require-directory@2.1.1": {
|
||||
"integrity": "sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q=="
|
||||
},
|
||||
"string-width@4.2.3": {
|
||||
"integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==",
|
||||
"dependencies": [
|
||||
"emoji-regex",
|
||||
"is-fullwidth-code-point",
|
||||
"strip-ansi"
|
||||
]
|
||||
},
|
||||
"strip-ansi@6.0.1": {
|
||||
"integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==",
|
||||
"dependencies": [
|
||||
"ansi-regex"
|
||||
]
|
||||
},
|
||||
"undici-types@7.16.0": {
|
||||
"integrity": "sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw=="
|
||||
},
|
||||
"wrap-ansi@7.0.0": {
|
||||
"integrity": "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==",
|
||||
"dependencies": [
|
||||
"ansi-styles",
|
||||
"string-width",
|
||||
"strip-ansi"
|
||||
]
|
||||
},
|
||||
"y18n@5.0.8": {
|
||||
"integrity": "sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA=="
|
||||
},
|
||||
"yargs-parser@21.1.1": {
|
||||
"integrity": "sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw=="
|
||||
},
|
||||
"yargs@17.7.2": {
|
||||
"integrity": "sha512-7dSzzRQ++CKnNI/krKnYRV7JKKPUXMEh61soaHKg9mrWEhzFWhFnxPxGl+69cD1Ou63C13NUPCnmIcrvqCuM6w==",
|
||||
"dependencies": [
|
||||
"cliui",
|
||||
"escalade",
|
||||
"get-caller-file",
|
||||
"require-directory",
|
||||
"string-width",
|
||||
"y18n",
|
||||
"yargs-parser"
|
||||
]
|
||||
}
|
||||
},
|
||||
"workspace": {
|
||||
"dependencies": [
|
||||
"npm:@grpc/grpc-js@^1.12.0",
|
||||
"npm:@grpc/proto-loader@0.7"
|
||||
]
|
||||
}
|
||||
}
|
||||
pkg/coredeno/runtime/main.ts  (new file, 106 lines)
@@ -0,0 +1,106 @@
// CoreDeno Runtime Entry Point.
// Connects to CoreGO's CoreService via gRPC over a Unix socket, and exposes
// DenoService for module lifecycle management as newline-delimited JSON-RPC
// on a second Unix socket (see server.ts).
|
||||
|
||||
// Must be first import — patches http2 before @grpc/grpc-js loads.
|
||||
import "./polyfill.ts";
|
||||
|
||||
import { createCoreClient, type CoreClient } from "./client.ts";
|
||||
import { startDenoServer, type DenoServer } from "./server.ts";
|
||||
import { ModuleRegistry } from "./modules.ts";
|
||||
|
||||
// Read required environment variables
|
||||
const coreSocket = Deno.env.get("CORE_SOCKET");
|
||||
if (!coreSocket) {
|
||||
console.error("FATAL: CORE_SOCKET environment variable not set");
|
||||
Deno.exit(1);
|
||||
}
|
||||
|
||||
const denoSocket = Deno.env.get("DENO_SOCKET");
|
||||
if (!denoSocket) {
|
||||
console.error("FATAL: DENO_SOCKET environment variable not set");
|
||||
Deno.exit(1);
|
||||
}
|
||||
|
||||
console.error(`CoreDeno: CORE_SOCKET=${coreSocket}`);
|
||||
console.error(`CoreDeno: DENO_SOCKET=${denoSocket}`);
|
||||
|
||||
// 1. Create module registry
|
||||
const registry = new ModuleRegistry();
|
||||
|
||||
// 2. Start DenoService server (Go calls us here via JSON-RPC over Unix socket)
|
||||
let denoServer: DenoServer;
|
||||
try {
|
||||
denoServer = await startDenoServer(denoSocket, registry);
|
||||
console.error("CoreDeno: DenoService server started");
|
||||
} catch (err) {
|
||||
console.error(`FATAL: failed to start DenoService server: ${err}`);
|
||||
Deno.exit(1);
|
||||
}
|
||||
|
||||
// 3. Connect to CoreService (we call Go here) with retry
|
||||
let coreClient: CoreClient;
|
||||
{
|
||||
coreClient = createCoreClient(coreSocket);
|
||||
const maxRetries = 20;
|
||||
let connected = false;
|
||||
let lastErr: unknown;
|
||||
for (let i = 0; i < maxRetries; i++) {
|
||||
try {
|
||||
const timeoutCall = <T>(p: Promise<T>): Promise<T> =>
|
||||
Promise.race([
|
||||
p,
|
||||
new Promise<T>((_, reject) =>
|
||||
setTimeout(() => reject(new Error("call timeout")), 2000),
|
||||
),
|
||||
]);
|
||||
await timeoutCall(
|
||||
coreClient.storeSet("_coredeno", "status", "connected"),
|
||||
);
|
||||
const resp = await timeoutCall(
|
||||
coreClient.storeGet("_coredeno", "status"),
|
||||
);
|
||||
if (resp.found && resp.value === "connected") {
|
||||
connected = true;
|
||||
break;
|
||||
}
|
||||
} catch (err) {
|
||||
lastErr = err;
|
||||
if (i < 3 || i === 9 || i === 19) {
|
||||
console.error(`CoreDeno: retry ${i}: ${err}`);
|
||||
}
|
||||
}
|
||||
await new Promise((r) => setTimeout(r, 250));
|
||||
}
|
||||
if (!connected) {
|
||||
console.error(
|
||||
`FATAL: failed to connect to CoreService after retries, last error: ${lastErr}`,
|
||||
);
|
||||
denoServer.close();
|
||||
Deno.exit(1);
|
||||
}
|
||||
console.error("CoreDeno: CoreService client connected");
|
||||
}
|
||||
|
||||
// 4. Inject CoreClient into registry for I/O bridge
|
||||
registry.setCoreClient(coreClient);
|
||||
|
||||
// 5. Signal readiness
|
||||
console.error("CoreDeno: ready");
|
||||
|
||||
// 6. Keep alive until SIGTERM
|
||||
const ac = new AbortController();
|
||||
Deno.addSignalListener("SIGTERM", () => {
|
||||
console.error("CoreDeno: shutting down");
|
||||
ac.abort();
|
||||
});
|
||||
|
||||
try {
|
||||
await new Promise((_resolve, reject) => {
|
||||
ac.signal.addEventListener("abort", () => reject(new Error("shutdown")));
|
||||
});
|
||||
} catch {
|
||||
// Clean shutdown
|
||||
coreClient.close();
|
||||
denoServer.close();
|
||||
}
|
||||
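main.ts expects CORE_SOCKET and DENO_SOCKET to be provided by the host process that spawns it. A rough Go sketch of such a launcher is below; the deno flags, socket paths, and supervision strategy are assumptions for illustration, not the supervisor used by pkg/coredeno.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Flag set and paths are assumptions; the real host decides how the
	// runtime process is started and supervised.
	cmd := exec.Command("deno", "run",
		"--allow-all",
		"--unstable-worker-options",
		"pkg/coredeno/runtime/main.ts")
	cmd.Env = append(os.Environ(),
		"CORE_SOCKET=/tmp/coredeno-core.sock", // where Go serves CoreService
		"DENO_SOCKET=/tmp/coredeno-deno.sock", // where Deno serves DenoService
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr // the runtime logs its status lines to stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```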
pkg/coredeno/runtime/modules.ts  (new file, 202 lines)
@@ -0,0 +1,202 @@
// Module registry — manages module lifecycle with Deno Worker isolation.
|
||||
// Each module runs in its own Worker with per-module permission sandboxing.
|
||||
// I/O bridge relays Worker postMessage calls to CoreService gRPC.
|
||||
|
||||
import type { CoreClient } from "./client.ts";
|
||||
|
||||
export type ModuleStatus =
|
||||
| "UNKNOWN"
|
||||
| "LOADING"
|
||||
| "RUNNING"
|
||||
| "STOPPED"
|
||||
| "ERRORED";
|
||||
|
||||
export interface ModulePermissions {
|
||||
read?: string[];
|
||||
write?: string[];
|
||||
net?: string[];
|
||||
run?: string[];
|
||||
}
|
||||
|
||||
interface Module {
|
||||
code: string;
|
||||
entryPoint: string;
|
||||
permissions: ModulePermissions;
|
||||
status: ModuleStatus;
|
||||
worker?: Worker;
|
||||
}
|
||||
|
||||
export class ModuleRegistry {
|
||||
private modules = new Map<string, Module>();
|
||||
private coreClient: CoreClient | null = null;
|
||||
private workerEntryUrl: string;
|
||||
|
||||
constructor() {
|
||||
this.workerEntryUrl = new URL("./worker-entry.ts", import.meta.url).href;
|
||||
}
|
||||
|
||||
setCoreClient(client: CoreClient): void {
|
||||
this.coreClient = client;
|
||||
}
|
||||
|
||||
load(code: string, entryPoint: string, permissions: ModulePermissions): void {
|
||||
// Terminate existing worker if reloading
|
||||
const existing = this.modules.get(code);
|
||||
if (existing?.worker) {
|
||||
existing.worker.terminate();
|
||||
}
|
||||
|
||||
const mod: Module = {
|
||||
code,
|
||||
entryPoint,
|
||||
permissions,
|
||||
status: "LOADING",
|
||||
};
|
||||
this.modules.set(code, mod);
|
||||
|
||||
// Resolve entry point URL for the module
|
||||
const moduleUrl =
|
||||
entryPoint.startsWith("file://") || entryPoint.startsWith("http")
|
||||
? entryPoint
|
||||
: "file://" + entryPoint;
|
||||
|
||||
// Build read permissions: worker-entry.ts dir + module source + declared reads
|
||||
const readPerms: string[] = [
|
||||
new URL(".", import.meta.url).pathname,
|
||||
];
|
||||
// Add the module's directory so it can be dynamically imported
|
||||
if (!entryPoint.startsWith("http")) {
|
||||
const modPath = entryPoint.startsWith("file://")
|
||||
? entryPoint.slice(7)
|
||||
: entryPoint;
|
||||
// Add the module file's directory
|
||||
const lastSlash = modPath.lastIndexOf("/");
|
||||
if (lastSlash > 0) readPerms.push(modPath.slice(0, lastSlash + 1));
|
||||
else readPerms.push(modPath);
|
||||
}
|
||||
if (permissions.read) readPerms.push(...permissions.read);
|
||||
|
||||
// Create Worker with permission sandbox
|
||||
const worker = new Worker(this.workerEntryUrl, {
|
||||
type: "module",
|
||||
name: code,
|
||||
// deno-lint-ignore no-explicit-any
|
||||
deno: {
|
||||
permissions: {
|
||||
read: readPerms,
|
||||
write: permissions.write ?? [],
|
||||
net: permissions.net ?? [],
|
||||
run: permissions.run ?? [],
|
||||
env: false,
|
||||
sys: false,
|
||||
ffi: false,
|
||||
},
|
||||
},
|
||||
} as any);
|
||||
|
||||
mod.worker = worker;
|
||||
|
||||
// I/O bridge: relay Worker RPC to CoreClient
|
||||
worker.onmessage = async (e: MessageEvent) => {
|
||||
const msg = e.data;
|
||||
|
||||
if (msg.type === "ready") {
|
||||
worker.postMessage({ type: "load", url: moduleUrl });
|
||||
return;
|
||||
}
|
||||
|
||||
if (msg.type === "loaded") {
|
||||
mod.status = msg.ok ? "RUNNING" : "ERRORED";
|
||||
if (msg.ok) {
|
||||
console.error(`CoreDeno: module running: ${code}`);
|
||||
} else {
|
||||
console.error(`CoreDeno: module error: ${code}: ${msg.error}`);
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
if (msg.type === "rpc" && this.coreClient) {
|
||||
try {
|
||||
const result = await this.dispatchRPC(
|
||||
code,
|
||||
msg.method,
|
||||
msg.params,
|
||||
);
|
||||
worker.postMessage({ type: "rpc_response", id: msg.id, result });
|
||||
} catch (err) {
|
||||
worker.postMessage({
|
||||
type: "rpc_response",
|
||||
id: msg.id,
|
||||
error: err instanceof Error ? err.message : String(err),
|
||||
});
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
worker.onerror = (e: ErrorEvent) => {
|
||||
mod.status = "ERRORED";
|
||||
console.error(`CoreDeno: worker error: ${code}: ${e.message}`);
|
||||
};
|
||||
|
||||
console.error(`CoreDeno: module loading: ${code}`);
|
||||
}
|
||||
|
||||
private async dispatchRPC(
|
||||
moduleCode: string,
|
||||
method: string,
|
||||
params: Record<string, unknown>,
|
||||
): Promise<unknown> {
|
||||
const c = this.coreClient!;
|
||||
switch (method) {
|
||||
case "StoreGet":
|
||||
return c.storeGet(params.group as string, params.key as string);
|
||||
case "StoreSet":
|
||||
return c.storeSet(
|
||||
params.group as string,
|
||||
params.key as string,
|
||||
params.value as string,
|
||||
);
|
||||
case "FileRead":
|
||||
return c.fileRead(params.path as string, moduleCode);
|
||||
case "FileWrite":
|
||||
return c.fileWrite(
|
||||
params.path as string,
|
||||
params.content as string,
|
||||
moduleCode,
|
||||
);
|
||||
case "ProcessStart":
|
||||
return c.processStart(
|
||||
params.command as string,
|
||||
params.args as string[],
|
||||
moduleCode,
|
||||
);
|
||||
case "ProcessStop":
|
||||
return c.processStop(params.process_id as string);
|
||||
default:
|
||||
throw new Error(`unknown RPC method: ${method}`);
|
||||
}
|
||||
}
|
||||
|
||||
unload(code: string): boolean {
|
||||
const mod = this.modules.get(code);
|
||||
if (!mod) return false;
|
||||
if (mod.worker) {
|
||||
mod.worker.terminate();
|
||||
mod.worker = undefined;
|
||||
}
|
||||
mod.status = "STOPPED";
|
||||
console.error(`CoreDeno: module unloaded: ${code}`);
|
||||
return true;
|
||||
}
|
||||
|
||||
status(code: string): ModuleStatus {
|
||||
return this.modules.get(code)?.status ?? "UNKNOWN";
|
||||
}
|
||||
|
||||
list(): Array<{ code: string; status: ModuleStatus }> {
|
||||
return Array.from(this.modules.values()).map((m) => ({
|
||||
code: m.code,
|
||||
status: m.status,
|
||||
}));
|
||||
}
|
||||
}
|
||||
pkg/coredeno/runtime/polyfill.ts  (new file, 94 lines)
@@ -0,0 +1,94 @@
// Deno http2 + grpc-js polyfill — must be imported BEFORE @grpc/grpc-js.
|
||||
//
|
||||
// Two issues with Deno 2.x node compat:
|
||||
// 1. http2.getDefaultSettings throws "Not implemented"
|
||||
// 2. grpc-js's createConnection returns a socket that reports readyState="open"
|
||||
// but never emits "connect", causing http2 sessions to hang forever.
|
||||
// Fix: wrap createConnection to emit "connect" on next tick for open sockets.
|
||||
|
||||
import http2 from "node:http2";
|
||||
|
||||
// Fix 1: getDefaultSettings stub
|
||||
(http2 as any).getDefaultSettings = () => ({
|
||||
headerTableSize: 4096,
|
||||
enablePush: true,
|
||||
initialWindowSize: 65535,
|
||||
maxFrameSize: 16384,
|
||||
maxConcurrentStreams: 0xffffffff,
|
||||
maxHeaderListSize: 65535,
|
||||
maxHeaderSize: 65535,
|
||||
enableConnectProtocol: false,
|
||||
});
|
||||
|
||||
// Fix 2: grpc-js (transport.js line 536) passes an already-connected socket
|
||||
// to http2.connect via createConnection. Deno's http2 never completes the
|
||||
// HTTP/2 handshake because it expects a "connect" event from the socket,
|
||||
// which already fired. Emitting "connect" again causes "Busy: Unix socket
|
||||
// is currently in use" in Deno's internal http2.
|
||||
//
|
||||
// Workaround: track Unix socket paths via net.connect intercept, then in
|
||||
// createConnection, return a FRESH socket. Keep the original socket alive
|
||||
// (grpc-js has close listeners on it) but unused for data.
|
||||
import net from "node:net";
|
||||
|
||||
const socketPathMap = new WeakMap<net.Socket, string>();
|
||||
const origNetConnect = net.connect;
|
||||
(net as any).connect = function (...args: any[]) {
|
||||
const sock = origNetConnect.apply(this, args as any);
|
||||
if (args[0] && typeof args[0] === "object" && args[0].path) {
|
||||
socketPathMap.set(sock, args[0].path);
|
||||
}
|
||||
return sock;
|
||||
};
|
||||
|
||||
// Fix 3: Deno's http2 client never fires "remoteSettings" event, which
|
||||
// grpc-js waits for before marking the transport as READY.
|
||||
// Workaround: emit "remoteSettings" after "connect" with reasonable defaults.
|
||||
const origConnect = http2.connect;
|
||||
(http2 as any).connect = function (
|
||||
authority: any,
|
||||
options: any,
|
||||
...rest: any[]
|
||||
) {
|
||||
// For Unix sockets: replace pre-connected socket with fresh one
|
||||
if (options?.createConnection) {
|
||||
const origCC = options.createConnection;
|
||||
options = {
|
||||
...options,
|
||||
createConnection(...ccArgs: any[]) {
|
||||
const origSock = origCC.apply(this, ccArgs);
|
||||
const unixPath = socketPathMap.get(origSock);
|
||||
if (
|
||||
unixPath &&
|
||||
!origSock.connecting &&
|
||||
origSock.readyState === "open"
|
||||
) {
|
||||
const freshSock = net.connect({ path: unixPath });
|
||||
freshSock.on("close", () => origSock.destroy());
|
||||
return freshSock;
|
||||
}
|
||||
return origSock;
|
||||
},
|
||||
};
|
||||
}
|
||||
|
||||
const session = origConnect.call(this, authority, options, ...rest);
|
||||
|
||||
// Emit remoteSettings after connect — Deno's http2 doesn't emit it
|
||||
session.once("connect", () => {
|
||||
if (!session.destroyed && !session.closed) {
|
||||
const settings = {
|
||||
headerTableSize: 4096,
|
||||
enablePush: false,
|
||||
initialWindowSize: 65535,
|
||||
maxFrameSize: 16384,
|
||||
maxConcurrentStreams: 100,
|
||||
maxHeaderListSize: 8192,
|
||||
maxHeaderSize: 8192,
|
||||
};
|
||||
process.nextTick(() => session.emit("remoteSettings", settings));
|
||||
}
|
||||
});
|
||||
|
||||
return session;
|
||||
};
|
||||
pkg/coredeno/runtime/server.ts  (new file, 124 lines)
@@ -0,0 +1,124 @@
// DenoService JSON-RPC server — Go calls Deno for module lifecycle management.
// Uses newline-delimited JSON over a raw Unix socket (Deno's http2 server is
// broken, so this side does not use gRPC).
// Protocol: one JSON object per line, terminated by "\n".
|
||||
|
||||
import { ModuleRegistry } from "./modules.ts";
|
||||
|
||||
export interface DenoServer {
|
||||
close(): void;
|
||||
}
|
||||
|
||||
export async function startDenoServer(
|
||||
socketPath: string,
|
||||
registry: ModuleRegistry,
|
||||
): Promise<DenoServer> {
|
||||
// Remove stale socket
|
||||
try {
|
||||
Deno.removeSync(socketPath);
|
||||
} catch {
|
||||
// ignore
|
||||
}
|
||||
|
||||
const listener = Deno.listen({ transport: "unix", path: socketPath });
|
||||
|
||||
const handleConnection = async (conn: Deno.UnixConn) => {
|
||||
const reader = conn.readable.getReader();
|
||||
const writer = conn.writable.getWriter();
|
||||
const decoder = new TextDecoder();
|
||||
let buffer = "";
|
||||
|
||||
try {
|
||||
while (true) {
|
||||
const { value, done } = await reader.read();
|
||||
if (done) break;
|
||||
|
||||
buffer += decoder.decode(value, { stream: true });
|
||||
|
||||
// Process complete lines (newline-delimited JSON)
|
||||
let newlineIdx: number;
|
||||
while ((newlineIdx = buffer.indexOf("\n")) !== -1) {
|
||||
const line = buffer.slice(0, newlineIdx);
|
||||
buffer = buffer.slice(newlineIdx + 1);
|
||||
|
||||
if (!line.trim()) continue;
|
||||
|
||||
try {
|
||||
const req = JSON.parse(line);
|
||||
const resp = dispatch(req, registry);
|
||||
await writer.write(
|
||||
new TextEncoder().encode(JSON.stringify(resp) + "\n"),
|
||||
);
|
||||
} catch (err) {
|
||||
const errResp = {
|
||||
error: err instanceof Error ? err.message : String(err),
|
||||
};
|
||||
await writer.write(
|
||||
new TextEncoder().encode(JSON.stringify(errResp) + "\n"),
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
} catch {
|
||||
// Connection closed or error — expected during shutdown
|
||||
} finally {
|
||||
try {
|
||||
writer.close();
|
||||
} catch {
|
||||
/* already closed */
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
// Accept connections in background
|
||||
const abortController = new AbortController();
|
||||
(async () => {
|
||||
try {
|
||||
for await (const conn of listener) {
|
||||
if (abortController.signal.aborted) break;
|
||||
handleConnection(conn);
|
||||
}
|
||||
} catch {
|
||||
// Listener closed
|
||||
}
|
||||
})();
|
||||
|
||||
return {
|
||||
close() {
|
||||
abortController.abort();
|
||||
listener.close();
|
||||
},
|
||||
};
|
||||
}
|
||||
|
||||
interface RPCRequest {
|
||||
method: string;
|
||||
code?: string;
|
||||
entry_point?: string;
|
||||
permissions?: { read?: string[]; write?: string[]; net?: string[]; run?: string[] };
|
||||
process_id?: string;
|
||||
}
|
||||
|
||||
function dispatch(
|
||||
req: RPCRequest,
|
||||
registry: ModuleRegistry,
|
||||
): Record<string, unknown> {
|
||||
switch (req.method) {
|
||||
case "LoadModule": {
|
||||
registry.load(
|
||||
req.code ?? "",
|
||||
req.entry_point ?? "",
|
||||
req.permissions ?? {},
|
||||
);
|
||||
return { ok: true, error: "" };
|
||||
}
|
||||
case "UnloadModule": {
|
||||
const ok = registry.unload(req.code ?? "");
|
||||
return { ok };
|
||||
}
|
||||
case "ModuleStatus": {
|
||||
return { code: req.code, status: registry.status(req.code ?? "") };
|
||||
}
|
||||
default:
|
||||
return { error: `unknown method: ${req.method}` };
|
||||
}
|
||||
}
|
||||
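Because DenoService is served as newline-delimited JSON rather than gRPC, the Go host only needs a Unix socket dial plus one JSON object per line. A minimal sketch of a LoadModule call against this protocol follows; the request struct, module code, and paths are hypothetical and only mirror the RPCRequest shape that dispatch() accepts.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"net"
)

// loadModuleReq mirrors the RPCRequest shape server.ts expects; the Go types
// used by the real host may differ, so treat this as a protocol sketch only.
type loadModuleReq struct {
	Method      string              `json:"method"`
	Code        string              `json:"code"`
	EntryPoint  string              `json:"entry_point"`
	Permissions map[string][]string `json:"permissions"`
}

func main() {
	conn, err := net.Dial("unix", "/tmp/coredeno-deno.sock") // path is an assumption
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	req := loadModuleReq{
		Method:     "LoadModule",
		Code:       "demo",
		EntryPoint: "/opt/modules/demo/mod.ts", // hypothetical module path
		Permissions: map[string][]string{
			"read": {"/opt/modules/demo/"},
		},
	}
	// json.Encoder appends a newline, giving one JSON object per line as
	// dispatch() in server.ts expects.
	if err := json.NewEncoder(conn).Encode(req); err != nil {
		log.Fatal(err)
	}
	line, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print("response: ", line)
}
```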
pkg/coredeno/runtime/testdata/test-module.ts  (vendored, new file, 5 lines)
@@ -0,0 +1,5 @@
// Test module — writes to store via I/O bridge to prove Workers work.
// Called by integration tests.
export async function init(core: any) {
  await core.storeSet("test-module", "init", "ok");
}
pkg/coredeno/runtime/worker-entry.ts  (new file, 79 lines)
@@ -0,0 +1,79 @@
// Worker bootstrap — loaded as entry point for every module Worker.
|
||||
// Sets up the I/O bridge (postMessage ↔ parent relay), then dynamically
|
||||
// imports the module and calls its init(core) function.
|
||||
//
|
||||
// The parent (ModuleRegistry) injects module_code into all gRPC calls,
|
||||
// so modules can't spoof their identity.
|
||||
|
||||
// I/O bridge: request/response correlation over postMessage
|
||||
const pending = new Map<number, { resolve: Function; reject: Function }>();
|
||||
let nextId = 0;
|
||||
|
||||
function rpc(
|
||||
method: string,
|
||||
params: Record<string, unknown>,
|
||||
): Promise<unknown> {
|
||||
return new Promise((resolve, reject) => {
|
||||
const id = ++nextId;
|
||||
pending.set(id, { resolve, reject });
|
||||
self.postMessage({ type: "rpc", id, method, params });
|
||||
});
|
||||
}
|
||||
|
||||
// Typed core object passed to module's init() function.
|
||||
// Each method maps to a CoreService gRPC call relayed through the parent.
|
||||
const core = {
|
||||
storeGet(group: string, key: string) {
|
||||
return rpc("StoreGet", { group, key });
|
||||
},
|
||||
storeSet(group: string, key: string, value: string) {
|
||||
return rpc("StoreSet", { group, key, value });
|
||||
},
|
||||
fileRead(path: string) {
|
||||
return rpc("FileRead", { path });
|
||||
},
|
||||
fileWrite(path: string, content: string) {
|
||||
return rpc("FileWrite", { path, content });
|
||||
},
|
||||
processStart(command: string, args: string[]) {
|
||||
return rpc("ProcessStart", { command, args });
|
||||
},
|
||||
processStop(processId: string) {
|
||||
return rpc("ProcessStop", { process_id: processId });
|
||||
},
|
||||
};
|
||||
|
||||
// Handle messages from parent: RPC responses and load commands
|
||||
self.addEventListener("message", async (e: MessageEvent) => {
|
||||
const msg = e.data;
|
||||
|
||||
if (msg.type === "rpc_response") {
|
||||
const p = pending.get(msg.id);
|
||||
if (p) {
|
||||
pending.delete(msg.id);
|
||||
if (msg.error) p.reject(new Error(msg.error));
|
||||
else p.resolve(msg.result);
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
if (msg.type === "load") {
|
||||
try {
|
||||
const mod = await import(msg.url);
|
||||
if (typeof mod.init === "function") {
|
||||
await mod.init(core);
|
||||
}
|
||||
self.postMessage({ type: "loaded", ok: true });
|
||||
} catch (err) {
|
||||
self.postMessage({
|
||||
type: "loaded",
|
||||
ok: false,
|
||||
error: err instanceof Error ? err.message : String(err),
|
||||
});
|
||||
}
|
||||
return;
|
||||
}
|
||||
});
|
||||
|
||||
// Signal ready — parent will respond with {type: "load", url: "..."}
|
||||
self.postMessage({ type: "ready" });
|
||||
pkg/coredeno/server.go (new file, 207 lines)
@@ -0,0 +1,207 @@
|
|||
package coredeno
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
pb "forge.lthn.ai/core/go/pkg/coredeno/proto"
|
||||
"forge.lthn.ai/core/go/pkg/io"
|
||||
"forge.lthn.ai/core/go/pkg/manifest"
|
||||
"forge.lthn.ai/core/go/pkg/store"
|
||||
"google.golang.org/grpc/codes"
|
||||
"google.golang.org/grpc/status"
|
||||
)
|
||||
|
||||
// ProcessRunner abstracts process management for the gRPC server.
|
||||
// Satisfied by *process.Service.
|
||||
type ProcessRunner interface {
|
||||
Start(ctx context.Context, command string, args ...string) (ProcessHandle, error)
|
||||
Kill(id string) error
|
||||
}
|
||||
|
||||
// ProcessHandle is returned by ProcessRunner.Start.
|
||||
type ProcessHandle interface {
|
||||
Info() ProcessInfo
|
||||
}
|
||||
|
||||
// ProcessInfo is the subset of process info the server needs.
|
||||
type ProcessInfo struct {
|
||||
ID string
|
||||
}
|
||||
|
||||
// Server implements the CoreService gRPC interface with permission gating.
|
||||
// Every I/O request is checked against the calling module's declared permissions.
|
||||
type Server struct {
|
||||
pb.UnimplementedCoreServiceServer
|
||||
medium io.Medium
|
||||
store *store.Store
|
||||
manifests map[string]*manifest.Manifest
|
||||
processes ProcessRunner
|
||||
}
|
||||
|
||||
// NewServer creates a CoreService server backed by the given Medium and Store.
|
||||
func NewServer(medium io.Medium, st *store.Store) *Server {
|
||||
return &Server{
|
||||
medium: medium,
|
||||
store: st,
|
||||
manifests: make(map[string]*manifest.Manifest),
|
||||
}
|
||||
}
|
||||
|
||||
// RegisterModule adds a module's manifest to the permission registry.
|
||||
func (s *Server) RegisterModule(m *manifest.Manifest) {
|
||||
s.manifests[m.Code] = m
|
||||
}
|
||||
|
||||
// getManifest looks up a module and returns an error if unknown.
|
||||
func (s *Server) getManifest(code string) (*manifest.Manifest, error) {
|
||||
m, ok := s.manifests[code]
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("unknown module: %s", code)
|
||||
}
|
||||
return m, nil
|
||||
}
|
||||
|
||||
// FileRead implements CoreService.FileRead with permission gating.
|
||||
func (s *Server) FileRead(_ context.Context, req *pb.FileReadRequest) (*pb.FileReadResponse, error) {
|
||||
m, err := s.getManifest(req.ModuleCode)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if !CheckPath(req.Path, m.Permissions.Read) {
|
||||
return nil, fmt.Errorf("permission denied: %s cannot read %s", req.ModuleCode, req.Path)
|
||||
}
|
||||
content, err := s.medium.Read(req.Path)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &pb.FileReadResponse{Content: content}, nil
|
||||
}
|
||||
|
||||
// FileWrite implements CoreService.FileWrite with permission gating.
|
||||
func (s *Server) FileWrite(_ context.Context, req *pb.FileWriteRequest) (*pb.FileWriteResponse, error) {
|
||||
m, err := s.getManifest(req.ModuleCode)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if !CheckPath(req.Path, m.Permissions.Write) {
|
||||
return nil, fmt.Errorf("permission denied: %s cannot write %s", req.ModuleCode, req.Path)
|
||||
}
|
||||
if err := s.medium.Write(req.Path, req.Content); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &pb.FileWriteResponse{Ok: true}, nil
|
||||
}
|
||||
|
||||
// FileList implements CoreService.FileList with permission gating.
|
||||
func (s *Server) FileList(_ context.Context, req *pb.FileListRequest) (*pb.FileListResponse, error) {
|
||||
m, err := s.getManifest(req.ModuleCode)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if !CheckPath(req.Path, m.Permissions.Read) {
|
||||
return nil, fmt.Errorf("permission denied: %s cannot list %s", req.ModuleCode, req.Path)
|
||||
}
|
||||
entries, err := s.medium.List(req.Path)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
var pbEntries []*pb.FileEntry
|
||||
for _, e := range entries {
|
||||
info, _ := e.Info()
|
||||
pbEntries = append(pbEntries, &pb.FileEntry{
|
||||
Name: e.Name(),
|
||||
IsDir: e.IsDir(),
|
||||
Size: info.Size(),
|
||||
})
|
||||
}
|
||||
return &pb.FileListResponse{Entries: pbEntries}, nil
|
||||
}
|
||||
|
||||
// FileDelete implements CoreService.FileDelete with permission gating.
|
||||
func (s *Server) FileDelete(_ context.Context, req *pb.FileDeleteRequest) (*pb.FileDeleteResponse, error) {
|
||||
m, err := s.getManifest(req.ModuleCode)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if !CheckPath(req.Path, m.Permissions.Write) {
|
||||
return nil, fmt.Errorf("permission denied: %s cannot delete %s", req.ModuleCode, req.Path)
|
||||
}
|
||||
if err := s.medium.Delete(req.Path); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &pb.FileDeleteResponse{Ok: true}, nil
|
||||
}
|
||||
|
||||
// storeGroupAllowed checks that the requested group is not a reserved system namespace.
|
||||
// Groups prefixed with "_" are reserved for internal use (e.g. _coredeno, _modules).
|
||||
// TODO: once the proto carries module_code on store requests, enforce per-module namespace isolation.
|
||||
func storeGroupAllowed(group string) error {
|
||||
if strings.HasPrefix(group, "_") {
|
||||
return status.Errorf(codes.PermissionDenied, "reserved store group: %s", group)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// StoreGet implements CoreService.StoreGet with reserved namespace protection.
|
||||
func (s *Server) StoreGet(_ context.Context, req *pb.StoreGetRequest) (*pb.StoreGetResponse, error) {
|
||||
if err := storeGroupAllowed(req.Group); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
val, err := s.store.Get(req.Group, req.Key)
|
||||
if err != nil {
|
||||
if errors.Is(err, store.ErrNotFound) {
|
||||
return &pb.StoreGetResponse{Found: false}, nil
|
||||
}
|
||||
return nil, status.Errorf(codes.Internal, "store: %v", err)
|
||||
}
|
||||
return &pb.StoreGetResponse{Value: val, Found: true}, nil
|
||||
}
|
||||
|
||||
// StoreSet implements CoreService.StoreSet with reserved namespace protection.
|
||||
func (s *Server) StoreSet(_ context.Context, req *pb.StoreSetRequest) (*pb.StoreSetResponse, error) {
|
||||
if err := storeGroupAllowed(req.Group); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := s.store.Set(req.Group, req.Key, req.Value); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &pb.StoreSetResponse{Ok: true}, nil
|
||||
}
|
||||
|
||||
// SetProcessRunner sets the process runner for ProcessStart/ProcessStop.
|
||||
func (s *Server) SetProcessRunner(pr ProcessRunner) {
|
||||
s.processes = pr
|
||||
}
|
||||
|
||||
// ProcessStart implements CoreService.ProcessStart with permission gating.
|
||||
func (s *Server) ProcessStart(ctx context.Context, req *pb.ProcessStartRequest) (*pb.ProcessStartResponse, error) {
|
||||
if s.processes == nil {
|
||||
return nil, status.Error(codes.Unimplemented, "process service not available")
|
||||
}
|
||||
m, err := s.getManifest(req.ModuleCode)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if !CheckRun(req.Command, m.Permissions.Run) {
|
||||
return nil, fmt.Errorf("permission denied: %s cannot run %s", req.ModuleCode, req.Command)
|
||||
}
|
||||
proc, err := s.processes.Start(ctx, req.Command, req.Args...)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("process start: %w", err)
|
||||
}
|
||||
return &pb.ProcessStartResponse{ProcessId: proc.Info().ID}, nil
|
||||
}
|
||||
|
||||
// ProcessStop implements CoreService.ProcessStop.
|
||||
func (s *Server) ProcessStop(_ context.Context, req *pb.ProcessStopRequest) (*pb.ProcessStopResponse, error) {
|
||||
if s.processes == nil {
|
||||
return nil, status.Error(codes.Unimplemented, "process service not available")
|
||||
}
|
||||
if err := s.processes.Kill(req.ProcessId); err != nil {
|
||||
return nil, fmt.Errorf("process stop: %w", err)
|
||||
}
|
||||
return &pb.ProcessStopResponse{Ok: true}, nil
|
||||
}
|
||||
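CheckPath and CheckRun are defined elsewhere in the package, so this file only shows how they gate each request. The tests that follow pin down the observable behaviour: paths are matched against the declared prefixes, commands against an exact allow list. A minimal sketch of that behaviour, under those assumptions (not the real implementation):

```go
// Not the real CheckPath/CheckRun (those live elsewhere in pkg/coredeno);
// this only sketches the behaviour the tests below rely on.
package main

import (
	"fmt"
	"strings"
)

// checkPathSketch: a path is allowed if it falls under a declared prefix.
func checkPathSketch(path string, allowed []string) bool {
	for _, prefix := range allowed {
		if strings.HasPrefix(path, prefix) {
			return true
		}
	}
	return false
}

// checkRunSketch: a command is allowed only if it appears in the allow list.
func checkRunSketch(command string, allowed []string) bool {
	for _, c := range allowed {
		if c == command {
			return true
		}
	}
	return false
}

func main() {
	read := []string{"./data/"}
	run := []string{"echo", "ls"}

	fmt.Println(checkPathSketch("./data/test.txt", read))   // true
	fmt.Println(checkPathSketch("./secrets/key.pem", read)) // false
	fmt.Println(checkRunSketch("echo", run))                 // true
	fmt.Println(checkRunSketch("rm", run))                   // false
}
```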
pkg/coredeno/server_test.go (new file, 200 lines)
@@ -0,0 +1,200 @@
|
|||
package coredeno
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"testing"
|
||||
|
||||
pb "forge.lthn.ai/core/go/pkg/coredeno/proto"
|
||||
"forge.lthn.ai/core/go/pkg/io"
|
||||
"forge.lthn.ai/core/go/pkg/manifest"
|
||||
"forge.lthn.ai/core/go/pkg/store"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
"google.golang.org/grpc/codes"
|
||||
"google.golang.org/grpc/status"
|
||||
)
|
||||
|
||||
// mockProcessRunner implements ProcessRunner for testing.
|
||||
type mockProcessRunner struct {
|
||||
started map[string]bool
|
||||
nextID int
|
||||
}
|
||||
|
||||
func newMockProcessRunner() *mockProcessRunner {
|
||||
return &mockProcessRunner{started: make(map[string]bool)}
|
||||
}
|
||||
|
||||
func (m *mockProcessRunner) Start(_ context.Context, command string, args ...string) (ProcessHandle, error) {
|
||||
m.nextID++
|
||||
id := fmt.Sprintf("proc-%d", m.nextID)
|
||||
m.started[id] = true
|
||||
return &mockProcessHandle{id: id}, nil
|
||||
}
|
||||
|
||||
func (m *mockProcessRunner) Kill(id string) error {
|
||||
if !m.started[id] {
|
||||
return fmt.Errorf("process not found: %s", id)
|
||||
}
|
||||
delete(m.started, id)
|
||||
return nil
|
||||
}
|
||||
|
||||
type mockProcessHandle struct{ id string }
|
||||
|
||||
func (h *mockProcessHandle) Info() ProcessInfo { return ProcessInfo{ID: h.id} }
|
||||
|
||||
func newTestServer(t *testing.T) *Server {
|
||||
t.Helper()
|
||||
medium := io.NewMockMedium()
|
||||
medium.Files["./data/test.txt"] = "hello"
|
||||
st, err := store.New(":memory:")
|
||||
require.NoError(t, err)
|
||||
t.Cleanup(func() { st.Close() })
|
||||
|
||||
srv := NewServer(medium, st)
|
||||
srv.RegisterModule(&manifest.Manifest{
|
||||
Code: "test-mod",
|
||||
Permissions: manifest.Permissions{
|
||||
Read: []string{"./data/"},
|
||||
Write: []string{"./data/"},
|
||||
},
|
||||
})
|
||||
return srv
|
||||
}
|
||||
|
||||
func TestFileRead_Good(t *testing.T) {
|
||||
srv := newTestServer(t)
|
||||
resp, err := srv.FileRead(context.Background(), &pb.FileReadRequest{
|
||||
Path: "./data/test.txt", ModuleCode: "test-mod",
|
||||
})
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "hello", resp.Content)
|
||||
}
|
||||
|
||||
func TestFileRead_Bad_PermissionDenied(t *testing.T) {
|
||||
srv := newTestServer(t)
|
||||
_, err := srv.FileRead(context.Background(), &pb.FileReadRequest{
|
||||
Path: "./secrets/key.pem", ModuleCode: "test-mod",
|
||||
})
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "permission denied")
|
||||
}
|
||||
|
||||
func TestFileRead_Bad_UnknownModule(t *testing.T) {
|
||||
srv := newTestServer(t)
|
||||
_, err := srv.FileRead(context.Background(), &pb.FileReadRequest{
|
||||
Path: "./data/test.txt", ModuleCode: "unknown",
|
||||
})
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "unknown module")
|
||||
}
|
||||
|
||||
func TestFileWrite_Good(t *testing.T) {
|
||||
srv := newTestServer(t)
|
||||
resp, err := srv.FileWrite(context.Background(), &pb.FileWriteRequest{
|
||||
Path: "./data/new.txt", Content: "world", ModuleCode: "test-mod",
|
||||
})
|
||||
require.NoError(t, err)
|
||||
assert.True(t, resp.Ok)
|
||||
}
|
||||
|
||||
func TestFileWrite_Bad_PermissionDenied(t *testing.T) {
|
||||
srv := newTestServer(t)
|
||||
_, err := srv.FileWrite(context.Background(), &pb.FileWriteRequest{
|
||||
Path: "./secrets/bad.txt", Content: "nope", ModuleCode: "test-mod",
|
||||
})
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "permission denied")
|
||||
}
|
||||
|
||||
func TestStoreGetSet_Good(t *testing.T) {
|
||||
srv := newTestServer(t)
|
||||
ctx := context.Background()
|
||||
|
||||
_, err := srv.StoreSet(ctx, &pb.StoreSetRequest{Group: "cfg", Key: "theme", Value: "dark"})
|
||||
require.NoError(t, err)
|
||||
|
||||
resp, err := srv.StoreGet(ctx, &pb.StoreGetRequest{Group: "cfg", Key: "theme"})
|
||||
require.NoError(t, err)
|
||||
assert.True(t, resp.Found)
|
||||
assert.Equal(t, "dark", resp.Value)
|
||||
}
|
||||
|
||||
func TestStoreGet_Good_NotFound(t *testing.T) {
|
||||
srv := newTestServer(t)
|
||||
resp, err := srv.StoreGet(context.Background(), &pb.StoreGetRequest{Group: "cfg", Key: "missing"})
|
||||
require.NoError(t, err)
|
||||
assert.False(t, resp.Found)
|
||||
}
|
||||
|
||||
func newTestServerWithProcess(t *testing.T) (*Server, *mockProcessRunner) {
|
||||
t.Helper()
|
||||
srv := newTestServer(t)
|
||||
srv.RegisterModule(&manifest.Manifest{
|
||||
Code: "runner-mod",
|
||||
Permissions: manifest.Permissions{
|
||||
Run: []string{"echo", "ls"},
|
||||
},
|
||||
})
|
||||
pr := newMockProcessRunner()
|
||||
srv.SetProcessRunner(pr)
|
||||
return srv, pr
|
||||
}
|
||||
|
||||
func TestProcessStart_Good(t *testing.T) {
|
||||
srv, _ := newTestServerWithProcess(t)
|
||||
resp, err := srv.ProcessStart(context.Background(), &pb.ProcessStartRequest{
|
||||
Command: "echo", Args: []string{"hello"}, ModuleCode: "runner-mod",
|
||||
})
|
||||
require.NoError(t, err)
|
||||
assert.NotEmpty(t, resp.ProcessId)
|
||||
}
|
||||
|
||||
func TestProcessStart_Bad_PermissionDenied(t *testing.T) {
|
||||
srv, _ := newTestServerWithProcess(t)
|
||||
_, err := srv.ProcessStart(context.Background(), &pb.ProcessStartRequest{
|
||||
Command: "rm", Args: []string{"-rf", "/"}, ModuleCode: "runner-mod",
|
||||
})
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "permission denied")
|
||||
}
|
||||
|
||||
func TestProcessStart_Bad_NoProcessService(t *testing.T) {
|
||||
srv := newTestServer(t)
|
||||
srv.RegisterModule(&manifest.Manifest{
|
||||
Code: "no-proc-mod",
|
||||
Permissions: manifest.Permissions{Run: []string{"echo"}},
|
||||
})
|
||||
_, err := srv.ProcessStart(context.Background(), &pb.ProcessStartRequest{
|
||||
Command: "echo", ModuleCode: "no-proc-mod",
|
||||
})
|
||||
assert.Error(t, err)
|
||||
st, ok := status.FromError(err)
|
||||
require.True(t, ok)
|
||||
assert.Equal(t, codes.Unimplemented, st.Code())
|
||||
}
|
||||
|
||||
func TestProcessStop_Good(t *testing.T) {
|
||||
srv, _ := newTestServerWithProcess(t)
|
||||
// Start a process first
|
||||
startResp, err := srv.ProcessStart(context.Background(), &pb.ProcessStartRequest{
|
||||
Command: "echo", ModuleCode: "runner-mod",
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
// Stop it
|
||||
resp, err := srv.ProcessStop(context.Background(), &pb.ProcessStopRequest{
|
||||
ProcessId: startResp.ProcessId,
|
||||
})
|
||||
require.NoError(t, err)
|
||||
assert.True(t, resp.Ok)
|
||||
}
|
||||
|
||||
func TestProcessStop_Bad_NotFound(t *testing.T) {
|
||||
srv, _ := newTestServerWithProcess(t)
|
||||
_, err := srv.ProcessStop(context.Background(), &pb.ProcessStopRequest{
|
||||
ProcessId: "nonexistent",
|
||||
})
|
||||
assert.Error(t, err)
|
||||
}
|
||||
pkg/coredeno/service.go (new file, 220 lines)
@@ -0,0 +1,220 @@
|
|||
package coredeno
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"time"
|
||||
|
||||
core "forge.lthn.ai/core/go/pkg/framework/core"
|
||||
"forge.lthn.ai/core/go/pkg/io"
|
||||
"forge.lthn.ai/core/go/pkg/manifest"
|
||||
"forge.lthn.ai/core/go/pkg/marketplace"
|
||||
"forge.lthn.ai/core/go/pkg/store"
|
||||
)
|
||||
|
||||
// Service wraps the CoreDeno sidecar as a framework service.
|
||||
// Implements Startable and Stoppable for lifecycle management.
|
||||
//
|
||||
// Registration:
|
||||
//
|
||||
// core.New(core.WithService(coredeno.NewServiceFactory(opts)))
|
||||
type Service struct {
|
||||
*core.ServiceRuntime[Options]
|
||||
sidecar *Sidecar
|
||||
grpcServer *Server
|
||||
store *store.Store
|
||||
grpcCancel context.CancelFunc
|
||||
grpcDone chan error
|
||||
denoClient *DenoClient
|
||||
installer *marketplace.Installer
|
||||
}
|
||||
|
||||
// NewServiceFactory returns a factory function for framework registration via WithService.
|
||||
func NewServiceFactory(opts Options) func(*core.Core) (any, error) {
|
||||
return func(c *core.Core) (any, error) {
|
||||
return &Service{
|
||||
ServiceRuntime: core.NewServiceRuntime(c, opts),
|
||||
sidecar: NewSidecar(opts),
|
||||
}, nil
|
||||
}
|
||||
}
|
||||
|
||||
// OnStartup boots the CoreDeno subsystem. Called by the framework on app startup.
|
||||
//
|
||||
// Sequence: medium → store → server → manifest → gRPC listener → sidecar.
|
||||
func (s *Service) OnStartup(ctx context.Context) error {
|
||||
opts := s.Opts()
|
||||
|
||||
// 1. Create sandboxed Medium (or mock if no AppRoot)
|
||||
var medium io.Medium
|
||||
if opts.AppRoot != "" {
|
||||
var err error
|
||||
medium, err = io.NewSandboxed(opts.AppRoot)
|
||||
if err != nil {
|
||||
return fmt.Errorf("coredeno: medium: %w", err)
|
||||
}
|
||||
} else {
|
||||
medium = io.NewMockMedium()
|
||||
}
|
||||
|
||||
// 2. Create Store
|
||||
dbPath := opts.StoreDBPath
|
||||
if dbPath == "" {
|
||||
dbPath = ":memory:"
|
||||
}
|
||||
var err error
|
||||
s.store, err = store.New(dbPath)
|
||||
if err != nil {
|
||||
return fmt.Errorf("coredeno: store: %w", err)
|
||||
}
|
||||
|
||||
// 3. Create gRPC Server
|
||||
s.grpcServer = NewServer(medium, s.store)
|
||||
|
||||
// 4. Load manifest if AppRoot set (non-fatal if missing)
|
||||
if opts.AppRoot != "" {
|
||||
m, loadErr := manifest.Load(medium, ".")
|
||||
if loadErr == nil && m != nil {
|
||||
if opts.PublicKey != nil {
|
||||
if ok, verr := manifest.Verify(m, opts.PublicKey); verr == nil && ok {
|
||||
s.grpcServer.RegisterModule(m)
|
||||
}
|
||||
} else {
|
||||
s.grpcServer.RegisterModule(m)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// 5. Start gRPC listener in background
|
||||
grpcCtx, grpcCancel := context.WithCancel(ctx)
|
||||
s.grpcCancel = grpcCancel
|
||||
s.grpcDone = make(chan error, 1)
|
||||
go func() {
|
||||
s.grpcDone <- ListenGRPC(grpcCtx, opts.SocketPath, s.grpcServer)
|
||||
}()
|
||||
|
||||
// cleanupGRPC tears down the listener on early-return errors.
|
||||
cleanupGRPC := func() {
|
||||
grpcCancel()
|
||||
<-s.grpcDone
|
||||
}
|
||||
|
||||
// 6. Start sidecar (if args provided)
|
||||
if len(opts.SidecarArgs) > 0 {
|
||||
// Wait for core socket so sidecar can connect to our gRPC server
|
||||
if err := waitForSocket(ctx, opts.SocketPath, 5*time.Second); err != nil {
|
||||
cleanupGRPC()
|
||||
return fmt.Errorf("coredeno: core socket: %w", err)
|
||||
}
|
||||
|
||||
if err := s.sidecar.Start(ctx, opts.SidecarArgs...); err != nil {
|
||||
cleanupGRPC()
|
||||
return fmt.Errorf("coredeno: sidecar: %w", err)
|
||||
}
|
||||
|
||||
// 7. Wait for Deno's server and connect as client
|
||||
if opts.DenoSocketPath != "" {
|
||||
if err := waitForSocket(ctx, opts.DenoSocketPath, 10*time.Second); err != nil {
|
||||
_ = s.sidecar.Stop()
|
||||
cleanupGRPC()
|
||||
return fmt.Errorf("coredeno: deno socket: %w", err)
|
||||
}
|
||||
dc, err := DialDeno(opts.DenoSocketPath)
|
||||
if err != nil {
|
||||
_ = s.sidecar.Stop()
|
||||
cleanupGRPC()
|
||||
return fmt.Errorf("coredeno: deno client: %w", err)
|
||||
}
|
||||
s.denoClient = dc
|
||||
}
|
||||
}
|
||||
|
||||
// 8. Create installer and auto-load installed modules
|
||||
if opts.AppRoot != "" {
|
||||
modulesDir := filepath.Join(opts.AppRoot, "modules")
|
||||
s.installer = marketplace.NewInstaller(modulesDir, s.store)
|
||||
|
||||
if s.denoClient != nil {
|
||||
installed, listErr := s.installer.Installed()
|
||||
if listErr == nil {
|
||||
for _, mod := range installed {
|
||||
perms := ModulePermissions{
|
||||
Read: mod.Permissions.Read,
|
||||
Write: mod.Permissions.Write,
|
||||
Net: mod.Permissions.Net,
|
||||
Run: mod.Permissions.Run,
|
||||
}
|
||||
s.denoClient.LoadModule(mod.Code, mod.EntryPoint, perms)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// OnShutdown stops the CoreDeno subsystem. Called by the framework on app shutdown.
|
||||
func (s *Service) OnShutdown(_ context.Context) error {
|
||||
// Close Deno client connection
|
||||
if s.denoClient != nil {
|
||||
s.denoClient.Close()
|
||||
}
|
||||
|
||||
// Stop sidecar
|
||||
_ = s.sidecar.Stop()
|
||||
|
||||
// Stop gRPC listener
|
||||
if s.grpcCancel != nil {
|
||||
s.grpcCancel()
|
||||
<-s.grpcDone
|
||||
}
|
||||
|
||||
// Close store
|
||||
if s.store != nil {
|
||||
s.store.Close()
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Sidecar returns the underlying sidecar for direct access.
|
||||
func (s *Service) Sidecar() *Sidecar {
|
||||
return s.sidecar
|
||||
}
|
||||
|
||||
// GRPCServer returns the gRPC server for direct access.
|
||||
func (s *Service) GRPCServer() *Server {
|
||||
return s.grpcServer
|
||||
}
|
||||
|
||||
// DenoClient returns the DenoService client for calling the Deno sidecar.
|
||||
// Returns nil if the sidecar was not started or has no DenoSocketPath.
|
||||
func (s *Service) DenoClient() *DenoClient {
|
||||
return s.denoClient
|
||||
}
|
||||
|
||||
// Installer returns the marketplace module installer.
|
||||
// Returns nil if AppRoot was not set.
|
||||
func (s *Service) Installer() *marketplace.Installer {
|
||||
return s.installer
|
||||
}
|
||||
|
||||
// waitForSocket polls until a Unix socket file appears or the context/timeout expires.
|
||||
func waitForSocket(ctx context.Context, path string, timeout time.Duration) error {
|
||||
deadline := time.Now().Add(timeout)
|
||||
for {
|
||||
if _, err := os.Stat(path); err == nil {
|
||||
return nil
|
||||
}
|
||||
if time.Now().After(deadline) {
|
||||
return fmt.Errorf("timeout waiting for socket %s", path)
|
||||
}
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
case <-time.After(50 * time.Millisecond):
|
||||
}
|
||||
}
|
||||
}
|
||||
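Putting the Options fields used above and in the tests below together, registering the service with the framework might look roughly like the following sketch; the Deno binary, sidecar arguments, socket paths and AppRoot are placeholders, not values taken from this change:

```go
// Sketch of wiring the CoreDeno service into an application. Only the
// Options fields exercised in this file and its tests are used; the
// concrete values are illustrative placeholders.
package main

import (
	"log"

	"forge.lthn.ai/core/go/pkg/coredeno"
	core "forge.lthn.ai/core/go/pkg/framework/core"
)

func main() {
	opts := coredeno.Options{
		DenoPath:       "deno",
		SidecarArgs:    []string{"run", "-A", "./runtime/server.ts"},
		SocketPath:     "/tmp/core.sock",
		DenoSocketPath: "/tmp/deno.sock",
		AppRoot:        "./app",
		StoreDBPath:    "./app/store.db",
	}

	// The framework invokes the factory; OnStartup and OnShutdown are then
	// driven by the application lifecycle.
	c, err := core.New(core.WithService(coredeno.NewServiceFactory(opts)))
	if err != nil {
		log.Fatal(err)
	}
	_ = c
}
```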
pkg/coredeno/service_test.go (new file, 183 lines)
@@ -0,0 +1,183 @@
|
|||
package coredeno
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
pb "forge.lthn.ai/core/go/pkg/coredeno/proto"
|
||||
core "forge.lthn.ai/core/go/pkg/framework/core"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
"google.golang.org/grpc"
|
||||
"google.golang.org/grpc/credentials/insecure"
|
||||
)
|
||||
|
||||
func TestNewServiceFactory_Good(t *testing.T) {
|
||||
opts := Options{
|
||||
DenoPath: "echo",
|
||||
SocketPath: "/tmp/test-service.sock",
|
||||
}
|
||||
c, err := core.New()
|
||||
require.NoError(t, err)
|
||||
|
||||
factory := NewServiceFactory(opts)
|
||||
result, err := factory(c)
|
||||
require.NoError(t, err)
|
||||
|
||||
svc, ok := result.(*Service)
|
||||
require.True(t, ok)
|
||||
assert.NotNil(t, svc.sidecar)
|
||||
assert.Equal(t, "echo", svc.sidecar.opts.DenoPath)
|
||||
assert.NotNil(t, svc.Core(), "ServiceRuntime should provide Core access")
|
||||
assert.Equal(t, opts, svc.Opts(), "ServiceRuntime should provide Options access")
|
||||
}
|
||||
|
||||
func TestService_WithService_Good(t *testing.T) {
|
||||
opts := Options{DenoPath: "echo"}
|
||||
c, err := core.New(core.WithService(NewServiceFactory(opts)))
|
||||
require.NoError(t, err)
|
||||
assert.NotNil(t, c)
|
||||
}
|
||||
|
||||
func TestService_Lifecycle_Good(t *testing.T) {
|
||||
tmpDir := t.TempDir()
|
||||
sockPath := filepath.Join(tmpDir, "lifecycle.sock")
|
||||
|
||||
c, err := core.New()
|
||||
require.NoError(t, err)
|
||||
|
||||
factory := NewServiceFactory(Options{
|
||||
DenoPath: "echo",
|
||||
SocketPath: sockPath,
|
||||
})
|
||||
result, _ := factory(c)
|
||||
svc := result.(*Service)
|
||||
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
|
||||
// Verify Startable
|
||||
err = svc.OnStartup(ctx)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// Verify Stoppable
|
||||
err = svc.OnShutdown(context.Background())
|
||||
assert.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestService_Sidecar_Good(t *testing.T) {
|
||||
c, err := core.New()
|
||||
require.NoError(t, err)
|
||||
|
||||
factory := NewServiceFactory(Options{DenoPath: "echo"})
|
||||
result, _ := factory(c)
|
||||
svc := result.(*Service)
|
||||
|
||||
assert.NotNil(t, svc.Sidecar())
|
||||
}
|
||||
|
||||
func TestService_OnStartup_Good(t *testing.T) {
|
||||
tmpDir := t.TempDir()
|
||||
sockPath := filepath.Join(tmpDir, "core.sock")
|
||||
|
||||
// Write a minimal manifest
|
||||
coreDir := filepath.Join(tmpDir, ".core")
|
||||
require.NoError(t, os.MkdirAll(coreDir, 0755))
|
||||
require.NoError(t, os.WriteFile(filepath.Join(coreDir, "view.yml"), []byte(`
|
||||
code: test-app
|
||||
name: Test App
|
||||
version: "1.0"
|
||||
permissions:
|
||||
read: ["./data/"]
|
||||
write: ["./data/"]
|
||||
`), 0644))
|
||||
|
||||
opts := Options{
|
||||
DenoPath: "sleep",
|
||||
SocketPath: sockPath,
|
||||
AppRoot: tmpDir,
|
||||
StoreDBPath: ":memory:",
|
||||
SidecarArgs: []string{"60"},
|
||||
}
|
||||
|
||||
c, err := core.New()
|
||||
require.NoError(t, err)
|
||||
|
||||
factory := NewServiceFactory(opts)
|
||||
result, err := factory(c)
|
||||
require.NoError(t, err)
|
||||
svc := result.(*Service)
|
||||
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
|
||||
err = svc.OnStartup(ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Verify socket appeared
|
||||
require.Eventually(t, func() bool {
|
||||
_, err := os.Stat(sockPath)
|
||||
return err == nil
|
||||
}, 2*time.Second, 10*time.Millisecond, "gRPC socket should appear after startup")
|
||||
|
||||
// Verify gRPC responds
|
||||
conn, err := grpc.NewClient(
|
||||
"unix://"+sockPath,
|
||||
grpc.WithTransportCredentials(insecure.NewCredentials()),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
defer conn.Close()
|
||||
|
||||
client := pb.NewCoreServiceClient(conn)
|
||||
_, err = client.StoreSet(ctx, &pb.StoreSetRequest{
|
||||
Group: "boot", Key: "ok", Value: "true",
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
resp, err := client.StoreGet(ctx, &pb.StoreGetRequest{
|
||||
Group: "boot", Key: "ok",
|
||||
})
|
||||
require.NoError(t, err)
|
||||
assert.True(t, resp.Found)
|
||||
assert.Equal(t, "true", resp.Value)
|
||||
|
||||
// Verify sidecar is running
|
||||
assert.True(t, svc.sidecar.IsRunning(), "sidecar should be running")
|
||||
|
||||
// Shutdown
|
||||
err = svc.OnShutdown(context.Background())
|
||||
assert.NoError(t, err)
|
||||
assert.False(t, svc.sidecar.IsRunning(), "sidecar should be stopped")
|
||||
}
|
||||
|
||||
func TestService_OnStartup_Good_NoManifest(t *testing.T) {
|
||||
tmpDir := t.TempDir()
|
||||
sockPath := filepath.Join(tmpDir, "core.sock")
|
||||
|
||||
opts := Options{
|
||||
DenoPath: "sleep",
|
||||
SocketPath: sockPath,
|
||||
AppRoot: tmpDir,
|
||||
StoreDBPath: ":memory:",
|
||||
}
|
||||
|
||||
c, err := core.New()
|
||||
require.NoError(t, err)
|
||||
|
||||
factory := NewServiceFactory(opts)
|
||||
result, _ := factory(c)
|
||||
svc := result.(*Service)
|
||||
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
|
||||
// Should succeed even without .core/view.yml
|
||||
err = svc.OnStartup(ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = svc.OnShutdown(context.Background())
|
||||
assert.NoError(t, err)
|
||||
}
|
||||
|
|
@ -23,7 +23,9 @@ func TestTranslationCompleteness_Good(t *testing.T) {
|
|||
|
||||
// Extract all T("key") calls from Go source
|
||||
keys := extractTranslationKeys(t, root)
|
||||
require.NotEmpty(t, keys, "should find translation keys in source code")
|
||||
if len(keys) == 0 {
|
||||
t.Skip("no i18n.T() calls found in source — CLI not yet wired to i18n")
|
||||
}
|
||||
|
||||
var missing []string
|
||||
for _, key := range keys {
|
||||
|
|
|
|||
|
|
@ -24,6 +24,13 @@ func New(root string) (*Medium, error) {
|
|||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// Resolve symlinks so sandbox checks compare like-for-like.
|
||||
// On macOS, /var is a symlink to /private/var — without this,
|
||||
// EvalSymlinks on child paths resolves to /private/var/... while
|
||||
// root stays /var/..., causing false sandbox escape detections.
|
||||
if resolved, err := filepath.EvalSymlinks(abs); err == nil {
|
||||
abs = resolved
|
||||
}
|
||||
return &Medium{root: abs}, nil
|
||||
}
|
||||
|
||||
|
|
|
|||
|
|
@ -14,7 +14,9 @@ func TestNew(t *testing.T) {
|
|||
root := t.TempDir()
|
||||
m, err := New(root)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, root, m.root)
|
||||
// New() resolves symlinks (macOS /var → /private/var), so compare resolved paths.
|
||||
resolved, _ := filepath.EvalSymlinks(root)
|
||||
assert.Equal(t, resolved, m.root)
|
||||
}
|
||||
|
||||
func TestPath(t *testing.T) {
|
||||
|
|
|
|||
|
|
@ -24,8 +24,9 @@ type Node struct {
|
|||
files map[string]*dataFile
|
||||
}
|
||||
|
||||
// compile-time interface check
|
||||
// compile-time interface checks
|
||||
var _ coreio.Medium = (*Node)(nil)
|
||||
var _ fs.ReadFileFS = (*Node)(nil)
|
||||
|
||||
// New creates a new, empty Node.
|
||||
func New() *Node {
|
||||
|
|
@ -78,8 +79,17 @@ func (n *Node) ToTar() ([]byte, error) {
|
|||
return buf.Bytes(), nil
|
||||
}
|
||||
|
||||
// FromTar replaces the in-memory tree with the contents of a tar archive.
|
||||
func (n *Node) FromTar(data []byte) error {
|
||||
// FromTar creates a new Node from a tar archive.
|
||||
func FromTar(data []byte) (*Node, error) {
|
||||
n := New()
|
||||
if err := n.LoadTar(data); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return n, nil
|
||||
}
|
||||
|
||||
// LoadTar replaces the in-memory tree with the contents of a tar archive.
|
||||
func (n *Node) LoadTar(data []byte) error {
|
||||
newFiles := make(map[string]*dataFile)
|
||||
tr := tar.NewReader(bytes.NewReader(data))
|
||||
|
||||
|
|
@ -118,14 +128,15 @@ func (n *Node) WalkNode(root string, fn fs.WalkDirFunc) error {
|
|||
return fs.WalkDir(n, root, fn)
|
||||
}
|
||||
|
||||
// WalkOptions configures optional behaviour for Walk.
|
||||
// WalkOptions configures the behaviour of Walk.
|
||||
type WalkOptions struct {
|
||||
// MaxDepth limits traversal depth (0 = unlimited, 1 = root children only).
|
||||
// MaxDepth limits how many directory levels to descend. 0 means unlimited.
|
||||
MaxDepth int
|
||||
// Filter, when non-nil, is called before visiting each entry.
|
||||
// Return false to skip the entry (and its subtree if a directory).
|
||||
// Filter, if set, is called for each entry. Return true to include the
|
||||
// entry (and descend into it if it is a directory).
|
||||
Filter func(path string, d fs.DirEntry) bool
|
||||
// SkipErrors suppresses errors from the root lookup and doesn't call fn.
|
||||
// SkipErrors suppresses errors (e.g. nonexistent root) instead of
|
||||
// propagating them through the callback.
|
||||
SkipErrors bool
|
||||
}
|
||||
|
||||
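A short usage sketch of Walk with these options may help; it is assumed to sit in the same package as Node (the import path is not shown in this hunk, so imports of fmt, io/fs and strings are left implicit) and uses the Medium-style Write method that Node satisfies via the coreio.Medium check above:

```go
// walkExample is a sketch assumed to live alongside Node in its package.
// It populates a small tree, then walks "docs" one level deep, skipping
// dot-entries via Filter and ignoring a missing root via SkipErrors.
func walkExample() error {
	n := New()
	if err := n.Write("docs/guide.md", "# Guide"); err != nil {
		return err
	}
	if err := n.Write("docs/internal/notes.md", "draft"); err != nil {
		return err
	}

	return n.Walk("docs", func(p string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		fmt.Println(p)
		return nil
	}, WalkOptions{
		MaxDepth:   1,
		Filter:     func(p string, d fs.DirEntry) bool { return !strings.HasPrefix(d.Name(), ".") },
		SkipErrors: true,
	})
}
```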
|
|
@ -137,67 +148,68 @@ func (n *Node) Walk(root string, fn fs.WalkDirFunc, opts ...WalkOptions) error {
|
|||
}
|
||||
|
||||
if opt.SkipErrors {
|
||||
// Check root exists — if not, silently skip.
|
||||
// If root doesn't exist, silently return nil.
|
||||
if _, err := n.Stat(root); err != nil {
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
rootDepth := 0
|
||||
if root != "." && root != "" {
|
||||
rootDepth = strings.Count(root, "/") + 1
|
||||
}
|
||||
|
||||
return fs.WalkDir(n, root, func(p string, d fs.DirEntry, err error) error {
|
||||
if err != nil {
|
||||
return fn(p, d, err)
|
||||
}
|
||||
|
||||
// MaxDepth check.
|
||||
if opt.MaxDepth > 0 {
|
||||
depth := 0
|
||||
if p != "." && p != "" {
|
||||
depth = strings.Count(p, "/") + 1
|
||||
}
|
||||
if depth-rootDepth > opt.MaxDepth {
|
||||
if d.IsDir() {
|
||||
if opt.Filter != nil && err == nil {
|
||||
if !opt.Filter(p, d) {
|
||||
if d != nil && d.IsDir() {
|
||||
return fs.SkipDir
|
||||
}
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// Filter check.
|
||||
if opt.Filter != nil && !opt.Filter(p, d) {
|
||||
if d.IsDir() {
|
||||
// Call the user's function first so the entry is visited.
|
||||
result := fn(p, d, err)
|
||||
|
||||
// After visiting a directory at MaxDepth, prevent descending further.
|
||||
if result == nil && opt.MaxDepth > 0 && d != nil && d.IsDir() && p != root {
|
||||
rel := strings.TrimPrefix(p, root)
|
||||
rel = strings.TrimPrefix(rel, "/")
|
||||
depth := strings.Count(rel, "/") + 1
|
||||
if depth >= opt.MaxDepth {
|
||||
return fs.SkipDir
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
return fn(p, d, err)
|
||||
return result
|
||||
})
|
||||
}
|
||||
|
||||
// CopyFile copies a single file from the node to the OS filesystem.
|
||||
func (n *Node) CopyFile(src, dst string, perm os.FileMode) error {
|
||||
// ReadFile returns the content of the named file as a byte slice.
|
||||
// Implements fs.ReadFileFS.
|
||||
func (n *Node) ReadFile(name string) ([]byte, error) {
|
||||
name = strings.TrimPrefix(name, "/")
|
||||
f, ok := n.files[name]
|
||||
if !ok {
|
||||
return nil, &fs.PathError{Op: "read", Path: name, Err: fs.ErrNotExist}
|
||||
}
|
||||
// Return a copy to prevent callers from mutating internal state.
|
||||
result := make([]byte, len(f.content))
|
||||
copy(result, f.content)
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// CopyFile copies a file from the in-memory tree to the local filesystem.
|
||||
func (n *Node) CopyFile(src, dst string, perm fs.FileMode) error {
|
||||
src = strings.TrimPrefix(src, "/")
|
||||
f, ok := n.files[src]
|
||||
if !ok {
|
||||
// Check if it's a directory — can't copy a directory as a file.
|
||||
if info, err := n.Stat(src); err == nil && info.IsDir() {
|
||||
// Check if it's a directory — can't copy directories this way.
|
||||
info, err := n.Stat(src)
|
||||
if err != nil {
|
||||
return &fs.PathError{Op: "copyfile", Path: src, Err: fs.ErrNotExist}
|
||||
}
|
||||
if info.IsDir() {
|
||||
return &fs.PathError{Op: "copyfile", Path: src, Err: fs.ErrInvalid}
|
||||
}
|
||||
return &fs.PathError{Op: "copyfile", Path: src, Err: fs.ErrNotExist}
|
||||
}
|
||||
|
||||
dir := path.Dir(dst)
|
||||
if dir != "." {
|
||||
if err := os.MkdirAll(dir, 0755); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return os.WriteFile(dst, f.content, perm)
|
||||
}
|
||||
|
||||
|
|
@ -330,20 +342,6 @@ func (n *Node) ReadDir(name string) ([]fs.DirEntry, error) {
|
|||
return entries, nil
|
||||
}
|
||||
|
||||
// ReadFile returns the content of a file as a byte slice.
|
||||
// Implements fs.ReadFileFS.
|
||||
func (n *Node) ReadFile(name string) ([]byte, error) {
|
||||
name = strings.TrimPrefix(name, "/")
|
||||
f, ok := n.files[name]
|
||||
if !ok {
|
||||
return nil, fs.ErrNotExist
|
||||
}
|
||||
// Return a copy to prevent mutation of internal state.
|
||||
out := make([]byte, len(f.content))
|
||||
copy(out, f.content)
|
||||
return out, nil
|
||||
}
|
||||
|
||||
// ---------- Medium interface: read/write ----------
|
||||
|
||||
// Read retrieves the content of a file as a string.
|
||||
|
|
|
|||
|
|
@ -451,8 +451,7 @@ func TestFromTar_Good(t *testing.T) {
|
|||
}
|
||||
require.NoError(t, tw.Close())
|
||||
|
||||
n := New()
|
||||
err := n.FromTar(buf.Bytes())
|
||||
n, err := FromTar(buf.Bytes())
|
||||
require.NoError(t, err)
|
||||
|
||||
assert.True(t, n.Exists("foo.txt"), "foo.txt should exist")
|
||||
|
|
@ -462,8 +461,7 @@ func TestFromTar_Good(t *testing.T) {
|
|||
func TestFromTar_Bad(t *testing.T) {
|
||||
// Truncated data that cannot be a valid tar.
|
||||
truncated := make([]byte, 100)
|
||||
n := New()
|
||||
err := n.FromTar(truncated)
|
||||
_, err := FromTar(truncated)
|
||||
assert.Error(t, err, "truncated data should produce an error")
|
||||
}
|
||||
|
||||
|
|
@ -475,8 +473,7 @@ func TestTarRoundTrip_Good(t *testing.T) {
|
|||
tarball, err := n1.ToTar()
|
||||
require.NoError(t, err)
|
||||
|
||||
n2 := New()
|
||||
err = n2.FromTar(tarball)
|
||||
n2, err := FromTar(tarball)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Verify n2 matches n1.
|
||||
|
|
|
|||
pkg/manifest/loader.go (new file, 43 lines)
@@ -0,0 +1,43 @@
|
|||
package manifest
|
||||
|
||||
import (
|
||||
"crypto/ed25519"
|
||||
"fmt"
|
||||
"path/filepath"
|
||||
|
||||
"forge.lthn.ai/core/go/pkg/io"
|
||||
"gopkg.in/yaml.v3"
|
||||
)
|
||||
|
||||
const manifestPath = ".core/view.yml"
|
||||
|
||||
// MarshalYAML serializes a manifest to YAML bytes.
|
||||
func MarshalYAML(m *Manifest) ([]byte, error) {
|
||||
return yaml.Marshal(m)
|
||||
}
|
||||
|
||||
// Load reads and parses a .core/view.yml from the given root directory.
|
||||
func Load(medium io.Medium, root string) (*Manifest, error) {
|
||||
path := filepath.Join(root, manifestPath)
|
||||
data, err := medium.Read(path)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("manifest.Load: %w", err)
|
||||
}
|
||||
return Parse([]byte(data))
|
||||
}
|
||||
|
||||
// LoadVerified reads, parses, and verifies the ed25519 signature.
|
||||
func LoadVerified(medium io.Medium, root string, pub ed25519.PublicKey) (*Manifest, error) {
|
||||
m, err := Load(medium, root)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
ok, err := Verify(m, pub)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("manifest.LoadVerified: %w", err)
|
||||
}
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("manifest.LoadVerified: signature verification failed for %q", m.Code)
|
||||
}
|
||||
return m, nil
|
||||
}
|
||||
pkg/manifest/loader_test.go (new file, 63 lines)
@@ -0,0 +1,63 @@
|
|||
package manifest
|
||||
|
||||
import (
|
||||
"crypto/ed25519"
|
||||
"testing"
|
||||
|
||||
"forge.lthn.ai/core/go/pkg/io"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestLoad_Good(t *testing.T) {
|
||||
fs := io.NewMockMedium()
|
||||
fs.Files[".core/view.yml"] = `
|
||||
code: test-app
|
||||
name: Test App
|
||||
version: 1.0.0
|
||||
layout: HLCRF
|
||||
slots:
|
||||
C: main-content
|
||||
`
|
||||
m, err := Load(fs, ".")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "test-app", m.Code)
|
||||
assert.Equal(t, "main-content", m.Slots["C"])
|
||||
}
|
||||
|
||||
func TestLoad_Bad_NoManifest(t *testing.T) {
|
||||
fs := io.NewMockMedium()
|
||||
_, err := Load(fs, ".")
|
||||
assert.Error(t, err)
|
||||
}
|
||||
|
||||
func TestLoadVerified_Good(t *testing.T) {
|
||||
pub, priv, _ := ed25519.GenerateKey(nil)
|
||||
m := &Manifest{
|
||||
Code: "signed-app", Name: "Signed", Version: "1.0.0",
|
||||
Layout: "HLCRF", Slots: map[string]string{"C": "main"},
|
||||
}
|
||||
_ = Sign(m, priv)
|
||||
|
||||
raw, _ := MarshalYAML(m)
|
||||
fs := io.NewMockMedium()
|
||||
fs.Files[".core/view.yml"] = string(raw)
|
||||
|
||||
loaded, err := LoadVerified(fs, ".", pub)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "signed-app", loaded.Code)
|
||||
}
|
||||
|
||||
func TestLoadVerified_Bad_Tampered(t *testing.T) {
|
||||
pub, priv, _ := ed25519.GenerateKey(nil)
|
||||
m := &Manifest{Code: "app", Version: "1.0.0"}
|
||||
_ = Sign(m, priv)
|
||||
|
||||
raw, _ := MarshalYAML(m)
|
||||
tampered := "code: evil\n" + string(raw)[6:]
|
||||
fs := io.NewMockMedium()
|
||||
fs.Files[".core/view.yml"] = tampered
|
||||
|
||||
_, err := LoadVerified(fs, ".", pub)
|
||||
assert.Error(t, err)
|
||||
}
|
||||
pkg/manifest/manifest.go (new file, 50 lines)
@@ -0,0 +1,50 @@
|
|||
package manifest
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"gopkg.in/yaml.v3"
|
||||
)
|
||||
|
||||
// Manifest represents a .core/view.yml application manifest.
|
||||
type Manifest struct {
|
||||
Code string `yaml:"code"`
|
||||
Name string `yaml:"name"`
|
||||
Version string `yaml:"version"`
|
||||
Sign string `yaml:"sign"`
|
||||
Layout string `yaml:"layout"`
|
||||
Slots map[string]string `yaml:"slots"`
|
||||
|
||||
Permissions Permissions `yaml:"permissions"`
|
||||
Modules []string `yaml:"modules"`
|
||||
}
|
||||
|
||||
// Permissions declares the I/O capabilities a module requires.
|
||||
type Permissions struct {
|
||||
Read []string `yaml:"read"`
|
||||
Write []string `yaml:"write"`
|
||||
Net []string `yaml:"net"`
|
||||
Run []string `yaml:"run"`
|
||||
}
|
||||
|
||||
// Parse decodes YAML bytes into a Manifest.
|
||||
func Parse(data []byte) (*Manifest, error) {
|
||||
var m Manifest
|
||||
if err := yaml.Unmarshal(data, &m); err != nil {
|
||||
return nil, fmt.Errorf("manifest.Parse: %w", err)
|
||||
}
|
||||
return &m, nil
|
||||
}
|
||||
|
||||
// SlotNames returns a deduplicated list of component names from slots.
|
||||
func (m *Manifest) SlotNames() []string {
|
||||
seen := make(map[string]bool)
|
||||
var names []string
|
||||
for _, name := range m.Slots {
|
||||
if !seen[name] {
|
||||
seen[name] = true
|
||||
names = append(names, name)
|
||||
}
|
||||
}
|
||||
return names
|
||||
}
|
||||
pkg/manifest/manifest_test.go (new file, 65 lines)
@@ -0,0 +1,65 @@
|
|||
package manifest
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestParse_Good(t *testing.T) {
|
||||
raw := `
|
||||
code: photo-browser
|
||||
name: Photo Browser
|
||||
version: 0.1.0
|
||||
sign: dGVzdHNpZw==
|
||||
|
||||
layout: HLCRF
|
||||
slots:
|
||||
H: nav-breadcrumb
|
||||
L: folder-tree
|
||||
C: photo-grid
|
||||
R: metadata-panel
|
||||
F: status-bar
|
||||
|
||||
permissions:
|
||||
read: ["./photos/"]
|
||||
write: []
|
||||
net: []
|
||||
run: []
|
||||
|
||||
modules:
|
||||
- core/media
|
||||
- core/fs
|
||||
`
|
||||
m, err := Parse([]byte(raw))
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "photo-browser", m.Code)
|
||||
assert.Equal(t, "Photo Browser", m.Name)
|
||||
assert.Equal(t, "0.1.0", m.Version)
|
||||
assert.Equal(t, "dGVzdHNpZw==", m.Sign)
|
||||
assert.Equal(t, "HLCRF", m.Layout)
|
||||
assert.Equal(t, "nav-breadcrumb", m.Slots["H"])
|
||||
assert.Equal(t, "photo-grid", m.Slots["C"])
|
||||
assert.Len(t, m.Permissions.Read, 1)
|
||||
assert.Equal(t, "./photos/", m.Permissions.Read[0])
|
||||
assert.Len(t, m.Modules, 2)
|
||||
}
|
||||
|
||||
func TestParse_Bad(t *testing.T) {
|
||||
_, err := Parse([]byte("not: valid: yaml: ["))
|
||||
assert.Error(t, err)
|
||||
}
|
||||
|
||||
func TestManifest_SlotNames_Good(t *testing.T) {
|
||||
m := Manifest{
|
||||
Slots: map[string]string{
|
||||
"H": "nav-bar",
|
||||
"C": "main-content",
|
||||
},
|
||||
}
|
||||
names := m.SlotNames()
|
||||
assert.Contains(t, names, "nav-bar")
|
||||
assert.Contains(t, names, "main-content")
|
||||
assert.Len(t, names, 2)
|
||||
}
|
||||
pkg/manifest/sign.go (new file, 43 lines)
@@ -0,0 +1,43 @@
|
|||
package manifest
|
||||
|
||||
import (
|
||||
"crypto/ed25519"
|
||||
"encoding/base64"
|
||||
"fmt"
|
||||
|
||||
"gopkg.in/yaml.v3"
|
||||
)
|
||||
|
||||
// signable returns the canonical bytes to sign (manifest without sign field).
|
||||
func signable(m *Manifest) ([]byte, error) {
|
||||
tmp := *m
|
||||
tmp.Sign = ""
|
||||
return yaml.Marshal(&tmp)
|
||||
}
|
||||
|
||||
// Sign computes the ed25519 signature and stores it in m.Sign (base64).
|
||||
func Sign(m *Manifest, priv ed25519.PrivateKey) error {
|
||||
msg, err := signable(m)
|
||||
if err != nil {
|
||||
return fmt.Errorf("manifest.Sign: marshal: %w", err)
|
||||
}
|
||||
sig := ed25519.Sign(priv, msg)
|
||||
m.Sign = base64.StdEncoding.EncodeToString(sig)
|
||||
return nil
|
||||
}
|
||||
|
||||
// Verify checks the ed25519 signature in m.Sign against the public key.
|
||||
func Verify(m *Manifest, pub ed25519.PublicKey) (bool, error) {
|
||||
if m.Sign == "" {
|
||||
return false, fmt.Errorf("manifest.Verify: no signature present")
|
||||
}
|
||||
sig, err := base64.StdEncoding.DecodeString(m.Sign)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("manifest.Verify: decode: %w", err)
|
||||
}
|
||||
msg, err := signable(m)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("manifest.Verify: marshal: %w", err)
|
||||
}
|
||||
return ed25519.Verify(pub, msg, sig), nil
|
||||
}
|
||||
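Sign and Verify pair with the marketplace installer later in this change, which receives the verification key as a hex string. A rough build-time sketch, assuming only the Sign, MarshalYAML and standard-library calls shown in this change (the main() framing and output path are illustrative):

```go
// Sketch: sign a manifest and print the hex-encoded public key that can
// later be supplied to the marketplace installer for verification.
package main

import (
	"crypto/ed25519"
	"encoding/hex"
	"fmt"
	"log"
	"os"

	"forge.lthn.ai/core/go/pkg/manifest"
)

func main() {
	pub, priv, err := ed25519.GenerateKey(nil)
	if err != nil {
		log.Fatal(err)
	}

	m := &manifest.Manifest{Code: "photo-browser", Name: "Photo Browser", Version: "0.1.0"}
	if err := manifest.Sign(m, priv); err != nil {
		log.Fatal(err)
	}

	data, err := manifest.MarshalYAML(m)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(".core/view.yml", data, 0o644); err != nil {
		log.Fatal(err)
	}

	// Distribute this value out of band; the installer decodes it with
	// hex.DecodeString and verifies the cloned manifest against it.
	fmt.Println("sign key:", hex.EncodeToString(pub))
}
```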
pkg/manifest/sign_test.go (new file, 51 lines)
@@ -0,0 +1,51 @@
|
|||
package manifest
|
||||
|
||||
import (
|
||||
"crypto/ed25519"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestSignAndVerify_Good(t *testing.T) {
|
||||
pub, priv, err := ed25519.GenerateKey(nil)
|
||||
require.NoError(t, err)
|
||||
|
||||
m := &Manifest{
|
||||
Code: "test-app",
|
||||
Name: "Test App",
|
||||
Version: "1.0.0",
|
||||
Layout: "HLCRF",
|
||||
Slots: map[string]string{"C": "main"},
|
||||
}
|
||||
|
||||
err = Sign(m, priv)
|
||||
require.NoError(t, err)
|
||||
assert.NotEmpty(t, m.Sign)
|
||||
|
||||
ok, err := Verify(m, pub)
|
||||
require.NoError(t, err)
|
||||
assert.True(t, ok)
|
||||
}
|
||||
|
||||
func TestVerify_Bad_Tampered(t *testing.T) {
|
||||
pub, priv, _ := ed25519.GenerateKey(nil)
|
||||
m := &Manifest{Code: "test-app", Version: "1.0.0"}
|
||||
_ = Sign(m, priv)
|
||||
|
||||
m.Code = "evil-app" // tamper
|
||||
|
||||
ok, err := Verify(m, pub)
|
||||
require.NoError(t, err)
|
||||
assert.False(t, ok)
|
||||
}
|
||||
|
||||
func TestVerify_Bad_Unsigned(t *testing.T) {
|
||||
pub, _, _ := ed25519.GenerateKey(nil)
|
||||
m := &Manifest{Code: "test-app"}
|
||||
|
||||
ok, err := Verify(m, pub)
|
||||
assert.Error(t, err)
|
||||
assert.False(t, ok)
|
||||
}
|
||||
pkg/marketplace/installer.go (new file, 196 lines)
@@ -0,0 +1,196 @@
|
|||
package marketplace
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/hex"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"forge.lthn.ai/core/go/pkg/io"
|
||||
"forge.lthn.ai/core/go/pkg/manifest"
|
||||
"forge.lthn.ai/core/go/pkg/store"
|
||||
)
|
||||
|
||||
const storeGroup = "_modules"
|
||||
|
||||
// Installer handles module installation from Git repos.
|
||||
type Installer struct {
|
||||
modulesDir string
|
||||
store *store.Store
|
||||
}
|
||||
|
||||
// NewInstaller creates a new module installer.
|
||||
func NewInstaller(modulesDir string, st *store.Store) *Installer {
|
||||
return &Installer{
|
||||
modulesDir: modulesDir,
|
||||
store: st,
|
||||
}
|
||||
}
|
||||
|
||||
// InstalledModule holds stored metadata about an installed module.
|
||||
type InstalledModule struct {
|
||||
Code string `json:"code"`
|
||||
Name string `json:"name"`
|
||||
Version string `json:"version"`
|
||||
Repo string `json:"repo"`
|
||||
EntryPoint string `json:"entry_point"`
|
||||
Permissions manifest.Permissions `json:"permissions"`
|
||||
SignKey string `json:"sign_key,omitempty"`
|
||||
InstalledAt string `json:"installed_at"`
|
||||
}
|
||||
|
||||
// Install clones a module repo, verifies its manifest signature, and registers it.
|
||||
func (i *Installer) Install(ctx context.Context, mod Module) error {
|
||||
// Check if already installed
|
||||
if _, err := i.store.Get(storeGroup, mod.Code); err == nil {
|
||||
return fmt.Errorf("marketplace: module %q already installed", mod.Code)
|
||||
}
|
||||
|
||||
dest := filepath.Join(i.modulesDir, mod.Code)
|
||||
if err := os.MkdirAll(i.modulesDir, 0755); err != nil {
|
||||
return fmt.Errorf("marketplace: mkdir: %w", err)
|
||||
}
|
||||
if err := gitClone(ctx, mod.Repo, dest); err != nil {
|
||||
return fmt.Errorf("marketplace: clone %s: %w", mod.Repo, err)
|
||||
}
|
||||
|
||||
// On any error after clone, clean up the directory
|
||||
cleanup := true
|
||||
defer func() {
|
||||
if cleanup {
|
||||
os.RemoveAll(dest)
|
||||
}
|
||||
}()
|
||||
|
||||
medium, err := io.NewSandboxed(dest)
|
||||
if err != nil {
|
||||
return fmt.Errorf("marketplace: medium: %w", err)
|
||||
}
|
||||
|
||||
m, err := loadManifest(medium, mod.SignKey)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
entryPoint := filepath.Join(dest, "main.ts")
|
||||
installed := InstalledModule{
|
||||
Code: mod.Code,
|
||||
Name: m.Name,
|
||||
Version: m.Version,
|
||||
Repo: mod.Repo,
|
||||
EntryPoint: entryPoint,
|
||||
Permissions: m.Permissions,
|
||||
SignKey: mod.SignKey,
|
||||
InstalledAt: time.Now().UTC().Format(time.RFC3339),
|
||||
}
|
||||
|
||||
data, err := json.Marshal(installed)
|
||||
if err != nil {
|
||||
return fmt.Errorf("marketplace: marshal: %w", err)
|
||||
}
|
||||
|
||||
if err := i.store.Set(storeGroup, mod.Code, string(data)); err != nil {
|
||||
return fmt.Errorf("marketplace: store: %w", err)
|
||||
}
|
||||
|
||||
cleanup = false
|
||||
return nil
|
||||
}
|
||||
|
||||
// Remove uninstalls a module by deleting its files and store entry.
|
||||
func (i *Installer) Remove(code string) error {
|
||||
if _, err := i.store.Get(storeGroup, code); err != nil {
|
||||
return fmt.Errorf("marketplace: module %q not installed", code)
|
||||
}
|
||||
|
||||
dest := filepath.Join(i.modulesDir, code)
|
||||
os.RemoveAll(dest)
|
||||
|
||||
return i.store.Delete(storeGroup, code)
|
||||
}
|
||||
|
||||
// Update pulls latest changes and re-verifies the manifest.
|
||||
func (i *Installer) Update(ctx context.Context, code string) error {
|
||||
raw, err := i.store.Get(storeGroup, code)
|
||||
if err != nil {
|
||||
return fmt.Errorf("marketplace: module %q not installed", code)
|
||||
}
|
||||
|
||||
var installed InstalledModule
|
||||
if err := json.Unmarshal([]byte(raw), &installed); err != nil {
|
||||
return fmt.Errorf("marketplace: unmarshal: %w", err)
|
||||
}
|
||||
|
||||
dest := filepath.Join(i.modulesDir, code)
|
||||
|
||||
cmd := exec.CommandContext(ctx, "git", "-C", dest, "pull", "--ff-only")
|
||||
if output, err := cmd.CombinedOutput(); err != nil {
|
||||
return fmt.Errorf("marketplace: pull: %s: %w", strings.TrimSpace(string(output)), err)
|
||||
}
|
||||
|
||||
// Reload and re-verify manifest with the same key used at install time
|
||||
medium, mErr := io.NewSandboxed(dest)
|
||||
if mErr != nil {
|
||||
return fmt.Errorf("marketplace: medium: %w", mErr)
|
||||
}
|
||||
m, mErr := loadManifest(medium, installed.SignKey)
|
||||
if mErr != nil {
|
||||
return fmt.Errorf("marketplace: reload manifest: %w", mErr)
|
||||
}
|
||||
|
||||
// Update stored metadata
|
||||
installed.Name = m.Name
|
||||
installed.Version = m.Version
|
||||
installed.Permissions = m.Permissions
|
||||
|
||||
data, err := json.Marshal(installed)
|
||||
if err != nil {
|
||||
return fmt.Errorf("marketplace: marshal: %w", err)
|
||||
}
|
||||
|
||||
return i.store.Set(storeGroup, code, string(data))
|
||||
}
|
||||
|
||||
// Installed returns all installed module metadata.
|
||||
func (i *Installer) Installed() ([]InstalledModule, error) {
|
||||
all, err := i.store.GetAll(storeGroup)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("marketplace: list: %w", err)
|
||||
}
|
||||
|
||||
var modules []InstalledModule
|
||||
for _, raw := range all {
|
||||
var m InstalledModule
|
||||
if err := json.Unmarshal([]byte(raw), &m); err != nil {
|
||||
continue
|
||||
}
|
||||
modules = append(modules, m)
|
||||
}
|
||||
return modules, nil
|
||||
}
|
||||
|
||||
// loadManifest loads and optionally verifies a module manifest.
|
||||
func loadManifest(medium io.Medium, signKey string) (*manifest.Manifest, error) {
|
||||
if signKey != "" {
|
||||
pubBytes, err := hex.DecodeString(signKey)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("marketplace: decode sign key: %w", err)
|
||||
}
|
||||
return manifest.LoadVerified(medium, ".", pubBytes)
|
||||
}
|
||||
return manifest.Load(medium, ".")
|
||||
}
|
||||
|
||||
// gitClone clones a repository with --depth=1.
|
||||
func gitClone(ctx context.Context, repo, dest string) error {
|
||||
cmd := exec.CommandContext(ctx, "git", "clone", "--depth=1", repo, dest)
|
||||
if output, err := cmd.CombinedOutput(); err != nil {
|
||||
return fmt.Errorf("%s: %w", strings.TrimSpace(string(output)), err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
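Before the tests, a rough sketch of the installer from the caller's side; the Module type itself is not part of this file, so only the Code, Repo and SignKey fields used above are assumed, and the store path and repository URL are placeholders:

```go
// Sketch: install a module and list what is installed. Store path and
// repository URL are placeholders, not values from this change.
package main

import (
	"context"
	"log"

	"forge.lthn.ai/core/go/pkg/marketplace"
	"forge.lthn.ai/core/go/pkg/store"
)

func main() {
	st, err := store.New("./app/store.db")
	if err != nil {
		log.Fatal(err)
	}
	defer st.Close()

	inst := marketplace.NewInstaller("./app/modules", st)
	err = inst.Install(context.Background(), marketplace.Module{
		Code: "hello-mod",
		Repo: "https://forge.lthn.ai/examples/hello-mod.git",
		// SignKey: "<hex ed25519 public key>", // optional: verify the manifest signature
	})
	if err != nil {
		log.Fatal(err)
	}

	installed, err := inst.Installed()
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range installed {
		log.Printf("%s %s (%s)", m.Code, m.Version, m.EntryPoint)
	}
}
```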
pkg/marketplace/installer_test.go (new file, 263 lines)
@@ -0,0 +1,263 @@
|
|||
package marketplace
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/ed25519"
|
||||
"encoding/hex"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
|
||||
"forge.lthn.ai/core/go/pkg/manifest"
|
||||
"forge.lthn.ai/core/go/pkg/store"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
// createTestRepo creates a bare-bones git repo with a manifest and main.ts.
|
||||
// Returns the repo path (usable as Module.Repo for local clone).
|
||||
func createTestRepo(t *testing.T, code, version string) string {
|
||||
t.Helper()
|
||||
dir := filepath.Join(t.TempDir(), code)
|
||||
	require.NoError(t, os.MkdirAll(filepath.Join(dir, ".core"), 0755))

	manifestYAML := "code: " + code + "\nname: Test " + code + "\nversion: \"" + version + "\"\n"
	require.NoError(t, os.WriteFile(
		filepath.Join(dir, ".core", "view.yml"),
		[]byte(manifestYAML), 0644,
	))
	require.NoError(t, os.WriteFile(
		filepath.Join(dir, "main.ts"),
		[]byte("export async function init(core: any) {}\n"), 0644,
	))

	runGit(t, dir, "init")
	runGit(t, dir, "add", ".")
	runGit(t, dir, "commit", "-m", "init")
	return dir
}

// createSignedTestRepo creates a git repo with a signed manifest.
// Returns (repo path, hex-encoded public key).
func createSignedTestRepo(t *testing.T, code, version string) (string, string) {
	t.Helper()
	pub, priv, err := ed25519.GenerateKey(nil)
	require.NoError(t, err)

	dir := filepath.Join(t.TempDir(), code)
	require.NoError(t, os.MkdirAll(filepath.Join(dir, ".core"), 0755))

	m := &manifest.Manifest{
		Code:    code,
		Name:    "Test " + code,
		Version: version,
	}
	require.NoError(t, manifest.Sign(m, priv))

	data, err := manifest.MarshalYAML(m)
	require.NoError(t, err)
	require.NoError(t, os.WriteFile(filepath.Join(dir, ".core", "view.yml"), data, 0644))
	require.NoError(t, os.WriteFile(filepath.Join(dir, "main.ts"), []byte("export async function init(core: any) {}\n"), 0644))

	runGit(t, dir, "init")
	runGit(t, dir, "add", ".")
	runGit(t, dir, "commit", "-m", "init")

	return dir, hex.EncodeToString(pub)
}

func runGit(t *testing.T, dir string, args ...string) {
	t.Helper()
	cmd := exec.Command("git", append([]string{"-C", dir, "-c", "user.email=test@test.com", "-c", "user.name=test"}, args...)...)
	out, err := cmd.CombinedOutput()
	require.NoError(t, err, "git %v: %s", args, string(out))
}

func TestInstall_Good(t *testing.T) {
	repo := createTestRepo(t, "hello-mod", "1.0")
	modulesDir := filepath.Join(t.TempDir(), "modules")

	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(modulesDir, st)
	err = inst.Install(context.Background(), Module{
		Code: "hello-mod",
		Repo: repo,
	})
	require.NoError(t, err)

	// Verify directory exists
	_, err = os.Stat(filepath.Join(modulesDir, "hello-mod", "main.ts"))
	assert.NoError(t, err, "main.ts should exist in installed module")

	// Verify store entry
	raw, err := st.Get("_modules", "hello-mod")
	require.NoError(t, err)
	assert.Contains(t, raw, `"code":"hello-mod"`)
	assert.Contains(t, raw, `"version":"1.0"`)
}

func TestInstall_Good_Signed(t *testing.T) {
	repo, signKey := createSignedTestRepo(t, "signed-mod", "2.0")
	modulesDir := filepath.Join(t.TempDir(), "modules")

	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(modulesDir, st)
	err = inst.Install(context.Background(), Module{
		Code:    "signed-mod",
		Repo:    repo,
		SignKey: signKey,
	})
	require.NoError(t, err)

	raw, err := st.Get("_modules", "signed-mod")
	require.NoError(t, err)
	assert.Contains(t, raw, `"version":"2.0"`)
}

func TestInstall_Bad_AlreadyInstalled(t *testing.T) {
	repo := createTestRepo(t, "dup-mod", "1.0")
	modulesDir := filepath.Join(t.TempDir(), "modules")

	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(modulesDir, st)
	mod := Module{Code: "dup-mod", Repo: repo}

	require.NoError(t, inst.Install(context.Background(), mod))
	err = inst.Install(context.Background(), mod)
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "already installed")
}

func TestInstall_Bad_InvalidSignature(t *testing.T) {
	// Sign with key A, verify with key B
	repo, _ := createSignedTestRepo(t, "bad-sig", "1.0")
	_, wrongKey := createSignedTestRepo(t, "dummy", "1.0") // different key

	modulesDir := filepath.Join(t.TempDir(), "modules")

	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(modulesDir, st)
	err = inst.Install(context.Background(), Module{
		Code:    "bad-sig",
		Repo:    repo,
		SignKey: wrongKey,
	})
	assert.Error(t, err)

	// Verify directory was cleaned up
	_, statErr := os.Stat(filepath.Join(modulesDir, "bad-sig"))
	assert.True(t, os.IsNotExist(statErr), "directory should be cleaned up on failure")
}

func TestRemove_Good(t *testing.T) {
	repo := createTestRepo(t, "rm-mod", "1.0")
	modulesDir := filepath.Join(t.TempDir(), "modules")

	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(modulesDir, st)
	require.NoError(t, inst.Install(context.Background(), Module{Code: "rm-mod", Repo: repo}))

	err = inst.Remove("rm-mod")
	require.NoError(t, err)

	// Directory gone
	_, statErr := os.Stat(filepath.Join(modulesDir, "rm-mod"))
	assert.True(t, os.IsNotExist(statErr))

	// Store entry gone
	_, err = st.Get("_modules", "rm-mod")
	assert.Error(t, err)
}

func TestRemove_Bad_NotInstalled(t *testing.T) {
	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(t.TempDir(), st)
	err = inst.Remove("nonexistent")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "not installed")
}

func TestInstalled_Good(t *testing.T) {
	modulesDir := filepath.Join(t.TempDir(), "modules")

	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(modulesDir, st)

	repo1 := createTestRepo(t, "mod-a", "1.0")
	repo2 := createTestRepo(t, "mod-b", "2.0")

	require.NoError(t, inst.Install(context.Background(), Module{Code: "mod-a", Repo: repo1}))
	require.NoError(t, inst.Install(context.Background(), Module{Code: "mod-b", Repo: repo2}))

	installed, err := inst.Installed()
	require.NoError(t, err)
	assert.Len(t, installed, 2)

	codes := map[string]bool{}
	for _, m := range installed {
		codes[m.Code] = true
	}
	assert.True(t, codes["mod-a"])
	assert.True(t, codes["mod-b"])
}

func TestInstalled_Good_Empty(t *testing.T) {
	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(t.TempDir(), st)
	installed, err := inst.Installed()
	require.NoError(t, err)
	assert.Empty(t, installed)
}

func TestUpdate_Good(t *testing.T) {
	repo := createTestRepo(t, "upd-mod", "1.0")
	modulesDir := filepath.Join(t.TempDir(), "modules")

	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(modulesDir, st)
	require.NoError(t, inst.Install(context.Background(), Module{Code: "upd-mod", Repo: repo}))

	// Update the origin repo
	newManifest := "code: upd-mod\nname: Updated Module\nversion: \"2.0\"\n"
	require.NoError(t, os.WriteFile(filepath.Join(repo, ".core", "view.yml"), []byte(newManifest), 0644))
	runGit(t, repo, "add", ".")
	runGit(t, repo, "commit", "-m", "bump version")

	err = inst.Update(context.Background(), "upd-mod")
	require.NoError(t, err)

	// Verify updated metadata
	installed, err := inst.Installed()
	require.NoError(t, err)
	require.Len(t, installed, 1)
	assert.Equal(t, "2.0", installed[0].Version)
	assert.Equal(t, "Updated Module", installed[0].Name)
}
67 pkg/marketplace/marketplace.go Normal file

@@ -0,0 +1,67 @@
package marketplace

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Module is a marketplace entry pointing to a module's Git repo.
type Module struct {
	Code     string `json:"code"`
	Name     string `json:"name"`
	Repo     string `json:"repo"`
	SignKey  string `json:"sign_key"`
	Category string `json:"category"`
}

// Index is the root marketplace catalog.
type Index struct {
	Version    int      `json:"version"`
	Modules    []Module `json:"modules"`
	Categories []string `json:"categories"`
}

// ParseIndex decodes a marketplace index.json.
func ParseIndex(data []byte) (*Index, error) {
	var idx Index
	if err := json.Unmarshal(data, &idx); err != nil {
		return nil, fmt.Errorf("marketplace.ParseIndex: %w", err)
	}
	return &idx, nil
}

// Search returns modules matching the query in code, name, or category.
func (idx *Index) Search(query string) []Module {
	q := strings.ToLower(query)
	var results []Module
	for _, m := range idx.Modules {
		if strings.Contains(strings.ToLower(m.Code), q) ||
			strings.Contains(strings.ToLower(m.Name), q) ||
			strings.Contains(strings.ToLower(m.Category), q) {
			results = append(results, m)
		}
	}
	return results
}

// ByCategory returns all modules in the given category.
func (idx *Index) ByCategory(category string) []Module {
	var results []Module
	for _, m := range idx.Modules {
		if m.Category == category {
			results = append(results, m)
		}
	}
	return results
}

// Find returns the module with the given code, or false if not found.
func (idx *Index) Find(code string) (Module, bool) {
	for _, m := range idx.Modules {
		if m.Code == code {
			return m, true
		}
	}
	return Module{}, false
}
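For orientation, a minimal sketch of how a caller might consume the new marketplace package; the import path and index file name here are assumptions for illustration, not taken from this diff:

package main

import (
	"fmt"
	"os"

	// Assumed import path; adjust to the module that actually hosts pkg/marketplace.
	"forge.lthn.ai/core/cli/pkg/marketplace"
)

func main() {
	// index.json is a hypothetical local copy of the marketplace catalog.
	data, err := os.ReadFile("index.json")
	if err != nil {
		panic(err)
	}
	idx, err := marketplace.ParseIndex(data)
	if err != nil {
		panic(err)
	}
	// Case-insensitive search across code, name, and category.
	for _, m := range idx.Search("miner") {
		fmt.Println(m.Code, m.Repo)
	}
	// Exact lookup by module code.
	if m, ok := idx.Find("mining-xmrig"); ok {
		fmt.Println("found:", m.Name)
	}
}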
65 pkg/marketplace/marketplace_test.go Normal file

@@ -0,0 +1,65 @@
package marketplace

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestParseIndex_Good(t *testing.T) {
	raw := `{
		"version": 1,
		"modules": [
			{"code": "mining-xmrig", "name": "XMRig Miner", "repo": "https://forge.lthn.io/host-uk/mod-xmrig.git", "sign_key": "abc123", "category": "miner"},
			{"code": "utils-cyberchef", "name": "CyberChef", "repo": "https://forge.lthn.io/host-uk/mod-cyberchef.git", "sign_key": "def456", "category": "utils"}
		],
		"categories": ["miner", "utils"]
	}`
	idx, err := ParseIndex([]byte(raw))
	require.NoError(t, err)
	assert.Equal(t, 1, idx.Version)
	assert.Len(t, idx.Modules, 2)
	assert.Equal(t, "mining-xmrig", idx.Modules[0].Code)
}

func TestSearch_Good(t *testing.T) {
	idx := &Index{
		Modules: []Module{
			{Code: "mining-xmrig", Name: "XMRig Miner", Category: "miner"},
			{Code: "utils-cyberchef", Name: "CyberChef", Category: "utils"},
		},
	}
	results := idx.Search("miner")
	assert.Len(t, results, 1)
	assert.Equal(t, "mining-xmrig", results[0].Code)
}

func TestByCategory_Good(t *testing.T) {
	idx := &Index{
		Modules: []Module{
			{Code: "a", Category: "miner"},
			{Code: "b", Category: "utils"},
			{Code: "c", Category: "miner"},
		},
	}
	miners := idx.ByCategory("miner")
	assert.Len(t, miners, 2)
}

func TestFind_Good(t *testing.T) {
	idx := &Index{
		Modules: []Module{
			{Code: "mining-xmrig", Name: "XMRig"},
		},
	}
	m, ok := idx.Find("mining-xmrig")
	assert.True(t, ok)
	assert.Equal(t, "XMRig", m.Name)
}

func TestFind_Bad_NotFound(t *testing.T) {
	idx := &Index{}
	_, ok := idx.Find("nope")
	assert.False(t, ok)
}
@@ -1,470 +0,0 @@
package process

import (
	"context"
	"fmt"
	"log/slog"
	"sync"
	"time"
)

// RestartPolicy configures automatic restart behaviour for supervised units.
type RestartPolicy struct {
	// Delay between restart attempts.
	Delay time.Duration
	// MaxRestarts is the maximum number of restarts before giving up.
	// Use -1 for unlimited restarts.
	MaxRestarts int
}

// DaemonSpec defines a long-running external process under supervision.
type DaemonSpec struct {
	// Name identifies this daemon (must be unique within the supervisor).
	Name string
	// RunOptions defines the command, args, dir, env.
	RunOptions
	// Restart configures automatic restart behaviour.
	Restart RestartPolicy
}

// GoSpec defines a supervised Go function that runs in a goroutine.
// The function should block until done or ctx is cancelled.
type GoSpec struct {
	// Name identifies this task (must be unique within the supervisor).
	Name string
	// Func is the function to supervise. It receives a context that is
	// cancelled when the supervisor stops or the task is explicitly stopped.
	// If it returns an error or panics, the supervisor restarts it
	// according to the restart policy.
	Func func(ctx context.Context) error
	// Restart configures automatic restart behaviour.
	Restart RestartPolicy
}

// DaemonStatus contains a snapshot of a supervised unit's state.
type DaemonStatus struct {
	Name         string        `json:"name"`
	Type         string        `json:"type"` // "process" or "goroutine"
	Running      bool          `json:"running"`
	PID          int           `json:"pid,omitempty"`
	RestartCount int           `json:"restartCount"`
	LastStart    time.Time     `json:"lastStart"`
	Uptime       time.Duration `json:"uptime"`
	ExitCode     int           `json:"exitCode,omitempty"`
}

// supervisedUnit is the internal state for any supervised unit.
type supervisedUnit struct {
	name         string
	unitType     string // "process" or "goroutine"
	restart      RestartPolicy
	restartCount int
	lastStart    time.Time
	running      bool
	exitCode     int

	// For process daemons
	runOpts *RunOptions
	proc    *Process

	// For go functions
	goFunc func(ctx context.Context) error

	cancel context.CancelFunc
	done   chan struct{} // closed when supervision goroutine exits
	mu     sync.Mutex
}

func (u *supervisedUnit) status() DaemonStatus {
	u.mu.Lock()
	defer u.mu.Unlock()

	var uptime time.Duration
	if u.running && !u.lastStart.IsZero() {
		uptime = time.Since(u.lastStart)
	}

	pid := 0
	if u.proc != nil {
		info := u.proc.Info()
		pid = info.PID
	}

	return DaemonStatus{
		Name:         u.name,
		Type:         u.unitType,
		Running:      u.running,
		PID:          pid,
		RestartCount: u.restartCount,
		LastStart:    u.lastStart,
		Uptime:       uptime,
		ExitCode:     u.exitCode,
	}
}

// ShutdownTimeout is the maximum time to wait for supervised units during shutdown.
const ShutdownTimeout = 15 * time.Second

// Supervisor manages long-running processes and goroutines with automatic restart.
//
// For external processes, it requires a Service instance.
// For Go functions, no Service is needed.
//
//	sup := process.NewSupervisor(svc)
//	sup.Register(process.DaemonSpec{
//		Name:       "worker",
//		RunOptions: process.RunOptions{Command: "worker", Args: []string{"--port", "8080"}},
//		Restart:    process.RestartPolicy{Delay: 5 * time.Second, MaxRestarts: -1},
//	})
//	sup.RegisterFunc(process.GoSpec{
//		Name:    "health-check",
//		Func:    healthCheckLoop,
//		Restart: process.RestartPolicy{Delay: time.Second, MaxRestarts: -1},
//	})
//	sup.Start()
//	defer sup.Stop()
type Supervisor struct {
	service *Service
	units   map[string]*supervisedUnit
	ctx     context.Context
	cancel  context.CancelFunc
	wg      sync.WaitGroup
	mu      sync.RWMutex
	started bool
}

// NewSupervisor creates a supervisor.
// The Service parameter is optional (nil) if only supervising Go functions.
func NewSupervisor(svc *Service) *Supervisor {
	ctx, cancel := context.WithCancel(context.Background())
	return &Supervisor{
		service: svc,
		units:   make(map[string]*supervisedUnit),
		ctx:     ctx,
		cancel:  cancel,
	}
}

// Register adds an external process daemon for supervision.
// Panics if no Service was provided to NewSupervisor.
func (s *Supervisor) Register(spec DaemonSpec) {
	if s.service == nil {
		panic("process: Supervisor.Register requires a Service (use NewSupervisor with non-nil service)")
	}

	s.mu.Lock()
	defer s.mu.Unlock()

	opts := spec.RunOptions
	s.units[spec.Name] = &supervisedUnit{
		name:     spec.Name,
		unitType: "process",
		restart:  spec.Restart,
		runOpts:  &opts,
	}
}

// RegisterFunc adds a Go function for supervision.
func (s *Supervisor) RegisterFunc(spec GoSpec) {
	s.mu.Lock()
	defer s.mu.Unlock()

	s.units[spec.Name] = &supervisedUnit{
		name:     spec.Name,
		unitType: "goroutine",
		restart:  spec.Restart,
		goFunc:   spec.Func,
	}
}

// Start begins supervising all registered units.
// Safe to call once — subsequent calls are no-ops.
func (s *Supervisor) Start() {
	s.mu.Lock()
	if s.started {
		s.mu.Unlock()
		return
	}
	s.started = true
	s.mu.Unlock()

	s.mu.RLock()
	for _, unit := range s.units {
		s.startUnit(unit)
	}
	s.mu.RUnlock()
}

// startUnit launches the supervision goroutine for a single unit.
func (s *Supervisor) startUnit(u *supervisedUnit) {
	u.mu.Lock()
	if u.running {
		u.mu.Unlock()
		return
	}
	u.running = true
	u.lastStart = time.Now()

	unitCtx, unitCancel := context.WithCancel(s.ctx)
	u.cancel = unitCancel
	u.done = make(chan struct{})
	u.mu.Unlock()

	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		defer close(u.done)
		s.superviseLoop(u, unitCtx)
	}()

	slog.Info("supervisor: started unit", "name", u.name, "type", u.unitType)
}

// superviseLoop is the core restart loop for a supervised unit.
// ctx is the unit's own context, derived from s.ctx. Cancelling either
// the supervisor or the unit's context exits this loop.
func (s *Supervisor) superviseLoop(u *supervisedUnit, ctx context.Context) {
	for {
		// Check if this unit's context is cancelled (covers both
		// supervisor shutdown and manual restart/stop)
		select {
		case <-ctx.Done():
			u.mu.Lock()
			u.running = false
			u.mu.Unlock()
			return
		default:
		}

		// Run the unit with panic recovery
		exitCode := s.runUnit(u, ctx)

		// If context was cancelled during run, exit the loop
		if ctx.Err() != nil {
			u.mu.Lock()
			u.running = false
			u.mu.Unlock()
			return
		}

		u.mu.Lock()
		u.exitCode = exitCode
		u.restartCount++
		shouldRestart := u.restart.MaxRestarts < 0 || u.restartCount <= u.restart.MaxRestarts
		delay := u.restart.Delay
		count := u.restartCount
		u.mu.Unlock()

		if !shouldRestart {
			slog.Warn("supervisor: unit reached max restarts",
				"name", u.name,
				"maxRestarts", u.restart.MaxRestarts,
			)
			u.mu.Lock()
			u.running = false
			u.mu.Unlock()
			return
		}

		// Wait before restarting, or exit if context is cancelled
		select {
		case <-ctx.Done():
			u.mu.Lock()
			u.running = false
			u.mu.Unlock()
			return
		case <-time.After(delay):
			slog.Info("supervisor: restarting unit",
				"name", u.name,
				"restartCount", count,
				"exitCode", exitCode,
			)
			u.mu.Lock()
			u.lastStart = time.Now()
			u.mu.Unlock()
		}
	}
}

// runUnit executes a single run of the unit, returning exit code.
// Recovers from panics.
func (s *Supervisor) runUnit(u *supervisedUnit, ctx context.Context) (exitCode int) {
	defer func() {
		if r := recover(); r != nil {
			slog.Error("supervisor: unit panicked",
				"name", u.name,
				"panic", fmt.Sprintf("%v", r),
			)
			exitCode = 1
		}
	}()

	switch u.unitType {
	case "process":
		return s.runProcess(u, ctx)
	case "goroutine":
		return s.runGoFunc(u, ctx)
	default:
		slog.Error("supervisor: unknown unit type", "name", u.name, "type", u.unitType)
		return 1
	}
}

// runProcess starts an external process and waits for it to exit.
func (s *Supervisor) runProcess(u *supervisedUnit, ctx context.Context) int {
	proc, err := s.service.StartWithOptions(ctx, *u.runOpts)
	if err != nil {
		slog.Error("supervisor: failed to start process",
			"name", u.name,
			"error", err,
		)
		return 1
	}

	u.mu.Lock()
	u.proc = proc
	u.mu.Unlock()

	// Wait for process to finish or context cancellation
	select {
	case <-proc.Done():
		info := proc.Info()
		return info.ExitCode
	case <-ctx.Done():
		// Context cancelled — kill the process
		_ = proc.Kill()
		<-proc.Done()
		return -1
	}
}

// runGoFunc runs a Go function and returns 0 on success, 1 on error.
func (s *Supervisor) runGoFunc(u *supervisedUnit, ctx context.Context) int {
	if err := u.goFunc(ctx); err != nil {
		if ctx.Err() != nil {
			// Context was cancelled, not a real error
			return -1
		}
		slog.Error("supervisor: go function returned error",
			"name", u.name,
			"error", err,
		)
		return 1
	}
	return 0
}

// Stop gracefully shuts down all supervised units.
func (s *Supervisor) Stop() {
	s.cancel()

	// Wait with timeout
	done := make(chan struct{})
	go func() {
		s.wg.Wait()
		close(done)
	}()

	select {
	case <-done:
		slog.Info("supervisor: all units stopped")
	case <-time.After(ShutdownTimeout):
		slog.Warn("supervisor: shutdown timeout, some units may not have stopped")
	}

	s.mu.Lock()
	s.started = false
	s.mu.Unlock()
}

// Restart stops and restarts a specific unit by name.
func (s *Supervisor) Restart(name string) error {
	s.mu.RLock()
	u, ok := s.units[name]
	s.mu.RUnlock()

	if !ok {
		return fmt.Errorf("supervisor: unit not found: %s", name)
	}

	// Cancel the current run and wait for the supervision goroutine to exit
	u.mu.Lock()
	if u.cancel != nil {
		u.cancel()
	}
	done := u.done
	u.mu.Unlock()

	// Wait for the old supervision goroutine to exit
	if done != nil {
		<-done
	}

	// Reset restart counter for the fresh start
	u.mu.Lock()
	u.restartCount = 0
	u.mu.Unlock()

	// Start fresh
	s.startUnit(u)
	return nil
}

// StopUnit stops a specific unit without restarting it.
func (s *Supervisor) StopUnit(name string) error {
	s.mu.RLock()
	u, ok := s.units[name]
	s.mu.RUnlock()

	if !ok {
		return fmt.Errorf("supervisor: unit not found: %s", name)
	}

	u.mu.Lock()
	if u.cancel != nil {
		u.cancel()
	}
	// Set max restarts to 0 to prevent the loop from restarting
	u.restart.MaxRestarts = 0
	u.restartCount = 1
	u.mu.Unlock()

	return nil
}

// Status returns the status of a specific supervised unit.
func (s *Supervisor) Status(name string) (DaemonStatus, error) {
	s.mu.RLock()
	u, ok := s.units[name]
	s.mu.RUnlock()

	if !ok {
		return DaemonStatus{}, fmt.Errorf("supervisor: unit not found: %s", name)
	}

	return u.status(), nil
}

// Statuses returns the status of all supervised units.
func (s *Supervisor) Statuses() map[string]DaemonStatus {
	s.mu.RLock()
	defer s.mu.RUnlock()

	result := make(map[string]DaemonStatus, len(s.units))
	for name, u := range s.units {
		result[name] = u.status()
	}
	return result
}

// UnitNames returns the names of all registered units.
func (s *Supervisor) UnitNames() []string {
	s.mu.RLock()
	defer s.mu.RUnlock()

	names := make([]string, 0, len(s.units))
	for name := range s.units {
		names = append(names, name)
	}
	return names
}
@@ -1,335 +0,0 @@
package process

import (
	"context"
	"fmt"
	"sync/atomic"
	"testing"
	"time"
)

func TestSupervisor_GoFunc_Good(t *testing.T) {
	sup := NewSupervisor(nil)

	var count atomic.Int32
	sup.RegisterFunc(GoSpec{
		Name: "counter",
		Func: func(ctx context.Context) error {
			count.Add(1)
			<-ctx.Done()
			return nil
		},
		Restart: RestartPolicy{Delay: 10 * time.Millisecond, MaxRestarts: -1},
	})

	sup.Start()
	time.Sleep(50 * time.Millisecond)

	status, err := sup.Status("counter")
	if err != nil {
		t.Fatal(err)
	}
	if !status.Running {
		t.Error("expected counter to be running")
	}
	if status.Type != "goroutine" {
		t.Errorf("expected type goroutine, got %s", status.Type)
	}

	sup.Stop()

	if c := count.Load(); c < 1 {
		t.Errorf("expected counter >= 1, got %d", c)
	}
}

func TestSupervisor_GoFunc_Restart_Good(t *testing.T) {
	sup := NewSupervisor(nil)

	var runs atomic.Int32
	sup.RegisterFunc(GoSpec{
		Name: "crasher",
		Func: func(ctx context.Context) error {
			n := runs.Add(1)
			if n <= 3 {
				return fmt.Errorf("crash #%d", n)
			}
			// After 3 crashes, stay running
			<-ctx.Done()
			return nil
		},
		Restart: RestartPolicy{Delay: 5 * time.Millisecond, MaxRestarts: -1},
	})

	sup.Start()
	// Wait for restarts
	time.Sleep(200 * time.Millisecond)

	status, _ := sup.Status("crasher")
	if status.RestartCount < 3 {
		t.Errorf("expected at least 3 restarts, got %d", status.RestartCount)
	}
	if !status.Running {
		t.Error("expected crasher to be running after recovering")
	}

	sup.Stop()
}

func TestSupervisor_GoFunc_MaxRestarts_Good(t *testing.T) {
	sup := NewSupervisor(nil)

	sup.RegisterFunc(GoSpec{
		Name: "limited",
		Func: func(ctx context.Context) error {
			return fmt.Errorf("always fail")
		},
		Restart: RestartPolicy{Delay: 5 * time.Millisecond, MaxRestarts: 2},
	})

	sup.Start()
	time.Sleep(200 * time.Millisecond)

	status, _ := sup.Status("limited")
	if status.Running {
		t.Error("expected limited to have stopped after max restarts")
	}
	// The function runs once (initial) + 2 restarts = restartCount should be 3
	// (restartCount increments each time the function exits)
	if status.RestartCount > 3 {
		t.Errorf("expected restartCount <= 3, got %d", status.RestartCount)
	}

	sup.Stop()
}

func TestSupervisor_GoFunc_Panic_Good(t *testing.T) {
	sup := NewSupervisor(nil)

	var runs atomic.Int32
	sup.RegisterFunc(GoSpec{
		Name: "panicker",
		Func: func(ctx context.Context) error {
			n := runs.Add(1)
			if n == 1 {
				panic("boom")
			}
			<-ctx.Done()
			return nil
		},
		Restart: RestartPolicy{Delay: 5 * time.Millisecond, MaxRestarts: 3},
	})

	sup.Start()
	time.Sleep(100 * time.Millisecond)

	status, _ := sup.Status("panicker")
	if !status.Running {
		t.Error("expected panicker to recover and be running")
	}
	if runs.Load() < 2 {
		t.Error("expected at least 2 runs (1 panic + 1 recovery)")
	}

	sup.Stop()
}

func TestSupervisor_Statuses_Good(t *testing.T) {
	sup := NewSupervisor(nil)

	sup.RegisterFunc(GoSpec{
		Name:    "a",
		Func:    func(ctx context.Context) error { <-ctx.Done(); return nil },
		Restart: RestartPolicy{MaxRestarts: -1},
	})
	sup.RegisterFunc(GoSpec{
		Name:    "b",
		Func:    func(ctx context.Context) error { <-ctx.Done(); return nil },
		Restart: RestartPolicy{MaxRestarts: -1},
	})

	sup.Start()
	time.Sleep(50 * time.Millisecond)

	statuses := sup.Statuses()
	if len(statuses) != 2 {
		t.Errorf("expected 2 statuses, got %d", len(statuses))
	}
	if !statuses["a"].Running || !statuses["b"].Running {
		t.Error("expected both units running")
	}

	sup.Stop()
}

func TestSupervisor_UnitNames_Good(t *testing.T) {
	sup := NewSupervisor(nil)

	sup.RegisterFunc(GoSpec{
		Name: "alpha",
		Func: func(ctx context.Context) error { <-ctx.Done(); return nil },
	})
	sup.RegisterFunc(GoSpec{
		Name: "beta",
		Func: func(ctx context.Context) error { <-ctx.Done(); return nil },
	})

	names := sup.UnitNames()
	if len(names) != 2 {
		t.Errorf("expected 2 names, got %d", len(names))
	}
}

func TestSupervisor_Status_Bad(t *testing.T) {
	sup := NewSupervisor(nil)

	_, err := sup.Status("nonexistent")
	if err == nil {
		t.Error("expected error for nonexistent unit")
	}
}

func TestSupervisor_Restart_Good(t *testing.T) {
	sup := NewSupervisor(nil)

	var runs atomic.Int32
	sup.RegisterFunc(GoSpec{
		Name: "restartable",
		Func: func(ctx context.Context) error {
			runs.Add(1)
			<-ctx.Done()
			return nil
		},
		Restart: RestartPolicy{Delay: 5 * time.Millisecond, MaxRestarts: -1},
	})

	sup.Start()
	time.Sleep(50 * time.Millisecond)

	if err := sup.Restart("restartable"); err != nil {
		t.Fatal(err)
	}
	time.Sleep(100 * time.Millisecond)

	if runs.Load() < 2 {
		t.Errorf("expected at least 2 runs after restart, got %d", runs.Load())
	}

	sup.Stop()
}

func TestSupervisor_Restart_Bad(t *testing.T) {
	sup := NewSupervisor(nil)

	err := sup.Restart("nonexistent")
	if err == nil {
		t.Error("expected error for nonexistent unit")
	}
}

func TestSupervisor_StopUnit_Good(t *testing.T) {
	sup := NewSupervisor(nil)

	sup.RegisterFunc(GoSpec{
		Name: "stoppable",
		Func: func(ctx context.Context) error {
			<-ctx.Done()
			return nil
		},
		Restart: RestartPolicy{Delay: 5 * time.Millisecond, MaxRestarts: -1},
	})

	sup.Start()
	time.Sleep(50 * time.Millisecond)

	if err := sup.StopUnit("stoppable"); err != nil {
		t.Fatal(err)
	}
	time.Sleep(100 * time.Millisecond)

	status, _ := sup.Status("stoppable")
	if status.Running {
		t.Error("expected unit to be stopped")
	}

	sup.Stop()
}

func TestSupervisor_StopUnit_Bad(t *testing.T) {
	sup := NewSupervisor(nil)

	err := sup.StopUnit("nonexistent")
	if err == nil {
		t.Error("expected error for nonexistent unit")
	}
}

func TestSupervisor_StartIdempotent_Good(t *testing.T) {
	sup := NewSupervisor(nil)

	var count atomic.Int32
	sup.RegisterFunc(GoSpec{
		Name: "once",
		Func: func(ctx context.Context) error {
			count.Add(1)
			<-ctx.Done()
			return nil
		},
	})

	sup.Start()
	sup.Start() // Should be no-op
	sup.Start() // Should be no-op

	time.Sleep(50 * time.Millisecond)

	if count.Load() != 1 {
		t.Errorf("expected exactly 1 run, got %d", count.Load())
	}

	sup.Stop()
}

func TestSupervisor_NoRestart_Good(t *testing.T) {
	sup := NewSupervisor(nil)

	var runs atomic.Int32
	sup.RegisterFunc(GoSpec{
		Name: "oneshot",
		Func: func(ctx context.Context) error {
			runs.Add(1)
			return nil // Exit immediately
		},
		Restart: RestartPolicy{Delay: 5 * time.Millisecond, MaxRestarts: 0},
	})

	sup.Start()
	time.Sleep(100 * time.Millisecond)

	status, _ := sup.Status("oneshot")
	if status.Running {
		t.Error("expected oneshot to not be running")
	}
	// Should run once (initial) then stop. restartCount will be 1
	// (incremented after the initial run exits).
	if runs.Load() != 1 {
		t.Errorf("expected exactly 1 run, got %d", runs.Load())
	}

	sup.Stop()
}

func TestSupervisor_Register_Ugly(t *testing.T) {
	sup := NewSupervisor(nil)

	defer func() {
		if r := recover(); r == nil {
			t.Error("expected panic when registering process daemon without service")
		}
	}()

	sup.Register(DaemonSpec{
		Name:       "will-panic",
		RunOptions: RunOptions{Command: "echo"},
	})
}
@@ -57,7 +57,7 @@ type Repo struct {
 	Clone *bool `yaml:"clone,omitempty"` // nil = true, false = skip cloning

 	// Computed fields
-	Path     string    `yaml:"-"`              // Full path to repo directory
+	Path     string    `yaml:"path,omitempty"` // Full path to repo directory (optional, defaults to base_path/name)
 	registry *Registry `yaml:"-"`
 }
@@ -83,7 +83,11 @@ func LoadRegistry(m io.Medium, path string) (*Registry, error) {
 	// Set computed fields on each repo
 	for name, repo := range reg.Repos {
 		repo.Name = name
-		repo.Path = filepath.Join(reg.BasePath, name)
+		if repo.Path == "" {
+			repo.Path = filepath.Join(reg.BasePath, name)
+		} else {
+			repo.Path = expandPath(repo.Path)
+		}
 		repo.registry = &reg

 		// Apply defaults if not set
@@ -106,10 +110,16 @@ func FindRegistry(m io.Medium) (string, error) {
 	}

 	for {
+		// Check repos.yaml (existing)
 		candidate := filepath.Join(dir, "repos.yaml")
 		if m.Exists(candidate) {
 			return candidate, nil
 		}
+		// Check .core/repos.yaml (new)
+		candidate = filepath.Join(dir, ".core", "repos.yaml")
+		if m.Exists(candidate) {
+			return candidate, nil
+		}

 		parent := filepath.Dir(dir)
 		if parent == dir {
@@ -125,6 +135,7 @@ func FindRegistry(m io.Medium) (string, error) {
 	}

 	commonPaths := []string{
+		filepath.Join(home, "Code", "host-uk", ".core", "repos.yaml"),
 		filepath.Join(home, "Code", "host-uk", "repos.yaml"),
 		filepath.Join(home, ".config", "core", "repos.yaml"),
 	}
153 pkg/store/store.go Normal file

@@ -0,0 +1,153 @@
package store

import (
	"database/sql"
	"errors"
	"fmt"
	"strings"
	"text/template"

	_ "modernc.org/sqlite"
)

// ErrNotFound is returned when a key does not exist in the store.
var ErrNotFound = errors.New("store: not found")

// Store is a group-namespaced key-value store backed by SQLite.
type Store struct {
	db *sql.DB
}

// New creates a Store at the given SQLite path. Use ":memory:" for tests.
func New(dbPath string) (*Store, error) {
	db, err := sql.Open("sqlite", dbPath)
	if err != nil {
		return nil, fmt.Errorf("store.New: %w", err)
	}
	if _, err := db.Exec("PRAGMA journal_mode=WAL"); err != nil {
		db.Close()
		return nil, fmt.Errorf("store.New: WAL: %w", err)
	}
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS kv (
		grp TEXT NOT NULL,
		key TEXT NOT NULL,
		value TEXT NOT NULL,
		PRIMARY KEY (grp, key)
	)`); err != nil {
		db.Close()
		return nil, fmt.Errorf("store.New: schema: %w", err)
	}
	return &Store{db: db}, nil
}

// Close closes the underlying database.
func (s *Store) Close() error {
	return s.db.Close()
}

// Get retrieves a value by group and key.
func (s *Store) Get(group, key string) (string, error) {
	var val string
	err := s.db.QueryRow("SELECT value FROM kv WHERE grp = ? AND key = ?", group, key).Scan(&val)
	if err == sql.ErrNoRows {
		return "", fmt.Errorf("store.Get: %s/%s: %w", group, key, ErrNotFound)
	}
	if err != nil {
		return "", fmt.Errorf("store.Get: %w", err)
	}
	return val, nil
}

// Set stores a value by group and key, overwriting if exists.
func (s *Store) Set(group, key, value string) error {
	_, err := s.db.Exec(
		`INSERT INTO kv (grp, key, value) VALUES (?, ?, ?)
		ON CONFLICT(grp, key) DO UPDATE SET value = excluded.value`,
		group, key, value,
	)
	if err != nil {
		return fmt.Errorf("store.Set: %w", err)
	}
	return nil
}

// Delete removes a single key from a group.
func (s *Store) Delete(group, key string) error {
	_, err := s.db.Exec("DELETE FROM kv WHERE grp = ? AND key = ?", group, key)
	if err != nil {
		return fmt.Errorf("store.Delete: %w", err)
	}
	return nil
}

// Count returns the number of keys in a group.
func (s *Store) Count(group string) (int, error) {
	var n int
	err := s.db.QueryRow("SELECT COUNT(*) FROM kv WHERE grp = ?", group).Scan(&n)
	if err != nil {
		return 0, fmt.Errorf("store.Count: %w", err)
	}
	return n, nil
}

// DeleteGroup removes all keys in a group.
func (s *Store) DeleteGroup(group string) error {
	_, err := s.db.Exec("DELETE FROM kv WHERE grp = ?", group)
	if err != nil {
		return fmt.Errorf("store.DeleteGroup: %w", err)
	}
	return nil
}

// GetAll returns all key-value pairs in a group.
func (s *Store) GetAll(group string) (map[string]string, error) {
	rows, err := s.db.Query("SELECT key, value FROM kv WHERE grp = ?", group)
	if err != nil {
		return nil, fmt.Errorf("store.GetAll: %w", err)
	}
	defer rows.Close()

	result := make(map[string]string)
	for rows.Next() {
		var k, v string
		if err := rows.Scan(&k, &v); err != nil {
			return nil, fmt.Errorf("store.GetAll: scan: %w", err)
		}
		result[k] = v
	}
	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("store.GetAll: rows: %w", err)
	}
	return result, nil
}

// Render loads all key-value pairs from a group and renders a Go template.
func (s *Store) Render(tmplStr, group string) (string, error) {
	rows, err := s.db.Query("SELECT key, value FROM kv WHERE grp = ?", group)
	if err != nil {
		return "", fmt.Errorf("store.Render: query: %w", err)
	}
	defer rows.Close()

	vars := make(map[string]string)
	for rows.Next() {
		var k, v string
		if err := rows.Scan(&k, &v); err != nil {
			return "", fmt.Errorf("store.Render: scan: %w", err)
		}
		vars[k] = v
	}
	if err := rows.Err(); err != nil {
		return "", fmt.Errorf("store.Render: rows: %w", err)
	}

	tmpl, err := template.New("render").Parse(tmplStr)
	if err != nil {
		return "", fmt.Errorf("store.Render: parse: %w", err)
	}
	var b strings.Builder
	if err := tmpl.Execute(&b, vars); err != nil {
		return "", fmt.Errorf("store.Render: exec: %w", err)
	}
	return b.String(), nil
}
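For orientation, a minimal sketch of how the new store package could be used from calling code; the import path and database file name are illustrative assumptions, not taken from this diff:

package main

import (
	"fmt"

	// Assumed import path; adjust to the module that actually hosts pkg/store.
	"forge.lthn.ai/core/cli/pkg/store"
)

func main() {
	// "core.db" is a hypothetical path; ":memory:" also works for throwaway use.
	st, err := store.New("core.db")
	if err != nil {
		panic(err)
	}
	defer st.Close()

	// Group-namespaced key-value writes.
	_ = st.Set("user", "pool", "pool.lthn.io:3333")
	_ = st.Set("user", "wallet", "iz...")

	// Render a Go template against every key in the "user" group.
	out, err := st.Render(`{"pool":"{{ .pool }}","wallet":"{{ .wallet }}"}`, "user")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}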
103 pkg/store/store_test.go Normal file

@@ -0,0 +1,103 @@
package store

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestSetGet_Good(t *testing.T) {
	s, err := New(":memory:")
	require.NoError(t, err)
	defer s.Close()

	err = s.Set("config", "theme", "dark")
	require.NoError(t, err)

	val, err := s.Get("config", "theme")
	require.NoError(t, err)
	assert.Equal(t, "dark", val)
}

func TestGet_Bad_NotFound(t *testing.T) {
	s, _ := New(":memory:")
	defer s.Close()

	_, err := s.Get("config", "missing")
	assert.Error(t, err)
}

func TestDelete_Good(t *testing.T) {
	s, _ := New(":memory:")
	defer s.Close()

	_ = s.Set("config", "key", "val")
	err := s.Delete("config", "key")
	require.NoError(t, err)

	_, err = s.Get("config", "key")
	assert.Error(t, err)
}

func TestCount_Good(t *testing.T) {
	s, _ := New(":memory:")
	defer s.Close()

	_ = s.Set("grp", "a", "1")
	_ = s.Set("grp", "b", "2")
	_ = s.Set("other", "c", "3")

	n, err := s.Count("grp")
	require.NoError(t, err)
	assert.Equal(t, 2, n)
}

func TestDeleteGroup_Good(t *testing.T) {
	s, _ := New(":memory:")
	defer s.Close()

	_ = s.Set("grp", "a", "1")
	_ = s.Set("grp", "b", "2")
	err := s.DeleteGroup("grp")
	require.NoError(t, err)

	n, _ := s.Count("grp")
	assert.Equal(t, 0, n)
}

func TestGetAll_Good(t *testing.T) {
	s, _ := New(":memory:")
	defer s.Close()

	_ = s.Set("grp", "a", "1")
	_ = s.Set("grp", "b", "2")
	_ = s.Set("other", "c", "3")

	all, err := s.GetAll("grp")
	require.NoError(t, err)
	assert.Equal(t, map[string]string{"a": "1", "b": "2"}, all)
}

func TestGetAll_Good_Empty(t *testing.T) {
	s, _ := New(":memory:")
	defer s.Close()

	all, err := s.GetAll("empty")
	require.NoError(t, err)
	assert.Empty(t, all)
}

func TestRender_Good(t *testing.T) {
	s, _ := New(":memory:")
	defer s.Close()

	_ = s.Set("user", "pool", "pool.lthn.io:3333")
	_ = s.Set("user", "wallet", "iz...")

	tmpl := `{"pool":"{{ .pool }}","wallet":"{{ .wallet }}"}`
	out, err := s.Render(tmpl, "user")
	require.NoError(t, err)
	assert.Contains(t, out, "pool.lthn.io:3333")
	assert.Contains(t, out, "iz...")
}