Compare commits: `phase4-fou...dev` (28 commits)

| SHA1 |
|------|
| 3eee3569b9 |
| d7e5215618 |
| 1e8a4131db |
| df011ee42b |
| 2d355f9223 |
| db0c0adb65 |
| ce12778561 |
| 44122f9ca6 |
| b2e046f4c5 |
| 3135352b2f |
| 2bae1148bb |
| cffd9d3929 |
| cb0408db1d |
| e7f8ecb078 |
| 1cdf92490a |
| bcf2d3be48 |
| 19521c8f18 |
| 22121eae20 |
| b2e78bf29e |
| 94480ca38e |
| 3ff7b8a773 |
| 0192772ab5 |
| c1bc0dad5e |
| 19e3fd3af7 |
| 10f0ebaf22 |
| cbaa114bb2 |
| f0268d12bf |
| fc8ebe53e1 |
409 changed files with 5080 additions and 70006 deletions
**.gitignore** (vendored, 4 changes)

```
@@ -17,9 +17,9 @@ dist/
tasks
/core
/i18n-validate
cmd/bugseti/bugseti
internal/core-ide/core-ide
cmd/
.angular/

patch_cov.*
go.work.sum
lt-hn-index.html
```
**docs/ecosystem.md** (new file, 457 additions)
# Core Go Ecosystem

The Core Go ecosystem is a set of 19 standalone Go modules that form the infrastructure backbone for the host-uk platform and the Lethean network. All modules are hosted under the `forge.lthn.ai/core/` organisation. Each module has its own repository, independent versioning, and a `docs/` directory.

The CLI framework documented in the rest of this site (`forge.lthn.ai/core/cli`) is one node in this graph. The satellite packages listed here are separate repositories that the CLI imports or that stand alone as libraries.

---

## Module Index

| Package | Module Path | Managed By |
|---------|-------------|-----------|
| [go-inference](#go-inference) | `forge.lthn.ai/core/go-inference` | Virgil |
| [go-mlx](#go-mlx) | `forge.lthn.ai/core/go-mlx` | Virgil |
| [go-rocm](#go-rocm) | `forge.lthn.ai/core/go-rocm` | Charon |
| [go-ml](#go-ml) | `forge.lthn.ai/core/go-ml` | Virgil |
| [go-ai](#go-ai) | `forge.lthn.ai/core/go-ai` | Virgil |
| [go-agentic](#go-agentic) | `forge.lthn.ai/core/go-agentic` | Charon |
| [go-rag](#go-rag) | `forge.lthn.ai/core/go-rag` | Charon |
| [go-i18n](#go-i18n) | `forge.lthn.ai/core/go-i18n` | Virgil |
| [go-html](#go-html) | `forge.lthn.ai/core/go-html` | Charon |
| [go-crypt](#go-crypt) | `forge.lthn.ai/core/go-crypt` | Virgil |
| [go-scm](#go-scm) | `forge.lthn.ai/core/go-scm` | Charon |
| [go-p2p](#go-p2p) | `forge.lthn.ai/core/go-p2p` | Charon |
| [go-devops](#go-devops) | `forge.lthn.ai/core/go-devops` | Virgil |
| [go-help](#go-help) | `forge.lthn.ai/core/go-help` | Charon |
| [go-ratelimit](#go-ratelimit) | `forge.lthn.ai/core/go-ratelimit` | Charon |
| [go-session](#go-session) | `forge.lthn.ai/core/go-session` | Charon |
| [go-store](#go-store) | `forge.lthn.ai/core/go-store` | Charon |
| [go-ws](#go-ws) | `forge.lthn.ai/core/go-ws` | Charon |
| [go-webview](#go-webview) | `forge.lthn.ai/core/go-webview` | Charon |

---

## Dependency Graph

The graph below shows import relationships. An arrow `A → B` means A imports B.

```
go-inference (no dependencies — foundation contract)
  ↑
  ├── go-mlx (CGO, Apple Silicon Metal GPU)
  ├── go-rocm (AMD ROCm, llama-server subprocess)
  └── go-ml (scoring engine, backends, orchestrator)
        ↑
        └── go-ai (MCP hub, 49 tools)
              ↑
              └── go-agentic (service lifecycle, allowances)

go-rag (Qdrant + Ollama, standalone)
  ↑
  └── go-ai

go-i18n (grammar engine, standalone; Phase 2a imports go-mlx)

go-crypt (standalone)
  ↑
  ├── go-p2p (UEPS wire protocol)
  └── go-scm (AgentCI dispatch)

go-store (SQLite KV, standalone)
  ↑
  ├── go-ratelimit (sliding window limiter)
  ├── go-session (transcript parser)
  └── go-agentic

go-ws (WebSocket hub, standalone)
  ↑
  └── go-ai

go-webview (CDP client, standalone)
  ↑
  └── go-ai

go-html (DOM compositor, standalone)

go-help (help catalogue, standalone)

go-devops (Ansible, build, infrastructure — imports go-scm)
```

The CLI framework (`forge.lthn.ai/core/cli`) has internal equivalents of several of these packages (`pkg/rag`, `pkg/ws`, `pkg/webview`, `pkg/i18n`) that were developed in parallel. The satellite packages are the canonical standalone versions intended for use outside the CLI binary.

---

## Package Descriptions

### go-inference

**Module:** `forge.lthn.ai/core/go-inference`

Zero-dependency interface package that defines the common contract for all inference backends in the ecosystem:

- `TextModel` — the top-level model interface (`Generate`, `Stream`, `Close`)
- `Backend` — hardware/runtime abstraction (Metal, ROCm, CPU, remote)
- `Token` — streaming token type with metadata

No concrete implementations live here. Any package that needs to call inference without depending on a specific hardware library imports `go-inference` and receives an implementation at runtime.
---

### go-mlx

**Module:** `forge.lthn.ai/core/go-mlx`

Native Metal GPU inference for Apple Silicon using CGO bindings to `mlx-c` (the C API for Apple's MLX framework). Implements the `go-inference` interfaces.

Build requirements:

- macOS 13+ (Ventura) on Apple Silicon
- `mlx-c` installed (`brew install mlx`)
- CGO enabled: `CGO_CFLAGS` and `CGO_LDFLAGS` must reference the mlx-c headers and library
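A typical build environment might look like the following; the include/lib paths are assumptions based on a default Homebrew install and should be adjusted to wherever mlx-c actually lives on your machine:

```shell
# Illustrative CGO setup for building go-mlx (paths are assumptions).
export CGO_ENABLED=1
export CGO_CFLAGS="-I$(brew --prefix)/include"
export CGO_LDFLAGS="-L$(brew --prefix)/lib"
go build ./...
```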
Features:

- Loads GGUF and MLX-format models
- Streaming token generation directly on GPU
- Quantised model support (Q4, Q8)
- Phase 4 backend abstraction in progress — will allow hot-swapping backends at runtime

Local path: `/Users/snider/Code/go-mlx`

---

### go-rocm

**Module:** `forge.lthn.ai/core/go-rocm`

AMD ROCm GPU inference for Linux. Rather than using CGO, this package manages a `llama-server` subprocess (from llama.cpp) compiled with ROCm support and communicates over its HTTP API.

Features:

- Subprocess lifecycle management (start, health-check, restart on crash)
- OpenAI-compatible HTTP client wrapping llama-server's API
- Implements the `go-inference` interfaces
- Targeted at the homelab RX 7800 XT running Ubuntu 24.04

Managed by Charon (Linux homelab).

---

### go-ml

**Module:** `forge.lthn.ai/core/go-ml`

Scoring engine, backend registry, and agent orchestration layer. The hub that connects models from `go-mlx`, `go-rocm`, and future backends into a unified interface.

Features:

- Backend registry: register multiple inference backends, select by capability
- Scoring pipeline: evaluate model outputs against rubrics
- Agent orchestrator: coordinate multi-step inference tasks
- ~3.5K LOC

---

### go-ai

**Module:** `forge.lthn.ai/core/go-ai`

MCP (Model Context Protocol) server hub with 49 registered tools. Acts as the primary facade for AI capabilities in the ecosystem.

Features:

- 49 MCP tools covering file operations, RAG, metrics, process management, WebSocket, and CDP/webview
- Imports `go-ml`, `go-rag`, `go-mlx`
- Can run as a stdio MCP server or a TCP MCP server
- AI usage metrics recorded to JSONL

Run the MCP server:

```bash
# stdio (for Claude Desktop / Claude Code)
core mcp serve

# TCP
MCP_ADDR=:9000 core mcp serve
```

---

### go-agentic

**Module:** `forge.lthn.ai/core/go-agentic`

Service lifecycle and allowance management for autonomous agents. Handles:

- Agent session tracking and state persistence
- Allowance system: budget constraints on tool calls, token usage, and wall-clock time
- Integration with `go-store` for persistence
- REST client for the PHP `core-agentic` backend

Managed by Charon.

---

### go-rag

**Module:** `forge.lthn.ai/core/go-rag`

Retrieval-Augmented Generation pipeline using Qdrant for vector storage and Ollama for embeddings.

Features:

- `ChunkMarkdown`: semantic splitting by H2 headers and paragraphs with overlap
- `Ingest`: crawl a directory of Markdown files, embed, and store in Qdrant
- `Query`: semantic search returning ranked `QueryResult` slices
- `FormatResultsContext`: formats results as XML tags for LLM prompt injection
- Clients: `QdrantClient` and `OllamaClient` wrapping their respective Go SDKs

Managed by Charon.
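To make the H2-splitting idea behind `ChunkMarkdown` concrete, here is a deliberately minimal toy version; the real go-rag implementation also splits paragraphs and adds overlap, which this sketch omits:

```go
package main

import (
	"fmt"
	"strings"
)

// chunkMarkdown is a toy illustration of H2-based splitting: every "## "
// heading starts a new chunk. Not the real go-rag function.
func chunkMarkdown(doc string) []string {
	var chunks []string
	for i, part := range strings.Split(doc, "\n## ") {
		if i > 0 {
			part = "## " + part // restore the heading the split consumed
		}
		if strings.TrimSpace(part) != "" {
			chunks = append(chunks, strings.TrimSpace(part))
		}
	}
	return chunks
}

func main() {
	doc := "intro\n## First\nbody\n## Second\nmore"
	chunks := chunkMarkdown(doc)
	fmt.Println(len(chunks)) // 3
}
```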
---

### go-i18n

**Module:** `forge.lthn.ai/core/go-i18n`

Grammar engine for natural-language generation. Goes beyond key-value lookup tables to handle pluralisation, verb conjugation, past tense, gerunds, and semantic sentence construction ("Subject verbed object").

Features:

- `T(key, args...)` — main translation function
- `S(noun, value)` — semantic subject with grammatical context
- Language rules defined in JSON; algorithmic fallbacks for irregular verbs
- **GrammarImprint**: a linguistic hash (reversal of the grammar engine) used as a semantic fingerprint — part of the Lethean identity verification stack
- Phase 2a (imports `go-mlx` for language-model-assisted reversal) is currently blocked on `go-mlx` Phase 4

Local path: `/Users/snider/Code/go-i18n`
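The kind of algorithmic fallback such an engine applies when no JSON rule matches can be sketched with a toy English pluraliser (the rules and function name here are illustrative, not go-i18n's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// pluralise applies a few common English suffix rules. A real grammar
// engine layers JSON-defined rules and irregular-form tables on top.
func pluralise(noun string, n int) string {
	if n == 1 {
		return noun
	}
	switch {
	case strings.HasSuffix(noun, "s"), strings.HasSuffix(noun, "x"):
		return noun + "es" // box -> boxes
	case strings.HasSuffix(noun, "y"):
		return noun[:len(noun)-1] + "ies" // repository -> repositories
	default:
		return noun + "s"
	}
}

func main() {
	fmt.Println(pluralise("repository", 3)) // repositories
	fmt.Println(pluralise("commit", 1))     // commit
}
```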
---

### go-html

**Module:** `forge.lthn.ai/core/go-html`

HLCRF DOM compositor — a programmatic HTML/DOM construction library targeting both server-side rendering and WASM (browser).

HLCRF stands for Header, Left, Content, Right, Footer — the region layout model used throughout the CLI's terminal UI and web rendering layer.

Features:

- Composable region-based layout (mirrors the terminal `Composite` in `pkg/cli`)
- WASM build target: runs in the browser without JavaScript
- Used by the LEM Chat UI and web SDK generation

Managed by Charon.

---

### go-crypt

**Module:** `forge.lthn.ai/core/go-crypt`

Cryptographic primitives, authentication, and trust policy enforcement.

Features:

- Password hashing (Argon2id with tuned parameters)
- Symmetric encryption (ChaCha20-Poly1305, AES-GCM)
- Key derivation (HKDF, scrypt)
- OpenPGP challenge-response authentication
- Trust policies: define and evaluate access rules
- Foundation for the UEPS (User-controlled Encryption Policy System) wire protocol in `go-p2p`
---

### go-scm

**Module:** `forge.lthn.ai/core/go-scm`

Source control management and CI integration, including the AgentCI dispatch system.

Features:

- Forgejo and Gitea API clients (typed wrappers)
- GitHub integration via the `gh` CLI
- `AgentCI`: dispatches AI work items to agent runners over SSH using Charm stack libraries (`soft-serve`, `keygen`, `melt`, `wishlist`)
- PR lifecycle management: create, review, merge, label
- JSONL job journal for audit trails

Managed by Charon.

---

### go-p2p

**Module:** `forge.lthn.ai/core/go-p2p`

Peer-to-peer mesh networking implementing the UEPS (User-controlled Encryption Policy System) wire protocol.

Features:

- UEPS: consent-gated TLV frames with Ed25519 consent tokens and an Intent-Broker
- Peer discovery and mesh routing
- Encrypted relay transport
- Integration with `go-crypt` for all cryptographic operations

This is a core component of the Lethean Web3 network layer.

Managed by Charon (Linux homelab).

---

### go-devops

**Module:** `forge.lthn.ai/core/go-devops`

Infrastructure automation, build tooling, and release pipeline utilities, intended as a standalone library form of what the Core CLI provides as commands.

Features:

- Ansible-lite engine (native Go SSH playbook execution)
- LinuxKit image building and VM lifecycle
- Multi-target binary build and release
- Integration with `go-scm` for repository operations

---

### go-help

**Module:** `forge.lthn.ai/core/go-help`

Embedded documentation catalogue with full-text search and an optional HTTP server for serving help content.

Features:

- YAML-frontmatter Markdown topic parsing
- In-memory reverse index with title/heading/body scoring
- Snippet extraction with keyword highlighting
- HTTP server mode: serve the catalogue as a documentation site
- Used by the `core pkg search` command and the `pkg/help` package inside the CLI

Managed by Charon.

---

### go-ratelimit

**Module:** `forge.lthn.ai/core/go-ratelimit`

Sliding-window rate limiter with a SQLite persistence backend.

Features:

- Token bucket and sliding-window algorithms
- SQLite backend via `go-store` for durable rate state across restarts
- HTTP middleware helper
- Used by `go-ai` and `go-agentic` to enforce per-agent API quotas

Managed by Charon.
---

### go-session

**Module:** `forge.lthn.ai/core/go-session`

Claude Code JSONL transcript parser and visualisation toolkit (standalone version of `pkg/session` inside the CLI).

Features:

- `ParseTranscript(path)`: reads `.jsonl` session files and reconstructs tool use timelines
- `ListSessions(dir)`: scans a Claude projects directory for session files
- `Search(dir, query)`: full-text search across sessions
- `RenderHTML(sess, path)`: single-file HTML visualisation
- `RenderMP4(sess, path)`: terminal video replay via VHS

Managed by Charon.

---

### go-store

**Module:** `forge.lthn.ai/core/go-store`

SQLite-backed key-value store with reactive change notification.

Features:

- `Get`, `Set`, `Delete`, `List` over typed keys
- `Watch(key, handler)`: register a callback that fires on change
- `OnChange(handler)`: subscribe to all changes
- Used by `go-ratelimit`, `go-session`, and `go-agentic` for lightweight persistence

Managed by Charon.
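The Get/Set/Watch shape named above can be illustrated with an in-memory stand-in (the real go-store is SQLite-backed and typed; everything below is an assumption for illustration):

```go
package main

import "fmt"

// memStore is a toy key-value store with per-key change callbacks,
// mimicking the Get/Set/Watch API shape described above.
type memStore struct {
	data     map[string]string
	watchers map[string][]func(string)
}

func newMemStore() *memStore {
	return &memStore{
		data:     map[string]string{},
		watchers: map[string][]func(string){},
	}
}

func (s *memStore) Set(k, v string) {
	s.data[k] = v
	for _, h := range s.watchers[k] {
		h(v) // notify every watcher of this key
	}
}

func (s *memStore) Get(k string) (string, bool) {
	v, ok := s.data[k]
	return v, ok
}

func (s *memStore) Watch(k string, h func(string)) {
	s.watchers[k] = append(s.watchers[k], h)
}

func main() {
	s := newMemStore()
	s.Watch("mode", func(v string) { fmt.Println("changed:", v) })
	s.Set("mode", "dev") // changed: dev
	v, _ := s.Get("mode")
	fmt.Println(v) // dev
}
```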
---

### go-ws

**Module:** `forge.lthn.ai/core/go-ws`

WebSocket hub with channel-based subscriptions and an optional Redis pub/sub bridge for multi-instance deployments.

Features:

- Hub pattern: central registry of connected clients
- Channel routing: `SendToChannel(topic, msg)` delivers only to subscribers
- Redis bridge: publish messages from one instance, receive on all
- HTTP handler: `hub.Handler()` for embedding in any Go HTTP server
- `SendProcessOutput(id, line)`: convenience method for streaming process logs

Managed by Charon.
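The channel-routing idea is the core of the hub pattern: a message sent to a topic reaches only that topic's subscribers. A toy version with plain Go channels standing in for WebSocket connections (method names borrowed from the list above; everything else is illustrative):

```go
package main

import "fmt"

// hub routes messages to per-topic subscribers. The real go-ws hub
// manages WebSocket clients; buffered channels stand in here.
type hub struct {
	subs map[string][]chan string
}

func (h *hub) Subscribe(topic string) <-chan string {
	if h.subs == nil {
		h.subs = map[string][]chan string{}
	}
	ch := make(chan string, 1)
	h.subs[topic] = append(h.subs[topic], ch)
	return ch
}

func (h *hub) SendToChannel(topic, msg string) {
	for _, ch := range h.subs[topic] {
		ch <- msg
	}
}

func main() {
	var h hub
	logs := h.Subscribe("logs")
	h.SendToChannel("logs", "build started")
	h.SendToChannel("metrics", "cpu high") // no subscriber; silently dropped
	fmt.Println(<-logs)                    // build started
}
```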
---

### go-webview

**Module:** `forge.lthn.ai/core/go-webview`

Chrome DevTools Protocol (CDP) client for browser automation, testing, and AI-driven web interaction (standalone version of `pkg/webview` inside the CLI).

Features:

- Navigation, click, type, screenshot
- `Evaluate(script)`: arbitrary JavaScript execution with result capture
- Console capture and filtering
- Angular-aware helpers: `WaitForAngular()`, `GetNgModel(selector)`
- `ActionSequence`: chain interactions into a single call
- Used by `go-ai` to expose browser tools to MCP agents

Managed by Charon.

---

## Forge Repository Paths

All repositories are hosted at `forge.lthn.ai` (Forgejo). SSH access uses port 2223:

```
ssh://git@forge.lthn.ai:2223/core/go-inference.git
ssh://git@forge.lthn.ai:2223/core/go-mlx.git
ssh://git@forge.lthn.ai:2223/core/go-rocm.git
ssh://git@forge.lthn.ai:2223/core/go-ml.git
ssh://git@forge.lthn.ai:2223/core/go-ai.git
ssh://git@forge.lthn.ai:2223/core/go-agentic.git
ssh://git@forge.lthn.ai:2223/core/go-rag.git
ssh://git@forge.lthn.ai:2223/core/go-i18n.git
ssh://git@forge.lthn.ai:2223/core/go-html.git
ssh://git@forge.lthn.ai:2223/core/go-crypt.git
ssh://git@forge.lthn.ai:2223/core/go-scm.git
ssh://git@forge.lthn.ai:2223/core/go-p2p.git
ssh://git@forge.lthn.ai:2223/core/go-devops.git
ssh://git@forge.lthn.ai:2223/core/go-help.git
ssh://git@forge.lthn.ai:2223/core/go-ratelimit.git
ssh://git@forge.lthn.ai:2223/core/go-session.git
ssh://git@forge.lthn.ai:2223/core/go-store.git
ssh://git@forge.lthn.ai:2223/core/go-ws.git
ssh://git@forge.lthn.ai:2223/core/go-webview.git
```

HTTPS authentication is not available on Forge. Always use SSH remotes.

---

## Go Workspace Setup

The satellite packages can be used together in a Go workspace. After cloning the repositories you need:

```bash
go work init
go work use ./go-inference ./go-mlx ./go-rag ./go-ai   # add as needed
go work sync
```

The CLI repository already uses a Go workspace that includes `cmd/core-gui`, `cmd/bugseti`, and `cmd/examples/*`.

---

## See Also

- [index.md](index.md) — Main documentation hub
- [getting-started.md](getting-started.md) — CLI installation
- [configuration.md](configuration.md) — `repos.yaml` registry format
**docs/index.md** (241 lines changed)

````diff
@@ -1,98 +1,207 @@
-# Core CLI
-
-Core is a unified CLI for the host-uk ecosystem - build, release, and deploy Go, Wails, PHP, and container workloads.
-
-## Installation
-
-```bash
-# Via Go (recommended)
-go install forge.lthn.ai/core/cli/cmd/core@latest
-
-# Or download binary from releases
-curl -Lo core https://forge.lthn.ai/core/cli/releases/latest/download/core-$(go env GOOS)-$(go env GOARCH)
-chmod +x core && sudo mv core /usr/local/bin/
-
-# Verify
-core doctor
-```
-
-See [Getting Started](getting-started.md) for all installation options including building from source.
+# Core Go Framework — Documentation
+
+Core is a Go framework and unified CLI for the host-uk ecosystem. It provides two complementary things: a **dependency injection container** for building Go services and Wails v3 desktop applications, and a **command-line tool** for managing the full development lifecycle across Go, PHP, and container workloads.
+
+The `core` binary is the single entry point for all development tasks: testing, building, releasing, multi-repo management, MCP servers, and AI-assisted workflows.
+
+---
+
+## Getting Started
+
+| Document | Description |
+|----------|-------------|
+| [Getting Started](getting-started.md) | Install the CLI, run your first build, and set up a multi-repo workspace |
+| [User Guide](user-guide.md) | Key concepts and daily workflow patterns |
+| [Workflows](workflows.md) | End-to-end task sequences for common scenarios |
+| [FAQ](faq.md) | Answers to common questions |
+
+---
+
+## Architecture
+
+| Document | Description |
+|----------|-------------|
+| [Package Standards](pkg/PACKAGE_STANDARDS.md) | Canonical patterns for creating packages: Service struct, factory, IPC, thread safety |
+| [pkg/i18n — Grammar](pkg/i18n/GRAMMAR.md) | Grammar engine internals and language rule format |
+| [pkg/i18n — Extending](pkg/i18n/EXTENDING.md) | How to add new locales and translation files |
+
+### Framework Architecture Summary
+
+The Core framework (`pkg/framework`) is a dependency injection container built around three ideas:
+
+**Service registry.** Services are registered via factory functions and retrieved with type-safe generics:
+
+```go
+core, _ := framework.New(
+    framework.WithService(mypackage.NewService(opts)),
+)
+svc, _ := framework.ServiceFor[*mypackage.Service](core, "mypackage")
+```
+
+**Lifecycle.** Services implementing `Startable` or `Stoppable` are called automatically during boot and shutdown.
+
+**ACTION bus.** Services communicate by broadcasting typed messages via `core.ACTION(msg)` and registering handlers via `core.RegisterAction()`. This decouples packages without requiring direct imports between them.
````
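The ACTION pattern can be sketched with a toy message bus: handlers register by message kind, and senders broadcast without importing the receiving package. The names and signatures below are illustrative, not the framework's real API:

```go
package main

import "fmt"

// bus dispatches messages to handlers registered by kind, so the sending
// and receiving packages never import each other.
type bus struct {
	handlers map[string][]func(any)
}

func (b *bus) RegisterAction(kind string, h func(any)) {
	if b.handlers == nil {
		b.handlers = map[string][]func(any){}
	}
	b.handlers[kind] = append(b.handlers[kind], h)
}

func (b *bus) ACTION(kind string, msg any) {
	for _, h := range b.handlers[kind] {
		h(msg)
	}
}

func main() {
	var b bus
	b.RegisterAction("build.done", func(m any) { fmt.Println("notify:", m) })
	b.ACTION("build.done", "core-linux-amd64") // notify: core-linux-amd64
}
```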
|
||||
|
||||
---
|
||||
|
||||
## Command Reference
|
||||
|
||||
See [cmd/](cmd/) for full command documentation.
|
||||
The `core` CLI is documented command-by-command in `docs/cmd/`:
|
||||
|
||||
| Command | Description |
|
||||
|---------|-------------|
|
||||
| [go](cmd/go/) | Go development (test, fmt, lint, cov) |
|
||||
| [php](cmd/php/) | Laravel/PHP development |
|
||||
| [build](cmd/build/) | Build Go, Wails, Docker, LinuxKit projects |
|
||||
| [ci](cmd/ci/) | Publish releases (dry-run by default) |
|
||||
| [sdk](cmd/sdk/) | SDK generation and validation |
|
||||
| [dev](cmd/dev/) | Multi-repo workflow + dev environment |
|
||||
| [pkg](cmd/pkg/) | Package search and install |
|
||||
| [vm](cmd/vm/) | LinuxKit VM management |
|
||||
| [docs](cmd/docs/) | Documentation management |
|
||||
| [setup](cmd/setup/) | Clone repos from registry |
|
||||
| [doctor](cmd/doctor/) | Check development environment |
|
||||
| [cmd/](cmd/) | Full command index |
|
||||
| [cmd/go/](cmd/go/) | Go development: test, fmt, lint, coverage, mod, work |
|
||||
| [cmd/php/](cmd/php/) | Laravel/PHP development: dev server, test, deploy |
|
||||
| [cmd/build/](cmd/build/) | Build Go, Wails, Docker, LinuxKit projects |
|
||||
| [cmd/ci/](cmd/ci/) | Publish releases to GitHub, Docker, npm, Homebrew |
|
||||
| [cmd/sdk/](cmd/sdk/) | SDK generation and OpenAPI validation |
|
||||
| [cmd/dev/](cmd/dev/) | Multi-repo workflow and sandboxed dev environment |
|
||||
| [cmd/ai/](cmd/ai/) | AI task management and Claude integration |
|
||||
| [cmd/pkg/](cmd/pkg/) | Package search and install |
|
||||
| [cmd/vm/](cmd/vm/) | LinuxKit VM management |
|
||||
| [cmd/docs/](cmd/docs/) | Documentation sync and management |
|
||||
| [cmd/setup/](cmd/setup/) | Clone repositories from a registry |
|
||||
| [cmd/doctor/](cmd/doctor/) | Verify development environment |
|
||||
| [cmd/test/](cmd/test/) | Run Go tests with coverage reporting |
|
||||
|
||||
## Quick Start
|
||||
---
|
||||
|
||||
```bash
|
||||
# Go development
|
||||
core go test # Run tests
|
||||
core go test --coverage # With coverage
|
||||
core go fmt # Format code
|
||||
core go lint # Lint code
|
||||
## Packages
|
||||
|
||||
# Build
|
||||
core build # Auto-detect and build
|
||||
core build --targets linux/amd64,darwin/arm64
|
||||
The Core repository contains the following internal packages. Full API analysis for each is available in the batch analysis documents listed under [Reference](#reference).
|
||||
|
||||
# Release (dry-run by default)
|
||||
core ci # Preview release
|
||||
core ci --we-are-go-for-launch # Actually publish
|
||||
### Foundation
|
||||
|
||||
# Multi-repo workflow
|
||||
core dev work # Status + commit + push
|
||||
core dev work --status # Just show status
|
||||
| Package | Description |
|
||||
|---------|-------------|
|
||||
| `pkg/framework` | Dependency injection container; re-exports `pkg/framework/core` |
|
||||
| `pkg/log` | Structured logger with `Err` error type, operation chains, and log rotation |
|
||||
| `pkg/config` | 12-factor config management layered over Viper; accepts `io.Medium` |
|
||||
| `pkg/io` | Filesystem abstraction (`Medium` interface); `NewSandboxed`, `MockMedium` |
|
||||
| `pkg/crypt` | Opinionated crypto: Argon2id passwords, ChaCha20 encryption, HMAC |
|
||||
| `pkg/cache` | File-based JSON cache with TTL expiry |
|
||||
| `pkg/i18n` | Grammar engine with pluralisation, verb conjugation, semantic sentences |
|
||||
|
||||
# PHP development
|
||||
core php dev # Start dev environment
|
||||
core php test # Run tests
|
||||
```
|
||||
### CLI and Interaction
|
||||
|
||||
## Configuration
|
||||
| Package | Description |
|
||||
|---------|-------------|
|
||||
| `pkg/cli` | CLI runtime: Cobra wrapping, ANSI styling, prompts, daemon lifecycle |
|
||||
| `pkg/help` | Embedded documentation catalogue with in-memory full-text search |
|
||||
| `pkg/session` | Claude Code JSONL transcript parser; HTML and MP4 export |
|
||||
| `pkg/workspace` | Isolated, PGP-keyed workspace environments with IPC control |
|
||||
|
||||
Core uses `.core/` directory for project configuration:
|
||||
### Build and Release
|
||||
|
||||
```
|
||||
.core/
|
||||
├── release.yaml # Release targets and settings
|
||||
├── build.yaml # Build configuration (optional)
|
||||
└── linuxkit/ # LinuxKit templates
|
||||
```
|
||||
| Package | Description |
|
||||
|---------|-------------|
|
||||
| `pkg/build` | Project type detection, cross-compilation, archiving, checksums |
|
||||
| `pkg/release` | Semantic versioning, conventional-commit changelogs, multi-target publishing |
|
||||
| `pkg/container` | LinuxKit VM lifecycle via QEMU/Hyperkit; template management |
|
||||
| `pkg/process` | `os/exec` wrapper with ring-buffer output, DAG task runner, ACTION streaming |
|
||||
| `pkg/jobrunner` | Poll-dispatch automation engine with JSONL audit journal |
|
||||
|
||||
And `repos.yaml` in workspace root for multi-repo management.
|
||||
### Source Control and Hosting
|
||||
|
||||
## Guides
|
||||
| Package | Description |
|
||||
|---------|-------------|
|
||||
| `pkg/git` | Multi-repo status, push, pull; concurrent status checks |
|
||||
| `pkg/repos` | `repos.yaml` registry loader; topological dependency ordering |
|
||||
| `pkg/gitea` | Gitea API client with PR metadata extraction |
|
||||
| `pkg/forge` | Forgejo API client with PR metadata extraction |
|
||||
| `pkg/plugin` | Git-based CLI extension system |
|
||||
|
||||
- [Getting Started](getting-started.md) - Installation and first steps
|
||||
- [Workflows](workflows.md) - Common task sequences
|
||||
- [Troubleshooting](troubleshooting.md) - When things go wrong
|
||||
- [Migration](migration.md) - Moving from legacy tools
|
||||
### AI and Agentic
|
||||
|
||||
| Package | Description |
|
||||
|---------|-------------|
|
||||
| `pkg/mcp` | MCP server exposing file, process, RAG, and CDP tools to AI agents |
|
||||
| `pkg/rag` | RAG pipeline: Markdown chunking, Ollama embeddings, Qdrant vector search |
|
||||
| `pkg/ai` | Facade over RAG and metrics; `QueryRAGForTask` for prompt enrichment |
|
||||
| `pkg/agentic` | REST client for core-agentic; `AutoCommit`, `CreatePR`, `BuildTaskContext` |
|
||||
| `pkg/agentci` | Configuration bridge for AgentCI dispatch targets |
|
||||
| `pkg/collect` | Data collection pipeline from GitHub, forums, market APIs |
|
||||
|
||||
### Infrastructure and Networking
|
||||
|
||||
| Package | Description |
|
||||
|---------|-------------|
|
||||
| `pkg/devops` | LinuxKit dev environment lifecycle; SSH bridging; project auto-detection |
|
||||
| `pkg/ansible` | Native Go Ansible-lite engine; SSH playbook execution without the CLI |
|
||||
| `pkg/webview` | Chrome DevTools Protocol client; Angular-aware automation |
|
||||
| `pkg/ws` | WebSocket hub with channel-based subscriptions |
|
||||
| `pkg/unifi` | UniFi controller client for network management |
|
||||
| `pkg/auth` | OpenPGP challenge-response authentication; air-gapped flow |
|
||||
|
||||
---
|
||||
|
||||
## Workflows
|
||||
|
||||
| Document | Description |
|
||||
|----------|-------------|
|
||||
| [Workflows](workflows.md) | Go build and release, PHP deploy, multi-repo daily workflow, hotfix |
|
||||
| [Migration](migration.md) | Migrating from `push-all.sh`, raw `go` commands, `goreleaser`, or manual git |
|
||||
|
||||
---
|
||||
|
||||
## Reference
|
||||
|
||||
- [Configuration](configuration.md) - All config options
|
||||
- [Glossary](glossary.md) - Term definitions
|
||||
| Document | Description |
|
||||
|----------|-------------|
|
||||
| [Configuration](configuration.md) | `.core/` directory, `release.yaml`, `build.yaml`, `php.yaml`, `repos.yaml`, environment variables |
|
||||
| [Glossary](glossary.md) | Term definitions: target, workspace, registry, publisher, dry-run |
|
||||
| [Troubleshooting](troubleshooting.md) | Installation failures, build errors, release issues, multi-repo problems, PHP issues |
|
||||
| [Claude Code Skill](skill/) | Install the `core` skill to teach Claude Code how to use this CLI |
|
||||
|
||||
## Claude Code Skill
|
||||
### Historical Package Analysis
|
||||
|
||||
Install the skill to teach Claude Code how to use the Core CLI:
|
||||
The following documents were generated by an automated analysis pipeline (Gemini, February 2026) to extract architecture, public API, and test coverage notes from each package. They remain valid as architectural reference.
|
||||
|
||||
```bash
|
||||
curl -fsSL https://raw.githubusercontent.com/host-uk/core/main/.claude/skills/core/install.sh | bash
|
||||
```
|
||||
| Document | Packages Covered |
|
||||
|----------|-----------------|
|
||||
| [pkg-batch1-analysis.md](pkg-batch1-analysis.md) | `pkg/log`, `pkg/config`, `pkg/io`, `pkg/crypt`, `pkg/auth` |
|
||||
| [pkg-batch2-analysis.md](pkg-batch2-analysis.md) | `pkg/cli`, `pkg/help`, `pkg/session`, `pkg/workspace` |
|
||||
| [pkg-batch3-analysis.md](pkg-batch3-analysis.md) | `pkg/build`, `pkg/container`, `pkg/process`, `pkg/jobrunner` |
|
||||
| [pkg-batch4-analysis.md](pkg-batch4-analysis.md) | `pkg/git`, `pkg/repos`, `pkg/gitea`, `pkg/forge`, `pkg/release` |
|
||||
| [pkg-batch5-analysis.md](pkg-batch5-analysis.md) | `pkg/agentci`, `pkg/agentic`, `pkg/ai`, `pkg/rag` |
|
||||
| [pkg-batch6-analysis.md](pkg-batch6-analysis.md) | `pkg/ansible`, `pkg/devops`, `pkg/framework`, `pkg/mcp`, `pkg/plugin`, `pkg/unifi`, `pkg/webview`, `pkg/ws`, `pkg/collect`, `pkg/i18n`, `pkg/cache` |
|
||||
|
||||
See [skill/](skill/) for details.
|
||||
### Design Plans

| Document | Description |
|----------|-------------|
| [plans/2026-02-05-core-ide-job-runner-design.md](plans/2026-02-05-core-ide-job-runner-design.md) | Autonomous job runner design for core-ide: poller, dispatcher, MCP handler registry, JSONL training data |
| [plans/2026-02-05-core-ide-job-runner-plan.md](plans/2026-02-05-core-ide-job-runner-plan.md) | Implementation plan for the job runner |
| [plans/2026-02-05-mcp-integration.md](plans/2026-02-05-mcp-integration.md) | MCP integration design notes |
| [plans/2026-02-17-lem-chat-design.md](plans/2026-02-17-lem-chat-design.md) | LEM Chat Web Components design: streaming SSE, zero-dependency vanilla UI |

---
|
||||
|
||||
## Satellite Packages

The Core ecosystem extends across 19 standalone Go modules, all hosted under `forge.lthn.ai/core/`. Each has its own repository and `docs/` directory.

See [ecosystem.md](ecosystem.md) for the full map, module paths, and dependency graph.
|
||||
|
||||
| Package | Purpose |
|---------|---------|
| [go-inference](ecosystem.md#go-inference) | Shared `TextModel`/`Backend`/`Token` interfaces — the common contract |
| [go-mlx](ecosystem.md#go-mlx) | Native Metal GPU inference via CGO/mlx-c (Apple Silicon) |
| [go-rocm](ecosystem.md#go-rocm) | AMD ROCm GPU inference via llama-server subprocess |
| [go-ml](ecosystem.md#go-ml) | Scoring engine, backends, agent orchestrator |
| [go-ai](ecosystem.md#go-ai) | MCP hub with 49 registered tools |
| [go-agentic](ecosystem.md#go-agentic) | Service lifecycle and allowance management for agents |
| [go-rag](ecosystem.md#go-rag) | Qdrant vector search and Ollama embeddings |
| [go-i18n](ecosystem.md#go-i18n) | Grammar engine, reversal, GrammarImprint |
| [go-html](ecosystem.md#go-html) | HLCRF DOM compositor and WASM target |
| [go-crypt](ecosystem.md#go-crypt) | Cryptographic primitives, auth, trust policies |
| [go-scm](ecosystem.md#go-scm) | SCM/CI integration and AgentCI dispatch |
| [go-p2p](ecosystem.md#go-p2p) | P2P mesh networking and UEPS wire protocol |
| [go-devops](ecosystem.md#go-devops) | Ansible automation, build tooling, infrastructure, release |
| [go-help](ecosystem.md#go-help) | YAML help catalogue with full-text search and HTTP server |
| [go-ratelimit](ecosystem.md#go-ratelimit) | Sliding-window rate limiter with SQLite backend |
| [go-session](ecosystem.md#go-session) | Claude Code JSONL transcript parser |
| [go-store](ecosystem.md#go-store) | SQLite key-value store with `Watch`/`OnChange` |
| [go-ws](ecosystem.md#go-ws) | WebSocket hub with Redis bridge |
| [go-webview](ecosystem.md#go-webview) | Chrome DevTools Protocol automation client |
82 docs/plans/2026-02-17-lem-chat-design.md (new file)

@@ -0,0 +1,82 @@
# LEM Chat — Web Components Design

**Date**: 2026-02-17
**Status**: Approved

## Summary

Standalone chat UI built with vanilla Web Components (Custom Elements + Shadow DOM). Connects to the MLX inference server's OpenAI-compatible SSE streaming endpoint. Zero framework dependencies. Single JS file output, embeddable anywhere.

## Components

| Element | Purpose |
|---------|---------|
| `<lem-chat>` | Container. Conversation state, SSE connection, config via attributes |
| `<lem-messages>` | Scrollable message list with auto-scroll anchoring |
| `<lem-message>` | Single message bubble. Streams tokens for assistant messages |
| `<lem-input>` | Text input, Enter to send, Shift+Enter for newline |

## Data Flow

```
User types in <lem-input>
  → dispatches 'lem-send' CustomEvent
  → <lem-chat> catches it
  → adds user message to <lem-messages>
  → POST /v1/chat/completions {stream: true, messages: [...history]}
  → reads SSE chunks via fetch + ReadableStream
  → appends tokens to streaming <lem-message>
  → on [DONE], finalises message
```

## Configuration

```html
<lem-chat endpoint="http://localhost:8090" model="qwen3-8b"></lem-chat>
```

Attributes: `endpoint`, `model`, `system-prompt`, `max-tokens`, `temperature`
## Theming

Shadow DOM with CSS custom properties:

```css
--lem-bg: #1a1a1e;
--lem-msg-user: #2a2a3e;
--lem-msg-assistant: #1e1e2a;
--lem-accent: #5865f2;
--lem-text: #e0e0e0;
--lem-font: system-ui;
```

## Markdown

Minimal inline parsing: fenced code blocks, inline code, bold, italic. No library.

## File Structure

```
lem-chat/
├── index.html          # Demo page
├── src/
│   ├── lem-chat.ts     # Main container + SSE client
│   ├── lem-messages.ts # Message list with scroll anchoring
│   ├── lem-message.ts  # Single message with streaming
│   ├── lem-input.ts    # Text input
│   ├── markdown.ts     # Minimal markdown → HTML
│   └── styles.ts       # CSS template literals
├── package.json        # typescript + esbuild
└── tsconfig.json
```

Build: `esbuild src/lem-chat.ts --bundle --outfile=dist/lem-chat.js`

## Not in v1

- Model selection UI
- Conversation persistence
- File/image upload
- Syntax highlighting
- Typing indicators
- User avatars
1163 docs/plans/2026-02-20-authentik-traefik-plan.md (new file) — file diff suppressed because it is too large

657 docs/plans/2026-02-20-go-api-design.md (new file)

@@ -0,0 +1,657 @@
# go-api Design — HTTP Gateway + OpenAPI SDK Generation

**Date:** 2026-02-20
**Author:** Virgil
**Status:** Phase 1 + Phase 2 + Phase 3 Complete (176 tests in go-api)
**Module:** `forge.lthn.ai/core/go-api`

## Problem

The Core Go ecosystem exposes 42+ tools via MCP (JSON-RPC), which is ideal for AI agents but inaccessible to regular HTTP clients, frontend applications, and third-party integrators. There is no unified HTTP gateway, no OpenAPI specification, and no generated SDKs.

Both external customers (Host UK products) and Lethean network peers need programmatic access to the same services. The gateway also serves web routes, static assets, and streaming endpoints — not just REST APIs.

## Solution

A `go-api` package that acts as the central HTTP gateway:

1. **Gin-based HTTP gateway** with extensible middleware via gin-contrib plugins
2. **RouteGroup interface** that subsystems implement to register their own endpoints (API, web, or both)
3. **WebSocket + SSE integration** for real-time streaming
4. **OpenAPI 3.1 spec generation** via a runtime SpecBuilder (not swaggo annotations)
5. **SDK generation pipeline** targeting 11 languages via openapi-generator-cli
## Architecture

### Four-Protocol Access

Same backend services, four client protocols:

```
           ┌─── REST (go-api)       POST /v1/ml/generate → JSON
           │
           ├─── GraphQL (gqlgen)    mutation { mlGenerate(...) { response } }
Client ────┤
           ├─── WebSocket (go-ws)   subscribe ml.generate → streaming
           │
           └─── MCP (go-ai)         ml_generate → JSON-RPC
```

### Dependency Graph

```
go-api (Gin engine + middleware + OpenAPI)
  ↑ imported by (each registers its own routes)
  ├── go-ai/api/      → /v1/file/*, /v1/process/*, /v1/metrics/*
  ├── go-ml/api/      → /v1/ml/*
  ├── go-rag/api/     → /v1/rag/*
  ├── go-agentic/api/ → /v1/tasks/*
  ├── go-help/api/    → /v1/help/*
  └── go-ws/api/      → /ws (WebSocket upgrade)
```

go-api has zero internal ecosystem dependencies. Subsystems import go-api, not the other way round.

### Subsystem Opt-In

Not every MCP tool becomes a REST endpoint. Each subsystem decides what to expose via a separate `RegisterAPI()` method, independent of MCP's `RegisterTools()`. A subsystem with 15 MCP tools might expose only 5 REST endpoints.
## Package Structure

```
forge.lthn.ai/core/go-api
├── api.go        # Engine struct, New(), Serve(), Shutdown()
├── middleware.go # Auth, CORS, rate limiting, request logging, recovery
├── options.go    # WithAddr, WithAuth, WithCORS, WithRateLimit, etc.
├── group.go      # RouteGroup interface + registration
├── response.go   # Envelope type, error responses, pagination
├── docs/         # Generated swagger docs (swaggo output)
├── sdk/          # SDK generation tooling / Makefile targets
└── go.mod        # forge.lthn.ai/core/go-api
```
## Core Interface

```go
// RouteGroup registers API routes onto a Gin router group.
// Subsystems implement this to expose their endpoints.
type RouteGroup interface {
	// Name returns the route group identifier (e.g. "ml", "rag", "tasks").
	Name() string
	// BasePath returns the URL prefix (e.g. "/v1/ml").
	BasePath() string
	// RegisterRoutes adds handlers to the provided router group.
	RegisterRoutes(rg *gin.RouterGroup)
}

// StreamGroup optionally declares WebSocket channels a subsystem publishes to.
type StreamGroup interface {
	Channels() []string
}
```
### Subsystem Example (go-ml)

```go
// In go-ml/api/routes.go
package api

type Routes struct {
	service *ml.Service
}

func NewRoutes(svc *ml.Service) *Routes {
	return &Routes{service: svc}
}

func (r *Routes) Name() string     { return "ml" }
func (r *Routes) BasePath() string { return "/v1/ml" }

func (r *Routes) RegisterRoutes(rg *gin.RouterGroup) {
	rg.POST("/generate", r.Generate)
	rg.POST("/score", r.Score)
	rg.GET("/backends", r.Backends)
	rg.GET("/status", r.Status)
}

func (r *Routes) Channels() []string {
	return []string{"ml.generate", "ml.status"}
}

// @Summary Generate text via ML backend
// @Tags ml
// @Accept json
// @Produce json
// @Param input body MLGenerateInput true "Generation parameters"
// @Success 200 {object} Response[MLGenerateOutput]
// @Router /v1/ml/generate [post]
func (r *Routes) Generate(c *gin.Context) {
	var input MLGenerateInput
	if err := c.ShouldBindJSON(&input); err != nil {
		c.JSON(400, api.Fail("invalid_input", err.Error()))
		return
	}
	result, err := r.service.Generate(c.Request.Context(), input.Backend, input.Prompt, ml.GenOpts{
		Temperature: input.Temperature,
		MaxTokens:   input.MaxTokens,
		Model:       input.Model,
	})
	if err != nil {
		c.JSON(500, api.Fail("ml.generate_failed", err.Error()))
		return
	}
	c.JSON(200, api.OK(MLGenerateOutput{
		Response: result,
		Backend:  input.Backend,
		Model:    input.Model,
	}))
}
```
### Engine Wiring (in core CLI)

```go
engine := api.New(
	api.WithAddr(":8080"),
	api.WithCORS("*"),
	api.WithAuth(api.BearerToken(cfg.APIKey)),
	api.WithRateLimit(100, time.Minute),
	api.WithWSHub(wsHub),
)

engine.Register(mlapi.NewRoutes(mlService))
engine.Register(ragapi.NewRoutes(ragService))
engine.Register(agenticapi.NewRoutes(agenticService))

engine.Serve(ctx) // Blocks until context cancelled
```
## Response Envelope

All endpoints return a consistent envelope:

```go
type Response[T any] struct {
	Success bool   `json:"success"`
	Data    T      `json:"data,omitempty"`
	Error   *Error `json:"error,omitempty"`
	Meta    *Meta  `json:"meta,omitempty"`
}

type Error struct {
	Code    string `json:"code"`
	Message string `json:"message"`
	Details any    `json:"details,omitempty"`
}

type Meta struct {
	RequestID string `json:"request_id"`
	Duration  string `json:"duration"`
	Page      int    `json:"page,omitempty"`
	PerPage   int    `json:"per_page,omitempty"`
	Total     int    `json:"total,omitempty"`
}
```

Helper functions:

```go
func OK[T any](data T) Response[T]
func Fail(code, message string) Response[any]
func Paginated[T any](data T, page, perPage, total int) Response[T]
```
## Middleware Stack

```go
api.New(
	api.WithAddr(":8080"),
	api.WithCORS(api.CORSConfig{...}),    // gin-contrib/cors
	api.WithAuth(api.BearerToken("...")), // Phase 1: simple bearer token
	api.WithRateLimit(100, time.Minute),  // Per-IP sliding window
	api.WithRequestID(),                  // X-Request-ID header generation
	api.WithRecovery(),                   // Panic recovery → 500 response
	api.WithLogger(slog.Default()),       // Structured request logging
)
```

Auth evolution path: bearer token → API keys → Authentik (OIDC/forward auth). The middleware slot stays the same.
## WebSocket Integration

go-api wraps the existing go-ws Hub as a first-class transport:

```go
// Automatic registration:
// GET /ws → WebSocket upgrade (go-ws Hub)

// Client subscribes:   {"type":"subscribe","channel":"ml.generate"}
// Events arrive:       {"type":"event","channel":"ml.generate","data":{...}}
// Client unsubscribes: {"type":"unsubscribe","channel":"ml.generate"}
```

Subsystems implementing `StreamGroup` declare which channels they publish to. This metadata feeds into the OpenAPI spec as documentation.
## OpenAPI + SDK Generation

### Runtime Spec Generation (SpecBuilder)

swaggo annotations were rejected because routes are dynamic via RouteGroup, Response[T] generics break swaggo, and MCP tools already carry JSON Schema at runtime. Instead, a `SpecBuilder` constructs the full OpenAPI 3.1 spec from registered RouteGroups at runtime.

```go
// Groups that implement DescribableGroup contribute endpoint metadata
type DescribableGroup interface {
	RouteGroup
	Describe() []RouteDescription
}

// SpecBuilder assembles the spec from all groups
builder := &api.SpecBuilder{Title: "Core API", Description: "...", Version: "1.0.0"}
spec, _ := builder.Build(engine.Groups())
```
### MCP-to-REST Bridge (ToolBridge)

The `ToolBridge` converts MCP tool descriptors into REST POST endpoints and implements both `RouteGroup` and `DescribableGroup`. Each tool becomes `POST /{tool_name}`. Generic types are captured at MCP registration time via closures, enabling JSON unmarshalling to the correct input type at request time.

```go
bridge := api.NewToolBridge("/v1/tools")
mcp.BridgeToAPI(mcpService, bridge) // Populates bridge from MCP tool registry
engine.Register(bridge)             // Registers REST endpoints + OpenAPI metadata
```

### Swagger UI

```go
// Built-in at GET /swagger/*any
// SpecBuilder output served via gin-swagger, cached via sync.Once
api.New(api.WithSwagger("Core API", "...", "1.0.0"))
```
### SDK Generation

```bash
# Via openapi-generator-cli (11 languages supported)
core api sdk --lang go                      # Generate Go SDK
core api sdk --lang typescript-fetch,python # Multiple languages
core api sdk --lang rust --output ./sdk/    # Custom output dir
```

### CLI Commands

```bash
core api spec                    # Emit OpenAPI JSON to stdout
core api spec --format yaml      # YAML variant
core api spec --output spec.json # Write to file
core api sdk --lang python       # Generate Python SDK
core api sdk --lang go,rust      # Multiple SDKs
```
## Dependencies

| Package | Purpose |
|---------|---------|
| `github.com/gin-gonic/gin` | HTTP framework |
| `github.com/swaggo/gin-swagger` | Swagger UI middleware |
| `github.com/gin-contrib/cors` | CORS middleware |
| `github.com/gin-contrib/secure` | Security headers |
| `github.com/gin-contrib/sessions` | Server-side sessions |
| `github.com/gin-contrib/authz` | Casbin authorisation |
| `github.com/gin-contrib/httpsign` | HTTP signature verification |
| `github.com/gin-contrib/slog` | Structured request logging |
| `github.com/gin-contrib/timeout` | Per-request timeouts |
| `github.com/gin-contrib/gzip` | Gzip compression |
| `github.com/gin-contrib/static` | Static file serving |
| `github.com/gin-contrib/pprof` | Runtime profiling |
| `github.com/gin-contrib/expvar` | Runtime metrics |
| `github.com/gin-contrib/location/v2` | Reverse proxy detection |
| `github.com/99designs/gqlgen` | GraphQL endpoint |
| `go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin` | Distributed tracing |
| `gopkg.in/yaml.v3` | YAML spec export |
| `forge.lthn.ai/core/go-ws` | WebSocket Hub (existing) |
## Estimated Size

| Component | LOC |
|-----------|-----|
| Engine + options | ~200 |
| Middleware | ~150 |
| Response envelope | ~80 |
| RouteGroup interface | ~30 |
| WebSocket integration | ~60 |
| Tests | ~300 |
| **Total go-api** | **~820** |

Each subsystem's `api/` package adds ~100-200 LOC per route group.
## Phase 1 — Implemented (20 Feb 2026)

**Commit:** `17ae945` on Forge (`core/go-api`)

| Component | Status | Tests |
|-----------|--------|-------|
| Response envelope (OK, Fail, Paginated) | Done | 9 |
| RouteGroup + StreamGroup interfaces | Done | 4 |
| Engine (New, Register, Handler, Serve) | Done | 9 |
| Bearer auth middleware | Done | 3 |
| Request ID middleware | Done | 2 |
| CORS middleware (gin-contrib/cors) | Done | 3 |
| WebSocket endpoint | Done | 3 |
| Swagger UI (gin-swagger) | Done | 2 |
| Health endpoint | Done | 1 |
| **Total** | **~840 LOC** | **36** |

**Integration proof:** go-ml/api/ registers 3 endpoints with 12 tests (`0c23858`).
## Phase 2 Wave 1 — Implemented (20 Feb 2026)

**Commits:** `6bb7195..daae6f7` on Forge (`core/go-api`)

| Component | Option | Dependency | Tests |
|-----------|--------|------------|-------|
| Authentik (forward auth + OIDC) | `WithAuthentik()` | `go-oidc/v3`, `oauth2` | 14 |
| Security headers (HSTS, CSP, etc.) | `WithSecure()` | `gin-contrib/secure` | 8 |
| Structured request logging | `WithSlog()` | `gin-contrib/slog` | 6 |
| Per-request timeouts | `WithTimeout()` | `gin-contrib/timeout` | 5 |
| Gzip compression | `WithGzip()` | `gin-contrib/gzip` | 5 |
| Static file serving | `WithStatic()` | `gin-contrib/static` | 5 |
| **Wave 1 Total** | | | **43** |

**Cumulative:** 76 tests (36 Phase 1 + 43 Wave 1 − 3 shared), all passing.
## Phase 2 Wave 2 — Implemented (20 Feb 2026)

**Commits:** `64a8b16..67dcc83` on Forge (`core/go-api`)

| Component | Option | Dependency | Tests | Notes |
|-----------|--------|------------|-------|-------|
| Brotli compression | `WithBrotli()` | `andybalholm/brotli` | 5 | Custom middleware; `gin-contrib/brotli` is an empty stub |
| Response caching | `WithCache()` | none (in-memory) | 5 | Custom middleware; `gin-contrib/cache` is per-handler, not global |
| Server-side sessions | `WithSessions()` | `gin-contrib/sessions` | 5 | Cookie store, configurable name + secret |
| Casbin authorisation | `WithAuthz()` | `gin-contrib/authz`, `casbin/v2` | 5 | Subject via Basic Auth; RBAC policy model |
| **Wave 2 Total** | | | **20** | |

**Cumulative:** 102 passing tests (2 integration skipped), all green.
## Phase 2 Wave 3 — Implemented (20 Feb 2026)

**Commits:** `7b3f99e..d517fa2` on Forge (`core/go-api`)

| Component | Option | Dependency | Tests | Notes |
|-----------|--------|------------|-------|-------|
| HTTP signature verification | `WithHTTPSign()` | `gin-contrib/httpsign` | 5 | HMAC-SHA256; extensible via httpsign.Option |
| Server-Sent Events | `WithSSE()` | none (custom SSEBroker) | 6 | Channel filtering, multi-client broadcast, GET /events |
| Reverse proxy detection | `WithLocation()` | `gin-contrib/location/v2` | 5 | X-Forwarded-Host/Proto parsing |
| Locale detection | `WithI18n()` | `golang.org/x/text/language` | 5 | Accept-Language parsing, message lookup, GetLocale/GetMessage |
| GraphQL endpoint | `WithGraphQL()` | `99designs/gqlgen` | 5 | /graphql + optional /graphql/playground |
| **Wave 3 Total** | | | **26** | |

**Cumulative:** 128 passing tests (2 integration skipped), all green.
## Phase 2 Wave 4 — Implemented (21 Feb 2026)

**Commits:** `32b3680..8ba1716` on Forge (`core/go-api`)

| Component | Option | Dependency | Tests | Notes |
|-----------|--------|------------|-------|-------|
| Runtime profiling | `WithPprof()` | `gin-contrib/pprof` | 5 | /debug/pprof/* endpoints, flag-based mount |
| Runtime metrics | `WithExpvar()` | `gin-contrib/expvar` | 5 | /debug/vars endpoint, flag-based mount |
| Distributed tracing | `WithTracing()` | `otelgin` + OpenTelemetry SDK | 5 | W3C traceparent propagation, span attributes |
| **Wave 4 Total** | | | **15** | |

**Cumulative:** 143 passing tests (2 integration skipped), all green.

**Phase 2 complete.** All 4 waves implemented. Every planned plugin has a `With*()` option and tests.
## Phase 3 — OpenAPI Spec Generation + SDK Codegen (21 Feb 2026)

**Architecture:** Runtime OpenAPI generation via SpecBuilder (NOT swaggo annotations). Routes are dynamic via RouteGroup, Response[T] generics break swaggo, and MCP tools carry JSON Schema at runtime. A `ToolBridge` converts tool descriptors into RouteGroup + OpenAPI metadata. A `SpecBuilder` constructs the full OpenAPI 3.1 spec. SDK codegen wraps `openapi-generator-cli`.

### Wave 1: go-api (Tasks 1-5)

**Commits:** `465bd60..1910aec` on Forge (`core/go-api`)

| Component | File | Tests | Notes |
|-----------|------|-------|-------|
| DescribableGroup interface | `group.go` | 5 | Opt-in OpenAPI metadata for RouteGroups |
| ToolBridge | `bridge.go` | 6 | Tool descriptors → POST endpoints + DescribableGroup |
| SpecBuilder | `openapi.go` | 6 | OpenAPI 3.1 JSON with Response[T] envelope wrapping |
| Swagger refactor | `swagger.go` | 5 | Replaced hardcoded empty spec with SpecBuilder |
| Spec export | `export.go` | 5 | JSON + YAML export to file/writer |
| SDK codegen | `codegen.go` | 5 | 11-language wrapper for openapi-generator-cli |
| **Wave 1 Total** | | **32** | |

### Wave 2: go-ai MCP bridge (Tasks 6-7)

**Commits:** `2107eda..c37e1cf` on Forge (`core/go-ai`)

| Component | File | Tests | Notes |
|-----------|------|-------|-------|
| Tool registry | `mcp/registry.go` | 5 | Generic `addToolRecorded[In,Out]` captures types in closures |
| BridgeToAPI | `mcp/bridge.go` | 5 | MCP tools → go-api ToolBridge, 10MB body limit, error classification |
| **Wave 2 Total** | | **10** | |

### Wave 3: CLI commands (Tasks 8-9)

**Commit:** `d6eec4d` on Forge (`core/cli` dev branch)

| Component | File | Tests | Notes |
|-----------|------|-------|-------|
| `core api spec` | `cmd/api/cmd_spec.go` | 2 | JSON/YAML export, --output/--format flags |
| `core api sdk` | `cmd/api/cmd_sdk.go` | 2 | --lang (required), --output, --spec, --package flags |
| **Wave 3 Total** | | **4** | |

**Cumulative go-api:** 176 passing tests. **Phase 3 complete.**

### Known Limitations

- **Subsystem tools excluded from bridge:** Subsystems call `mcp.AddTool` directly, bypassing `addToolRecorded`. Only the 10 built-in MCP tools appear in the REST bridge. Future: pass `*Service` to `RegisterTools` instead of `*mcp.Server`.
- **Flat schema only:** `structSchema` reflection handles flat structs but does not recurse into nested structs. Adequate for current tool inputs.
- **CLI spec produces empty bridge:** `core api spec` currently generates a spec with only `/health`. Full MCP integration requires wiring the MCP service into the CLI command.
## Phase 2 — Gin Plugin Roadmap (Complete)

All plugins drop in as `With*()` options on the Engine. No architecture changes needed.

### Security & Auth

| Plugin | Option | Purpose | Priority |
|--------|--------|---------|----------|
| ~~**Authentik**~~ | ~~`WithAuthentik()`~~ | ~~OIDC + forward auth integration.~~ | ~~**Done**~~ |
| ~~gin-contrib/secure~~ | ~~`WithSecure()`~~ | ~~Security headers: HSTS, X-Frame-Options, X-Content-Type-Options, CSP.~~ | ~~**Done**~~ |
| ~~gin-contrib/sessions~~ | ~~`WithSessions()`~~ | ~~Server-side sessions (cookie store). Web session management alongside Authentik tokens.~~ | ~~**Done**~~ |
| ~~gin-contrib/authz~~ | ~~`WithAuthz()`~~ | ~~Casbin-based authorisation. Policy-driven access control via RBAC.~~ | ~~**Done**~~ |
| ~~gin-contrib/httpsign~~ | ~~`WithHTTPSign()`~~ | ~~HTTP signature verification. HMAC-SHA256 with extensible options.~~ | ~~**Done**~~ |

### Performance & Reliability

| Plugin | Option | Purpose | Priority |
|--------|--------|---------|----------|
| ~~gin-contrib/cache~~ | ~~`WithCache()`~~ | ~~Response caching (in-memory). GET response caching with TTL, lazy eviction.~~ | ~~**Done**~~ |
| ~~gin-contrib/timeout~~ | ~~`WithTimeout()`~~ | ~~Per-request timeouts.~~ | ~~**Done**~~ |
| ~~gin-contrib/gzip~~ | ~~`WithGzip()`~~ | ~~Gzip response compression.~~ | ~~**Done**~~ |
| ~~gin-contrib/brotli~~ | ~~`WithBrotli()`~~ | ~~Brotli compression via `andybalholm/brotli`. Custom middleware (gin-contrib stub empty).~~ | ~~**Done**~~ |

### Observability

| Plugin | Option | Purpose | Priority |
|--------|--------|---------|----------|
| ~~gin-contrib/slog~~ | ~~`WithSlog()`~~ | ~~Structured request logging via slog.~~ | ~~**Done**~~ |
| ~~gin-contrib/pprof~~ | ~~`WithPprof()`~~ | ~~Runtime profiling endpoints at /debug/pprof/. Flag-based mount.~~ | ~~**Done**~~ |
| ~~gin-contrib/expvar~~ | ~~`WithExpvar()`~~ | ~~Go runtime metrics at /debug/vars. Flag-based mount.~~ | ~~**Done**~~ |
| ~~otelgin~~ | ~~`WithTracing()`~~ | ~~OpenTelemetry distributed tracing. W3C traceparent propagation.~~ | ~~**Done**~~ |

### Content & Streaming

| Plugin | Option | Purpose | Priority |
|--------|--------|---------|----------|
| ~~gin-contrib/static~~ | ~~`WithStatic()`~~ | ~~Serve static files.~~ | ~~**Done**~~ |
| ~~gin-contrib/sse~~ | ~~`WithSSE()`~~ | ~~Server-Sent Events. Custom SSEBroker with channel filtering, GET /events.~~ | ~~**Done**~~ |
| ~~gin-contrib/location~~ | ~~`WithLocation()`~~ | ~~Auto-detect scheme/host from X-Forwarded-* headers.~~ | ~~**Done**~~ |

### Query Layer

| Plugin | Option | Purpose | Priority |
|--------|--------|---------|----------|
| ~~99designs/gqlgen~~ | ~~`WithGraphQL()`~~ | ~~GraphQL endpoint at `/graphql` + optional playground. Accepts gqlgen ExecutableSchema.~~ | ~~**Done**~~ |

The GraphQL schema can be generated from the same Go Input/Output structs that define the REST endpoints. gqlgen produces an `http.Handler` that mounts directly on Gin. Subsystems opt in via:

```go
// Subsystems that want GraphQL implement this alongside RouteGroup
type ResolverGroup interface {
	// RegisterResolvers adds query/mutation resolvers to the GraphQL schema
	RegisterResolvers(schema *graphql.Schema)
}
```

This means a subsystem like go-ml exposes:

- **REST:** `POST /v1/ml/generate` (existing)
- **GraphQL:** `mutation { mlGenerate(prompt: "...", backend: "mlx") { response, model } }` (same handler)
- **MCP:** `ml_generate` tool (existing)

Four protocols, one set of handlers.

### Ecosystem Integration

| Plugin | Option | Purpose | Priority |
|--------|--------|---------|----------|
| ~~gin-contrib/i18n~~ | ~~`WithI18n()`~~ | ~~Locale detection via Accept-Language. Custom middleware using `golang.org/x/text/language`.~~ | ~~**Done**~~ |
| [gin-contrib/graceful](https://github.com/gin-contrib/graceful) | — | Already implemented in Engine.Serve(). Could swap to this for more robust lifecycle management if needed. | — |
| [gin-contrib/requestid](https://github.com/gin-contrib/requestid) | — | Already implemented. Theirs uses UUID, ours uses hex. Could swap for standards compliance. | — |

### Implementation Order

**Wave 1 (gateway hardening):** ~~Authentik, secure, slog, timeout, gzip, static~~ **DONE** (20 Feb 2026)
**Wave 2 (performance + auth):** ~~cache, sessions, authz, brotli~~ **DONE** (20 Feb 2026)
**Wave 3 (network + streaming):** ~~httpsign, sse, location, i18n, gqlgen~~ **DONE** (20 Feb 2026)
**Wave 4 (observability):** ~~pprof, expvar, tracing~~ **DONE** (21 Feb 2026)

Each wave adds `With*()` options + tests. No breaking changes — existing code continues to work without any new options enabled.
## Authentik Integration

[Authentik](https://goauthentik.io/) is the identity provider and edge auth proxy. It handles user registration, login, MFA, social auth, SAML, and OIDC — so go-api doesn't have to.

### Two Integration Modes

**1. Forward Auth (web traffic)**

Traefik sits in front of go-api. For web routes, Traefik's `forwardAuth` middleware checks with Authentik before passing the request through. Authentik handles login flows, session cookies, and consent. go-api receives pre-authenticated requests with identity headers.

```
Browser → Traefik → Authentik (forward auth) → go-api
                         ↓
              Login page (if unauthenticated)
```

go-api reads trusted headers set by Authentik:

```
X-Authentik-Username: alice
X-Authentik-Groups: admins,developers
X-Authentik-Email: alice@example.com
X-Authentik-Uid: <uuid>
X-Authentik-Jwt: <signed token>
```
**2. OIDC Token Validation (API traffic)**

API clients (SDKs, CLI tools, network peers) authenticate directly with Authentik's OAuth2 token endpoint, then send the JWT to go-api. go-api validates the JWT using Authentik's OIDC discovery endpoint (`.well-known/openid-configuration`).

```
SDK client → Authentik (token endpoint)            → receives JWT
SDK client → go-api (Authorization: Bearer <jwt>)  → validates via OIDC
```
### Implementation in go-api

```go
engine := api.New(
	api.WithAuthentik(api.AuthentikConfig{
		Issuer:       "https://auth.lthn.ai/application/o/core-api/",
		ClientID:     "core-api",
		TrustedProxy: true, // Trust X-Authentik-* headers from Traefik
	}),
)
```

`WithAuthentik()` adds middleware that:

1. Checks for the `X-Authentik-Jwt` header (forward auth mode) — validates the signature, extracts claims
2. Falls back to the `Authorization: Bearer <jwt>` header (direct OIDC mode) — validates via JWKS
3. Populates `c.Set("user", AuthentikUser{...})` in the Gin context for handlers to use
4. Skips /health, /swagger, and any public paths

```go
// In any handler:
func (r *Routes) ListItems(c *gin.Context) {
	user := api.GetUser(c) // Returns *AuthentikUser or nil
	if user == nil {
		c.JSON(401, api.Fail("unauthorised", "Authentication required"))
		return
	}
	// user.Username, user.Groups, user.Email, user.UID available
}
```
### Auth Layers
|
||||
|
||||
```
|
||||
Authentik (identity) → WHO is this? (user, groups, email)
|
||||
↓
|
||||
go-api middleware → IS their token valid? (JWT verification)
|
||||
↓
|
||||
Casbin authz (optional) → CAN they do this? (role → endpoint policies)
|
||||
↓
|
||||
Handler → DOES this (business logic)
|
||||
```
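To make the optional Casbin layer concrete, here is a deliberately simplified stand-in for the "CAN they do this?" check, written as a plain Go map rather than the Casbin API; real policies would live in Casbin's model and policy files, and the group names and routes here are purely illustrative.

```go
package main

import "fmt"

// policy maps a group to the endpoint/method pairs it may call.
// Illustrative only — in go-api this decision would be delegated to Casbin.
var policy = map[string]map[string]bool{
	"admins":     {"DELETE /v1/items": true, "GET /v1/items": true},
	"developers": {"GET /v1/items": true},
}

// can reports whether any of the user's groups grants the action.
func can(groups []string, method, path string) bool {
	action := method + " " + path
	for _, g := range groups {
		if policy[g][action] {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(can([]string{"developers"}, "GET", "/v1/items"))    // true
	fmt.Println(can([]string{"developers"}, "DELETE", "/v1/items")) // false
}
```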

Phase 1 bearer auth continues to work alongside Authentik — useful for service-to-service tokens, CI/CD, and development. `WithBearerAuth` and `WithAuthentik` can coexist.

### Authentik Deployment

Authentik runs as a Docker service alongside go-api, fronted by Traefik:

- **auth.lthn.ai** — Authentik UI + OIDC endpoints (production)
- **auth.leth.in** — Authentik for devnet/testnet
- Traefik routes `/outpost.goauthentik.io/` to Authentik's embedded outpost for forward auth

### Dependencies

| Package | Purpose |
|---------|---------|
| `github.com/coreos/go-oidc/v3` | OIDC discovery + JWT validation |
| `golang.org/x/oauth2` | OAuth2 token exchange (for server-side flows) |

Both are standard Go libraries with no heavy dependencies.

## Non-Goals

- gRPC gateway
- Built-in user registration/login (Authentik handles this)
- API versioning beyond the `/v1/` prefix

## Success Criteria

### Phase 1 (Done)

1. ~~`core api serve` starts a Gin server with registered subsystem routes~~
2. ~~WebSocket subscriptions work alongside REST~~
3. ~~Swagger UI accessible at `/swagger/`~~
4. ~~All endpoints return consistent Response envelope~~
5. ~~Bearer token auth protects all routes~~
6. ~~First subsystem integration (go-ml/api/) proves the pattern~~

### Phase 2 (Done)

7. ~~Security headers, compression, and caching active in production~~
8. ~~Session-based auth alongside bearer tokens~~
9. ~~HTTP signature verification for Lethean network peers~~
10. ~~Static file serving for docs site and SDK downloads~~
11. ~~GraphQL endpoint at `/graphql` with playground~~

### Phase 3 (Done)

12. ~~`core api spec` emits valid OpenAPI 3.1 JSON via runtime SpecBuilder~~
13. ~~`core api sdk` generates SDKs for 11 languages via openapi-generator-cli~~
14. ~~MCP tools bridged to REST endpoints via ToolBridge + BridgeToAPI~~
15. ~~OpenAPI spec includes Response[T] envelope wrapping~~
16. ~~Spec export to file in JSON and YAML formats~~

---

**1503 lines** — `docs/plans/2026-02-20-go-api-plan.md` (Normal file). File diff suppressed because it is too large.

---

**155 lines** — `docs/plans/2026-02-21-core-help-design.md` (Normal file, `@@ -0,0 +1,155 @@`)
# core.help Documentation Website — Design

**Date:** 2026-02-21
**Author:** Virgil
**Status:** Design approved
**Domain:** https://core.help

## Problem

Documentation is scattered across 39 repos (18 Go packages, 20 PHP packages, 1 CLI). There is no unified docs site. Developers need a single entry point to find CLI commands, Go package APIs, MCP tool references, and PHP module guides.

## Solution

A Hugo + Docsy static site at core.help, built from existing markdown docs aggregated by `core docs sync`. No new content — just collect and present what already exists across the ecosystem.

## Architecture

### Stack

- **Hugo** — Go-native static site generator, sub-second builds
- **Docsy theme** — Purpose-built for technical docs (used by Kubernetes, gRPC, Knative)
- **BunnyCDN** — Static hosting with pull zone
- **`core docs sync --target hugo`** — Collects markdown from all repos into Hugo content tree

### Why Hugo + Docsy (not VitePress or mdBook)

- Go-native, no Node.js dependency
- Handles multi-section navigation (CLI, Go packages, PHP modules, MCP tools)
- Sub-second builds for ~250 markdown files
- Docsy has built-in search, versioned nav, API reference sections

## Content Structure

```
docs-site/
├── hugo.toml
├── content/
│   ├── _index.md              # Landing page
│   ├── getting-started/       # CLI top-level guides
│   │   ├── _index.md
│   │   ├── installation.md
│   │   ├── configuration.md
│   │   ├── user-guide.md
│   │   ├── troubleshooting.md
│   │   └── faq.md
│   ├── cli/                   # CLI command reference (43 commands)
│   │   ├── _index.md
│   │   ├── dev/               # core dev commit, push, pull, etc.
│   │   ├── ai/                # core ai commands
│   │   ├── go/                # core go test, lint, etc.
│   │   └── ...
│   ├── go/                    # Go ecosystem packages (18)
│   │   ├── _index.md          # Ecosystem overview
│   │   ├── go-api/            # README + architecture/development/history
│   │   ├── go-ai/
│   │   ├── go-mlx/
│   │   ├── go-i18n/
│   │   └── ...
│   ├── mcp/                   # MCP tool reference (49 tools)
│   │   ├── _index.md
│   │   ├── file-operations.md
│   │   ├── process-management.md
│   │   ├── rag.md
│   │   └── ...
│   ├── php/                   # PHP packages (from core-php/docs/packages/)
│   │   ├── _index.md
│   │   ├── admin/
│   │   ├── tenant/
│   │   ├── commerce/
│   │   └── ...
│   └── kb/                    # Knowledge base (wiki pages from go-mlx, go-i18n)
│       ├── _index.md
│       ├── mlx/
│       └── i18n/
├── static/                    # Logos, favicons
├── layouts/                   # Custom template overrides (minimal)
└── go.mod                     # Hugo modules (Docsy as module dep)
```

## Sync Pipeline

`core docs sync --target hugo --output site/content/` performs:

### Source Mapping

```
cli/docs/index.md            → content/getting-started/_index.md
cli/docs/getting-started.md  → content/getting-started/installation.md
cli/docs/user-guide.md       → content/getting-started/user-guide.md
cli/docs/configuration.md    → content/getting-started/configuration.md
cli/docs/troubleshooting.md  → content/getting-started/troubleshooting.md
cli/docs/faq.md              → content/getting-started/faq.md

core/docs/cmd/**/*.md        → content/cli/**/*.md

go-*/README.md               → content/go/{name}/_index.md
go-*/docs/*.md               → content/go/{name}/*.md
go-*/KB/*.md                 → content/kb/{name-suffix}/*.md

core-*/docs/**/*.md          → content/php/{name-suffix}/**/*.md
```

### Front Matter Injection

If a markdown file doesn't start with `---`, prepend:

```yaml
---
title: "{derived from filename}"
linkTitle: "{short name}"
weight: {auto-incremented}
---
```

No other content transformations. Markdown stays as-is.

### Build & Deploy

```bash
core docs sync --target hugo --output docs-site/content/
cd docs-site && hugo build
hugo deploy --target bunnycdn
```

Hugo deploy config in `hugo.toml`:

```toml
[deployment]
[[deployment.targets]]
name = "bunnycdn"
URL = "s3://core-help?endpoint=storage.bunnycdn.com&region=auto"
```

Credentials via env vars.

## Registry

All 39 repos registered in `.core/repos.yaml` with `docs: true`. Go repos use explicit `path:` fields since they live outside the PHP `base_path`. `FindRegistry()` checks `.core/repos.yaml` alongside `repos.yaml`.

## Prerequisites Completed

- [x] `.core/repos.yaml` created with all 39 repos
- [x] `FindRegistry()` updated to find `.core/repos.yaml`
- [x] `Repo.Path` supports explicit YAML override
- [x] go-api docs gap filled (architecture.md, development.md, history.md)
- [x] All 18 Go repos have standard docs trio

## What Remains (Implementation Plan)

1. Create docs-site repo with Hugo + Docsy scaffold
2. Extend `core docs sync` with `--target hugo` mode
3. Write section _index.md files (landing page, section intros)
4. Hugo config (navigation, search, theme colours)
5. BunnyCDN deployment config
6. CI pipeline on Forge (optional — can deploy manually initially)

---

**642 lines** — `docs/plans/2026-02-21-core-help-plan.md` (Normal file, `@@ -0,0 +1,642 @@`)
# core.help Hugo Documentation Site — Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Build a Hugo + Docsy documentation site at core.help that aggregates markdown from 39 repos via `core docs sync --target hugo`.

**Architecture:** Hugo static site with Docsy theme, populated by extending `core docs sync` with a `--target hugo` flag that maps repo docs into Hugo's `content/` tree with auto-injected front matter. Deploy to BunnyCDN.

**Tech Stack:** Hugo (Go SSG), Docsy theme (Hugo module), BunnyCDN, `core docs sync` CLI

---

## Context

The docs sync command lives in `/Users/snider/Code/host-uk/cli/cmd/docs/`. The site will be scaffolded at `/Users/snider/Code/host-uk/docs-site/`. The registry at `/Users/snider/Code/host-uk/.core/repos.yaml` already contains all 39 repos (20 PHP + 18 Go + 1 CLI) with explicit paths for Go repos.

Key files:

- `/Users/snider/Code/host-uk/cli/cmd/docs/cmd_sync.go` — sync command (modify)
- `/Users/snider/Code/host-uk/cli/cmd/docs/cmd_scan.go` — repo scanner (modify)
- `/Users/snider/Code/host-uk/docs-site/` — Hugo site (create)

## Task 1: Scaffold Hugo + Docsy site

**Files:**

- Create: `/Users/snider/Code/host-uk/docs-site/hugo.toml`
- Create: `/Users/snider/Code/host-uk/docs-site/go.mod`
- Create: `/Users/snider/Code/host-uk/docs-site/content/_index.md`
- Create: `/Users/snider/Code/host-uk/docs-site/content/getting-started/_index.md`
- Create: `/Users/snider/Code/host-uk/docs-site/content/cli/_index.md`
- Create: `/Users/snider/Code/host-uk/docs-site/content/go/_index.md`
- Create: `/Users/snider/Code/host-uk/docs-site/content/mcp/_index.md`
- Create: `/Users/snider/Code/host-uk/docs-site/content/php/_index.md`
- Create: `/Users/snider/Code/host-uk/docs-site/content/kb/_index.md`

This is the one-time Hugo scaffolding. No tests — just files.

**`hugo.toml`:**

```toml
baseURL = "https://core.help/"
title = "Core Documentation"
languageCode = "en"
defaultContentLanguage = "en"

enableRobotsTXT = true
enableGitInfo = false

[outputs]
home = ["HTML", "JSON"]
section = ["HTML"]

[params]
description = "Documentation for the Core CLI, Go packages, PHP modules, and MCP tools"
copyright = "Host UK — EUPL-1.2"

[params.ui]
sidebar_menu_compact = true
breadcrumb_disable = false
sidebar_search_disable = false
navbar_logo = false

[params.ui.readingtime]
enable = false

[module]
proxy = "direct"

[module.hugoVersion]
extended = true
min = "0.120.0"

[[module.imports]]
path = "github.com/google/docsy"
disable = false

[markup.goldmark.renderer]
unsafe = true

[menu]
[[menu.main]]
name = "Getting Started"
weight = 10
url = "/getting-started/"
[[menu.main]]
name = "CLI Reference"
weight = 20
url = "/cli/"
[[menu.main]]
name = "Go Packages"
weight = 30
url = "/go/"
[[menu.main]]
name = "MCP Tools"
weight = 40
url = "/mcp/"
[[menu.main]]
name = "PHP Packages"
weight = 50
url = "/php/"
[[menu.main]]
name = "Knowledge Base"
weight = 60
url = "/kb/"
```

**`go.mod`:**

```
module github.com/host-uk/docs-site

go 1.22

require github.com/google/docsy v0.11.0
```

Note: Run `hugo mod get` after creating these files to populate `go.sum` and download Docsy.

**Section `_index.md` files** — each needs Hugo front matter:

`content/_index.md`:

```markdown
---
title: "Core Documentation"
description: "Documentation for the Core CLI, Go packages, PHP modules, and MCP tools"
---

Welcome to the Core ecosystem documentation.

## Sections

- [Getting Started](/getting-started/) — Installation, configuration, and first steps
- [CLI Reference](/cli/) — Command reference for `core` CLI
- [Go Packages](/go/) — Go ecosystem package documentation
- [MCP Tools](/mcp/) — Model Context Protocol tool reference
- [PHP Packages](/php/) — PHP module documentation
- [Knowledge Base](/kb/) — Wiki articles and deep dives
```

`content/getting-started/_index.md`:

```markdown
---
title: "Getting Started"
linkTitle: "Getting Started"
weight: 10
description: "Installation, configuration, and first steps with the Core CLI"
---
```

`content/cli/_index.md`:

```markdown
---
title: "CLI Reference"
linkTitle: "CLI Reference"
weight: 20
description: "Command reference for the core CLI tool"
---
```

`content/go/_index.md`:

```markdown
---
title: "Go Packages"
linkTitle: "Go Packages"
weight: 30
description: "Documentation for the Go ecosystem packages"
---
```

`content/mcp/_index.md`:

```markdown
---
title: "MCP Tools"
linkTitle: "MCP Tools"
weight: 40
description: "Model Context Protocol tool reference — file operations, RAG, ML inference, process management"
---
```

`content/php/_index.md`:

```markdown
---
title: "PHP Packages"
linkTitle: "PHP Packages"
weight: 50
description: "Documentation for the PHP module ecosystem"
---
```

`content/kb/_index.md`:

```markdown
---
title: "Knowledge Base"
linkTitle: "Knowledge Base"
weight: 60
description: "Wiki articles, deep dives, and reference material"
---
```

**Verify:** After creating the files, run from `/Users/snider/Code/host-uk/docs-site/`:

```bash
hugo mod get
hugo server
```

The site should start and show the landing page with the Docsy theme at `localhost:1313`.

**Commit:**

```bash
cd /Users/snider/Code/host-uk/docs-site
git init
git add .
git commit -m "feat: scaffold Hugo + Docsy documentation site"
```

---

## Task 2: Extend scanRepoDocs to collect KB/ and README

**Files:**

- Modify: `/Users/snider/Code/host-uk/cli/cmd/docs/cmd_scan.go`

Currently `scanRepoDocs` only collects files from `docs/`. For the Hugo target we also need:

- `KB/**/*.md` files (wiki pages from go-mlx, go-i18n)
- `README.md` content (becomes the package _index.md)

Add a `KBFiles []string` field to `RepoDocInfo` and scan `KB/` alongside `docs/`:

```go
type RepoDocInfo struct {
	Name      string
	Path      string
	HasDocs   bool
	Readme    string
	ClaudeMd  string
	Changelog string
	DocsFiles []string // All files in docs/ directory (recursive)
	KBFiles   []string // All files in KB/ directory (recursive)
}
```

In `scanRepoDocs`, after the `docs/` walk, add a second walk for `KB/`:

```go
// Recursively scan KB/ directory for .md files
kbDir := filepath.Join(repo.Path, "KB")
if _, err := io.Local.List(kbDir); err == nil {
	_ = filepath.WalkDir(kbDir, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return nil
		}
		if d.IsDir() || !strings.HasSuffix(d.Name(), ".md") {
			return nil
		}
		relPath, _ := filepath.Rel(kbDir, path)
		info.KBFiles = append(info.KBFiles, relPath)
		info.HasDocs = true
		return nil
	})
}
```

**Tests:** The existing tests should still pass. No new test file needed — this is a data-collection change.

**Verify:** `cd /Users/snider/Code/host-uk/cli && GOWORK=off go build ./cmd/docs/...`

**Commit:**

```bash
git add cmd/docs/cmd_scan.go
git commit -m "feat(docs): scan KB/ directory alongside docs/"
```

---

## Task 3: Add `--target hugo` flag and Hugo sync logic

**Files:**

- Modify: `/Users/snider/Code/host-uk/cli/cmd/docs/cmd_sync.go`

This is the main task. Add a `--target` flag (default `"php"`) and a new `runHugoSync` function that maps repos to Hugo's content tree.

**Add flag variable and registration:**

```go
var (
	docsSyncRegistryPath string
	docsSyncDryRun       bool
	docsSyncOutputDir    string
	docsSyncTarget       string
)

func init() {
	docsSyncCmd.Flags().StringVar(&docsSyncRegistryPath, "registry", "", i18n.T("common.flag.registry"))
	docsSyncCmd.Flags().BoolVar(&docsSyncDryRun, "dry-run", false, i18n.T("cmd.docs.sync.flag.dry_run"))
	docsSyncCmd.Flags().StringVar(&docsSyncOutputDir, "output", "", i18n.T("cmd.docs.sync.flag.output"))
	docsSyncCmd.Flags().StringVar(&docsSyncTarget, "target", "php", "Target format: php (default) or hugo")
}
```

**Update RunE to pass the target:**

```go
RunE: func(cmd *cli.Command, args []string) error {
	return runDocsSync(docsSyncRegistryPath, docsSyncOutputDir, docsSyncDryRun, docsSyncTarget)
},
```

**Update the `runDocsSync` signature and add target dispatch:**

```go
func runDocsSync(registryPath string, outputDir string, dryRun bool, target string) error {
	reg, basePath, err := loadRegistry(registryPath)
	if err != nil {
		return err
	}

	switch target {
	case "hugo":
		return runHugoSync(reg, basePath, outputDir, dryRun)
	default:
		return runPHPSync(reg, basePath, outputDir, dryRun)
	}
}
```

**Rename the current sync body to `runPHPSync`** — extract lines 67-159 of the current `runDocsSync` into `runPHPSync(reg, basePath, outputDir string, dryRun bool) error`. This is a pure extract, no logic changes.

**Add the `hugoOutputName` mapping function:**

```go
// hugoOutputName maps a repo name to a Hugo content section and folder.
// Returns (section, folder) where section is the top-level content dir.
func hugoOutputName(repoName string) (string, string) {
	// CLI guides
	if repoName == "cli" {
		return "getting-started", ""
	}
	// Core CLI command docs
	if repoName == "core" {
		return "cli", ""
	}
	// Go packages
	if strings.HasPrefix(repoName, "go-") {
		return "go", repoName
	}
	// PHP packages
	if strings.HasPrefix(repoName, "core-") {
		return "php", strings.TrimPrefix(repoName, "core-")
	}
	return "go", repoName
}
```

**Add front matter injection helpers:**

```go
// injectFrontMatter prepends Hugo front matter to markdown content if missing.
func injectFrontMatter(content []byte, title string, weight int) []byte {
	// Already has front matter
	if bytes.HasPrefix(bytes.TrimSpace(content), []byte("---")) {
		return content
	}
	fm := fmt.Sprintf("---\ntitle: %q\nweight: %d\n---\n\n", title, weight)
	return append([]byte(fm), content...)
}

// titleFromFilename derives a human-readable title from a filename.
func titleFromFilename(filename string) string {
	name := strings.TrimSuffix(filepath.Base(filename), ".md")
	name = strings.ReplaceAll(name, "-", " ")
	name = strings.ReplaceAll(name, "_", " ")
	// Title case
	words := strings.Fields(name)
	for i, w := range words {
		if len(w) > 0 {
			words[i] = strings.ToUpper(w[:1]) + w[1:]
		}
	}
	return strings.Join(words, " ")
}
```

**Add the `runHugoSync` function:**

```go
func runHugoSync(reg *repos.Registry, basePath string, outputDir string, dryRun bool) error {
	if outputDir == "" {
		outputDir = filepath.Join(basePath, "docs-site", "content")
	}

	// Scan all repos
	var docsInfo []RepoDocInfo
	for _, repo := range reg.List() {
		if repo.Name == "core-template" || repo.Name == "core-claude" {
			continue
		}
		info := scanRepoDocs(repo)
		if info.HasDocs {
			docsInfo = append(docsInfo, info)
		}
	}

	if len(docsInfo) == 0 {
		cli.Text("No documentation found")
		return nil
	}

	cli.Print("\n  Hugo sync: %d repos with docs → %s\n\n", len(docsInfo), outputDir)

	// Show plan
	for _, info := range docsInfo {
		section, folder := hugoOutputName(info.Name)
		target := section
		if folder != "" {
			target = section + "/" + folder
		}
		fileCount := len(info.DocsFiles) + len(info.KBFiles)
		if info.Readme != "" {
			fileCount++
		}
		cli.Print("  %s → %s/ (%d files)\n", repoNameStyle.Render(info.Name), target, fileCount)
	}

	if dryRun {
		cli.Print("\n  Dry run — no files written\n")
		return nil
	}

	cli.Blank()
	if !confirm("Sync to Hugo content directory?") {
		cli.Text("Aborted")
		return nil
	}

	cli.Blank()
	var synced int
	for _, info := range docsInfo {
		section, folder := hugoOutputName(info.Name)

		// Build destination path
		destDir := filepath.Join(outputDir, section)
		if folder != "" {
			destDir = filepath.Join(destDir, folder)
		}

		// Copy docs/ files
		weight := 10
		docsDir := filepath.Join(info.Path, "docs")
		for _, f := range info.DocsFiles {
			src := filepath.Join(docsDir, f)
			dst := filepath.Join(destDir, f)
			if err := copyWithFrontMatter(src, dst, weight); err != nil {
				cli.Print("  %s %s: %s\n", errorStyle.Render("✗"), f, err)
				continue
			}
			weight += 10
		}

		// Copy README.md as _index.md (if not CLI/core, which use their own index)
		if info.Readme != "" && folder != "" {
			dst := filepath.Join(destDir, "_index.md")
			if err := copyWithFrontMatter(info.Readme, dst, 1); err != nil {
				cli.Print("  %s README: %s\n", errorStyle.Render("✗"), err)
			}
		}

		// Copy KB/ files to kb/{suffix}/
		if len(info.KBFiles) > 0 {
			// Extract suffix: go-mlx → mlx, go-i18n → i18n
			suffix := strings.TrimPrefix(info.Name, "go-")
			kbDestDir := filepath.Join(outputDir, "kb", suffix)
			kbDir := filepath.Join(info.Path, "KB")
			kbWeight := 10
			for _, f := range info.KBFiles {
				src := filepath.Join(kbDir, f)
				dst := filepath.Join(kbDestDir, f)
				if err := copyWithFrontMatter(src, dst, kbWeight); err != nil {
					cli.Print("  %s KB/%s: %s\n", errorStyle.Render("✗"), f, err)
					continue
				}
				kbWeight += 10
			}
		}

		cli.Print("  %s %s\n", successStyle.Render("✓"), info.Name)
		synced++
	}

	cli.Print("\n  Synced %d repos to Hugo content\n", synced)
	return nil
}

// copyWithFrontMatter copies a markdown file, injecting front matter if missing.
func copyWithFrontMatter(src, dst string, weight int) error {
	if err := io.Local.EnsureDir(filepath.Dir(dst)); err != nil {
		return err
	}
	content, err := io.Local.Read(src)
	if err != nil {
		return err
	}
	title := titleFromFilename(src)
	result := injectFrontMatter([]byte(content), title, weight)
	return io.Local.Write(dst, string(result))
}
```

**Add imports** at the top of the file:

```go
import (
	"bytes"
	"fmt"
	"path/filepath"
	"strings"

	"forge.lthn.ai/core/go/pkg/cli"
	"forge.lthn.ai/core/go/pkg/i18n"
	"forge.lthn.ai/core/go/pkg/io"
	"forge.lthn.ai/core/go/pkg/repos"
)
```

**Verify:** `cd /Users/snider/Code/host-uk/cli && GOWORK=off go build ./cmd/docs/...`

**Commit:**

```bash
git add cmd/docs/cmd_sync.go
git commit -m "feat(docs): add --target hugo sync mode for core.help"
```
---

## Task 4: Test the full pipeline

**No code changes.** Run the pipeline end-to-end.

**Step 1:** Sync docs to Hugo:

```bash
cd /Users/snider/Code/host-uk
core docs sync --target hugo --dry-run
```

Verify all 39 repos appear with correct section mappings.

**Step 2:** Run the actual sync:

```bash
core docs sync --target hugo
```

**Step 3:** Build and preview:

```bash
cd /Users/snider/Code/host-uk/docs-site
hugo server
```

Open `localhost:1313` and verify:

- Landing page renders with section links
- Getting Started section has the CLI guides
- CLI Reference section has the command docs
- Go Packages section has 18 packages with architecture/development/history
- PHP Packages section has the PHP module docs
- Knowledge Base has the MLX and i18n wiki pages
- Navigation works, search works

**Step 4:** Fix any issues found during preview.

**Commit docs-site content:**

```bash
cd /Users/snider/Code/host-uk/docs-site
git add content/
git commit -m "feat: sync initial content from 39 repos"
```

---

## Task 5: BunnyCDN deployment config

**Files:**

- Modify: `/Users/snider/Code/host-uk/docs-site/hugo.toml`

Add a deployment target:

```toml
[deployment]
[[deployment.targets]]
name = "production"
URL = "s3://core-help?endpoint=storage.bunnycdn.com&region=auto"
```

Add a `Taskfile.yml` for convenience:

**Create:** `/Users/snider/Code/host-uk/docs-site/Taskfile.yml`

```yaml
version: '3'

tasks:
  dev:
    desc: Start Hugo dev server
    cmds:
      - hugo server --buildDrafts

  build:
    desc: Build static site
    cmds:
      - hugo --minify

  sync:
    desc: Sync docs from all repos
    dir: ..
    cmds:
      - core docs sync --target hugo

  deploy:
    desc: Build and deploy to BunnyCDN
    cmds:
      - task: sync
      - task: build
      - hugo deploy --target production

  clean:
    desc: Remove generated content (keeps _index.md files)
    cmds:
      - find content -name "*.md" ! -name "_index.md" -delete
```

**Verify:** `task dev` starts the site.

**Commit:**

```bash
git add hugo.toml Taskfile.yml
git commit -m "feat: add BunnyCDN deployment config and Taskfile"
```

---
## Dependency Sequencing

```
Task 1 (Hugo scaffold)  — independent, do first
Task 2 (scan KB/)       — independent, can parallel with Task 1
Task 3 (--target hugo)  — depends on Task 2
Task 4 (test pipeline)  — depends on Tasks 1 + 3
Task 5 (deploy config)  — depends on Task 1
```

## Verification

After all tasks:

1. `core docs sync --target hugo` populates `docs-site/content/` from all repos
2. `cd docs-site && hugo server` renders the full site
3. Navigation has 6 sections: Getting Started, CLI, Go, MCP, PHP, KB
4. All existing markdown renders correctly with auto-injected front matter
5. `hugo build` produces `public/` with no errors
70
go.mod
70
go.mod
|
|
@ -2,30 +2,17 @@ module forge.lthn.ai/core/go
|
|||
|
||||
go 1.25.5
|
||||
|
||||
require forge.lthn.ai/core/go-crypt main
|
||||
|
||||
require (
|
||||
code.gitea.io/sdk/gitea v0.23.2
|
||||
codeberg.org/mvdkleijn/forgejo-sdk/forgejo/v2 v2.2.0
|
||||
github.com/ProtonMail/go-crypto v1.3.0
|
||||
github.com/Snider/Borg v0.2.0
|
||||
github.com/aws/aws-sdk-go-v2 v1.41.1
|
||||
github.com/aws/aws-sdk-go-v2/service/s3 v1.96.0
|
||||
github.com/getkin/kin-openapi v0.133.0
|
||||
github.com/gorilla/websocket v1.5.3
|
||||
github.com/kluctl/go-embed-python v0.0.0-3.13.1-20241219-1
|
||||
github.com/leaanthony/debme v1.2.1
|
||||
github.com/leaanthony/gosod v1.0.4
|
||||
github.com/marcboeker/go-duckdb v1.8.5
|
||||
github.com/modelcontextprotocol/go-sdk v1.3.0
|
||||
github.com/oasdiff/oasdiff v1.11.10
|
||||
github.com/ollama/ollama v0.16.1
|
||||
github.com/parquet-go/parquet-go v0.27.0
|
||||
github.com/qdrant/go-client v1.16.2
|
||||
github.com/spf13/cobra v1.10.2
|
||||
github.com/spf13/viper v1.21.0
|
||||
github.com/stretchr/testify v1.11.1
|
||||
github.com/unpoller/unifi/v5 v5.18.0
|
||||
golang.org/x/crypto v0.48.0
|
||||
golang.org/x/net v0.50.0
|
||||
golang.org/x/term v0.40.0
|
||||
golang.org/x/text v0.34.0
|
||||
google.golang.org/grpc v1.79.1
|
||||
|
|
@ -35,11 +22,7 @@ require (
|
|||
)
|
||||
|
||||
require (
|
||||
cloud.google.com/go v0.123.0 // indirect
|
||||
github.com/42wim/httpsig v1.2.3 // indirect
|
||||
github.com/TwiN/go-color v1.4.1 // indirect
|
||||
github.com/andybalholm/brotli v1.2.0 // indirect
|
||||
github.com/apache/arrow-go/v18 v18.5.1 // indirect
|
||||
github.com/ProtonMail/go-crypto v1.3.0 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.4 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.17 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.17 // indirect
|
||||
|
|
@@ -49,73 +32,34 @@ require (
	github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.17 // indirect
	github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.17 // indirect
	github.com/aws/smithy-go v1.24.0 // indirect
	github.com/bahlo/generic-list-go v0.2.0 // indirect
	github.com/brianvoe/gofakeit/v6 v6.28.0 // indirect
	github.com/buger/jsonparser v1.1.1 // indirect
	github.com/cloudflare/circl v1.6.3 // indirect
	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
	github.com/davidmz/go-pageant v1.0.2 // indirect
	github.com/dustin/go-humanize v1.0.1 // indirect
	github.com/fsnotify/fsnotify v1.9.0 // indirect
	github.com/go-fed/httpsig v1.1.0 // indirect
	github.com/go-openapi/jsonpointer v0.22.4 // indirect
	github.com/go-openapi/swag/jsonname v0.25.4 // indirect
	github.com/go-viper/mapstructure/v2 v2.5.0 // indirect
	github.com/goccy/go-json v0.10.5 // indirect
	github.com/gofrs/flock v0.12.1 // indirect
	github.com/google/flatbuffers v25.12.19+incompatible // indirect
	github.com/google/jsonschema-go v0.4.2 // indirect
	github.com/google/uuid v1.6.0 // indirect
	github.com/hashicorp/go-version v1.8.0 // indirect
	github.com/inconshreveable/mousetrap v1.1.0 // indirect
	github.com/josharian/intern v1.0.0 // indirect
	github.com/klauspost/compress v1.18.4 // indirect
	github.com/klauspost/cpuid/v2 v2.3.0 // indirect
	github.com/mailru/easyjson v0.9.1 // indirect
	github.com/mattn/go-isatty v0.0.20 // indirect
	github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 // indirect
	github.com/ncruces/go-strftime v1.0.0 // indirect
	github.com/oasdiff/yaml v0.0.0-20250309154309-f31be36b4037 // indirect
	github.com/oasdiff/yaml3 v0.0.0-20250309153720-d2182401db90 // indirect
	github.com/parquet-go/bitpack v1.0.0 // indirect
	github.com/parquet-go/jsonlite v1.4.0 // indirect
	github.com/pelletier/go-toml/v2 v2.2.4 // indirect
	github.com/perimeterx/marshmallow v1.1.5 // indirect
	github.com/pierrec/lz4/v4 v4.1.25 // indirect
	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
	github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
	github.com/rogpeppe/go-internal v1.14.1 // indirect
	github.com/sagikazarmark/locafero v0.12.0 // indirect
	github.com/sirupsen/logrus v1.9.3 // indirect
	github.com/spf13/afero v1.15.0 // indirect
	github.com/spf13/cast v1.10.0 // indirect
	github.com/spf13/pflag v1.0.10 // indirect
	github.com/subosito/gotenv v1.6.0 // indirect
	github.com/tidwall/gjson v1.18.0 // indirect
	github.com/tidwall/match v1.2.0 // indirect
	github.com/tidwall/pretty v1.2.1 // indirect
	github.com/tidwall/sjson v1.2.5 // indirect
	github.com/twpayne/go-geom v1.6.1 // indirect
	github.com/ugorji/go/codec v1.3.1 // indirect
	github.com/ulikunitz/xz v0.5.15 // indirect
	github.com/wI2L/jsondiff v0.7.0 // indirect
	github.com/wk8/go-ordered-map/v2 v2.1.8 // indirect
	github.com/woodsbury/decimal128 v1.4.0 // indirect
	github.com/yargevad/filepathx v1.0.0 // indirect
	github.com/yosida95/uritemplate/v3 v3.0.2 // indirect
	github.com/zeebo/xxh3 v1.1.0 // indirect
	go.yaml.in/yaml/v3 v3.0.4 // indirect
	golang.org/x/exp v0.0.0-20260212183809-81e46e3db34a // indirect
	golang.org/x/mod v0.33.0 // indirect
	golang.org/x/oauth2 v0.35.0 // indirect
	golang.org/x/sync v0.19.0 // indirect
	golang.org/x/net v0.50.0 // indirect
	golang.org/x/sys v0.41.0 // indirect
	golang.org/x/telemetry v0.0.0-20260213145524-e0ab670178e1 // indirect
	golang.org/x/tools v0.42.0 // indirect
	golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da // indirect
	gonum.org/v1/gonum v0.17.0 // indirect
	google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 // indirect
	gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
	modernc.org/libc v1.67.7 // indirect
	modernc.org/mathutil v1.7.1 // indirect
	modernc.org/memory v1.11.0 // indirect
)

replace forge.lthn.ai/core/go-crypt => ../go-crypt

177 go.sum
@@ -1,29 +1,7 @@
cloud.google.com/go v0.123.0 h1:2NAUJwPR47q+E35uaJeYoNhuNEM9kM8SjgRgdeOJUSE=
cloud.google.com/go v0.123.0/go.mod h1:xBoMV08QcqUGuPW65Qfm1o9Y4zKZBpGS+7bImXLTAZU=
code.gitea.io/sdk/gitea v0.23.2 h1:iJB1FDmLegwfwjX8gotBDHdPSbk/ZR8V9VmEJaVsJYg=
code.gitea.io/sdk/gitea v0.23.2/go.mod h1:yyF5+GhljqvA30sRDreoyHILruNiy4ASufugzYg0VHM=
codeberg.org/mvdkleijn/forgejo-sdk/forgejo/v2 v2.2.0 h1:HTCWpzyWQOHDWt3LzI6/d2jvUDsw/vgGRWm/8BTvcqI=
codeberg.org/mvdkleijn/forgejo-sdk/forgejo/v2 v2.2.0/go.mod h1:ZglEEDj+qkxYUb+SQIeqGtFxQrbaMYqIOgahNKb7uxs=
github.com/42wim/httpsig v1.2.3 h1:xb0YyWhkYj57SPtfSttIobJUPJZB9as1nsfo7KWVcEs=
github.com/42wim/httpsig v1.2.3/go.mod h1:nZq9OlYKDrUBhptd77IHx4/sZZD+IxTBADvAPI9G/EM=
github.com/DATA-DOG/go-sqlmock v1.5.2 h1:OcvFkGmslmlZibjAjaHm3L//6LiuBgolP7OputlJIzU=
github.com/DATA-DOG/go-sqlmock v1.5.2/go.mod h1:88MAG/4G7SMwSE3CeA0ZKzrT5CiOU3OJ+JlNzwDqpNU=
github.com/ProtonMail/go-crypto v1.3.0 h1:ILq8+Sf5If5DCpHQp4PbZdS1J7HDFRXz/+xKBiRGFrw=
github.com/ProtonMail/go-crypto v1.3.0/go.mod h1:9whxjD8Rbs29b4XWbB8irEcE8KHMqaR2e7GWU1R+/PE=
github.com/Snider/Borg v0.2.0 h1:iCyDhY4WTXi39+FexRwXbn2YpZ2U9FUXVXDZk9xRCXQ=
github.com/Snider/Borg v0.2.0/go.mod h1:TqlKnfRo9okioHbgrZPfWjQsztBV0Nfskz4Om1/vdMY=
github.com/TwiN/go-color v1.4.1 h1:mqG0P/KBgHKVqmtL5ye7K0/Gr4l6hTksPgTgMk3mUzc=
github.com/TwiN/go-color v1.4.1/go.mod h1:WcPf/jtiW95WBIsEeY1Lc/b8aaWoiqQpu5cf8WFxu+s=
github.com/alecthomas/assert/v2 v2.10.0 h1:jjRCHsj6hBJhkmhznrCzoNpbA3zqy0fYiUcYZP/GkPY=
github.com/alecthomas/assert/v2 v2.10.0/go.mod h1:Bze95FyfUr7x34QZrjL+XP+0qgp/zg8yS+TtBj1WA3k=
github.com/alecthomas/repr v0.4.0 h1:GhI2A8MACjfegCPVq9f1FLvIBS+DrQ2KQBFZP1iFzXc=
github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4=
github.com/andybalholm/brotli v1.2.0 h1:ukwgCxwYrmACq68yiUqwIWnGY0cTPox/M94sVwToPjQ=
github.com/andybalholm/brotli v1.2.0/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=
github.com/apache/arrow-go/v18 v18.5.1 h1:yaQ6zxMGgf9YCYw4/oaeOU3AULySDlAYDOcnr4LdHdI=
github.com/apache/arrow-go/v18 v18.5.1/go.mod h1:OCCJsmdq8AsRm8FkBSSmYTwL/s4zHW9CqxeBxEytkNE=
github.com/apache/thrift v0.22.0 h1:r7mTJdj51TMDe6RtcmNdQxgn9XcyfGDOzegMDRg47uc=
github.com/apache/thrift v0.22.0/go.mod h1:1e7J/O1Ae6ZQMTYdy9xa3w9k+XHWPfRvdPyJeynQ+/g=
github.com/aws/aws-sdk-go-v2 v1.41.1 h1:ABlyEARCDLN034NhxlRUSZr4l71mh+T5KAeGh6cerhU=
github.com/aws/aws-sdk-go-v2 v1.41.1/go.mod h1:MayyLB8y+buD9hZqkCW3kX1AKq07Y5pXxtgB+rRFhz0=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.4 h1:489krEF9xIGkOaaX3CE/Be2uWjiXrkCH6gUX+bZA/BU=
@@ -46,144 +24,54 @@ github.com/aws/aws-sdk-go-v2/service/s3 v1.96.0 h1:oeu8VPlOre74lBA/PMhxa5vewaMIM
github.com/aws/aws-sdk-go-v2/service/s3 v1.96.0/go.mod h1:5jggDlZ2CLQhwJBiZJb4vfk4f0GxWdEDruWKEJ1xOdo=
github.com/aws/smithy-go v1.24.0 h1:LpilSUItNPFr1eY85RYgTIg5eIEPtvFbskaFcmmIUnk=
github.com/aws/smithy-go v1.24.0/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/bahlo/generic-list-go v0.2.0 h1:5sz/EEAK+ls5wF+NeqDpk5+iNdMDXrh3z3nPnH1Wvgk=
github.com/bahlo/generic-list-go v0.2.0/go.mod h1:2KvAjgMlE5NNynlg/5iLrrCCZ2+5xWbdbCW3pNTGyYg=
github.com/brianvoe/gofakeit/v6 v6.28.0 h1:Xib46XXuQfmlLS2EXRuJpqcw8St6qSZz75OUo0tgAW4=
github.com/brianvoe/gofakeit/v6 v6.28.0/go.mod h1:Xj58BMSnFqcn/fAQeSK+/PLtC5kSb7FJIq4JyGa8vEs=
github.com/buger/jsonparser v1.1.1 h1:2PnMjfWD7wBILjqQbt530v576A/cAbQvEW9gGIpYMUs=
github.com/buger/jsonparser v1.1.1/go.mod h1:6RYKKt7H4d4+iWqouImQ9R2FZql3VbhNgx27UK13J/0=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cloudflare/circl v1.6.3 h1:9GPOhQGF9MCYUeXyMYlqTR6a5gTrgR/fBLXvUgtVcg8=
github.com/cloudflare/circl v1.6.3/go.mod h1:2eXP6Qfat4O/Yhh8BznvKnJ+uzEoTQ6jVKJRn81BiS4=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davidmz/go-pageant v1.0.2 h1:bPblRCh5jGU+Uptpz6LgMZGD5hJoOt7otgT454WvHn0=
github.com/davidmz/go-pageant v1.0.2/go.mod h1:P2EDDnMqIwG5Rrp05dTRITj9z2zpGcD9efWSkTNKLIE=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/getkin/kin-openapi v0.133.0 h1:pJdmNohVIJ97r4AUFtEXRXwESr8b0bD721u/Tz6k8PQ=
github.com/getkin/kin-openapi v0.133.0/go.mod h1:boAciF6cXk5FhPqe/NQeBTeenbjqU4LhWBf09ILVvWE=
github.com/go-fed/httpsig v1.1.0 h1:9M+hb0jkEICD8/cAiNqEB66R87tTINszBRTjwjQzWcI=
github.com/go-fed/httpsig v1.1.0/go.mod h1:RCMrTZvN1bJYtofsG4rd5NaO5obxQ5xBkdiS7xsT7bM=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-openapi/jsonpointer v0.22.4 h1:dZtK82WlNpVLDW2jlA1YCiVJFVqkED1MegOUy9kR5T4=
github.com/go-openapi/jsonpointer v0.22.4/go.mod h1:elX9+UgznpFhgBuaMQ7iu4lvvX1nvNsesQ3oxmYTw80=
github.com/go-openapi/swag/jsonname v0.25.4 h1:bZH0+MsS03MbnwBXYhuTttMOqk+5KcQ9869Vye1bNHI=
github.com/go-openapi/swag/jsonname v0.25.4/go.mod h1:GPVEk9CWVhNvWhZgrnvRA6utbAltopbKwDu8mXNUMag=
github.com/go-openapi/testify/v2 v2.0.2 h1:X999g3jeLcoY8qctY/c/Z8iBHTbwLz7R2WXd6Ub6wls=
github.com/go-openapi/testify/v2 v2.0.2/go.mod h1:HCPmvFFnheKK2BuwSA0TbbdxJ3I16pjwMkYkP4Ywn54=
github.com/go-test/deep v1.0.8 h1:TDsG77qcSprGbC6vTN8OuXp5g+J+b5Pcguhf7Zt61VM=
github.com/go-test/deep v1.0.8/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE=
github.com/go-viper/mapstructure/v2 v2.5.0 h1:vM5IJoUAy3d7zRSVtIwQgBj7BiWtMPfmPEgAXnvj1Ro=
github.com/go-viper/mapstructure/v2 v2.5.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E=
github.com/gofrs/flock v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/golang/snappy v1.0.0 h1:Oy607GVXHs7RtbggtPBnr2RmDArIsAefDwvrdWvRhGs=
github.com/golang/snappy v1.0.0/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/flatbuffers v25.12.19+incompatible h1:haMV2JRRJCe1998HeW/p0X9UaMTK6SDo0ffLn2+DbLs=
github.com/google/flatbuffers v25.12.19+incompatible/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/jsonschema-go v0.4.2 h1:tmrUohrwoLZZS/P3x7ex0WAVknEkBZM46iALbcqoRA8=
github.com/google/jsonschema-go v0.4.2/go.mod h1:r5quNTdLOYEz95Ru18zA0ydNbBuYoo9tgaYcxEYhJVE=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/hashicorp/go-version v1.8.0 h1:KAkNb1HAiZd1ukkxDFGmokVZe1Xy9HG6NUp+bPle2i4=
github.com/hashicorp/go-version v1.8.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM=
github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/klauspost/asmfmt v1.3.2 h1:4Ri7ox3EwapiOjCki+hw14RyKk201CN4rzyCJRFLpK4=
github.com/klauspost/asmfmt v1.3.2/go.mod h1:AG8TuvYojzulgDAMCnYn50l/5QV3Bs/tp6j0HLHbNSE=
github.com/klauspost/compress v1.18.4 h1:RPhnKRAQ4Fh8zU2FY/6ZFDwTVTxgJ/EMydqSTzE9a2c=
github.com/klauspost/compress v1.18.4/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/kluctl/go-embed-python v0.0.0-3.13.1-20241219-1 h1:x1cSEj4Ug5mpuZgUHLvUmlc5r//KHFn6iYiRSrRcVy4=
github.com/kluctl/go-embed-python v0.0.0-3.13.1-20241219-1/go.mod h1:3ebNU9QBrNpUO+Hj6bHaGpkh5pymDHQ+wwVPHTE4mCE=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/leaanthony/debme v1.2.1 h1:9Tgwf+kjcrbMQ4WnPcEIUcQuIZYqdWftzZkBr+i/oOc=
github.com/leaanthony/debme v1.2.1/go.mod h1:3V+sCm5tYAgQymvSOfYQ5Xx2JCr+OXiD9Jkw3otUjiA=
github.com/leaanthony/gosod v1.0.4 h1:YLAbVyd591MRffDgxUOU1NwLhT9T1/YiwjKZpkNFeaI=
github.com/leaanthony/gosod v1.0.4/go.mod h1:GKuIL0zzPj3O1SdWQOdgURSuhkF+Urizzxh26t9f1cw=
github.com/leaanthony/slicer v1.5.0/go.mod h1:FwrApmf8gOrpzEWM2J/9Lh79tyq8KTX5AzRtwV7m4AY=
github.com/leaanthony/slicer v1.6.0 h1:1RFP5uiPJvT93TAHi+ipd3NACobkW53yUiBqZheE/Js=
github.com/leaanthony/slicer v1.6.0/go.mod h1:o/Iz29g7LN0GqH3aMjWAe90381nyZlDNquK+mtH2Fj8=
github.com/mailru/easyjson v0.9.1 h1:LbtsOm5WAswyWbvTEOqhypdPeZzHavpZx96/n553mR8=
github.com/mailru/easyjson v0.9.1/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU=
github.com/marcboeker/go-duckdb v1.8.5 h1:tkYp+TANippy0DaIOP5OEfBEwbUINqiFqgwMQ44jME0=
github.com/marcboeker/go-duckdb v1.8.5/go.mod h1:6mK7+WQE4P4u5AFLvVBmhFxY5fvhymFptghgJX6B+/8=
github.com/matryer/is v1.4.0/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU=
github.com/matryer/is v1.4.1 h1:55ehd8zaGABKLXQUe2awZ99BD/PTc2ls+KV/dXphgEQ=
github.com/matryer/is v1.4.1/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/minio/asm2plan9s v0.0.0-20200509001527-cdd76441f9d8 h1:AMFGa4R4MiIpspGNG7Z948v4n35fFGB3RR3G/ry4FWs=
github.com/minio/asm2plan9s v0.0.0-20200509001527-cdd76441f9d8/go.mod h1:mC1jAcsrzbxHt8iiaC+zU4b1ylILSosueou12R++wfY=
github.com/minio/c2goasm v0.0.0-20190812172519-36a3d3bbc4f3 h1:+n/aFZefKZp7spd8DFdX7uMikMLXX4oubIzJF4kv/wI=
github.com/minio/c2goasm v0.0.0-20190812172519-36a3d3bbc4f3/go.mod h1:RagcQ7I8IeTMnF8JTXieKnO4Z6JCsikNEzj0DwauVzE=
github.com/modelcontextprotocol/go-sdk v1.3.0 h1:gMfZkv3DzQF5q/DcQePo5rahEY+sguyPfXDfNBcT0Zs=
github.com/modelcontextprotocol/go-sdk v1.3.0/go.mod h1:AnQ//Qc6+4nIyyrB4cxBU7UW9VibK4iOZBeyP/rF1IE=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 h1:RWengNIwukTxcDr9M+97sNutRR1RKhG96O6jWumTTnw=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/oasdiff/oasdiff v1.11.10 h1:4I9VrktUoHmwydkJqVOC7Bd6BXKu9dc4UUP3PIu1VjM=
github.com/oasdiff/oasdiff v1.11.10/go.mod h1:GXARzmqBKN8lZHsTQD35ZM41ePbu6JdAZza4sRMeEKg=
github.com/oasdiff/yaml v0.0.0-20250309154309-f31be36b4037 h1:G7ERwszslrBzRxj//JalHPu/3yz+De2J+4aLtSRlHiY=
github.com/oasdiff/yaml v0.0.0-20250309154309-f31be36b4037/go.mod h1:2bpvgLBZEtENV5scfDFEtB/5+1M4hkQhDQrccEJ/qGw=
github.com/oasdiff/yaml3 v0.0.0-20250309153720-d2182401db90 h1:bQx3WeLcUWy+RletIKwUIt4x3t8n2SxavmoclizMb8c=
github.com/oasdiff/yaml3 v0.0.0-20250309153720-d2182401db90/go.mod h1:y5+oSEHCPT/DGrS++Wc/479ERge0zTFxaF8PbGKcg2o=
github.com/ollama/ollama v0.16.1 h1:DIxnLdS0om3hb7HheJqj6+ZnPCCMWmy/vyUxiQgRYoI=
github.com/ollama/ollama v0.16.1/go.mod h1:FEk95NbAJJZk+t7cLh+bPGTul72j1O3PLLlYNV3FVZ0=
github.com/parquet-go/bitpack v1.0.0 h1:AUqzlKzPPXf2bCdjfj4sTeacrUwsT7NlcYDMUQxPcQA=
github.com/parquet-go/bitpack v1.0.0/go.mod h1:XnVk9TH+O40eOOmvpAVZ7K2ocQFrQwysLMnc6M/8lgs=
github.com/parquet-go/jsonlite v1.4.0 h1:RTG7prqfO0HD5egejU8MUDBN8oToMj55cgSV1I0zNW4=
github.com/parquet-go/jsonlite v1.4.0/go.mod h1:nDjpkpL4EOtqs6NQugUsi0Rleq9sW/OtC1NnZEnxzF0=
github.com/parquet-go/parquet-go v0.27.0 h1:vHWK2xaHbj+v1DYps03yDRpEsdtOeKbhiXUaixoPb3g=
github.com/parquet-go/parquet-go v0.27.0/go.mod h1:navtkAYr2LGoJVp141oXPlO/sxLvaOe3la2JEoD8+rg=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s=
github.com/perimeterx/marshmallow v1.1.5/go.mod h1:dsXbUu8CRzfYP5a87xpp0xq9S3u0Vchtcl8we9tYaXw=
github.com/pierrec/lz4/v4 v4.1.25 h1:kocOqRffaIbU5djlIBr7Wh+cx82C0vtFb0fOurZHqD0=
github.com/pierrec/lz4/v4 v4.1.25/go.mod h1:EoQMVJgeeEOMsCqCzqFm2O0cJvljX2nGZjcRIPL34O4=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/qdrant/go-client v1.16.2 h1:UUMJJfvXTByhwhH1DwWdbkhZ2cTdvSqVkXSIfBrVWSg=
github.com/qdrant/go-client v1.16.2/go.mod h1:I+EL3h4HRoRTeHtbfOd/4kDXwCukZfkd41j/9wryGkw=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
@@ -191,8 +79,6 @@ github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sagikazarmark/locafero v0.12.0 h1:/NQhBAkUb4+fH1jivKHWusDYFjMOOKU88eegjfxfHb4=
github.com/sagikazarmark/locafero v0.12.0/go.mod h1:sZh36u/YSZ918v0Io+U9ogLYQJ9tLLBmM4eneO6WwsI=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I=
github.com/spf13/afero v1.15.0/go.mod h1:NC2ByUVxtQs4b3sIUphxK0NioZnmxgyCrfzeuq8lxMg=
github.com/spf13/cast v1.10.0 h1:h2x0u2shc1QuLHfxi+cTJvs30+ZAHOGRic8uyGTDWxY=
@@ -204,47 +90,10 @@ github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.21.0 h1:x5S+0EU27Lbphp4UKm1C+1oQO+rKx36vfCoaVebLFSU=
github.com/spf13/viper v1.21.0/go.mod h1:P0lhsswPGWD/1lZJ9ny3fYnVqxiegrlNrEmgLjbTCAY=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY=
github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/match v1.2.0 h1:0pt8FlkOwjN2fPt4bIl4BoNxb98gGHN2ObFEDkrfZnM=
github.com/tidwall/match v1.2.0/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4=
github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY=
github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28=
github.com/twpayne/go-geom v1.6.1 h1:iLE+Opv0Ihm/ABIcvQFGIiFBXd76oBIar9drAwHFhR4=
github.com/twpayne/go-geom v1.6.1/go.mod h1:Kr+Nly6BswFsKM5sd31YaoWS5PeDDH2NftJTK7Gd028=
github.com/ugorji/go/codec v1.3.1 h1:waO7eEiFDwidsBN6agj1vJQ4AG7lh2yqXyOXqhgQuyY=
github.com/ugorji/go/codec v1.3.1/go.mod h1:pRBVtBSKl77K30Bv8R2P+cLSGaTtex6fsA2Wjqmfxj4=
github.com/ulikunitz/xz v0.5.15 h1:9DNdB5s+SgV3bQ2ApL10xRc35ck0DuIX/isZvIk+ubY=
github.com/ulikunitz/xz v0.5.15/go.mod h1:nbz6k7qbPmH4IRqmfOplQw/tblSgqTqBwxkY0oWt/14=
github.com/unpoller/unifi/v5 v5.18.0 h1:i9xecLeI9CU6m+5++TIm+zhdGS9f8KCUz8PuuzO7sSQ=
github.com/unpoller/unifi/v5 v5.18.0/go.mod h1:vSIXIclPG9dpKxUp+pavfgENHWaTZXvDg7F036R1YCo=
github.com/wI2L/jsondiff v0.7.0 h1:1lH1G37GhBPqCfp/lrs91rf/2j3DktX6qYAKZkLuCQQ=
github.com/wI2L/jsondiff v0.7.0/go.mod h1:KAEIojdQq66oJiHhDyQez2x+sRit0vIzC9KeK0yizxM=
github.com/wk8/go-ordered-map/v2 v2.1.8 h1:5h/BUHu93oj4gIdvHHHGsScSTMijfx5PeYkE/fJgbpc=
github.com/wk8/go-ordered-map/v2 v2.1.8/go.mod h1:5nJHM5DyteebpVlHnWMV0rPz6Zp7+xBAnxjb1X5vnTw=
github.com/woodsbury/decimal128 v1.4.0 h1:xJATj7lLu4f2oObouMt2tgGiElE5gO6mSWUjQsBgUlc=
github.com/woodsbury/decimal128 v1.4.0/go.mod h1:BP46FUrVjVhdTbKT+XuQh2xfQaGki9LMIRJSFuh6THU=
github.com/xyproto/randomstring v1.0.5 h1:YtlWPoRdgMu3NZtP45drfy1GKoojuR7hmRcnhZqKjWU=
github.com/xyproto/randomstring v1.0.5/go.mod h1:rgmS5DeNXLivK7YprL0pY+lTuhNQW3iGxZ18UQApw/E=
github.com/yargevad/filepathx v1.0.0 h1:SYcT+N3tYGi+NvazubCNlvgIPbzAk7i7y2dwg3I5FYc=
github.com/yargevad/filepathx v1.0.0/go.mod h1:BprfX/gpYNJHJfc35GjRRpVcwWXS89gGulUIU5tK3tA=
github.com/yosida95/uritemplate/v3 v3.0.2 h1:Ed3Oyj9yrmi9087+NczuL5BwkIc4wvTb5zIM+UJPGz4=
github.com/yosida95/uritemplate/v3 v3.0.2/go.mod h1:ILOh0sOhIJR3+L/8afwt/kE++YT040gmv5BQTMR2HP4=
github.com/zeebo/assert v1.3.0 h1:g7C04CbJuIDKNPFHmsk4hwZDO5O+kntRxzaUoNXj+IQ=
github.com/zeebo/assert v1.3.0/go.mod h1:Pq9JiuJQpG8JLJdtkwrJESF0Foym2/D9XMU5ciN/wJ0=
github.com/zeebo/xxh3 v1.1.0 h1:s7DLGDK45Dyfg7++yxI0khrfwq9661w9EN78eP/UZVs=
github.com/zeebo/xxh3 v1.1.0/go.mod h1:IisAie1LELR4xhVinxWS5+zf1lA4p0MW4T+w+W07F5s=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
@@ -259,44 +108,25 @@ go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6
go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210513164829-c07d793c2f9a/go.mod h1:P+XmwS30IXTQdn5tA2iutPOUgjI07+tq3H3K9MVA1s8=
golang.org/x/crypto v0.48.0 h1:/VRzVqiRSggnhY7gNRxPauEQ5Drw9haKdM0jqfcCFts=
golang.org/x/crypto v0.48.0/go.mod h1:r0kV5h3qnFPlQnBSrULhlsRfryS2pmewsg+XfMgkVos=
golang.org/x/exp v0.0.0-20260212183809-81e46e3db34a h1:ovFr6Z0MNmU7nH8VaX5xqw+05ST2uO1exVfZPVqRC5o=
golang.org/x/exp v0.0.0-20260212183809-81e46e3db34a/go.mod h1:K79w1Vqn7PoiZn+TkNpx3BUWUQksGO3JcVX6qIjytmA=
golang.org/x/mod v0.33.0 h1:tHFzIWbBifEmbwtGz65eaWyGiGZatSrT9prnU8DbVL8=
golang.org/x/mod v0.33.0/go.mod h1:swjeQEj+6r7fODbD2cqrnje9PnziFuw4bmLbBZFrQ5w=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.50.0 h1:ucWh9eiCGyDR3vtzso0WMQinm2Dnt8cFMuQa9K33J60=
golang.org/x/net v0.50.0/go.mod h1:UgoSli3F/pBgdJBHCTc+tp3gmrU4XswgGRgtnwWTfyM=
golang.org/x/oauth2 v0.35.0 h1:Mv2mzuHuZuY2+bkyWXIHMfhNdJAdwW3FuWeCPYN5GVQ=
golang.org/x/oauth2 v0.35.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.41.0 h1:Ivj+2Cp/ylzLiEU89QhWblYnOE9zerudt9Ftecq2C6k=
golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/telemetry v0.0.0-20260213145524-e0ab670178e1 h1:QNaHp8YvpPswfDNxlCmJyeesxbGOgaKf41iT9/QrErY=
golang.org/x/telemetry v0.0.0-20260213145524-e0ab670178e1/go.mod h1:NuITXsA9cTiqnXtVk+/wrBT2Ja4X5hsfGOYRJ6kgYjs=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.40.0 h1:36e4zGLqU4yhjlmxEaagx2KuYbJq3EwY8K943ZsHcvg=
golang.org/x/term v0.40.0/go.mod h1:w2P8uVp06p2iyKKuvXIm7N/y0UCRt3UfJTfZ7oOpglM=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.34.0 h1:oL/Qq0Kdaqxa1KbNeMKwQq0reLCCaFtqu2eNuSeNHbk=
golang.org/x/text v0.34.0/go.mod h1:homfLqTYRFyVYemLBFl5GgL/DWEiH5wcsQ5gSh1yziA=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.42.0 h1:uNgphsn75Tdz5Ji2q36v/nsFSfR/9BRFvqhGBaJGd5k=
golang.org/x/tools v0.42.0/go.mod h1:Ma6lCIwGZvHK6XtgbswSoWroEkhugApmsXyrUmBhfr0=
golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da h1:noIWHXmPHxILtqtCOPIhSt0ABwskkZKjD3bXGnZGpNY=
golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da/go.mod h1:NDW/Ps6MPRej6fsCIbMTohpP40sJ/P/vI1MoTEGwX90=
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 h1:gRkg/vSppuSQoDjxyiGfN4Upv/h/DQmIR10ZU8dh4Ww=
@@ -308,7 +138,6 @@ google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis=
|
|||
|
|
@@ -1,87 +0,0 @@
package agentci

import (
	"context"
	"strings"

	"forge.lthn.ai/core/go/pkg/jobrunner"
)

// RunMode determines the execution strategy for a dispatched task.
type RunMode string

const (
	ModeStandard RunMode = "standard"
	ModeDual     RunMode = "dual" // The Clotho Protocol — dual-run verification
)

// Spinner is the Clotho orchestrator that determines the fate of each task.
type Spinner struct {
	Config ClothoConfig
	Agents map[string]AgentConfig
}

// NewSpinner creates a new Clotho orchestrator.
func NewSpinner(cfg ClothoConfig, agents map[string]AgentConfig) *Spinner {
	return &Spinner{
		Config: cfg,
		Agents: agents,
	}
}

// DeterminePlan decides if a signal requires dual-run verification based on
// the global strategy, agent configuration, and repository criticality.
func (s *Spinner) DeterminePlan(signal *jobrunner.PipelineSignal, agentName string) RunMode {
	if s.Config.Strategy != "clotho-verified" {
		return ModeStandard
	}

	agent, ok := s.Agents[agentName]
	if !ok {
		return ModeStandard
	}
	if agent.DualRun {
		return ModeDual
	}

	// Protect critical repos with dual-run (Axiom 1).
	if signal.RepoName == "core" || strings.Contains(signal.RepoName, "security") {
		return ModeDual
	}

	return ModeStandard
}

// GetVerifierModel returns the model for the secondary "signed" verification run.
func (s *Spinner) GetVerifierModel(agentName string) string {
	agent, ok := s.Agents[agentName]
	if !ok || agent.VerifyModel == "" {
		return "gemini-1.5-pro"
	}
	return agent.VerifyModel
}

// FindByForgejoUser resolves a Forgejo username to the agent config key and config.
// This decouples agent naming (mythological roles) from Forgejo identity.
func (s *Spinner) FindByForgejoUser(forgejoUser string) (string, AgentConfig, bool) {
	if forgejoUser == "" {
		return "", AgentConfig{}, false
	}
	// Direct match on config key first.
	if agent, ok := s.Agents[forgejoUser]; ok {
		return forgejoUser, agent, true
	}
	// Search by ForgejoUser field.
	for name, agent := range s.Agents {
		if agent.ForgejoUser != "" && agent.ForgejoUser == forgejoUser {
			return name, agent, true
		}
	}
	return "", AgentConfig{}, false
}

// Weave compares primary and verifier outputs. Returns true if they converge.
// This is a placeholder for future semantic diff logic.
func (s *Spinner) Weave(ctx context.Context, primaryOutput, signedOutput []byte) (bool, error) {
	return string(primaryOutput) == string(signedOutput), nil
}
@@ -1,144 +0,0 @@
// Package agentci provides configuration, security, and orchestration for AgentCI dispatch targets.
package agentci

import (
	"fmt"

	"forge.lthn.ai/core/go/pkg/config"
)

// AgentConfig represents a single agent machine in the config file.
type AgentConfig struct {
	Host          string   `yaml:"host" mapstructure:"host"`
	QueueDir      string   `yaml:"queue_dir" mapstructure:"queue_dir"`
	ForgejoUser   string   `yaml:"forgejo_user" mapstructure:"forgejo_user"`
	Model         string   `yaml:"model" mapstructure:"model"`                   // primary AI model
	Runner        string   `yaml:"runner" mapstructure:"runner"`                 // runner binary: claude, codex, gemini
	VerifyModel   string   `yaml:"verify_model" mapstructure:"verify_model"`     // secondary model for dual-run
	SecurityLevel string   `yaml:"security_level" mapstructure:"security_level"` // low, high
	Roles         []string `yaml:"roles" mapstructure:"roles"`
	DualRun       bool     `yaml:"dual_run" mapstructure:"dual_run"`
	Active        bool     `yaml:"active" mapstructure:"active"`
}

// ClothoConfig controls the orchestration strategy.
type ClothoConfig struct {
	Strategy            string  `yaml:"strategy" mapstructure:"strategy"`                         // direct, clotho-verified
	ValidationThreshold float64 `yaml:"validation_threshold" mapstructure:"validation_threshold"` // divergence limit (0.0-1.0)
	SigningKeyPath      string  `yaml:"signing_key_path" mapstructure:"signing_key_path"`
}

// LoadAgents reads agent targets from config and returns a map of AgentConfig.
// Returns an empty map (not an error) if no agents are configured.
func LoadAgents(cfg *config.Config) (map[string]AgentConfig, error) {
	var agents map[string]AgentConfig
	if err := cfg.Get("agentci.agents", &agents); err != nil {
		return map[string]AgentConfig{}, nil
	}

	// Validate and apply defaults.
	for name, ac := range agents {
		if !ac.Active {
			continue
		}
		if ac.Host == "" {
			return nil, fmt.Errorf("agent %q: host is required", name)
		}
		if ac.QueueDir == "" {
			ac.QueueDir = "/home/claude/ai-work/queue"
		}
		if ac.Model == "" {
			ac.Model = "sonnet"
		}
		if ac.Runner == "" {
			ac.Runner = "claude"
		}
		agents[name] = ac
	}

	return agents, nil
}

// LoadActiveAgents returns only active agents.
func LoadActiveAgents(cfg *config.Config) (map[string]AgentConfig, error) {
	all, err := LoadAgents(cfg)
	if err != nil {
		return nil, err
	}
	active := make(map[string]AgentConfig)
	for name, ac := range all {
		if ac.Active {
			active[name] = ac
		}
	}
	return active, nil
}

// LoadClothoConfig loads the Clotho orchestrator settings.
// Returns sensible defaults if no config is present.
func LoadClothoConfig(cfg *config.Config) (ClothoConfig, error) {
	var cc ClothoConfig
	if err := cfg.Get("agentci.clotho", &cc); err != nil {
		return ClothoConfig{
			Strategy:            "direct",
			ValidationThreshold: 0.85,
		}, nil
	}
	if cc.Strategy == "" {
		cc.Strategy = "direct"
	}
	if cc.ValidationThreshold == 0 {
		cc.ValidationThreshold = 0.85
	}
	return cc, nil
}

// SaveAgent writes an agent config entry to the config file.
func SaveAgent(cfg *config.Config, name string, ac AgentConfig) error {
	key := fmt.Sprintf("agentci.agents.%s", name)
	data := map[string]any{
		"host":         ac.Host,
		"queue_dir":    ac.QueueDir,
		"forgejo_user": ac.ForgejoUser,
		"active":       ac.Active,
		"dual_run":     ac.DualRun,
	}
	if ac.Model != "" {
		data["model"] = ac.Model
	}
	if ac.Runner != "" {
		data["runner"] = ac.Runner
	}
	if ac.VerifyModel != "" {
		data["verify_model"] = ac.VerifyModel
	}
	if ac.SecurityLevel != "" {
		data["security_level"] = ac.SecurityLevel
	}
	if len(ac.Roles) > 0 {
		data["roles"] = ac.Roles
	}
	return cfg.Set(key, data)
}

// RemoveAgent removes an agent from the config file.
func RemoveAgent(cfg *config.Config, name string) error {
	var agents map[string]AgentConfig
	if err := cfg.Get("agentci.agents", &agents); err != nil {
		return fmt.Errorf("no agents configured")
	}
	if _, ok := agents[name]; !ok {
		return fmt.Errorf("agent %q not found", name)
	}
	delete(agents, name)
	return cfg.Set("agentci.agents", agents)
}

// ListAgents returns all configured agents (active and inactive).
func ListAgents(cfg *config.Config) (map[string]AgentConfig, error) {
	var agents map[string]AgentConfig
	if err := cfg.Get("agentci.agents", &agents); err != nil {
		return map[string]AgentConfig{}, nil
	}
	return agents, nil
}
@@ -1,329 +0,0 @@
package agentci

import (
	"testing"

	"forge.lthn.ai/core/go/pkg/config"
	"forge.lthn.ai/core/go/pkg/io"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func newTestConfig(t *testing.T, yaml string) *config.Config {
	t.Helper()
	m := io.NewMockMedium()
	if yaml != "" {
		m.Files["/tmp/test/config.yaml"] = yaml
	}
	cfg, err := config.New(config.WithMedium(m), config.WithPath("/tmp/test/config.yaml"))
	require.NoError(t, err)
	return cfg
}

func TestLoadAgents_Good(t *testing.T) {
	cfg := newTestConfig(t, `
agentci:
  agents:
    darbs-claude:
      host: claude@192.168.0.201
      queue_dir: /home/claude/ai-work/queue
      forgejo_user: darbs-claude
      model: sonnet
      runner: claude
      active: true
`)
	agents, err := LoadAgents(cfg)
	require.NoError(t, err)
	require.Len(t, agents, 1)

	agent := agents["darbs-claude"]
	assert.Equal(t, "claude@192.168.0.201", agent.Host)
	assert.Equal(t, "/home/claude/ai-work/queue", agent.QueueDir)
	assert.Equal(t, "sonnet", agent.Model)
	assert.Equal(t, "claude", agent.Runner)
}

func TestLoadAgents_Good_MultipleAgents(t *testing.T) {
	cfg := newTestConfig(t, `
agentci:
  agents:
    darbs-claude:
      host: claude@192.168.0.201
      queue_dir: /home/claude/ai-work/queue
      active: true
    local-codex:
      host: localhost
      queue_dir: /home/claude/ai-work/queue
      runner: codex
      active: true
`)
	agents, err := LoadAgents(cfg)
	require.NoError(t, err)
	assert.Len(t, agents, 2)
	assert.Contains(t, agents, "darbs-claude")
	assert.Contains(t, agents, "local-codex")
}

func TestLoadAgents_Good_SkipsInactive(t *testing.T) {
	cfg := newTestConfig(t, `
agentci:
  agents:
    active-agent:
      host: claude@10.0.0.1
      active: true
    offline-agent:
      host: claude@10.0.0.2
      active: false
`)
	agents, err := LoadAgents(cfg)
	require.NoError(t, err)
	// Both are returned, but only active-agent has defaults applied.
	assert.Len(t, agents, 2)
	assert.Contains(t, agents, "active-agent")
}

func TestLoadActiveAgents_Good(t *testing.T) {
	cfg := newTestConfig(t, `
agentci:
  agents:
    active-agent:
      host: claude@10.0.0.1
      active: true
    offline-agent:
      host: claude@10.0.0.2
      active: false
`)
	active, err := LoadActiveAgents(cfg)
	require.NoError(t, err)
	assert.Len(t, active, 1)
	assert.Contains(t, active, "active-agent")
}

func TestLoadAgents_Good_Defaults(t *testing.T) {
	cfg := newTestConfig(t, `
agentci:
  agents:
    minimal:
      host: claude@10.0.0.1
      active: true
`)
	agents, err := LoadAgents(cfg)
	require.NoError(t, err)
	require.Len(t, agents, 1)

	agent := agents["minimal"]
	assert.Equal(t, "/home/claude/ai-work/queue", agent.QueueDir)
	assert.Equal(t, "sonnet", agent.Model)
	assert.Equal(t, "claude", agent.Runner)
}

func TestLoadAgents_Good_NoConfig(t *testing.T) {
	cfg := newTestConfig(t, "")
	agents, err := LoadAgents(cfg)
	require.NoError(t, err)
	assert.Empty(t, agents)
}

func TestLoadAgents_Bad_MissingHost(t *testing.T) {
	cfg := newTestConfig(t, `
agentci:
  agents:
    broken:
      queue_dir: /tmp
      active: true
`)
	_, err := LoadAgents(cfg)
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "host is required")
}

func TestLoadAgents_Good_WithDualRun(t *testing.T) {
	cfg := newTestConfig(t, `
agentci:
  agents:
    gemini-agent:
      host: localhost
      runner: gemini
      model: gemini-2.0-flash
      verify_model: gemini-1.5-pro
      dual_run: true
      active: true
`)
	agents, err := LoadAgents(cfg)
	require.NoError(t, err)

	agent := agents["gemini-agent"]
	assert.Equal(t, "gemini", agent.Runner)
	assert.Equal(t, "gemini-2.0-flash", agent.Model)
	assert.Equal(t, "gemini-1.5-pro", agent.VerifyModel)
	assert.True(t, agent.DualRun)
}

func TestLoadClothoConfig_Good(t *testing.T) {
	cfg := newTestConfig(t, `
agentci:
  clotho:
    strategy: clotho-verified
    validation_threshold: 0.9
    signing_key_path: /etc/core/keys/clotho.pub
`)
	cc, err := LoadClothoConfig(cfg)
	require.NoError(t, err)
	assert.Equal(t, "clotho-verified", cc.Strategy)
	assert.Equal(t, 0.9, cc.ValidationThreshold)
	assert.Equal(t, "/etc/core/keys/clotho.pub", cc.SigningKeyPath)
}

func TestLoadClothoConfig_Good_Defaults(t *testing.T) {
	cfg := newTestConfig(t, "")
	cc, err := LoadClothoConfig(cfg)
	require.NoError(t, err)
	assert.Equal(t, "direct", cc.Strategy)
	assert.Equal(t, 0.85, cc.ValidationThreshold)
}

func TestSaveAgent_Good(t *testing.T) {
	cfg := newTestConfig(t, "")

	err := SaveAgent(cfg, "new-agent", AgentConfig{
		Host:        "claude@10.0.0.5",
		QueueDir:    "/home/claude/ai-work/queue",
		ForgejoUser: "new-agent",
		Model:       "haiku",
		Runner:      "claude",
		Active:      true,
	})
	require.NoError(t, err)

	agents, err := ListAgents(cfg)
	require.NoError(t, err)
	require.Contains(t, agents, "new-agent")
	assert.Equal(t, "claude@10.0.0.5", agents["new-agent"].Host)
	assert.Equal(t, "haiku", agents["new-agent"].Model)
}

func TestSaveAgent_Good_WithDualRun(t *testing.T) {
	cfg := newTestConfig(t, "")

	err := SaveAgent(cfg, "verified-agent", AgentConfig{
		Host:        "claude@10.0.0.5",
		Model:       "gemini-2.0-flash",
		VerifyModel: "gemini-1.5-pro",
		DualRun:     true,
		Active:      true,
	})
	require.NoError(t, err)

	agents, err := ListAgents(cfg)
	require.NoError(t, err)
	require.Contains(t, agents, "verified-agent")
	assert.True(t, agents["verified-agent"].DualRun)
}

func TestSaveAgent_Good_OmitsEmptyOptionals(t *testing.T) {
	cfg := newTestConfig(t, "")

	err := SaveAgent(cfg, "minimal", AgentConfig{
		Host:   "claude@10.0.0.1",
		Active: true,
	})
	require.NoError(t, err)

	agents, err := ListAgents(cfg)
	require.NoError(t, err)
	assert.Contains(t, agents, "minimal")
}

func TestRemoveAgent_Good(t *testing.T) {
	cfg := newTestConfig(t, `
agentci:
  agents:
    to-remove:
      host: claude@10.0.0.1
      active: true
    to-keep:
      host: claude@10.0.0.2
      active: true
`)
	err := RemoveAgent(cfg, "to-remove")
	require.NoError(t, err)

	agents, err := ListAgents(cfg)
	require.NoError(t, err)
	assert.NotContains(t, agents, "to-remove")
	assert.Contains(t, agents, "to-keep")
}

func TestRemoveAgent_Bad_NotFound(t *testing.T) {
	cfg := newTestConfig(t, `
agentci:
  agents:
    existing:
      host: claude@10.0.0.1
      active: true
`)
	err := RemoveAgent(cfg, "nonexistent")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "not found")
}

func TestRemoveAgent_Bad_NoAgents(t *testing.T) {
	cfg := newTestConfig(t, "")
	err := RemoveAgent(cfg, "anything")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "no agents configured")
}

func TestListAgents_Good(t *testing.T) {
	cfg := newTestConfig(t, `
agentci:
  agents:
    agent-a:
      host: claude@10.0.0.1
      active: true
    agent-b:
      host: claude@10.0.0.2
      active: false
`)
	agents, err := ListAgents(cfg)
	require.NoError(t, err)
	assert.Len(t, agents, 2)
	assert.True(t, agents["agent-a"].Active)
	assert.False(t, agents["agent-b"].Active)
}

func TestListAgents_Good_Empty(t *testing.T) {
	cfg := newTestConfig(t, "")
	agents, err := ListAgents(cfg)
	require.NoError(t, err)
	assert.Empty(t, agents)
}

func TestRoundTrip_SaveThenLoad(t *testing.T) {
	cfg := newTestConfig(t, "")

	err := SaveAgent(cfg, "alpha", AgentConfig{
		Host:        "claude@alpha",
		QueueDir:    "/home/claude/work/queue",
		ForgejoUser: "alpha-bot",
		Model:       "opus",
		Runner:      "claude",
		Active:      true,
	})
	require.NoError(t, err)

	err = SaveAgent(cfg, "beta", AgentConfig{
		Host:        "claude@beta",
		ForgejoUser: "beta-bot",
		Runner:      "codex",
		Active:      true,
	})
	require.NoError(t, err)

	agents, err := LoadActiveAgents(cfg)
	require.NoError(t, err)
	assert.Len(t, agents, 2)
	assert.Equal(t, "claude@alpha", agents["alpha"].Host)
	assert.Equal(t, "opus", agents["alpha"].Model)
	assert.Equal(t, "codex", agents["beta"].Runner)
}
@@ -1,49 +0,0 @@
package agentci

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"regexp"
	"strings"
)

var safeNameRegex = regexp.MustCompile(`^[a-zA-Z0-9\-\_\.]+$`)

// SanitizePath ensures a filename or directory name is safe and prevents path traversal.
// Returns filepath.Base of the input after validation.
func SanitizePath(input string) (string, error) {
	base := filepath.Base(input)
	if !safeNameRegex.MatchString(base) {
		return "", fmt.Errorf("invalid characters in path element: %s", input)
	}
	if base == "." || base == ".." || base == "/" {
		return "", fmt.Errorf("invalid path element: %s", base)
	}
	return base, nil
}

// EscapeShellArg wraps a string in single quotes for safe remote shell insertion.
// Prefer exec.Command arguments over constructing shell strings where possible.
func EscapeShellArg(arg string) string {
	return "'" + strings.ReplaceAll(arg, "'", "'\\''") + "'"
}

// SecureSSHCommand creates an SSH exec.Cmd with strict host key checking and batch mode.
func SecureSSHCommand(host string, remoteCmd string) *exec.Cmd {
	return exec.Command("ssh",
		"-o", "StrictHostKeyChecking=yes",
		"-o", "BatchMode=yes",
		"-o", "ConnectTimeout=10",
		host,
		remoteCmd,
	)
}

// MaskToken returns a masked version of a token for safe logging.
func MaskToken(token string) string {
	if len(token) < 8 {
		return "*****"
	}
	return token[:4] + "****" + token[len(token)-4:]
}
@@ -1,299 +0,0 @@
package agentic

import (
	"sync"
	"time"
)

// AllowanceStatus indicates the current state of an agent's quota.
type AllowanceStatus string

const (
	// AllowanceOK indicates the agent has remaining quota.
	AllowanceOK AllowanceStatus = "ok"
	// AllowanceWarning indicates the agent is at 80%+ usage.
	AllowanceWarning AllowanceStatus = "warning"
	// AllowanceExceeded indicates the agent has exceeded its quota.
	AllowanceExceeded AllowanceStatus = "exceeded"
)

// AgentAllowance defines the quota limits for a single agent.
type AgentAllowance struct {
	// AgentID is the unique identifier for the agent.
	AgentID string `json:"agent_id" yaml:"agent_id"`
	// DailyTokenLimit is the maximum tokens (in+out) per 24h. 0 means unlimited.
	DailyTokenLimit int64 `json:"daily_token_limit" yaml:"daily_token_limit"`
	// DailyJobLimit is the maximum jobs per 24h. 0 means unlimited.
	DailyJobLimit int `json:"daily_job_limit" yaml:"daily_job_limit"`
	// ConcurrentJobs is the maximum simultaneous jobs. 0 means unlimited.
	ConcurrentJobs int `json:"concurrent_jobs" yaml:"concurrent_jobs"`
	// MaxJobDuration is the maximum job duration before kill. 0 means unlimited.
	MaxJobDuration time.Duration `json:"max_job_duration" yaml:"max_job_duration"`
	// ModelAllowlist restricts which models this agent can use. Empty means all.
	ModelAllowlist []string `json:"model_allowlist,omitempty" yaml:"model_allowlist"`
}

// ModelQuota defines global per-model limits across all agents.
type ModelQuota struct {
	// Model is the model identifier (e.g. "claude-sonnet-4-5-20250929").
	Model string `json:"model" yaml:"model"`
	// DailyTokenBudget is the total tokens across all agents per 24h.
	DailyTokenBudget int64 `json:"daily_token_budget" yaml:"daily_token_budget"`
	// HourlyRateLimit is the max requests per hour.
	HourlyRateLimit int `json:"hourly_rate_limit" yaml:"hourly_rate_limit"`
	// CostCeiling stops all usage if cumulative cost exceeds this (in cents).
	CostCeiling int64 `json:"cost_ceiling" yaml:"cost_ceiling"`
}

// RepoLimit defines per-repository rate limits.
type RepoLimit struct {
	// Repo is the repository identifier (e.g. "owner/repo").
	Repo string `json:"repo" yaml:"repo"`
	// MaxDailyPRs is the maximum PRs per day. 0 means unlimited.
	MaxDailyPRs int `json:"max_daily_prs" yaml:"max_daily_prs"`
	// MaxDailyIssues is the maximum issues per day. 0 means unlimited.
	MaxDailyIssues int `json:"max_daily_issues" yaml:"max_daily_issues"`
	// CooldownAfterFailure is the wait time after a failure before retrying.
	CooldownAfterFailure time.Duration `json:"cooldown_after_failure" yaml:"cooldown_after_failure"`
}

// UsageRecord tracks an agent's current usage within a quota period.
type UsageRecord struct {
	// AgentID is the agent this record belongs to.
	AgentID string `json:"agent_id"`
	// TokensUsed is the total tokens consumed in the current period.
	TokensUsed int64 `json:"tokens_used"`
	// JobsStarted is the total jobs started in the current period.
	JobsStarted int `json:"jobs_started"`
	// ActiveJobs is the number of currently running jobs.
	ActiveJobs int `json:"active_jobs"`
	// PeriodStart is when the current quota period began.
	PeriodStart time.Time `json:"period_start"`
}

// QuotaCheckResult is the outcome of a pre-dispatch allowance check.
type QuotaCheckResult struct {
	// Allowed indicates whether the agent may proceed.
	Allowed bool `json:"allowed"`
	// Status is the current allowance state.
	Status AllowanceStatus `json:"status"`
	// RemainingTokens is the number of tokens remaining in the period.
	RemainingTokens int64 `json:"remaining_tokens"`
	// RemainingJobs is the number of jobs remaining in the period.
	RemainingJobs int `json:"remaining_jobs"`
	// Reason explains why the check failed (if !Allowed).
	Reason string `json:"reason,omitempty"`
}

// QuotaEvent represents a change in quota usage, used for recovery.
type QuotaEvent string

const (
	// QuotaEventJobStarted deducts quota when a job begins.
	QuotaEventJobStarted QuotaEvent = "job_started"
	// QuotaEventJobCompleted deducts nothing (already counted).
	QuotaEventJobCompleted QuotaEvent = "job_completed"
	// QuotaEventJobFailed returns 50% of token quota.
	QuotaEventJobFailed QuotaEvent = "job_failed"
	// QuotaEventJobCancelled returns 100% of token quota.
	QuotaEventJobCancelled QuotaEvent = "job_cancelled"
)

// UsageReport is emitted by the agent runner to report token consumption.
type UsageReport struct {
	// AgentID is the agent that consumed tokens.
	AgentID string `json:"agent_id"`
	// JobID identifies the specific job.
	JobID string `json:"job_id"`
	// Model is the model used.
	Model string `json:"model"`
	// TokensIn is the number of input tokens consumed.
	TokensIn int64 `json:"tokens_in"`
	// TokensOut is the number of output tokens consumed.
	TokensOut int64 `json:"tokens_out"`
	// Event is the type of quota event.
	Event QuotaEvent `json:"event"`
	// Timestamp is when the usage occurred.
	Timestamp time.Time `json:"timestamp"`
}

// AllowanceStore is the interface for persisting and querying allowance data.
// Implementations may use Redis, SQLite, or any backing store.
type AllowanceStore interface {
	// GetAllowance returns the quota limits for an agent.
	GetAllowance(agentID string) (*AgentAllowance, error)
	// SetAllowance persists quota limits for an agent.
	SetAllowance(a *AgentAllowance) error
	// GetUsage returns the current usage record for an agent.
	GetUsage(agentID string) (*UsageRecord, error)
	// IncrementUsage atomically adds to an agent's usage counters.
	IncrementUsage(agentID string, tokens int64, jobs int) error
	// DecrementActiveJobs reduces the active job count by 1.
	DecrementActiveJobs(agentID string) error
	// ReturnTokens adds tokens back to the agent's remaining quota.
	ReturnTokens(agentID string, tokens int64) error
	// ResetUsage clears usage counters for an agent (daily reset).
	ResetUsage(agentID string) error
	// GetModelQuota returns global limits for a model.
	GetModelQuota(model string) (*ModelQuota, error)
	// GetModelUsage returns current token usage for a model.
	GetModelUsage(model string) (int64, error)
	// IncrementModelUsage atomically adds to a model's usage counter.
	IncrementModelUsage(model string, tokens int64) error
}

// MemoryStore is an in-memory AllowanceStore for testing and single-node use.
type MemoryStore struct {
	mu          sync.RWMutex
	allowances  map[string]*AgentAllowance
	usage       map[string]*UsageRecord
	modelQuotas map[string]*ModelQuota
	modelUsage  map[string]int64
}

// NewMemoryStore creates a new in-memory allowance store.
func NewMemoryStore() *MemoryStore {
	return &MemoryStore{
		allowances:  make(map[string]*AgentAllowance),
		usage:       make(map[string]*UsageRecord),
		modelQuotas: make(map[string]*ModelQuota),
		modelUsage:  make(map[string]int64),
	}
}

// GetAllowance returns the quota limits for an agent.
func (m *MemoryStore) GetAllowance(agentID string) (*AgentAllowance, error) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	a, ok := m.allowances[agentID]
	if !ok {
		return nil, &APIError{Code: 404, Message: "allowance not found for agent: " + agentID}
	}
	cp := *a
	return &cp, nil
}

// SetAllowance persists quota limits for an agent.
func (m *MemoryStore) SetAllowance(a *AgentAllowance) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	cp := *a
	m.allowances[a.AgentID] = &cp
	return nil
}

// GetUsage returns the current usage record for an agent.
func (m *MemoryStore) GetUsage(agentID string) (*UsageRecord, error) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	u, ok := m.usage[agentID]
	if !ok {
		return &UsageRecord{
			AgentID:     agentID,
			PeriodStart: startOfDay(time.Now().UTC()),
		}, nil
	}
	cp := *u
	return &cp, nil
}

// IncrementUsage atomically adds to an agent's usage counters.
func (m *MemoryStore) IncrementUsage(agentID string, tokens int64, jobs int) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	u, ok := m.usage[agentID]
	if !ok {
		u = &UsageRecord{
			AgentID:     agentID,
			PeriodStart: startOfDay(time.Now().UTC()),
		}
		m.usage[agentID] = u
	}
	u.TokensUsed += tokens
	u.JobsStarted += jobs
	if jobs > 0 {
		u.ActiveJobs += jobs
	}
	return nil
}

// DecrementActiveJobs reduces the active job count by 1.
func (m *MemoryStore) DecrementActiveJobs(agentID string) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	u, ok := m.usage[agentID]
	if !ok {
		return nil
	}
	if u.ActiveJobs > 0 {
		u.ActiveJobs--
	}
	return nil
}

// ReturnTokens adds tokens back to the agent's remaining quota.
func (m *MemoryStore) ReturnTokens(agentID string, tokens int64) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	u, ok := m.usage[agentID]
	if !ok {
		return nil
	}
	u.TokensUsed -= tokens
	if u.TokensUsed < 0 {
		u.TokensUsed = 0
	}
	return nil
}

// ResetUsage clears usage counters for an agent.
func (m *MemoryStore) ResetUsage(agentID string) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.usage[agentID] = &UsageRecord{
		AgentID:     agentID,
		PeriodStart: startOfDay(time.Now().UTC()),
	}
	return nil
}

// GetModelQuota returns global limits for a model.
func (m *MemoryStore) GetModelQuota(model string) (*ModelQuota, error) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	q, ok := m.modelQuotas[model]
	if !ok {
		return nil, &APIError{Code: 404, Message: "model quota not found: " + model}
	}
	cp := *q
	return &cp, nil
}

// GetModelUsage returns current token usage for a model.
func (m *MemoryStore) GetModelUsage(model string) (int64, error) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	return m.modelUsage[model], nil
}

// IncrementModelUsage atomically adds to a model's usage counter.
func (m *MemoryStore) IncrementModelUsage(model string, tokens int64) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.modelUsage[model] += tokens
	return nil
}

// SetModelQuota sets global limits for a model (used in testing).
func (m *MemoryStore) SetModelQuota(q *ModelQuota) {
	m.mu.Lock()
	defer m.mu.Unlock()
	cp := *q
	m.modelQuotas[q.Model] = &cp
}

// startOfDay returns midnight UTC for the given time.
func startOfDay(t time.Time) time.Time {
	y, mo, d := t.Date()
	return time.Date(y, mo, d, 0, 0, 0, 0, time.UTC)
}
@@ -1,176 +0,0 @@
package agentic

import (
	"slices"

	"forge.lthn.ai/core/go/pkg/log"
)

// AllowanceService enforces agent quota limits. It provides pre-dispatch checks,
// runtime usage recording, and quota recovery for failed/cancelled jobs.
type AllowanceService struct {
	store AllowanceStore
}

// NewAllowanceService creates a new AllowanceService with the given store.
func NewAllowanceService(store AllowanceStore) *AllowanceService {
	return &AllowanceService{store: store}
}

// Check performs a pre-dispatch allowance check for the given agent and model.
// It verifies daily token limits, daily job limits, concurrent job limits, and
// model allowlists. Returns a QuotaCheckResult indicating whether the agent may proceed.
func (s *AllowanceService) Check(agentID, model string) (*QuotaCheckResult, error) {
	const op = "AllowanceService.Check"

	allowance, err := s.store.GetAllowance(agentID)
	if err != nil {
		return nil, log.E(op, "failed to get allowance", err)
	}

	usage, err := s.store.GetUsage(agentID)
	if err != nil {
		return nil, log.E(op, "failed to get usage", err)
	}

	result := &QuotaCheckResult{
		Allowed:         true,
		Status:          AllowanceOK,
		RemainingTokens: -1, // unlimited
		RemainingJobs:   -1, // unlimited
	}

	// Check model allowlist
	if len(allowance.ModelAllowlist) > 0 && model != "" {
		if !slices.Contains(allowance.ModelAllowlist, model) {
			result.Allowed = false
			result.Status = AllowanceExceeded
			result.Reason = "model not in allowlist: " + model
			return result, nil
		}
	}

	// Check daily token limit
	if allowance.DailyTokenLimit > 0 {
		remaining := allowance.DailyTokenLimit - usage.TokensUsed
		result.RemainingTokens = remaining
		if remaining <= 0 {
			result.Allowed = false
			result.Status = AllowanceExceeded
			result.Reason = "daily token limit exceeded"
			return result, nil
		}
		ratio := float64(usage.TokensUsed) / float64(allowance.DailyTokenLimit)
		if ratio >= 0.8 {
			result.Status = AllowanceWarning
|
||||
}
|
||||
}
|
||||
|
||||
// Check daily job limit
|
||||
if allowance.DailyJobLimit > 0 {
|
||||
remaining := allowance.DailyJobLimit - usage.JobsStarted
|
||||
result.RemainingJobs = remaining
|
||||
if remaining <= 0 {
|
||||
result.Allowed = false
|
||||
result.Status = AllowanceExceeded
|
||||
result.Reason = "daily job limit exceeded"
|
||||
return result, nil
|
||||
}
|
||||
}
|
||||
|
||||
// Check concurrent jobs
|
||||
if allowance.ConcurrentJobs > 0 && usage.ActiveJobs >= allowance.ConcurrentJobs {
|
||||
result.Allowed = false
|
||||
result.Status = AllowanceExceeded
|
||||
result.Reason = "concurrent job limit reached"
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// Check global model quota
|
||||
if model != "" {
|
||||
modelQuota, err := s.store.GetModelQuota(model)
|
||||
if err == nil && modelQuota.DailyTokenBudget > 0 {
|
||||
modelUsage, err := s.store.GetModelUsage(model)
|
||||
if err == nil && modelUsage >= modelQuota.DailyTokenBudget {
|
||||
result.Allowed = false
|
||||
result.Status = AllowanceExceeded
|
||||
result.Reason = "global model token budget exceeded for: " + model
|
||||
return result, nil
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// RecordUsage processes a usage report, updating counters and handling quota recovery.
|
||||
func (s *AllowanceService) RecordUsage(report UsageReport) error {
|
||||
const op = "AllowanceService.RecordUsage"
|
||||
|
||||
totalTokens := report.TokensIn + report.TokensOut
|
||||
|
||||
switch report.Event {
|
||||
case QuotaEventJobStarted:
|
||||
if err := s.store.IncrementUsage(report.AgentID, 0, 1); err != nil {
|
||||
return log.E(op, "failed to increment job count", err)
|
||||
}
|
||||
|
||||
case QuotaEventJobCompleted:
|
||||
if err := s.store.IncrementUsage(report.AgentID, totalTokens, 0); err != nil {
|
||||
return log.E(op, "failed to record token usage", err)
|
||||
}
|
||||
if err := s.store.DecrementActiveJobs(report.AgentID); err != nil {
|
||||
return log.E(op, "failed to decrement active jobs", err)
|
||||
}
|
||||
// Record model-level usage
|
||||
if report.Model != "" {
|
||||
if err := s.store.IncrementModelUsage(report.Model, totalTokens); err != nil {
|
||||
return log.E(op, "failed to record model usage", err)
|
||||
}
|
||||
}
|
||||
|
||||
case QuotaEventJobFailed:
|
||||
// Record partial usage, return 50% of tokens
|
||||
if err := s.store.IncrementUsage(report.AgentID, totalTokens, 0); err != nil {
|
||||
return log.E(op, "failed to record token usage", err)
|
||||
}
|
||||
if err := s.store.DecrementActiveJobs(report.AgentID); err != nil {
|
||||
return log.E(op, "failed to decrement active jobs", err)
|
||||
}
|
||||
returnAmount := totalTokens / 2
|
||||
if returnAmount > 0 {
|
||||
if err := s.store.ReturnTokens(report.AgentID, returnAmount); err != nil {
|
||||
return log.E(op, "failed to return tokens", err)
|
||||
}
|
||||
}
|
||||
// Still record model-level usage (net of return)
|
||||
if report.Model != "" {
|
||||
if err := s.store.IncrementModelUsage(report.Model, totalTokens-returnAmount); err != nil {
|
||||
return log.E(op, "failed to record model usage", err)
|
||||
}
|
||||
}
|
||||
|
||||
case QuotaEventJobCancelled:
|
||||
// Return 100% of tokens
|
||||
if err := s.store.DecrementActiveJobs(report.AgentID); err != nil {
|
||||
return log.E(op, "failed to decrement active jobs", err)
|
||||
}
|
||||
if totalTokens > 0 {
|
||||
if err := s.store.ReturnTokens(report.AgentID, totalTokens); err != nil {
|
||||
return log.E(op, "failed to return tokens", err)
|
||||
}
|
||||
}
|
||||
// No model-level usage for cancelled jobs
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// ResetAgent clears daily usage counters for the given agent (midnight reset).
|
||||
func (s *AllowanceService) ResetAgent(agentID string) error {
|
||||
const op = "AllowanceService.ResetAgent"
|
||||
if err := s.store.ResetUsage(agentID); err != nil {
|
||||
return log.E(op, "failed to reset usage", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
|
@ -1,407 +0,0 @@
package agentic

import (
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// --- MemoryStore tests ---

func TestMemoryStore_SetGetAllowance_Good(t *testing.T) {
	store := NewMemoryStore()
	a := &AgentAllowance{
		AgentID:         "agent-1",
		DailyTokenLimit: 100000,
		DailyJobLimit:   10,
		ConcurrentJobs:  2,
		MaxJobDuration:  30 * time.Minute,
		ModelAllowlist:  []string{"claude-sonnet-4-5-20250929"},
	}

	err := store.SetAllowance(a)
	require.NoError(t, err)

	got, err := store.GetAllowance("agent-1")
	require.NoError(t, err)
	assert.Equal(t, a.AgentID, got.AgentID)
	assert.Equal(t, a.DailyTokenLimit, got.DailyTokenLimit)
	assert.Equal(t, a.DailyJobLimit, got.DailyJobLimit)
	assert.Equal(t, a.ConcurrentJobs, got.ConcurrentJobs)
	assert.Equal(t, a.ModelAllowlist, got.ModelAllowlist)
}

func TestMemoryStore_GetAllowance_Bad_NotFound(t *testing.T) {
	store := NewMemoryStore()
	_, err := store.GetAllowance("nonexistent")
	require.Error(t, err)
}

func TestMemoryStore_IncrementUsage_Good(t *testing.T) {
	store := NewMemoryStore()

	err := store.IncrementUsage("agent-1", 5000, 1)
	require.NoError(t, err)

	usage, err := store.GetUsage("agent-1")
	require.NoError(t, err)
	assert.Equal(t, int64(5000), usage.TokensUsed)
	assert.Equal(t, 1, usage.JobsStarted)
	assert.Equal(t, 1, usage.ActiveJobs)
}

func TestMemoryStore_DecrementActiveJobs_Good(t *testing.T) {
	store := NewMemoryStore()

	_ = store.IncrementUsage("agent-1", 0, 2)
	_ = store.DecrementActiveJobs("agent-1")

	usage, _ := store.GetUsage("agent-1")
	assert.Equal(t, 1, usage.ActiveJobs)
}

func TestMemoryStore_DecrementActiveJobs_Good_FloorAtZero(t *testing.T) {
	store := NewMemoryStore()

	_ = store.DecrementActiveJobs("agent-1") // no-op, no usage record
	_ = store.IncrementUsage("agent-1", 0, 0)
	_ = store.DecrementActiveJobs("agent-1") // should stay at 0

	usage, _ := store.GetUsage("agent-1")
	assert.Equal(t, 0, usage.ActiveJobs)
}

func TestMemoryStore_ReturnTokens_Good(t *testing.T) {
	store := NewMemoryStore()

	_ = store.IncrementUsage("agent-1", 10000, 0)
	err := store.ReturnTokens("agent-1", 5000)
	require.NoError(t, err)

	usage, _ := store.GetUsage("agent-1")
	assert.Equal(t, int64(5000), usage.TokensUsed)
}

func TestMemoryStore_ReturnTokens_Good_FloorAtZero(t *testing.T) {
	store := NewMemoryStore()

	_ = store.IncrementUsage("agent-1", 1000, 0)
	_ = store.ReturnTokens("agent-1", 5000) // more than used

	usage, _ := store.GetUsage("agent-1")
	assert.Equal(t, int64(0), usage.TokensUsed)
}

func TestMemoryStore_ResetUsage_Good(t *testing.T) {
	store := NewMemoryStore()

	_ = store.IncrementUsage("agent-1", 50000, 5)
	err := store.ResetUsage("agent-1")
	require.NoError(t, err)

	usage, _ := store.GetUsage("agent-1")
	assert.Equal(t, int64(0), usage.TokensUsed)
	assert.Equal(t, 0, usage.JobsStarted)
	assert.Equal(t, 0, usage.ActiveJobs)
}

func TestMemoryStore_ModelUsage_Good(t *testing.T) {
	store := NewMemoryStore()

	_ = store.IncrementModelUsage("claude-sonnet", 10000)
	_ = store.IncrementModelUsage("claude-sonnet", 5000)

	usage, err := store.GetModelUsage("claude-sonnet")
	require.NoError(t, err)
	assert.Equal(t, int64(15000), usage)
}

// --- AllowanceService.Check tests ---

func TestAllowanceServiceCheck_Good(t *testing.T) {
	store := NewMemoryStore()
	svc := NewAllowanceService(store)

	_ = store.SetAllowance(&AgentAllowance{
		AgentID:         "agent-1",
		DailyTokenLimit: 100000,
		DailyJobLimit:   10,
		ConcurrentJobs:  2,
	})

	result, err := svc.Check("agent-1", "")
	require.NoError(t, err)
	assert.True(t, result.Allowed)
	assert.Equal(t, AllowanceOK, result.Status)
	assert.Equal(t, int64(100000), result.RemainingTokens)
	assert.Equal(t, 10, result.RemainingJobs)
}

func TestAllowanceServiceCheck_Good_Warning(t *testing.T) {
	store := NewMemoryStore()
	svc := NewAllowanceService(store)

	_ = store.SetAllowance(&AgentAllowance{
		AgentID:         "agent-1",
		DailyTokenLimit: 100000,
	})
	_ = store.IncrementUsage("agent-1", 85000, 0)

	result, err := svc.Check("agent-1", "")
	require.NoError(t, err)
	assert.True(t, result.Allowed)
	assert.Equal(t, AllowanceWarning, result.Status)
	assert.Equal(t, int64(15000), result.RemainingTokens)
}

func TestAllowanceServiceCheck_Bad_TokenLimitExceeded(t *testing.T) {
	store := NewMemoryStore()
	svc := NewAllowanceService(store)

	_ = store.SetAllowance(&AgentAllowance{
		AgentID:         "agent-1",
		DailyTokenLimit: 100000,
	})
	_ = store.IncrementUsage("agent-1", 100001, 0)

	result, err := svc.Check("agent-1", "")
	require.NoError(t, err)
	assert.False(t, result.Allowed)
	assert.Equal(t, AllowanceExceeded, result.Status)
	assert.Contains(t, result.Reason, "daily token limit")
}

func TestAllowanceServiceCheck_Bad_JobLimitExceeded(t *testing.T) {
	store := NewMemoryStore()
	svc := NewAllowanceService(store)

	_ = store.SetAllowance(&AgentAllowance{
		AgentID:       "agent-1",
		DailyJobLimit: 5,
	})
	_ = store.IncrementUsage("agent-1", 0, 5)

	result, err := svc.Check("agent-1", "")
	require.NoError(t, err)
	assert.False(t, result.Allowed)
	assert.Contains(t, result.Reason, "daily job limit")
}

func TestAllowanceServiceCheck_Bad_ConcurrentLimitReached(t *testing.T) {
	store := NewMemoryStore()
	svc := NewAllowanceService(store)

	_ = store.SetAllowance(&AgentAllowance{
		AgentID:        "agent-1",
		ConcurrentJobs: 1,
	})
	_ = store.IncrementUsage("agent-1", 0, 1) // 1 active job

	result, err := svc.Check("agent-1", "")
	require.NoError(t, err)
	assert.False(t, result.Allowed)
	assert.Contains(t, result.Reason, "concurrent job limit")
}

func TestAllowanceServiceCheck_Bad_ModelNotInAllowlist(t *testing.T) {
	store := NewMemoryStore()
	svc := NewAllowanceService(store)

	_ = store.SetAllowance(&AgentAllowance{
		AgentID:        "agent-1",
		ModelAllowlist: []string{"claude-sonnet-4-5-20250929"},
	})

	result, err := svc.Check("agent-1", "claude-opus-4-6")
	require.NoError(t, err)
	assert.False(t, result.Allowed)
	assert.Contains(t, result.Reason, "model not in allowlist")
}

func TestAllowanceServiceCheck_Good_ModelInAllowlist(t *testing.T) {
	store := NewMemoryStore()
	svc := NewAllowanceService(store)

	_ = store.SetAllowance(&AgentAllowance{
		AgentID:        "agent-1",
		ModelAllowlist: []string{"claude-sonnet-4-5-20250929", "claude-haiku-4-5-20251001"},
	})

	result, err := svc.Check("agent-1", "claude-sonnet-4-5-20250929")
	require.NoError(t, err)
	assert.True(t, result.Allowed)
}

func TestAllowanceServiceCheck_Good_EmptyModelSkipsCheck(t *testing.T) {
	store := NewMemoryStore()
	svc := NewAllowanceService(store)

	_ = store.SetAllowance(&AgentAllowance{
		AgentID:        "agent-1",
		ModelAllowlist: []string{"claude-sonnet-4-5-20250929"},
	})

	result, err := svc.Check("agent-1", "")
	require.NoError(t, err)
	assert.True(t, result.Allowed)
}

func TestAllowanceServiceCheck_Bad_GlobalModelBudgetExceeded(t *testing.T) {
	store := NewMemoryStore()
	svc := NewAllowanceService(store)

	_ = store.SetAllowance(&AgentAllowance{
		AgentID: "agent-1",
	})
	store.SetModelQuota(&ModelQuota{
		Model:            "claude-opus-4-6",
		DailyTokenBudget: 500000,
	})
	_ = store.IncrementModelUsage("claude-opus-4-6", 500001)

	result, err := svc.Check("agent-1", "claude-opus-4-6")
	require.NoError(t, err)
	assert.False(t, result.Allowed)
	assert.Contains(t, result.Reason, "global model token budget")
}

func TestAllowanceServiceCheck_Bad_NoAllowance(t *testing.T) {
	store := NewMemoryStore()
	svc := NewAllowanceService(store)

	_, err := svc.Check("unknown-agent", "")
	require.Error(t, err)
}

// --- AllowanceService.RecordUsage tests ---

func TestAllowanceServiceRecordUsage_Good_JobStarted(t *testing.T) {
	store := NewMemoryStore()
	svc := NewAllowanceService(store)

	err := svc.RecordUsage(UsageReport{
		AgentID: "agent-1",
		JobID:   "job-1",
		Event:   QuotaEventJobStarted,
	})
	require.NoError(t, err)

	usage, _ := store.GetUsage("agent-1")
	assert.Equal(t, 1, usage.JobsStarted)
	assert.Equal(t, 1, usage.ActiveJobs)
	assert.Equal(t, int64(0), usage.TokensUsed)
}

func TestAllowanceServiceRecordUsage_Good_JobCompleted(t *testing.T) {
	store := NewMemoryStore()
	svc := NewAllowanceService(store)

	// Start a job first
	_ = svc.RecordUsage(UsageReport{
		AgentID: "agent-1",
		JobID:   "job-1",
		Event:   QuotaEventJobStarted,
	})

	err := svc.RecordUsage(UsageReport{
		AgentID:   "agent-1",
		JobID:     "job-1",
		Model:     "claude-sonnet",
		TokensIn:  1000,
		TokensOut: 500,
		Event:     QuotaEventJobCompleted,
	})
	require.NoError(t, err)

	usage, _ := store.GetUsage("agent-1")
	assert.Equal(t, int64(1500), usage.TokensUsed)
	assert.Equal(t, 0, usage.ActiveJobs)

	modelUsage, _ := store.GetModelUsage("claude-sonnet")
	assert.Equal(t, int64(1500), modelUsage)
}

func TestAllowanceServiceRecordUsage_Good_JobFailed_ReturnsHalf(t *testing.T) {
	store := NewMemoryStore()
	svc := NewAllowanceService(store)

	_ = svc.RecordUsage(UsageReport{
		AgentID: "agent-1",
		JobID:   "job-1",
		Event:   QuotaEventJobStarted,
	})

	err := svc.RecordUsage(UsageReport{
		AgentID:   "agent-1",
		JobID:     "job-1",
		Model:     "claude-sonnet",
		TokensIn:  1000,
		TokensOut: 1000,
		Event:     QuotaEventJobFailed,
	})
	require.NoError(t, err)

	usage, _ := store.GetUsage("agent-1")
	// 2000 tokens used, 1000 returned (50%) = 1000 net
	assert.Equal(t, int64(1000), usage.TokensUsed)
	assert.Equal(t, 0, usage.ActiveJobs)

	// Model sees net usage (2000 - 1000 = 1000)
	modelUsage, _ := store.GetModelUsage("claude-sonnet")
	assert.Equal(t, int64(1000), modelUsage)
}

func TestAllowanceServiceRecordUsage_Good_JobCancelled_ReturnsAll(t *testing.T) {
	store := NewMemoryStore()
	svc := NewAllowanceService(store)

	_ = store.IncrementUsage("agent-1", 5000, 1) // simulate pre-existing usage

	err := svc.RecordUsage(UsageReport{
		AgentID:   "agent-1",
		JobID:     "job-1",
		TokensIn:  500,
		TokensOut: 500,
		Event:     QuotaEventJobCancelled,
	})
	require.NoError(t, err)

	usage, _ := store.GetUsage("agent-1")
	// 5000 pre-existing - 1000 returned = 4000
	assert.Equal(t, int64(4000), usage.TokensUsed)
	assert.Equal(t, 0, usage.ActiveJobs)
}

// --- AllowanceService.ResetAgent tests ---

func TestAllowanceServiceResetAgent_Good(t *testing.T) {
	store := NewMemoryStore()
	svc := NewAllowanceService(store)

	_ = store.IncrementUsage("agent-1", 50000, 5)

	err := svc.ResetAgent("agent-1")
	require.NoError(t, err)

	usage, _ := store.GetUsage("agent-1")
	assert.Equal(t, int64(0), usage.TokensUsed)
	assert.Equal(t, 0, usage.JobsStarted)
}

// --- startOfDay helper test ---

func TestStartOfDay_Good(t *testing.T) {
	input := time.Date(2026, 2, 10, 15, 30, 45, 0, time.UTC)
	expected := time.Date(2026, 2, 10, 0, 0, 0, 0, time.UTC)
	assert.Equal(t, expected, startOfDay(input))
}

// --- AllowanceStatus tests ---

func TestAllowanceStatus_Good_Values(t *testing.T) {
	assert.Equal(t, AllowanceStatus("ok"), AllowanceOK)
	assert.Equal(t, AllowanceStatus("warning"), AllowanceWarning)
	assert.Equal(t, AllowanceStatus("exceeded"), AllowanceExceeded)
}
@ -1,322 +0,0 @@
package agentic

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strconv"
	"strings"
	"time"

	"forge.lthn.ai/core/go/pkg/log"
)

// Client is the API client for the core-agentic service.
type Client struct {
	// BaseURL is the base URL of the API server.
	BaseURL string
	// Token is the authentication token.
	Token string
	// HTTPClient is the HTTP client used for requests.
	HTTPClient *http.Client
	// AgentID is the identifier for this agent when claiming tasks.
	AgentID string
}

// NewClient creates a new agentic API client with the given base URL and token.
func NewClient(baseURL, token string) *Client {
	return &Client{
		BaseURL: strings.TrimSuffix(baseURL, "/"),
		Token:   token,
		HTTPClient: &http.Client{
			Timeout: 30 * time.Second,
		},
	}
}

// NewClientFromConfig creates a new client from a Config struct.
func NewClientFromConfig(cfg *Config) *Client {
	client := NewClient(cfg.BaseURL, cfg.Token)
	client.AgentID = cfg.AgentID
	return client
}

// ListTasks retrieves a list of tasks matching the given options.
func (c *Client) ListTasks(ctx context.Context, opts ListOptions) ([]Task, error) {
	const op = "agentic.Client.ListTasks"

	// Build query parameters
	params := url.Values{}
	if opts.Status != "" {
		params.Set("status", string(opts.Status))
	}
	if opts.Priority != "" {
		params.Set("priority", string(opts.Priority))
	}
	if opts.Project != "" {
		params.Set("project", opts.Project)
	}
	if opts.ClaimedBy != "" {
		params.Set("claimed_by", opts.ClaimedBy)
	}
	if opts.Limit > 0 {
		params.Set("limit", strconv.Itoa(opts.Limit))
	}
	if len(opts.Labels) > 0 {
		params.Set("labels", strings.Join(opts.Labels, ","))
	}

	endpoint := c.BaseURL + "/api/tasks"
	if len(params) > 0 {
		endpoint += "?" + params.Encode()
	}

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpoint, nil)
	if err != nil {
		return nil, log.E(op, "failed to create request", err)
	}

	c.setHeaders(req)

	resp, err := c.HTTPClient.Do(req)
	if err != nil {
		return nil, log.E(op, "request failed", err)
	}
	defer func() { _ = resp.Body.Close() }()

	if err := c.checkResponse(resp); err != nil {
		return nil, log.E(op, "API error", err)
	}

	var tasks []Task
	if err := json.NewDecoder(resp.Body).Decode(&tasks); err != nil {
		return nil, log.E(op, "failed to decode response", err)
	}

	return tasks, nil
}

// GetTask retrieves a single task by its ID.
func (c *Client) GetTask(ctx context.Context, id string) (*Task, error) {
	const op = "agentic.Client.GetTask"

	if id == "" {
		return nil, log.E(op, "task ID is required", nil)
	}

	endpoint := fmt.Sprintf("%s/api/tasks/%s", c.BaseURL, url.PathEscape(id))

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpoint, nil)
	if err != nil {
		return nil, log.E(op, "failed to create request", err)
	}

	c.setHeaders(req)

	resp, err := c.HTTPClient.Do(req)
	if err != nil {
		return nil, log.E(op, "request failed", err)
	}
	defer func() { _ = resp.Body.Close() }()

	if err := c.checkResponse(resp); err != nil {
		return nil, log.E(op, "API error", err)
	}

	var task Task
	if err := json.NewDecoder(resp.Body).Decode(&task); err != nil {
		return nil, log.E(op, "failed to decode response", err)
	}

	return &task, nil
}

// ClaimTask claims a task for the current agent.
func (c *Client) ClaimTask(ctx context.Context, id string) (*Task, error) {
	const op = "agentic.Client.ClaimTask"

	if id == "" {
		return nil, log.E(op, "task ID is required", nil)
	}

	endpoint := fmt.Sprintf("%s/api/tasks/%s/claim", c.BaseURL, url.PathEscape(id))

	// Include agent ID in the claim request if available
	var body io.Reader
	if c.AgentID != "" {
		data, _ := json.Marshal(map[string]string{"agent_id": c.AgentID})
		body = bytes.NewReader(data)
	}

	req, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint, body)
	if err != nil {
		return nil, log.E(op, "failed to create request", err)
	}

	c.setHeaders(req)
	if body != nil {
		req.Header.Set("Content-Type", "application/json")
	}

	resp, err := c.HTTPClient.Do(req)
	if err != nil {
		return nil, log.E(op, "request failed", err)
	}
	defer func() { _ = resp.Body.Close() }()

	if err := c.checkResponse(resp); err != nil {
		return nil, log.E(op, "API error", err)
	}

	// Read body once to allow multiple decode attempts
	bodyData, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, log.E(op, "failed to read response", err)
	}

	// Try decoding as ClaimResponse first
	var result ClaimResponse
	if err := json.Unmarshal(bodyData, &result); err == nil && result.Task != nil {
		return result.Task, nil
	}

	// Try decoding as just a Task for simpler API responses
	var task Task
	if err := json.Unmarshal(bodyData, &task); err != nil {
		return nil, log.E(op, "failed to decode response", err)
	}

	return &task, nil
}

// UpdateTask updates a task with new status, progress, or notes.
func (c *Client) UpdateTask(ctx context.Context, id string, update TaskUpdate) error {
	const op = "agentic.Client.UpdateTask"

	if id == "" {
		return log.E(op, "task ID is required", nil)
	}

	endpoint := fmt.Sprintf("%s/api/tasks/%s", c.BaseURL, url.PathEscape(id))

	data, err := json.Marshal(update)
	if err != nil {
		return log.E(op, "failed to marshal update", err)
	}

	req, err := http.NewRequestWithContext(ctx, http.MethodPatch, endpoint, bytes.NewReader(data))
	if err != nil {
		return log.E(op, "failed to create request", err)
	}

	c.setHeaders(req)
	req.Header.Set("Content-Type", "application/json")

	resp, err := c.HTTPClient.Do(req)
	if err != nil {
		return log.E(op, "request failed", err)
	}
	defer func() { _ = resp.Body.Close() }()

	if err := c.checkResponse(resp); err != nil {
		return log.E(op, "API error", err)
	}

	return nil
}

// CompleteTask marks a task as completed with the given result.
func (c *Client) CompleteTask(ctx context.Context, id string, result TaskResult) error {
	const op = "agentic.Client.CompleteTask"

	if id == "" {
		return log.E(op, "task ID is required", nil)
	}

	endpoint := fmt.Sprintf("%s/api/tasks/%s/complete", c.BaseURL, url.PathEscape(id))

	data, err := json.Marshal(result)
	if err != nil {
		return log.E(op, "failed to marshal result", err)
	}

	req, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint, bytes.NewReader(data))
	if err != nil {
		return log.E(op, "failed to create request", err)
	}

	c.setHeaders(req)
	req.Header.Set("Content-Type", "application/json")

	resp, err := c.HTTPClient.Do(req)
	if err != nil {
		return log.E(op, "request failed", err)
	}
	defer func() { _ = resp.Body.Close() }()

	if err := c.checkResponse(resp); err != nil {
		return log.E(op, "API error", err)
	}

	return nil
}

// setHeaders adds common headers to the request.
func (c *Client) setHeaders(req *http.Request) {
	req.Header.Set("Authorization", "Bearer "+c.Token)
	req.Header.Set("Accept", "application/json")
	req.Header.Set("User-Agent", "core-agentic-client/1.0")
}

// checkResponse checks if the response indicates an error.
func (c *Client) checkResponse(resp *http.Response) error {
	if resp.StatusCode >= 200 && resp.StatusCode < 300 {
		return nil
	}

	body, _ := io.ReadAll(resp.Body)

	// Try to parse as APIError
	var apiErr APIError
	if err := json.Unmarshal(body, &apiErr); err == nil && apiErr.Message != "" {
		apiErr.Code = resp.StatusCode
		return &apiErr
	}

	// Return generic error
	return &APIError{
		Code:    resp.StatusCode,
		Message: fmt.Sprintf("HTTP %d: %s", resp.StatusCode, http.StatusText(resp.StatusCode)),
		Details: string(body),
	}
}

// Ping tests the connection to the API server.
func (c *Client) Ping(ctx context.Context) error {
	const op = "agentic.Client.Ping"

	endpoint := c.BaseURL + "/api/health"

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpoint, nil)
	if err != nil {
		return log.E(op, "failed to create request", err)
	}

	c.setHeaders(req)

	resp, err := c.HTTPClient.Do(req)
	if err != nil {
		return log.E(op, "request failed", err)
	}
	defer func() { _ = resp.Body.Close() }()

	if resp.StatusCode >= 400 {
		return log.E(op, fmt.Sprintf("server returned status %d", resp.StatusCode), nil)
	}

	return nil
}
@ -1,356 +0,0 @@
package agentic

import (
	"context"
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// Test fixtures
var testTask = Task{
	ID:          "task-123",
	Title:       "Implement feature X",
	Description: "Add the new feature X to the system",
	Priority:    PriorityHigh,
	Status:      StatusPending,
	Labels:      []string{"feature", "backend"},
	Files:       []string{"pkg/feature/feature.go"},
	CreatedAt:   time.Now().Add(-24 * time.Hour),
	Project:     "core",
}

var testTasks = []Task{
	testTask,
	{
		ID:          "task-456",
		Title:       "Fix bug Y",
		Description: "Fix the bug in component Y",
		Priority:    PriorityCritical,
		Status:      StatusPending,
		Labels:      []string{"bug", "urgent"},
		CreatedAt:   time.Now().Add(-2 * time.Hour),
		Project:     "core",
	},
}

func TestNewClient_Good(t *testing.T) {
	client := NewClient("https://api.example.com", "test-token")

	assert.Equal(t, "https://api.example.com", client.BaseURL)
	assert.Equal(t, "test-token", client.Token)
	assert.NotNil(t, client.HTTPClient)
}

func TestNewClient_Good_TrailingSlash(t *testing.T) {
	client := NewClient("https://api.example.com/", "test-token")

	assert.Equal(t, "https://api.example.com", client.BaseURL)
}

func TestNewClientFromConfig_Good(t *testing.T) {
	cfg := &Config{
		BaseURL: "https://api.example.com",
		Token:   "config-token",
		AgentID: "agent-001",
	}

	client := NewClientFromConfig(cfg)

	assert.Equal(t, "https://api.example.com", client.BaseURL)
	assert.Equal(t, "config-token", client.Token)
	assert.Equal(t, "agent-001", client.AgentID)
}

func TestClient_ListTasks_Good(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		assert.Equal(t, http.MethodGet, r.Method)
		assert.Equal(t, "/api/tasks", r.URL.Path)
		assert.Equal(t, "Bearer test-token", r.Header.Get("Authorization"))

		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(testTasks)
	}))
	defer server.Close()

	client := NewClient(server.URL, "test-token")
	tasks, err := client.ListTasks(context.Background(), ListOptions{})

	require.NoError(t, err)
	assert.Len(t, tasks, 2)
	assert.Equal(t, "task-123", tasks[0].ID)
	assert.Equal(t, "task-456", tasks[1].ID)
}

func TestClient_ListTasks_Good_WithFilters(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		query := r.URL.Query()
		assert.Equal(t, "pending", query.Get("status"))
		assert.Equal(t, "high", query.Get("priority"))
		assert.Equal(t, "core", query.Get("project"))
		assert.Equal(t, "10", query.Get("limit"))
		assert.Equal(t, "bug,urgent", query.Get("labels"))

		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode([]Task{testTask})
	}))
	defer server.Close()

	client := NewClient(server.URL, "test-token")
	opts := ListOptions{
		Status:   StatusPending,
		Priority: PriorityHigh,
		Project:  "core",
		Limit:    10,
		Labels:   []string{"bug", "urgent"},
	}

	tasks, err := client.ListTasks(context.Background(), opts)

	require.NoError(t, err)
	assert.Len(t, tasks, 1)
}

func TestClient_ListTasks_Bad_ServerError(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusInternalServerError)
		_ = json.NewEncoder(w).Encode(APIError{Message: "internal error"})
	}))
	defer server.Close()

	client := NewClient(server.URL, "test-token")
	tasks, err := client.ListTasks(context.Background(), ListOptions{})

	assert.Error(t, err)
	assert.Nil(t, tasks)
	assert.Contains(t, err.Error(), "internal error")
|
||||
}
|
||||
|
||||
func TestClient_GetTask_Good(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
assert.Equal(t, http.MethodGet, r.Method)
|
||||
assert.Equal(t, "/api/tasks/task-123", r.URL.Path)
|
||||
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
_ = json.NewEncoder(w).Encode(testTask)
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := NewClient(server.URL, "test-token")
|
||||
task, err := client.GetTask(context.Background(), "task-123")
|
||||
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "task-123", task.ID)
|
||||
assert.Equal(t, "Implement feature X", task.Title)
|
||||
assert.Equal(t, PriorityHigh, task.Priority)
|
||||
}
|
||||
|
||||
func TestClient_GetTask_Bad_EmptyID(t *testing.T) {
|
||||
client := NewClient("https://api.example.com", "test-token")
|
||||
task, err := client.GetTask(context.Background(), "")
|
||||
|
||||
assert.Error(t, err)
|
||||
assert.Nil(t, task)
|
||||
assert.Contains(t, err.Error(), "task ID is required")
|
||||
}
|
||||
|
||||
func TestClient_GetTask_Bad_NotFound(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
w.WriteHeader(http.StatusNotFound)
|
||||
_ = json.NewEncoder(w).Encode(APIError{Message: "task not found"})
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := NewClient(server.URL, "test-token")
|
||||
task, err := client.GetTask(context.Background(), "nonexistent")
|
||||
|
||||
assert.Error(t, err)
|
||||
assert.Nil(t, task)
|
||||
assert.Contains(t, err.Error(), "task not found")
|
||||
}
|
||||
|
||||
func TestClient_ClaimTask_Good(t *testing.T) {
|
||||
claimedTask := testTask
|
||||
claimedTask.Status = StatusInProgress
|
||||
claimedTask.ClaimedBy = "agent-001"
|
||||
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
assert.Equal(t, http.MethodPost, r.Method)
|
||||
assert.Equal(t, "/api/tasks/task-123/claim", r.URL.Path)
|
||||
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
_ = json.NewEncoder(w).Encode(ClaimResponse{Task: &claimedTask})
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := NewClient(server.URL, "test-token")
|
||||
client.AgentID = "agent-001"
|
||||
task, err := client.ClaimTask(context.Background(), "task-123")
|
||||
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, StatusInProgress, task.Status)
|
||||
assert.Equal(t, "agent-001", task.ClaimedBy)
|
||||
}
|
||||
|
||||
func TestClient_ClaimTask_Good_SimpleResponse(t *testing.T) {
|
||||
// Some APIs might return just the task without wrapping
|
||||
claimedTask := testTask
|
||||
claimedTask.Status = StatusInProgress
|
||||
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
_ = json.NewEncoder(w).Encode(claimedTask)
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := NewClient(server.URL, "test-token")
|
||||
task, err := client.ClaimTask(context.Background(), "task-123")
|
||||
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "task-123", task.ID)
|
||||
}
|
||||
|
||||
func TestClient_ClaimTask_Bad_EmptyID(t *testing.T) {
|
||||
client := NewClient("https://api.example.com", "test-token")
|
||||
task, err := client.ClaimTask(context.Background(), "")
|
||||
|
||||
assert.Error(t, err)
|
||||
assert.Nil(t, task)
|
||||
assert.Contains(t, err.Error(), "task ID is required")
|
||||
}
|
||||
|
||||
func TestClient_ClaimTask_Bad_AlreadyClaimed(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
w.WriteHeader(http.StatusConflict)
|
||||
_ = json.NewEncoder(w).Encode(APIError{Message: "task already claimed"})
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := NewClient(server.URL, "test-token")
|
||||
task, err := client.ClaimTask(context.Background(), "task-123")
|
||||
|
||||
assert.Error(t, err)
|
||||
assert.Nil(t, task)
|
||||
assert.Contains(t, err.Error(), "task already claimed")
|
||||
}
|
||||
|
||||
func TestClient_UpdateTask_Good(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
assert.Equal(t, http.MethodPatch, r.Method)
|
||||
assert.Equal(t, "/api/tasks/task-123", r.URL.Path)
|
||||
assert.Equal(t, "application/json", r.Header.Get("Content-Type"))
|
||||
|
||||
var update TaskUpdate
|
||||
err := json.NewDecoder(r.Body).Decode(&update)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, StatusInProgress, update.Status)
|
||||
assert.Equal(t, 50, update.Progress)
|
||||
|
||||
w.WriteHeader(http.StatusOK)
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := NewClient(server.URL, "test-token")
|
||||
err := client.UpdateTask(context.Background(), "task-123", TaskUpdate{
|
||||
Status: StatusInProgress,
|
||||
Progress: 50,
|
||||
Notes: "Making progress",
|
||||
})
|
||||
|
||||
assert.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestClient_UpdateTask_Bad_EmptyID(t *testing.T) {
|
||||
client := NewClient("https://api.example.com", "test-token")
|
||||
err := client.UpdateTask(context.Background(), "", TaskUpdate{})
|
||||
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "task ID is required")
|
||||
}
|
||||
|
||||
func TestClient_CompleteTask_Good(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
assert.Equal(t, http.MethodPost, r.Method)
|
||||
assert.Equal(t, "/api/tasks/task-123/complete", r.URL.Path)
|
||||
|
||||
var result TaskResult
|
||||
err := json.NewDecoder(r.Body).Decode(&result)
|
||||
require.NoError(t, err)
|
||||
assert.True(t, result.Success)
|
||||
assert.Equal(t, "Feature implemented", result.Output)
|
||||
|
||||
w.WriteHeader(http.StatusOK)
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := NewClient(server.URL, "test-token")
|
||||
err := client.CompleteTask(context.Background(), "task-123", TaskResult{
|
||||
Success: true,
|
||||
Output: "Feature implemented",
|
||||
Artifacts: []string{"pkg/feature/feature.go"},
|
||||
})
|
||||
|
||||
assert.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestClient_CompleteTask_Bad_EmptyID(t *testing.T) {
|
||||
client := NewClient("https://api.example.com", "test-token")
|
||||
err := client.CompleteTask(context.Background(), "", TaskResult{})
|
||||
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "task ID is required")
|
||||
}
|
||||
|
||||
func TestClient_Ping_Good(t *testing.T) {
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
assert.Equal(t, "/api/health", r.URL.Path)
|
||||
w.WriteHeader(http.StatusOK)
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
client := NewClient(server.URL, "test-token")
|
||||
err := client.Ping(context.Background())
|
||||
|
||||
assert.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestClient_Ping_Bad_ServerDown(t *testing.T) {
|
||||
client := NewClient("http://localhost:99999", "test-token")
|
||||
client.HTTPClient.Timeout = 100 * time.Millisecond
|
||||
|
||||
err := client.Ping(context.Background())
|
||||
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "request failed")
|
||||
}
|
||||
|
||||
func TestAPIError_Error_Good(t *testing.T) {
|
||||
err := &APIError{
|
||||
Code: 404,
|
||||
Message: "task not found",
|
||||
}
|
||||
|
||||
assert.Equal(t, "task not found", err.Error())
|
||||
|
||||
err.Details = "task-123 does not exist"
|
||||
assert.Equal(t, "task not found: task-123 does not exist", err.Error())
|
||||
}
|
||||
|
||||
func TestTaskStatus_Good(t *testing.T) {
|
||||
assert.Equal(t, TaskStatus("pending"), StatusPending)
|
||||
assert.Equal(t, TaskStatus("in_progress"), StatusInProgress)
|
||||
assert.Equal(t, TaskStatus("completed"), StatusCompleted)
|
||||
assert.Equal(t, TaskStatus("blocked"), StatusBlocked)
|
||||
}
|
||||
|
||||
func TestTaskPriority_Good(t *testing.T) {
|
||||
assert.Equal(t, TaskPriority("critical"), PriorityCritical)
|
||||
assert.Equal(t, TaskPriority("high"), PriorityHigh)
|
||||
assert.Equal(t, TaskPriority("medium"), PriorityMedium)
|
||||
assert.Equal(t, TaskPriority("low"), PriorityLow)
|
||||
}
|
||||
|
|
@@ -1,338 +0,0 @@
// Package agentic provides AI collaboration features for task management.
package agentic

import (
	"bytes"
	"context"
	"fmt"
	"os/exec"
	"strings"

	"forge.lthn.ai/core/go/pkg/log"
)

// PROptions contains options for creating a pull request.
type PROptions struct {
	// Title is the PR title.
	Title string `json:"title"`
	// Body is the PR description.
	Body string `json:"body"`
	// Draft marks the PR as a draft.
	Draft bool `json:"draft"`
	// Labels are labels to add to the PR.
	Labels []string `json:"labels"`
	// Base is the base branch (defaults to main).
	Base string `json:"base"`
}

// AutoCommit creates a git commit with a task reference.
// The commit message follows the format:
//
//	feat(scope): description
//
//	Task: #123
//	Co-Authored-By: Claude <noreply@anthropic.com>
func AutoCommit(ctx context.Context, task *Task, dir string, message string) error {
	const op = "agentic.AutoCommit"

	if task == nil {
		return log.E(op, "task is required", nil)
	}

	if message == "" {
		return log.E(op, "commit message is required", nil)
	}

	// Build full commit message
	fullMessage := buildCommitMessage(task, message)

	// Stage all changes
	if _, err := runGitCommandCtx(ctx, dir, "add", "-A"); err != nil {
		return log.E(op, "failed to stage changes", err)
	}

	// Create commit
	if _, err := runGitCommandCtx(ctx, dir, "commit", "-m", fullMessage); err != nil {
		return log.E(op, "failed to create commit", err)
	}

	return nil
}

// buildCommitMessage formats a commit message with task reference.
func buildCommitMessage(task *Task, message string) string {
	var sb strings.Builder

	// Write the main message
	sb.WriteString(message)
	sb.WriteString("\n\n")

	// Add task reference
	sb.WriteString("Task: #")
	sb.WriteString(task.ID)
	sb.WriteString("\n")

	// Add co-author
	sb.WriteString("Co-Authored-By: Claude <noreply@anthropic.com>\n")

	return sb.String()
}

// CreatePR creates a pull request using the gh CLI.
func CreatePR(ctx context.Context, task *Task, dir string, opts PROptions) (string, error) {
	const op = "agentic.CreatePR"

	if task == nil {
		return "", log.E(op, "task is required", nil)
	}

	// Build title if not provided
	title := opts.Title
	if title == "" {
		title = task.Title
	}

	// Build body if not provided
	body := opts.Body
	if body == "" {
		body = buildPRBody(task)
	}

	// Build gh command arguments
	args := []string{"pr", "create", "--title", title, "--body", body}

	if opts.Draft {
		args = append(args, "--draft")
	}

	if opts.Base != "" {
		args = append(args, "--base", opts.Base)
	}

	for _, label := range opts.Labels {
		args = append(args, "--label", label)
	}

	// Run gh pr create
	output, err := runCommandCtx(ctx, dir, "gh", args...)
	if err != nil {
		return "", log.E(op, "failed to create PR", err)
	}

	// Extract PR URL from output
	prURL := strings.TrimSpace(output)

	return prURL, nil
}

// buildPRBody creates a PR body from task details.
func buildPRBody(task *Task) string {
	var sb strings.Builder

	sb.WriteString("## Summary\n\n")
	sb.WriteString(task.Description)
	sb.WriteString("\n\n")

	sb.WriteString("## Task Reference\n\n")
	sb.WriteString("- Task ID: #")
	sb.WriteString(task.ID)
	sb.WriteString("\n")
	sb.WriteString("- Priority: ")
	sb.WriteString(string(task.Priority))
	sb.WriteString("\n")

	if len(task.Labels) > 0 {
		sb.WriteString("- Labels: ")
		sb.WriteString(strings.Join(task.Labels, ", "))
		sb.WriteString("\n")
	}

	sb.WriteString("\n---\n")
	sb.WriteString("Generated with AI assistance\n")

	return sb.String()
}

// SyncStatus syncs the task status back to the agentic service.
func SyncStatus(ctx context.Context, client *Client, task *Task, update TaskUpdate) error {
	const op = "agentic.SyncStatus"

	if client == nil {
		return log.E(op, "client is required", nil)
	}

	if task == nil {
		return log.E(op, "task is required", nil)
	}

	return client.UpdateTask(ctx, task.ID, update)
}

// CommitAndSync commits changes and syncs task status.
func CommitAndSync(ctx context.Context, client *Client, task *Task, dir string, message string, progress int) error {
	const op = "agentic.CommitAndSync"

	// Create commit
	if err := AutoCommit(ctx, task, dir, message); err != nil {
		return log.E(op, "failed to commit", err)
	}

	// Sync status if client provided
	if client != nil {
		update := TaskUpdate{
			Status:   StatusInProgress,
			Progress: progress,
			Notes:    "Committed: " + message,
		}

		if err := SyncStatus(ctx, client, task, update); err != nil {
			// Log but don't fail on sync errors
			return log.E(op, "commit succeeded but sync failed", err)
		}
	}

	return nil
}

// PushChanges pushes committed changes to the remote.
func PushChanges(ctx context.Context, dir string) error {
	const op = "agentic.PushChanges"

	_, err := runGitCommandCtx(ctx, dir, "push")
	if err != nil {
		return log.E(op, "failed to push changes", err)
	}

	return nil
}

// CreateBranch creates a new branch for the task.
func CreateBranch(ctx context.Context, task *Task, dir string) (string, error) {
	const op = "agentic.CreateBranch"

	if task == nil {
		return "", log.E(op, "task is required", nil)
	}

	// Generate branch name from task
	branchName := generateBranchName(task)

	// Create and checkout branch
	_, err := runGitCommandCtx(ctx, dir, "checkout", "-b", branchName)
	if err != nil {
		return "", log.E(op, "failed to create branch", err)
	}

	return branchName, nil
}

// generateBranchName creates a branch name from task details.
func generateBranchName(task *Task) string {
	// Determine prefix based on labels
	prefix := "feat"
	for _, label := range task.Labels {
		switch strings.ToLower(label) {
		case "bug", "bugfix", "fix":
			prefix = "fix"
		case "docs", "documentation":
			prefix = "docs"
		case "refactor":
			prefix = "refactor"
		case "test", "tests":
			prefix = "test"
		case "chore":
			prefix = "chore"
		}
	}

	// Sanitize title for branch name
	title := strings.ToLower(task.Title)
	title = strings.Map(func(r rune) rune {
		if (r >= 'a' && r <= 'z') || (r >= '0' && r <= '9') {
			return r
		}
		if r == ' ' || r == '-' || r == '_' {
			return '-'
		}
		return -1
	}, title)

	// Remove consecutive dashes
	for strings.Contains(title, "--") {
		title = strings.ReplaceAll(title, "--", "-")
	}
	title = strings.Trim(title, "-")

	// Truncate if too long
	if len(title) > 40 {
		title = title[:40]
		title = strings.TrimRight(title, "-")
	}

	return fmt.Sprintf("%s/%s-%s", prefix, task.ID, title)
}

// runGitCommandCtx runs a git command with context.
func runGitCommandCtx(ctx context.Context, dir string, args ...string) (string, error) {
	return runCommandCtx(ctx, dir, "git", args...)
}

// runCommandCtx runs an arbitrary command with context.
func runCommandCtx(ctx context.Context, dir string, command string, args ...string) (string, error) {
	cmd := exec.CommandContext(ctx, command, args...)
	cmd.Dir = dir

	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr

	if err := cmd.Run(); err != nil {
		if stderr.Len() > 0 {
			return "", fmt.Errorf("%w: %s", err, stderr.String())
		}
		return "", err
	}

	return stdout.String(), nil
}

// GetCurrentBranch returns the current git branch name.
func GetCurrentBranch(ctx context.Context, dir string) (string, error) {
	const op = "agentic.GetCurrentBranch"

	output, err := runGitCommandCtx(ctx, dir, "rev-parse", "--abbrev-ref", "HEAD")
	if err != nil {
		return "", log.E(op, "failed to get current branch", err)
	}

	return strings.TrimSpace(output), nil
}

// HasUncommittedChanges checks if there are uncommitted changes.
func HasUncommittedChanges(ctx context.Context, dir string) (bool, error) {
	const op = "agentic.HasUncommittedChanges"

	output, err := runGitCommandCtx(ctx, dir, "status", "--porcelain")
	if err != nil {
		return false, log.E(op, "failed to get git status", err)
	}

	return strings.TrimSpace(output) != "", nil
}

// GetDiff returns the current diff for staged and unstaged changes.
func GetDiff(ctx context.Context, dir string, staged bool) (string, error) {
	const op = "agentic.GetDiff"

	args := []string{"diff"}
	if staged {
		args = append(args, "--staged")
	}

	output, err := runGitCommandCtx(ctx, dir, args...)
	if err != nil {
		return "", log.E(op, "failed to get diff", err)
	}

	return output, nil
}
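The branch-name sanitisation used by `generateBranchName` in the deleted file above is easy to follow end to end with a concrete input. Below is a hypothetical standalone copy of that logic (an illustrative sketch, not the package code itself); `branchName` and its parameters are names invented for this example:

```go
package main

import (
	"fmt"
	"strings"
)

// branchName mirrors the sanitisation steps of generateBranchName above:
// lowercase the title, keep only [a-z0-9], map separators to '-', collapse
// runs of dashes, trim edge dashes, truncate to 40 chars, then join as
// prefix/id-title.
func branchName(prefix, id, title string) string {
	t := strings.ToLower(title)
	t = strings.Map(func(r rune) rune {
		if (r >= 'a' && r <= 'z') || (r >= '0' && r <= '9') {
			return r
		}
		if r == ' ' || r == '-' || r == '_' {
			return '-'
		}
		return -1 // drop every other rune
	}, t)
	for strings.Contains(t, "--") {
		t = strings.ReplaceAll(t, "--", "-")
	}
	t = strings.Trim(t, "-")
	if len(t) > 40 {
		t = strings.TrimRight(t[:40], "-")
	}
	return fmt.Sprintf("%s/%s-%s", prefix, id, t)
}

func main() {
	// Punctuation is dropped, spaces become dashes.
	fmt.Println(branchName("fix", "456", "Fix: user's auth (OAuth2)"))
	// → fix/456-fix-users-auth-oauth2
}
```

Dropping invalid runes (returning -1 from `strings.Map`) rather than escaping them keeps the result a valid git ref name without any further quoting.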
@@ -1,199 +0,0 @@
package agentic

import (
	"context"
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestBuildCommitMessage(t *testing.T) {
	task := &Task{
		ID:    "ABC123",
		Title: "Test Task",
	}

	message := buildCommitMessage(task, "add new feature")

	assert.Contains(t, message, "add new feature")
	assert.Contains(t, message, "Task: #ABC123")
	assert.Contains(t, message, "Co-Authored-By: Claude <noreply@anthropic.com>")
}

func TestBuildPRBody(t *testing.T) {
	task := &Task{
		ID:          "PR-456",
		Title:       "Add authentication",
		Description: "Implement user authentication with OAuth2",
		Priority:    PriorityHigh,
		Labels:      []string{"enhancement", "security"},
	}

	body := buildPRBody(task)

	assert.Contains(t, body, "## Summary")
	assert.Contains(t, body, "Implement user authentication with OAuth2")
	assert.Contains(t, body, "## Task Reference")
	assert.Contains(t, body, "Task ID: #PR-456")
	assert.Contains(t, body, "Priority: high")
	assert.Contains(t, body, "Labels: enhancement, security")
	assert.Contains(t, body, "Generated with AI assistance")
}

func TestBuildPRBody_NoLabels(t *testing.T) {
	task := &Task{
		ID:          "PR-789",
		Title:       "Fix bug",
		Description: "Fix the login bug",
		Priority:    PriorityMedium,
		Labels:      nil,
	}

	body := buildPRBody(task)

	assert.Contains(t, body, "## Summary")
	assert.Contains(t, body, "Fix the login bug")
	assert.NotContains(t, body, "Labels:")
}

func TestGenerateBranchName(t *testing.T) {
	tests := []struct {
		name     string
		task     *Task
		expected string
	}{
		{
			name: "feature task",
			task: &Task{
				ID:     "123",
				Title:  "Add user authentication",
				Labels: []string{"enhancement"},
			},
			expected: "feat/123-add-user-authentication",
		},
		{
			name: "bug fix task",
			task: &Task{
				ID:     "456",
				Title:  "Fix login error",
				Labels: []string{"bug"},
			},
			expected: "fix/456-fix-login-error",
		},
		{
			name: "docs task",
			task: &Task{
				ID:     "789",
				Title:  "Update README",
				Labels: []string{"documentation"},
			},
			expected: "docs/789-update-readme",
		},
		{
			name: "refactor task",
			task: &Task{
				ID:     "101",
				Title:  "Refactor auth module",
				Labels: []string{"refactor"},
			},
			expected: "refactor/101-refactor-auth-module",
		},
		{
			name: "test task",
			task: &Task{
				ID:     "202",
				Title:  "Add unit tests",
				Labels: []string{"test"},
			},
			expected: "test/202-add-unit-tests",
		},
		{
			name: "chore task",
			task: &Task{
				ID:     "303",
				Title:  "Update dependencies",
				Labels: []string{"chore"},
			},
			expected: "chore/303-update-dependencies",
		},
		{
			name: "long title truncated",
			task: &Task{
				ID:     "404",
				Title:  "This is a very long title that should be truncated to fit the branch name limit",
				Labels: nil,
			},
			expected: "feat/404-this-is-a-very-long-title-that-should-be",
		},
		{
			name: "special characters removed",
			task: &Task{
				ID:     "505",
				Title:  "Fix: user's auth (OAuth2) [important]",
				Labels: nil,
			},
			expected: "feat/505-fix-users-auth-oauth2-important",
		},
		{
			name: "no labels defaults to feat",
			task: &Task{
				ID:     "606",
				Title:  "New feature",
				Labels: nil,
			},
			expected: "feat/606-new-feature",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result := generateBranchName(tt.task)
			assert.Equal(t, tt.expected, result)
		})
	}
}

func TestAutoCommit_Bad_NilTask(t *testing.T) {
	err := AutoCommit(context.TODO(), nil, ".", "test message")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "task is required")
}

func TestAutoCommit_Bad_EmptyMessage(t *testing.T) {
	task := &Task{ID: "123", Title: "Test"}
	err := AutoCommit(context.TODO(), task, ".", "")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "commit message is required")
}

func TestSyncStatus_Bad_NilClient(t *testing.T) {
	task := &Task{ID: "123", Title: "Test"}
	update := TaskUpdate{Status: StatusInProgress}

	err := SyncStatus(context.TODO(), nil, task, update)
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "client is required")
}

func TestSyncStatus_Bad_NilTask(t *testing.T) {
	client := &Client{BaseURL: "http://test"}
	update := TaskUpdate{Status: StatusInProgress}

	err := SyncStatus(context.TODO(), client, nil, update)
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "task is required")
}

func TestCreateBranch_Bad_NilTask(t *testing.T) {
	branch, err := CreateBranch(context.TODO(), nil, ".")
	assert.Error(t, err)
	assert.Empty(t, branch)
	assert.Contains(t, err.Error(), "task is required")
}

func TestCreatePR_Bad_NilTask(t *testing.T) {
	url, err := CreatePR(context.TODO(), nil, ".", PROptions{})
	assert.Error(t, err)
	assert.Empty(t, url)
	assert.Contains(t, err.Error(), "task is required")
}
@@ -1,197 +0,0 @@
package agentic
|
||||
|
||||
import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
errors "forge.lthn.ai/core/go/pkg/framework/core"
|
||||
"forge.lthn.ai/core/go/pkg/io"
|
||||
"gopkg.in/yaml.v3"
|
||||
)
|
||||
|
||||
// Config holds the configuration for connecting to the core-agentic service.
|
||||
type Config struct {
|
||||
// BaseURL is the URL of the core-agentic API server.
|
||||
BaseURL string `yaml:"base_url" json:"base_url"`
|
||||
// Token is the authentication token for API requests.
|
||||
Token string `yaml:"token" json:"token"`
|
||||
// DefaultProject is the project to use when none is specified.
|
||||
DefaultProject string `yaml:"default_project" json:"default_project"`
|
||||
// AgentID is the identifier for this agent (optional, used for claiming tasks).
|
||||
AgentID string `yaml:"agent_id" json:"agent_id"`
|
||||
}
|
||||
|
||||
// configFileName is the name of the YAML config file.
|
||||
const configFileName = "agentic.yaml"
|
||||
|
||||
// envFileName is the name of the environment file.
|
||||
const envFileName = ".env"
|
||||
|
||||
// DefaultBaseURL is the default API endpoint if none is configured.
|
||||
const DefaultBaseURL = "https://api.core-agentic.dev"
|
||||
|
||||
// LoadConfig loads the agentic configuration from the specified directory.
|
||||
// It first checks for a .env file, then falls back to ~/.core/agentic.yaml.
|
||||
// If dir is empty, it checks the current directory first.
|
||||
//
|
||||
// Environment variables take precedence:
|
||||
// - AGENTIC_BASE_URL: API base URL
|
||||
// - AGENTIC_TOKEN: Authentication token
|
||||
// - AGENTIC_PROJECT: Default project
|
||||
// - AGENTIC_AGENT_ID: Agent identifier
|
||||
func LoadConfig(dir string) (*Config, error) {
|
||||
cfg := &Config{
|
||||
BaseURL: DefaultBaseURL,
|
||||
}
|
||||
|
||||
// Try loading from .env file in the specified directory
|
||||
if dir != "" {
|
||||
envPath := filepath.Join(dir, envFileName)
|
||||
if err := loadEnvFile(envPath, cfg); err == nil {
|
||||
// Successfully loaded from .env
|
||||
applyEnvOverrides(cfg)
|
||||
if cfg.Token != "" {
|
||||
return cfg, nil
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Try loading from current directory .env
|
||||
if dir == "" {
|
||||
cwd, err := os.Getwd()
|
||||
if err == nil {
|
||||
envPath := filepath.Join(cwd, envFileName)
|
||||
if err := loadEnvFile(envPath, cfg); err == nil {
|
||||
applyEnvOverrides(cfg)
|
||||
if cfg.Token != "" {
|
||||
return cfg, nil
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Try loading from ~/.core/agentic.yaml
|
||||
homeDir, err := os.UserHomeDir()
|
||||
if err != nil {
|
||||
return nil, errors.E("agentic.LoadConfig", "failed to get home directory", err)
|
||||
}
|
||||
|
||||
configPath := filepath.Join(homeDir, ".core", configFileName)
|
||||
if err := loadYAMLConfig(configPath, cfg); err != nil && !os.IsNotExist(err) {
|
||||
return nil, errors.E("agentic.LoadConfig", "failed to load config", err)
|
||||
}
|
||||
|
||||
// Apply environment variable overrides
|
||||
applyEnvOverrides(cfg)
|
||||
|
||||
// Validate configuration
|
||||
if cfg.Token == "" {
|
||||
return nil, errors.E("agentic.LoadConfig", "no authentication token configured", nil)
|
||||
}
|
||||
|
||||
return cfg, nil
|
||||
}
|
||||
|
||||
// loadEnvFile reads a .env file and extracts agentic configuration.
|
||||
func loadEnvFile(path string, cfg *Config) error {
	content, err := io.Local.Read(path)
	if err != nil {
		return err
	}

	for _, line := range strings.Split(content, "\n") {
		line = strings.TrimSpace(line)

		// Skip empty lines and comments
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}

		// Parse KEY=value
		parts := strings.SplitN(line, "=", 2)
		if len(parts) != 2 {
			continue
		}

		key := strings.TrimSpace(parts[0])
		value := strings.TrimSpace(parts[1])

		// Remove quotes if present
		value = strings.Trim(value, `"'`)

		switch key {
		case "AGENTIC_BASE_URL":
			cfg.BaseURL = value
		case "AGENTIC_TOKEN":
			cfg.Token = value
		case "AGENTIC_PROJECT":
			cfg.DefaultProject = value
		case "AGENTIC_AGENT_ID":
			cfg.AgentID = value
		}
	}

	return nil
}

// loadYAMLConfig reads configuration from a YAML file.
func loadYAMLConfig(path string, cfg *Config) error {
	content, err := io.Local.Read(path)
	if err != nil {
		return err
	}

	return yaml.Unmarshal([]byte(content), cfg)
}

// applyEnvOverrides applies environment variable overrides to the config.
func applyEnvOverrides(cfg *Config) {
	if v := os.Getenv("AGENTIC_BASE_URL"); v != "" {
		cfg.BaseURL = v
	}
	if v := os.Getenv("AGENTIC_TOKEN"); v != "" {
		cfg.Token = v
	}
	if v := os.Getenv("AGENTIC_PROJECT"); v != "" {
		cfg.DefaultProject = v
	}
	if v := os.Getenv("AGENTIC_AGENT_ID"); v != "" {
		cfg.AgentID = v
	}
}

// SaveConfig saves the configuration to ~/.core/agentic.yaml.
func SaveConfig(cfg *Config) error {
	homeDir, err := os.UserHomeDir()
	if err != nil {
		return errors.E("agentic.SaveConfig", "failed to get home directory", err)
	}

	configDir := filepath.Join(homeDir, ".core")
	if err := io.Local.EnsureDir(configDir); err != nil {
		return errors.E("agentic.SaveConfig", "failed to create config directory", err)
	}

	configPath := filepath.Join(configDir, configFileName)

	data, err := yaml.Marshal(cfg)
	if err != nil {
		return errors.E("agentic.SaveConfig", "failed to marshal config", err)
	}

	if err := io.Local.Write(configPath, string(data)); err != nil {
		return errors.E("agentic.SaveConfig", "failed to write config file", err)
	}

	return nil
}

// ConfigPath returns the path to the config file in the user's home directory.
func ConfigPath() (string, error) {
	homeDir, err := os.UserHomeDir()
	if err != nil {
		return "", errors.E("agentic.ConfigPath", "failed to get home directory", err)
	}
	return filepath.Join(homeDir, ".core", configFileName), nil
}
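The layering here — a `.env` file is parsed first, then `applyEnvOverrides` lets environment variables win — can be sketched standalone. The `Config` field names and `AGENTIC_*` keys mirror this file; everything else (the sample values, the trimmed `parseEnv` helper) is illustrative:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Config mirrors the two fields exercised below.
type Config struct {
	BaseURL string
	Token   string
}

// parseEnv applies KEY=value lines the way loadEnvConfig does:
// skip blanks and comments, split on the first '=', strip quotes.
func parseEnv(content string, cfg *Config) {
	for _, line := range strings.Split(content, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		if len(parts) != 2 {
			continue
		}
		key := strings.TrimSpace(parts[0])
		value := strings.Trim(strings.TrimSpace(parts[1]), `"'`)
		switch key {
		case "AGENTIC_BASE_URL":
			cfg.BaseURL = value
		case "AGENTIC_TOKEN":
			cfg.Token = value
		}
	}
}

func main() {
	cfg := &Config{}
	parseEnv("# local settings\nAGENTIC_BASE_URL=https://file.example\nAGENTIC_TOKEN=\"file-token\"", cfg)

	// Environment variables win, as in applyEnvOverrides.
	_ = os.Setenv("AGENTIC_TOKEN", "env-token")
	if v := os.Getenv("AGENTIC_TOKEN"); v != "" {
		cfg.Token = v
	}

	fmt.Println(cfg.BaseURL, cfg.Token) // https://file.example env-token
}
```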
@ -1,185 +0,0 @@
package agentic

import (
	"os"
	"path/filepath"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestLoadConfig_Good_FromEnvFile(t *testing.T) {
	// Create temp directory with .env file
	tmpDir, err := os.MkdirTemp("", "agentic-test")
	require.NoError(t, err)
	defer func() { _ = os.RemoveAll(tmpDir) }()

	envContent := `
AGENTIC_BASE_URL=https://test.api.com
AGENTIC_TOKEN=test-token-123
AGENTIC_PROJECT=my-project
AGENTIC_AGENT_ID=agent-001
`
	err = os.WriteFile(filepath.Join(tmpDir, ".env"), []byte(envContent), 0644)
	require.NoError(t, err)

	cfg, err := LoadConfig(tmpDir)

	require.NoError(t, err)
	assert.Equal(t, "https://test.api.com", cfg.BaseURL)
	assert.Equal(t, "test-token-123", cfg.Token)
	assert.Equal(t, "my-project", cfg.DefaultProject)
	assert.Equal(t, "agent-001", cfg.AgentID)
}

func TestLoadConfig_Good_FromEnvVars(t *testing.T) {
	// Create temp directory with .env file (partial config)
	tmpDir, err := os.MkdirTemp("", "agentic-test")
	require.NoError(t, err)
	defer func() { _ = os.RemoveAll(tmpDir) }()

	envContent := `
AGENTIC_TOKEN=env-file-token
`
	err = os.WriteFile(filepath.Join(tmpDir, ".env"), []byte(envContent), 0644)
	require.NoError(t, err)

	// Set environment variables that should override
	_ = os.Setenv("AGENTIC_BASE_URL", "https://env-override.com")
	_ = os.Setenv("AGENTIC_TOKEN", "env-override-token")
	defer func() {
		_ = os.Unsetenv("AGENTIC_BASE_URL")
		_ = os.Unsetenv("AGENTIC_TOKEN")
	}()

	cfg, err := LoadConfig(tmpDir)

	require.NoError(t, err)
	assert.Equal(t, "https://env-override.com", cfg.BaseURL)
	assert.Equal(t, "env-override-token", cfg.Token)
}

func TestLoadConfig_Bad_NoToken(t *testing.T) {
	// Create temp directory without config
	tmpDir, err := os.MkdirTemp("", "agentic-test")
	require.NoError(t, err)
	defer func() { _ = os.RemoveAll(tmpDir) }()

	// Create empty .env
	err = os.WriteFile(filepath.Join(tmpDir, ".env"), []byte(""), 0644)
	require.NoError(t, err)

	// Ensure no env vars are set
	_ = os.Unsetenv("AGENTIC_TOKEN")
	_ = os.Unsetenv("AGENTIC_BASE_URL")

	_, err = LoadConfig(tmpDir)

	assert.Error(t, err)
	assert.Contains(t, err.Error(), "no authentication token")
}

func TestLoadConfig_Good_EnvFileWithQuotes(t *testing.T) {
	tmpDir, err := os.MkdirTemp("", "agentic-test")
	require.NoError(t, err)
	defer func() { _ = os.RemoveAll(tmpDir) }()

	// Test with quoted values
	envContent := `
AGENTIC_TOKEN="quoted-token"
AGENTIC_BASE_URL='single-quoted-url'
`
	err = os.WriteFile(filepath.Join(tmpDir, ".env"), []byte(envContent), 0644)
	require.NoError(t, err)

	cfg, err := LoadConfig(tmpDir)

	require.NoError(t, err)
	assert.Equal(t, "quoted-token", cfg.Token)
	assert.Equal(t, "single-quoted-url", cfg.BaseURL)
}

func TestLoadConfig_Good_EnvFileWithComments(t *testing.T) {
	tmpDir, err := os.MkdirTemp("", "agentic-test")
	require.NoError(t, err)
	defer func() { _ = os.RemoveAll(tmpDir) }()

	envContent := `
# This is a comment
AGENTIC_TOKEN=token-with-comments

# Another comment
AGENTIC_PROJECT=commented-project
`
	err = os.WriteFile(filepath.Join(tmpDir, ".env"), []byte(envContent), 0644)
	require.NoError(t, err)

	cfg, err := LoadConfig(tmpDir)

	require.NoError(t, err)
	assert.Equal(t, "token-with-comments", cfg.Token)
	assert.Equal(t, "commented-project", cfg.DefaultProject)
}

func TestSaveConfig_Good(t *testing.T) {
	// Create temp home directory
	tmpHome, err := os.MkdirTemp("", "agentic-home")
	require.NoError(t, err)
	defer func() { _ = os.RemoveAll(tmpHome) }()

	// Override HOME for the test
	originalHome := os.Getenv("HOME")
	_ = os.Setenv("HOME", tmpHome)
	defer func() { _ = os.Setenv("HOME", originalHome) }()

	cfg := &Config{
		BaseURL:        "https://saved.api.com",
		Token:          "saved-token",
		DefaultProject: "saved-project",
		AgentID:        "saved-agent",
	}

	err = SaveConfig(cfg)
	require.NoError(t, err)

	// Verify file was created
	configPath := filepath.Join(tmpHome, ".core", "agentic.yaml")
	_, err = os.Stat(configPath)
	assert.NoError(t, err)

	// Read back the config
	data, err := os.ReadFile(configPath)
	require.NoError(t, err)
	assert.Contains(t, string(data), "saved.api.com")
	assert.Contains(t, string(data), "saved-token")
}

func TestConfigPath_Good(t *testing.T) {
	path, err := ConfigPath()

	require.NoError(t, err)
	assert.Contains(t, path, ".core")
	assert.Contains(t, path, "agentic.yaml")
}

func TestLoadConfig_Good_DefaultBaseURL(t *testing.T) {
	tmpDir, err := os.MkdirTemp("", "agentic-test")
	require.NoError(t, err)
	defer func() { _ = os.RemoveAll(tmpDir) }()

	// Only provide token, should use default base URL
	envContent := `
AGENTIC_TOKEN=test-token
`
	err = os.WriteFile(filepath.Join(tmpDir, ".env"), []byte(envContent), 0644)
	require.NoError(t, err)

	// Clear any env overrides
	_ = os.Unsetenv("AGENTIC_BASE_URL")

	cfg, err := LoadConfig(tmpDir)

	require.NoError(t, err)
	assert.Equal(t, DefaultBaseURL, cfg.BaseURL)
}
@ -1,335 +0,0 @@
// Package agentic provides AI collaboration features for task management.
package agentic

import (
	"bytes"
	"os"
	"os/exec"
	"path/filepath"
	"regexp"
	"strings"

	errors "forge.lthn.ai/core/go/pkg/framework/core"
	"forge.lthn.ai/core/go/pkg/io"
)

// FileContent represents the content of a file for AI context.
type FileContent struct {
	// Path is the relative path to the file.
	Path string `json:"path"`
	// Content is the file content.
	Content string `json:"content"`
	// Language is the detected programming language.
	Language string `json:"language"`
}

// TaskContext contains gathered context for AI collaboration.
type TaskContext struct {
	// Task is the task being worked on.
	Task *Task `json:"task"`
	// Files is a list of relevant file contents.
	Files []FileContent `json:"files"`
	// GitStatus is the current git status output.
	GitStatus string `json:"git_status"`
	// RecentCommits is the recent commit log.
	RecentCommits string `json:"recent_commits"`
	// RelatedCode contains code snippets related to the task.
	RelatedCode []FileContent `json:"related_code"`
}

// BuildTaskContext gathers context for AI collaboration on a task.
func BuildTaskContext(task *Task, dir string) (*TaskContext, error) {
	const op = "agentic.BuildTaskContext"

	if task == nil {
		return nil, errors.E(op, "task is required", nil)
	}

	if dir == "" {
		cwd, err := os.Getwd()
		if err != nil {
			return nil, errors.E(op, "failed to get working directory", err)
		}
		dir = cwd
	}

	ctx := &TaskContext{
		Task: task,
	}

	// Gather files mentioned in the task
	files, err := GatherRelatedFiles(task, dir)
	if err != nil {
		// Non-fatal: continue without files
		files = nil
	}
	ctx.Files = files

	// Get git status
	gitStatus, _ := runGitCommand(dir, "status", "--porcelain")
	ctx.GitStatus = gitStatus

	// Get recent commits
	recentCommits, _ := runGitCommand(dir, "log", "--oneline", "-10")
	ctx.RecentCommits = recentCommits

	// Find related code by searching for keywords
	relatedCode, err := findRelatedCode(task, dir)
	if err != nil {
		relatedCode = nil
	}
	ctx.RelatedCode = relatedCode

	return ctx, nil
}

// GatherRelatedFiles reads files mentioned in the task.
func GatherRelatedFiles(task *Task, dir string) ([]FileContent, error) {
	const op = "agentic.GatherRelatedFiles"

	if task == nil {
		return nil, errors.E(op, "task is required", nil)
	}

	var files []FileContent

	// Read files explicitly mentioned in the task
	for _, relPath := range task.Files {
		fullPath := filepath.Join(dir, relPath)

		content, err := io.Local.Read(fullPath)
		if err != nil {
			// Skip files that don't exist
			continue
		}

		files = append(files, FileContent{
			Path:     relPath,
			Content:  content,
			Language: detectLanguage(relPath),
		})
	}

	return files, nil
}

// findRelatedCode searches for code related to the task by keywords.
func findRelatedCode(task *Task, dir string) ([]FileContent, error) {
	const op = "agentic.findRelatedCode"

	if task == nil {
		return nil, errors.E(op, "task is required", nil)
	}

	// Extract keywords from title and description
	keywords := extractKeywords(task.Title + " " + task.Description)
	if len(keywords) == 0 {
		return nil, nil
	}

	var files []FileContent
	seen := make(map[string]bool)

	// Search for each keyword using git grep
	for _, keyword := range keywords {
		if len(keyword) < 3 {
			continue
		}

		output, err := runGitCommand(dir, "grep", "-l", "-i", keyword, "--", "*.go", "*.ts", "*.js", "*.py")
		if err != nil {
			continue
		}

		// Parse matched files
		for _, line := range strings.Split(output, "\n") {
			line = strings.TrimSpace(line)
			if line == "" || seen[line] {
				continue
			}
			seen[line] = true

			// Limit to 10 related files
			if len(files) >= 10 {
				break
			}

			fullPath := filepath.Join(dir, line)
			content, err := io.Local.Read(fullPath)
			if err != nil {
				continue
			}

			// Truncate large files
			if len(content) > 5000 {
				content = content[:5000] + "\n... (truncated)"
			}

			files = append(files, FileContent{
				Path:     line,
				Content:  content,
				Language: detectLanguage(line),
			})
		}

		if len(files) >= 10 {
			break
		}
	}

	return files, nil
}
// extractKeywords extracts meaningful words from text for searching.
func extractKeywords(text string) []string {
	// Remove common words and extract identifiers
	text = strings.ToLower(text)

	// Split by non-alphanumeric characters
	re := regexp.MustCompile(`[^a-zA-Z0-9]+`)
	words := re.Split(text, -1)

	// Filter stop words and short words
	stopWords := map[string]bool{
		"the": true, "a": true, "an": true, "and": true, "or": true, "but": true,
		"in": true, "on": true, "at": true, "to": true, "for": true, "of": true,
		"with": true, "by": true, "from": true, "is": true, "are": true, "was": true,
		"be": true, "been": true, "being": true, "have": true, "has": true, "had": true,
		"do": true, "does": true, "did": true, "will": true, "would": true, "could": true,
		"should": true, "may": true, "might": true, "must": true, "shall": true,
		"this": true, "that": true, "these": true, "those": true, "it": true,
		"add": true, "create": true, "update": true, "fix": true, "remove": true,
		"implement": true, "new": true, "file": true, "code": true,
	}

	var keywords []string
	for _, word := range words {
		word = strings.TrimSpace(word)
		if len(word) >= 3 && !stopWords[word] {
			keywords = append(keywords, word)
		}
	}

	// Limit to first 5 keywords
	if len(keywords) > 5 {
		keywords = keywords[:5]
	}

	return keywords
}
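A standalone reproduction of the pipeline above (lowercase, split on non-alphanumerics, drop stop words and words under three characters, cap at five). Only a handful of the stop words are included here; the sample sentence is illustrative:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func extractKeywords(text string) []string {
	// Split on anything that is not alphanumeric.
	re := regexp.MustCompile(`[^a-z0-9]+`)
	words := re.Split(strings.ToLower(text), -1)

	// A small subset of the stop-word list used above.
	stop := map[string]bool{"the": true, "fix": true, "in": true, "add": true}

	var keywords []string
	for _, w := range words {
		if len(w) >= 3 && !stop[w] {
			keywords = append(keywords, w)
		}
	}
	if len(keywords) > 5 {
		keywords = keywords[:5]
	}
	return keywords
}

func main() {
	fmt.Println(extractKeywords("Fix the user login flow in the auth module"))
	// [user login flow auth module]
}
```

Filtering verbs like "fix" and "add" matters because nearly every task title starts with one, and they would otherwise dominate the five-keyword budget.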
// detectLanguage detects the programming language from a file extension.
func detectLanguage(path string) string {
	ext := strings.ToLower(filepath.Ext(path))

	languages := map[string]string{
		".go":    "go",
		".ts":    "typescript",
		".tsx":   "typescript",
		".js":    "javascript",
		".jsx":   "javascript",
		".py":    "python",
		".rs":    "rust",
		".java":  "java",
		".kt":    "kotlin",
		".swift": "swift",
		".c":     "c",
		".cpp":   "cpp",
		".h":     "c",
		".hpp":   "cpp",
		".rb":    "ruby",
		".php":   "php",
		".cs":    "csharp",
		".fs":    "fsharp",
		".scala": "scala",
		".sh":    "bash",
		".bash":  "bash",
		".zsh":   "zsh",
		".yaml":  "yaml",
		".yml":   "yaml",
		".json":  "json",
		".xml":   "xml",
		".html":  "html",
		".css":   "css",
		".scss":  "scss",
		".sql":   "sql",
		".md":    "markdown",
	}

	if lang, ok := languages[ext]; ok {
		return lang
	}
	return "text"
}

// runGitCommand runs a git command and returns the output.
func runGitCommand(dir string, args ...string) (string, error) {
	cmd := exec.Command("git", args...)
	cmd.Dir = dir

	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr

	if err := cmd.Run(); err != nil {
		return "", err
	}

	return stdout.String(), nil
}

// FormatContext formats the TaskContext for AI consumption.
func (tc *TaskContext) FormatContext() string {
	var sb strings.Builder

	sb.WriteString("# Task Context\n\n")

	// Task info
	sb.WriteString("## Task\n")
	sb.WriteString("ID: " + tc.Task.ID + "\n")
	sb.WriteString("Title: " + tc.Task.Title + "\n")
	sb.WriteString("Priority: " + string(tc.Task.Priority) + "\n")
	sb.WriteString("Status: " + string(tc.Task.Status) + "\n")
	sb.WriteString("\n### Description\n")
	sb.WriteString(tc.Task.Description + "\n\n")

	// Files
	if len(tc.Files) > 0 {
		sb.WriteString("## Task Files\n")
		for _, f := range tc.Files {
			sb.WriteString("### " + f.Path + " (" + f.Language + ")\n")
			sb.WriteString("```" + f.Language + "\n")
			sb.WriteString(f.Content)
			sb.WriteString("\n```\n\n")
		}
	}

	// Git status
	if tc.GitStatus != "" {
		sb.WriteString("## Git Status\n")
		sb.WriteString("```\n")
		sb.WriteString(tc.GitStatus)
		sb.WriteString("\n```\n\n")
	}

	// Recent commits
	if tc.RecentCommits != "" {
		sb.WriteString("## Recent Commits\n")
		sb.WriteString("```\n")
		sb.WriteString(tc.RecentCommits)
		sb.WriteString("\n```\n\n")
	}

	// Related code
	if len(tc.RelatedCode) > 0 {
		sb.WriteString("## Related Code\n")
		for _, f := range tc.RelatedCode {
			sb.WriteString("### " + f.Path + " (" + f.Language + ")\n")
			sb.WriteString("```" + f.Language + "\n")
			sb.WriteString(f.Content)
			sb.WriteString("\n```\n\n")
		}
	}

	return sb.String()
}
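For reference, `FormatContext` always emits its sections in this fixed order, skipping any section whose data is empty (annotations in parentheses summarise where each section comes from):

```
# Task Context
## Task            (ID, Title, Priority, Status)
### Description
## Task Files      (one fenced block per file, tagged with its language)
## Git Status      (git status --porcelain)
## Recent Commits  (git log --oneline -10)
## Related Code    (keyword matches, truncated at 5000 bytes per file)
```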
@ -1,214 +0,0 @@
package agentic

import (
	"os"
	"path/filepath"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestBuildTaskContext_Good(t *testing.T) {
	// Create a temp directory with some files
	tmpDir := t.TempDir()

	// Create a test file
	testFile := filepath.Join(tmpDir, "main.go")
	err := os.WriteFile(testFile, []byte("package main\n\nfunc main() {}\n"), 0644)
	require.NoError(t, err)

	task := &Task{
		ID:          "test-123",
		Title:       "Test Task",
		Description: "A test task description",
		Priority:    PriorityMedium,
		Status:      StatusPending,
		Files:       []string{"main.go"},
		CreatedAt:   time.Now(),
	}

	ctx, err := BuildTaskContext(task, tmpDir)
	require.NoError(t, err)
	assert.NotNil(t, ctx)
	assert.Equal(t, task, ctx.Task)
	assert.Len(t, ctx.Files, 1)
	assert.Equal(t, "main.go", ctx.Files[0].Path)
	assert.Equal(t, "go", ctx.Files[0].Language)
}

func TestBuildTaskContext_Bad_NilTask(t *testing.T) {
	ctx, err := BuildTaskContext(nil, ".")
	assert.Error(t, err)
	assert.Nil(t, ctx)
	assert.Contains(t, err.Error(), "task is required")
}

func TestGatherRelatedFiles_Good(t *testing.T) {
	tmpDir := t.TempDir()

	// Create test files
	files := map[string]string{
		"app.go":    "package app\n\nfunc Run() {}\n",
		"config.ts": "export const config = {};\n",
		"README.md": "# Project\n",
	}

	for name, content := range files {
		path := filepath.Join(tmpDir, name)
		err := os.WriteFile(path, []byte(content), 0644)
		require.NoError(t, err)
	}

	task := &Task{
		ID:    "task-1",
		Title: "Test",
		Files: []string{"app.go", "config.ts"},
	}

	gathered, err := GatherRelatedFiles(task, tmpDir)
	require.NoError(t, err)
	assert.Len(t, gathered, 2)

	// Check languages detected correctly
	foundGo := false
	foundTS := false
	for _, f := range gathered {
		if f.Path == "app.go" {
			foundGo = true
			assert.Equal(t, "go", f.Language)
		}
		if f.Path == "config.ts" {
			foundTS = true
			assert.Equal(t, "typescript", f.Language)
		}
	}
	assert.True(t, foundGo, "should find app.go")
	assert.True(t, foundTS, "should find config.ts")
}

func TestGatherRelatedFiles_Bad_NilTask(t *testing.T) {
	files, err := GatherRelatedFiles(nil, ".")
	assert.Error(t, err)
	assert.Nil(t, files)
}

func TestGatherRelatedFiles_Good_MissingFiles(t *testing.T) {
	tmpDir := t.TempDir()

	task := &Task{
		ID:    "task-1",
		Title: "Test",
		Files: []string{"nonexistent.go", "also-missing.ts"},
	}

	// Should not error, just return empty list
	gathered, err := GatherRelatedFiles(task, tmpDir)
	require.NoError(t, err)
	assert.Empty(t, gathered)
}

func TestDetectLanguage(t *testing.T) {
	tests := []struct {
		path     string
		expected string
	}{
		{"main.go", "go"},
		{"app.ts", "typescript"},
		{"app.tsx", "typescript"},
		{"script.js", "javascript"},
		{"script.jsx", "javascript"},
		{"main.py", "python"},
		{"lib.rs", "rust"},
		{"App.java", "java"},
		{"config.yaml", "yaml"},
		{"config.yml", "yaml"},
		{"data.json", "json"},
		{"index.html", "html"},
		{"styles.css", "css"},
		{"styles.scss", "scss"},
		{"query.sql", "sql"},
		{"README.md", "markdown"},
		{"unknown.xyz", "text"},
		{"", "text"},
	}

	for _, tt := range tests {
		t.Run(tt.path, func(t *testing.T) {
			result := detectLanguage(tt.path)
			assert.Equal(t, tt.expected, result)
		})
	}
}

func TestExtractKeywords(t *testing.T) {
	tests := []struct {
		name     string
		text     string
		expected int // minimum number of keywords expected
	}{
		{
			name:     "simple title",
			text:     "Add user authentication feature",
			expected: 2,
		},
		{
			name:     "with stop words",
			text:     "The quick brown fox jumps over the lazy dog",
			expected: 3,
		},
		{
			name:     "technical text",
			text:     "Implement OAuth2 authentication with JWT tokens",
			expected: 3,
		},
		{
			name:     "empty",
			text:     "",
			expected: 0,
		},
		{
			name:     "only stop words",
			text:     "the a an and or but in on at",
			expected: 0,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			keywords := extractKeywords(tt.text)
			assert.GreaterOrEqual(t, len(keywords), tt.expected)
			// Keywords should not exceed 5
			assert.LessOrEqual(t, len(keywords), 5)
		})
	}
}

func TestTaskContext_FormatContext(t *testing.T) {
	task := &Task{
		ID:          "test-456",
		Title:       "Test Formatting",
		Description: "This is a test description",
		Priority:    PriorityHigh,
		Status:      StatusInProgress,
	}

	ctx := &TaskContext{
		Task:          task,
		Files:         []FileContent{{Path: "main.go", Content: "package main", Language: "go"}},
		GitStatus:     " M main.go",
		RecentCommits: "abc123 Initial commit",
		RelatedCode:   []FileContent{{Path: "util.go", Content: "package util", Language: "go"}},
	}

	formatted := ctx.FormatContext()

	assert.Contains(t, formatted, "# Task Context")
	assert.Contains(t, formatted, "test-456")
	assert.Contains(t, formatted, "Test Formatting")
	assert.Contains(t, formatted, "## Task Files")
	assert.Contains(t, formatted, "## Git Status")
	assert.Contains(t, formatted, "## Recent Commits")
	assert.Contains(t, formatted, "## Related Code")
}
@ -1,19 +0,0 @@
package agentic

import (
	"embed"
	"strings"
)

//go:embed prompts/*.md
var promptsFS embed.FS

// Prompt returns the content of an embedded prompt file.
// Name should be without the .md extension (e.g., "commit").
func Prompt(name string) string {
	data, err := promptsFS.ReadFile("prompts/" + name + ".md")
	if err != nil {
		return ""
	}
	return strings.TrimSpace(string(data))
}
@ -1,44 +0,0 @@
# Commit Instructions

## Context

When asked to commit changes in these repositories:

1. **All changes are pre-approved** - Both tracked (modified) and untracked files have been reviewed and approved for commit
2. **Include everything** - Commit ALL modified files AND all untracked files (including new directories)
3. **Don't skip files** - Do not leave out untracked files assuming they need separate review
4. **Single commit preferred** - Combine all changes into one cohesive commit unless explicitly told otherwise

The user has already validated these changes. Proceed with confidence.

## Handling Ignored Files

If you see untracked directories that should typically be ignored (like `node_modules/`, `vendor/`, `.cache/`, `dist/`, `build/`):

1. **Fix the .gitignore** - Create or update `.gitignore` to exclude these directories
2. **Commit the .gitignore** - Include this fix in your commit
3. **Don't ask** - Just fix it and commit

Common patterns to add to .gitignore:
```
node_modules/
vendor/
.cache/
dist/
build/
*.log
.env
.DS_Store
```

## Commit Message Style

- Use conventional commit format: `type(scope): description`
- Common types: `refactor`, `feat`, `fix`, `docs`, `chore`
- Keep the first line under 72 characters
- Add body for complex changes explaining the "why"
- Include `Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>`
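A message following these rules might look like this (the scope and body are invented for illustration):

```
refactor(agentic): extract env parsing into loadEnvConfig

Move KEY=value parsing out of LoadConfig so the .env and YAML
sources share the same override path.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
```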

## Task

Review the uncommitted changes and create an appropriate commit. Be concise.
@ -1,142 +0,0 @@
package agentic

import (
	"context"
	"os"
	"os/exec"
	"strings"

	"forge.lthn.ai/core/go/pkg/framework"
	"forge.lthn.ai/core/go/pkg/log"
)

// Tasks for AI service

// TaskCommit requests Claude to create a commit.
type TaskCommit struct {
	Path    string
	Name    string
	CanEdit bool // allow Write/Edit tools
}

// TaskPrompt sends a custom prompt to Claude.
type TaskPrompt struct {
	Prompt       string
	WorkDir      string
	AllowedTools []string

	taskID string
}

func (t *TaskPrompt) SetTaskID(id string) { t.taskID = id }
func (t *TaskPrompt) GetTaskID() string   { return t.taskID }

// ServiceOptions for configuring the AI service.
type ServiceOptions struct {
	DefaultTools []string
	AllowEdit    bool // global permission for Write/Edit tools
}

// DefaultServiceOptions returns sensible defaults.
func DefaultServiceOptions() ServiceOptions {
	return ServiceOptions{
		DefaultTools: []string{"Bash", "Read", "Glob", "Grep"},
		AllowEdit:    false,
	}
}

// Service provides AI/Claude operations as a Core service.
type Service struct {
	*framework.ServiceRuntime[ServiceOptions]
}

// NewService creates an AI service factory.
func NewService(opts ServiceOptions) func(*framework.Core) (any, error) {
	return func(c *framework.Core) (any, error) {
		return &Service{
			ServiceRuntime: framework.NewServiceRuntime(c, opts),
		}, nil
	}
}

// OnStartup registers task handlers.
func (s *Service) OnStartup(ctx context.Context) error {
	s.Core().RegisterTask(s.handleTask)
	return nil
}

func (s *Service) handleTask(c *framework.Core, t framework.Task) (any, bool, error) {
	switch m := t.(type) {
	case TaskCommit:
		err := s.doCommit(m)
		if err != nil {
			log.Error("agentic: commit task failed", "err", err, "path", m.Path)
		}
		return nil, true, err

	case TaskPrompt:
		err := s.doPrompt(m)
		if err != nil {
			log.Error("agentic: prompt task failed", "err", err)
		}
		return nil, true, err
	}
	return nil, false, nil
}

func (s *Service) doCommit(task TaskCommit) error {
	prompt := Prompt("commit")

	tools := []string{"Bash", "Read", "Glob", "Grep"}
	if task.CanEdit {
		tools = []string{"Bash", "Read", "Write", "Edit", "Glob", "Grep"}
	}

	cmd := exec.CommandContext(context.Background(), "claude", "-p", prompt, "--allowedTools", strings.Join(tools, ","))
	cmd.Dir = task.Path
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.Stdin = os.Stdin

	return cmd.Run()
}

func (s *Service) doPrompt(task TaskPrompt) error {
	if task.taskID != "" {
		s.Core().Progress(task.taskID, 0.1, "Starting Claude...", &task)
	}

	opts := s.Opts()
	tools := opts.DefaultTools
	if len(tools) == 0 {
		tools = []string{"Bash", "Read", "Glob", "Grep"}
	}

	if len(task.AllowedTools) > 0 {
		tools = task.AllowedTools
	}

	cmd := exec.CommandContext(context.Background(), "claude", "-p", task.Prompt, "--allowedTools", strings.Join(tools, ","))
	if task.WorkDir != "" {
		cmd.Dir = task.WorkDir
	}
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.Stdin = os.Stdin

	if task.taskID != "" {
		s.Core().Progress(task.taskID, 0.5, "Running Claude prompt...", &task)
	}

	err := cmd.Run()

	if task.taskID != "" {
		if err != nil {
			s.Core().Progress(task.taskID, 1.0, "Failed: "+err.Error(), &task)
		} else {
			s.Core().Progress(task.taskID, 1.0, "Completed", &task)
		}
	}

	return err
}
@ -1,140 +0,0 @@
// Package agentic provides an API client for core-agentic, an AI-assisted task
// management service. It enables developers and AI agents to discover, claim,
// and complete development tasks.
package agentic

import (
	"time"
)

// TaskStatus represents the state of a task in the system.
type TaskStatus string

const (
	// StatusPending indicates the task is available to be claimed.
	StatusPending TaskStatus = "pending"
	// StatusInProgress indicates the task has been claimed and is being worked on.
	StatusInProgress TaskStatus = "in_progress"
	// StatusCompleted indicates the task has been successfully completed.
	StatusCompleted TaskStatus = "completed"
	// StatusBlocked indicates the task cannot proceed due to dependencies.
	StatusBlocked TaskStatus = "blocked"
)

// TaskPriority represents the urgency level of a task.
type TaskPriority string

const (
	// PriorityCritical indicates the task requires immediate attention.
	PriorityCritical TaskPriority = "critical"
	// PriorityHigh indicates the task is important and should be addressed soon.
	PriorityHigh TaskPriority = "high"
	// PriorityMedium indicates the task has normal priority.
	PriorityMedium TaskPriority = "medium"
	// PriorityLow indicates the task can be addressed when time permits.
	PriorityLow TaskPriority = "low"
)

// Task represents a development task in the core-agentic system.
type Task struct {
	// ID is the unique identifier for the task.
	ID string `json:"id"`
	// Title is the short description of the task.
	Title string `json:"title"`
	// Description provides detailed information about what needs to be done.
	Description string `json:"description"`
	// Priority indicates the urgency of the task.
	Priority TaskPriority `json:"priority"`
	// Status indicates the current state of the task.
	Status TaskStatus `json:"status"`
	// Labels are tags used to categorize the task.
	Labels []string `json:"labels,omitempty"`
	// Files lists the files that are relevant to this task.
	Files []string `json:"files,omitempty"`
	// CreatedAt is when the task was created.
	CreatedAt time.Time `json:"created_at"`
	// UpdatedAt is when the task was last modified.
	UpdatedAt time.Time `json:"updated_at,omitempty"`
	// ClaimedBy is the identifier of the agent or developer who claimed the task.
	ClaimedBy string `json:"claimed_by,omitempty"`
	// ClaimedAt is when the task was claimed.
	ClaimedAt *time.Time `json:"claimed_at,omitempty"`
	// Project is the project this task belongs to.
	Project string `json:"project,omitempty"`
	// Dependencies lists task IDs that must be completed before this task.
	Dependencies []string `json:"dependencies,omitempty"`
	// Blockers lists task IDs that this task is blocking.
	Blockers []string `json:"blockers,omitempty"`
}
|
||||
|
||||
// TaskUpdate contains fields that can be updated on a task.
|
||||
type TaskUpdate struct {
|
||||
// Status is the new status for the task.
|
||||
Status TaskStatus `json:"status,omitempty"`
|
||||
// Progress is a percentage (0-100) indicating completion.
|
||||
Progress int `json:"progress,omitempty"`
|
||||
// Notes are additional comments about the update.
|
||||
Notes string `json:"notes,omitempty"`
|
||||
}
|
||||
|
||||
// TaskResult contains the outcome of a completed task.
|
||||
type TaskResult struct {
|
||||
// Success indicates whether the task was completed successfully.
|
||||
Success bool `json:"success"`
|
||||
// Output is the result or summary of the completed work.
|
||||
Output string `json:"output,omitempty"`
|
||||
// Artifacts are files or resources produced by the task.
|
||||
Artifacts []string `json:"artifacts,omitempty"`
|
||||
// ErrorMessage contains details if the task failed.
|
||||
ErrorMessage string `json:"error_message,omitempty"`
|
||||
}
|
||||
|
||||
// ListOptions specifies filters for listing tasks.
|
||||
type ListOptions struct {
|
||||
// Status filters tasks by their current status.
|
||||
Status TaskStatus `json:"status,omitempty"`
|
||||
// Labels filters tasks that have all specified labels.
|
||||
Labels []string `json:"labels,omitempty"`
|
||||
// Priority filters tasks by priority level.
|
||||
Priority TaskPriority `json:"priority,omitempty"`
|
||||
// Limit is the maximum number of tasks to return.
|
||||
Limit int `json:"limit,omitempty"`
|
||||
// Project filters tasks by project.
|
||||
Project string `json:"project,omitempty"`
|
||||
// ClaimedBy filters tasks claimed by a specific agent.
|
||||
ClaimedBy string `json:"claimed_by,omitempty"`
|
||||
}
|
||||
|
||||
// APIError represents an error response from the API.
|
||||
type APIError struct {
|
||||
// Code is the HTTP status code.
|
||||
Code int `json:"code"`
|
||||
// Message is the error description.
|
||||
Message string `json:"message"`
|
||||
// Details provides additional context about the error.
|
||||
Details string `json:"details,omitempty"`
|
||||
}
|
||||
|
||||
// Error implements the error interface for APIError.
|
||||
func (e *APIError) Error() string {
|
||||
if e.Details != "" {
|
||||
return e.Message + ": " + e.Details
|
||||
}
|
||||
return e.Message
|
||||
}
|
||||
|
||||
// ClaimResponse is returned when a task is successfully claimed.
|
||||
type ClaimResponse struct {
|
||||
// Task is the claimed task with updated fields.
|
||||
Task *Task `json:"task"`
|
||||
// Message provides additional context about the claim.
|
||||
Message string `json:"message,omitempty"`
|
||||
}
|
||||
|
||||
// CompleteResponse is returned when a task is completed.
|
||||
type CompleteResponse struct {
|
||||
// Task is the completed task with final status.
|
||||
Task *Task `json:"task"`
|
||||
// Message provides additional context about the completion.
|
||||
Message string `json:"message,omitempty"`
|
||||
}
|
||||
pkg/ai/ai.go
@ -1,11 +0,0 @@
// Package ai provides the unified AI package for the core CLI.
//
// It composes functionality from pkg/rag (vector search) and pkg/agentic
// (task management) into a single public API surface. New AI features
// should be added here; existing packages remain importable but pkg/ai
// is the canonical entry point.
//
// Sub-packages composed:
//   - pkg/rag: Qdrant vector database + Ollama embeddings
//   - pkg/agentic: Task queue client and context building
package ai
@ -1,171 +0,0 @@
package ai

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"sort"
	"time"
)

// Event represents a recorded AI/security metric event.
type Event struct {
	Type      string         `json:"type"`
	Timestamp time.Time      `json:"timestamp"`
	AgentID   string         `json:"agent_id,omitempty"`
	Repo      string         `json:"repo,omitempty"`
	Duration  time.Duration  `json:"duration,omitempty"`
	Data      map[string]any `json:"data,omitempty"`
}

// metricsDir returns the base directory for metrics storage.
func metricsDir() (string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return "", fmt.Errorf("get home directory: %w", err)
	}
	return filepath.Join(home, ".core", "ai", "metrics"), nil
}

// metricsFilePath returns the JSONL file path for the given date.
func metricsFilePath(dir string, t time.Time) string {
	return filepath.Join(dir, t.Format("2006-01-02")+".jsonl")
}

// Record appends an event to the daily JSONL file at
// ~/.core/ai/metrics/YYYY-MM-DD.jsonl.
func Record(event Event) (err error) {
	if event.Timestamp.IsZero() {
		event.Timestamp = time.Now()
	}

	dir, err := metricsDir()
	if err != nil {
		return err
	}

	if err := os.MkdirAll(dir, 0o755); err != nil {
		return fmt.Errorf("create metrics directory: %w", err)
	}

	path := metricsFilePath(dir, event.Timestamp)

	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return fmt.Errorf("open metrics file: %w", err)
	}
	defer func() {
		if cerr := f.Close(); cerr != nil && err == nil {
			err = fmt.Errorf("close metrics file: %w", cerr)
		}
	}()

	data, err := json.Marshal(event)
	if err != nil {
		return fmt.Errorf("marshal event: %w", err)
	}

	if _, err := f.Write(append(data, '\n')); err != nil {
		return fmt.Errorf("write event: %w", err)
	}

	return nil
}

// ReadEvents reads events from JSONL files within the given time range.
func ReadEvents(since time.Time) ([]Event, error) {
	dir, err := metricsDir()
	if err != nil {
		return nil, err
	}

	var events []Event
	now := time.Now()

	// Iterate each day from since to now.
	for d := time.Date(since.Year(), since.Month(), since.Day(), 0, 0, 0, 0, time.Local); !d.After(now); d = d.AddDate(0, 0, 1) {
		path := metricsFilePath(dir, d)

		dayEvents, err := readMetricsFile(path, since)
		if err != nil {
			return nil, err
		}
		events = append(events, dayEvents...)
	}

	return events, nil
}

// readMetricsFile reads events from a single JSONL file, returning only those at or after since.
func readMetricsFile(path string, since time.Time) ([]Event, error) {
	f, err := os.Open(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, nil
		}
		return nil, fmt.Errorf("open metrics file %s: %w", path, err)
	}
	defer func() { _ = f.Close() }()

	var events []Event
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		var ev Event
		if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
			continue // skip malformed lines
		}
		if !ev.Timestamp.Before(since) {
			events = append(events, ev)
		}
	}
	if err := scanner.Err(); err != nil {
		return nil, fmt.Errorf("read metrics file %s: %w", path, err)
	}
	return events, nil
}

// Summary aggregates events into counts by type, repo, and agent.
func Summary(events []Event) map[string]any {
	byType := make(map[string]int)
	byRepo := make(map[string]int)
	byAgent := make(map[string]int)

	for _, ev := range events {
		byType[ev.Type]++
		if ev.Repo != "" {
			byRepo[ev.Repo]++
		}
		if ev.AgentID != "" {
			byAgent[ev.AgentID]++
		}
	}

	return map[string]any{
		"total":    len(events),
		"by_type":  sortedMap(byType),
		"by_repo":  sortedMap(byRepo),
		"by_agent": sortedMap(byAgent),
	}
}

// sortedMap returns a slice of key-count pairs sorted by count descending.
func sortedMap(m map[string]int) []map[string]any {
	type entry struct {
		key   string
		count int
	}
	entries := make([]entry, 0, len(m))
	for k, v := range m {
		entries = append(entries, entry{k, v})
	}
	sort.Slice(entries, func(i, j int) bool {
		return entries[i].count > entries[j].count
	})
	result := make([]map[string]any, len(entries))
	for i, e := range entries {
		result[i] = map[string]any{"key": e.key, "count": e.count}
	}
	return result
}
@ -1,58 +0,0 @@
package ai

import (
	"context"
	"time"

	"forge.lthn.ai/core/go/pkg/rag"
)

// TaskInfo carries the minimal task data needed for RAG queries,
// avoiding a direct dependency on pkg/agentic (which imports pkg/ai).
type TaskInfo struct {
	Title       string
	Description string
}

// QueryRAGForTask queries Qdrant for documentation relevant to a task.
// It builds a query from the task title and description, queries with
// sensible defaults, and returns formatted context. Returns "" on any
// error (e.g. Qdrant/Ollama not running) for graceful degradation.
func QueryRAGForTask(task TaskInfo) string {
	query := task.Title + " " + task.Description

	// Truncate to 500 runes to keep the embedding focused.
	runes := []rune(query)
	if len(runes) > 500 {
		query = string(runes[:500])
	}

	qdrantCfg := rag.DefaultQdrantConfig()
	qdrantClient, err := rag.NewQdrantClient(qdrantCfg)
	if err != nil {
		return ""
	}
	defer func() { _ = qdrantClient.Close() }()

	ollamaCfg := rag.DefaultOllamaConfig()
	ollamaClient, err := rag.NewOllamaClient(ollamaCfg)
	if err != nil {
		return ""
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	queryCfg := rag.QueryConfig{
		Collection: "hostuk-docs",
		Limit:      3,
		Threshold:  0.5,
	}

	results, err := rag.Query(ctx, qdrantClient, ollamaClient, query, queryCfg)
	if err != nil {
		return ""
	}

	return rag.FormatResultsContext(results)
}
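The 500-rune truncation step above is rune-safe rather than byte-safe: slicing the `[]rune` form never splits a multi-byte UTF-8 character, which a plain `query[:500]` byte slice could. A minimal stand-alone sketch of the same idea (the function name and limit here are illustrative):

```go
package main

import "fmt"

// truncateRunes shortens s to at most n runes, never cutting a
// multi-byte character in half (unlike a byte slice s[:n]).
func truncateRunes(s string, n int) string {
	r := []rune(s)
	if len(r) > n {
		return string(r[:n])
	}
	return s
}

func main() {
	fmt.Println(truncateRunes("héllo wörld", 5))
}
```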
File diff suppressed because it is too large
File diff suppressed because it is too large
@ -1,438 +0,0 @@
package ansible

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"forge.lthn.ai/core/go/pkg/log"
	"gopkg.in/yaml.v3"
)

// Parser handles Ansible YAML parsing.
type Parser struct {
	basePath string
	vars     map[string]any
}

// NewParser creates a new Ansible parser.
func NewParser(basePath string) *Parser {
	return &Parser{
		basePath: basePath,
		vars:     make(map[string]any),
	}
}

// ParsePlaybook parses an Ansible playbook file.
func (p *Parser) ParsePlaybook(path string) ([]Play, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("read playbook: %w", err)
	}

	var plays []Play
	if err := yaml.Unmarshal(data, &plays); err != nil {
		return nil, fmt.Errorf("parse playbook: %w", err)
	}

	// Process each play
	for i := range plays {
		if err := p.processPlay(&plays[i]); err != nil {
			return nil, fmt.Errorf("process play %d: %w", i, err)
		}
	}

	return plays, nil
}

// ParseInventory parses an Ansible inventory file.
func (p *Parser) ParseInventory(path string) (*Inventory, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("read inventory: %w", err)
	}

	var inv Inventory
	if err := yaml.Unmarshal(data, &inv); err != nil {
		return nil, fmt.Errorf("parse inventory: %w", err)
	}

	return &inv, nil
}

// ParseTasks parses a tasks file (used by include_tasks).
func (p *Parser) ParseTasks(path string) ([]Task, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("read tasks: %w", err)
	}

	var tasks []Task
	if err := yaml.Unmarshal(data, &tasks); err != nil {
		return nil, fmt.Errorf("parse tasks: %w", err)
	}

	for i := range tasks {
		if err := p.extractModule(&tasks[i]); err != nil {
			return nil, fmt.Errorf("task %d: %w", i, err)
		}
	}

	return tasks, nil
}

// ParseRole parses a role and returns its tasks.
func (p *Parser) ParseRole(name string, tasksFrom string) ([]Task, error) {
	if tasksFrom == "" {
		tasksFrom = "main.yml"
	}

	// Search paths for roles (in order of precedence)
	searchPaths := []string{
		// Relative to playbook
		filepath.Join(p.basePath, "roles", name, "tasks", tasksFrom),
		// Parent directory roles
		filepath.Join(filepath.Dir(p.basePath), "roles", name, "tasks", tasksFrom),
		// Sibling roles directory
		filepath.Join(p.basePath, "..", "roles", name, "tasks", tasksFrom),
		// playbooks/roles pattern
		filepath.Join(p.basePath, "playbooks", "roles", name, "tasks", tasksFrom),
		// Common DevOps structure
		filepath.Join(filepath.Dir(filepath.Dir(p.basePath)), "roles", name, "tasks", tasksFrom),
	}

	var tasksPath string
	for _, sp := range searchPaths {
		// Clean the path to resolve .. segments
		sp = filepath.Clean(sp)
		if _, err := os.Stat(sp); err == nil {
			tasksPath = sp
			break
		}
	}

	if tasksPath == "" {
		return nil, log.E("parser.ParseRole", fmt.Sprintf("role %s not found in search paths: %v", name, searchPaths), nil)
	}

	// Load role defaults
	defaultsPath := filepath.Join(filepath.Dir(filepath.Dir(tasksPath)), "defaults", "main.yml")
	if data, err := os.ReadFile(defaultsPath); err == nil {
		var defaults map[string]any
		if yaml.Unmarshal(data, &defaults) == nil {
			for k, v := range defaults {
				if _, exists := p.vars[k]; !exists {
					p.vars[k] = v
				}
			}
		}
	}

	// Load role vars
	varsPath := filepath.Join(filepath.Dir(filepath.Dir(tasksPath)), "vars", "main.yml")
	if data, err := os.ReadFile(varsPath); err == nil {
		var roleVars map[string]any
		if yaml.Unmarshal(data, &roleVars) == nil {
			for k, v := range roleVars {
				p.vars[k] = v
			}
		}
	}

	return p.ParseTasks(tasksPath)
}

// processPlay processes a play and extracts modules from tasks.
func (p *Parser) processPlay(play *Play) error {
	// Merge play vars
	for k, v := range play.Vars {
		p.vars[k] = v
	}

	for i := range play.PreTasks {
		if err := p.extractModule(&play.PreTasks[i]); err != nil {
			return fmt.Errorf("pre_task %d: %w", i, err)
		}
	}

	for i := range play.Tasks {
		if err := p.extractModule(&play.Tasks[i]); err != nil {
			return fmt.Errorf("task %d: %w", i, err)
		}
	}

	for i := range play.PostTasks {
		if err := p.extractModule(&play.PostTasks[i]); err != nil {
			return fmt.Errorf("post_task %d: %w", i, err)
		}
	}

	for i := range play.Handlers {
		if err := p.extractModule(&play.Handlers[i]); err != nil {
			return fmt.Errorf("handler %d: %w", i, err)
		}
	}

	return nil
}

// extractModule extracts the module name and args from a task.
func (p *Parser) extractModule(task *Task) error {
	// First, unmarshal the raw YAML to get all keys
	// This is a workaround since we need to find the module key dynamically

	// Handle block tasks
	for i := range task.Block {
		if err := p.extractModule(&task.Block[i]); err != nil {
			return err
		}
	}
	for i := range task.Rescue {
		if err := p.extractModule(&task.Rescue[i]); err != nil {
			return err
		}
	}
	for i := range task.Always {
		if err := p.extractModule(&task.Always[i]); err != nil {
			return err
		}
	}

	return nil
}

// UnmarshalYAML implements custom YAML unmarshaling for Task.
func (t *Task) UnmarshalYAML(node *yaml.Node) error {
	// First decode known fields
	type rawTask Task
	var raw rawTask

	// Create a map to capture all fields
	var m map[string]any
	if err := node.Decode(&m); err != nil {
		return err
	}

	// Decode into struct
	if err := node.Decode(&raw); err != nil {
		return err
	}
	*t = Task(raw)
	t.raw = m

	// Find the module key
	knownKeys := map[string]bool{
		"name": true, "register": true, "when": true, "loop": true,
		"loop_control": true, "vars": true, "environment": true,
		"changed_when": true, "failed_when": true, "ignore_errors": true,
		"no_log": true, "become": true, "become_user": true,
		"delegate_to": true, "run_once": true, "tags": true,
		"block": true, "rescue": true, "always": true, "notify": true,
		"retries": true, "delay": true, "until": true,
		"include_tasks": true, "import_tasks": true,
		"include_role": true, "import_role": true,
		"with_items": true, "with_dict": true, "with_file": true,
	}

	for key, val := range m {
		if knownKeys[key] {
			continue
		}

		// Check if this is a module
		if isModule(key) {
			t.Module = key
			t.Args = make(map[string]any)

			switch v := val.(type) {
			case string:
				// Free-form args (e.g., shell: echo hello)
				t.Args["_raw_params"] = v
			case map[string]any:
				t.Args = v
			case nil:
				// Module with no args
			default:
				t.Args["_raw_params"] = v
			}
			break
		}
	}

	// Handle with_items as loop
	if items, ok := m["with_items"]; ok && t.Loop == nil {
		t.Loop = items
	}

	return nil
}

// isModule checks if a key is a known module.
func isModule(key string) bool {
	for _, m := range KnownModules {
		if key == m {
			return true
		}
		// Also check without ansible.builtin. prefix
		if strings.HasPrefix(m, "ansible.builtin.") {
			if key == strings.TrimPrefix(m, "ansible.builtin.") {
				return true
			}
		}
	}
	// Accept any key with dots (likely a module)
	return strings.Contains(key, ".")
}

// NormalizeModule normalizes a module name to its canonical form.
func NormalizeModule(name string) string {
	// Add ansible.builtin. prefix if missing
	if !strings.Contains(name, ".") {
		return "ansible.builtin." + name
	}
	return name
}

// GetHosts returns hosts matching a pattern from inventory.
func GetHosts(inv *Inventory, pattern string) []string {
	if pattern == "all" {
		return getAllHosts(inv.All)
	}
	if pattern == "localhost" {
		return []string{"localhost"}
	}

	// Check if it's a group name
	hosts := getGroupHosts(inv.All, pattern)
	if len(hosts) > 0 {
		return hosts
	}

	// Check if it's a specific host
	if hasHost(inv.All, pattern) {
		return []string{pattern}
	}

	// Handle patterns with : (intersection/union)
	// For now, just return empty
	return nil
}

func getAllHosts(group *InventoryGroup) []string {
	if group == nil {
		return nil
	}

	var hosts []string
	for name := range group.Hosts {
		hosts = append(hosts, name)
	}
	for _, child := range group.Children {
		hosts = append(hosts, getAllHosts(child)...)
	}
	return hosts
}

func getGroupHosts(group *InventoryGroup, name string) []string {
	if group == nil {
		return nil
	}

	// Check children for the group name
	if child, ok := group.Children[name]; ok {
		return getAllHosts(child)
	}

	// Recurse
	for _, child := range group.Children {
		if hosts := getGroupHosts(child, name); len(hosts) > 0 {
			return hosts
		}
	}

	return nil
}

func hasHost(group *InventoryGroup, name string) bool {
	if group == nil {
		return false
	}

	if _, ok := group.Hosts[name]; ok {
		return true
	}

	for _, child := range group.Children {
		if hasHost(child, name) {
			return true
		}
	}

	return false
}

// GetHostVars returns variables for a specific host.
func GetHostVars(inv *Inventory, hostname string) map[string]any {
	vars := make(map[string]any)

	// Collect vars from all levels
	collectHostVars(inv.All, hostname, vars)

	return vars
}

func collectHostVars(group *InventoryGroup, hostname string, vars map[string]any) bool {
	if group == nil {
		return false
	}

	// Check if host is in this group
	found := false
	if host, ok := group.Hosts[hostname]; ok {
		found = true
		// Apply group vars first
		for k, v := range group.Vars {
			vars[k] = v
		}
		// Then host vars
		if host != nil {
			if host.AnsibleHost != "" {
				vars["ansible_host"] = host.AnsibleHost
			}
			if host.AnsiblePort != 0 {
				vars["ansible_port"] = host.AnsiblePort
			}
			if host.AnsibleUser != "" {
				vars["ansible_user"] = host.AnsibleUser
			}
			if host.AnsiblePassword != "" {
				vars["ansible_password"] = host.AnsiblePassword
			}
			if host.AnsibleSSHPrivateKeyFile != "" {
				vars["ansible_ssh_private_key_file"] = host.AnsibleSSHPrivateKeyFile
			}
			if host.AnsibleConnection != "" {
				vars["ansible_connection"] = host.AnsibleConnection
			}
			for k, v := range host.Vars {
				vars[k] = v
			}
		}
	}

	// Check children
	for _, child := range group.Children {
		if collectHostVars(child, hostname, vars) {
			// Apply this group's vars (parent vars)
			for k, v := range group.Vars {
				if _, exists := vars[k]; !exists {
					vars[k] = v
				}
			}
			found = true
		}
	}

	return found
}
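The module-name normalisation rule in `NormalizeModule` can be exercised in isolation: bare names gain the `ansible.builtin.` prefix, while dotted (collection-qualified) names pass through untouched. A minimal sketch, with the function body copied from above (lower-cased name only to keep the sketch self-contained):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeModule canonicalises an Ansible module name, as in the
// parser above: bare names are assumed to be builtin modules.
func normalizeModule(name string) string {
	if !strings.Contains(name, ".") {
		return "ansible.builtin." + name
	}
	return name
}

func main() {
	fmt.Println(normalizeModule("copy"))
	fmt.Println(normalizeModule("community.general.ufw"))
}
```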
@ -1,451 +0,0 @@
package ansible

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"net"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"

	"forge.lthn.ai/core/go/pkg/log"
	"golang.org/x/crypto/ssh"
	"golang.org/x/crypto/ssh/knownhosts"
)

// SSHClient handles SSH connections to remote hosts.
type SSHClient struct {
	host       string
	port       int
	user       string
	password   string
	keyFile    string
	client     *ssh.Client
	mu         sync.Mutex
	become     bool
	becomeUser string
	becomePass string
	timeout    time.Duration
}

// SSHConfig holds SSH connection configuration.
type SSHConfig struct {
	Host       string
	Port       int
	User       string
	Password   string
	KeyFile    string
	Become     bool
	BecomeUser string
	BecomePass string
	Timeout    time.Duration
}

// NewSSHClient creates a new SSH client.
func NewSSHClient(cfg SSHConfig) (*SSHClient, error) {
	if cfg.Port == 0 {
		cfg.Port = 22
	}
	if cfg.User == "" {
		cfg.User = "root"
	}
	if cfg.Timeout == 0 {
		cfg.Timeout = 30 * time.Second
	}

	client := &SSHClient{
		host:       cfg.Host,
		port:       cfg.Port,
		user:       cfg.User,
		password:   cfg.Password,
		keyFile:    cfg.KeyFile,
		become:     cfg.Become,
		becomeUser: cfg.BecomeUser,
		becomePass: cfg.BecomePass,
		timeout:    cfg.Timeout,
	}

	return client, nil
}

// Connect establishes the SSH connection.
func (c *SSHClient) Connect(ctx context.Context) error {
	c.mu.Lock()
	defer c.mu.Unlock()

	if c.client != nil {
		return nil
	}

	var authMethods []ssh.AuthMethod

	// Try key-based auth first
	if c.keyFile != "" {
		keyPath := c.keyFile
		if strings.HasPrefix(keyPath, "~") {
			home, _ := os.UserHomeDir()
			keyPath = filepath.Join(home, keyPath[1:])
		}

		if key, err := os.ReadFile(keyPath); err == nil {
			if signer, err := ssh.ParsePrivateKey(key); err == nil {
				authMethods = append(authMethods, ssh.PublicKeys(signer))
			}
		}
	}

	// Try default SSH keys
	if len(authMethods) == 0 {
		home, _ := os.UserHomeDir()
		defaultKeys := []string{
			filepath.Join(home, ".ssh", "id_ed25519"),
			filepath.Join(home, ".ssh", "id_rsa"),
		}
		for _, keyPath := range defaultKeys {
			if key, err := os.ReadFile(keyPath); err == nil {
				if signer, err := ssh.ParsePrivateKey(key); err == nil {
					authMethods = append(authMethods, ssh.PublicKeys(signer))
					break
				}
			}
		}
	}

	// Fall back to password auth
	if c.password != "" {
		authMethods = append(authMethods, ssh.Password(c.password))
		authMethods = append(authMethods, ssh.KeyboardInteractive(func(user, instruction string, questions []string, echos []bool) ([]string, error) {
			answers := make([]string, len(questions))
			for i := range questions {
				answers[i] = c.password
			}
			return answers, nil
		}))
	}

	if len(authMethods) == 0 {
		return log.E("ssh.Connect", "no authentication method available", nil)
	}

	// Host key verification
	var hostKeyCallback ssh.HostKeyCallback

	home, err := os.UserHomeDir()
	if err != nil {
		return log.E("ssh.Connect", "failed to get user home dir", err)
	}
	knownHostsPath := filepath.Join(home, ".ssh", "known_hosts")

	// Ensure known_hosts file exists
	if _, err := os.Stat(knownHostsPath); os.IsNotExist(err) {
		if err := os.MkdirAll(filepath.Dir(knownHostsPath), 0700); err != nil {
			return log.E("ssh.Connect", "failed to create .ssh dir", err)
		}
		if err := os.WriteFile(knownHostsPath, nil, 0600); err != nil {
			return log.E("ssh.Connect", "failed to create known_hosts file", err)
		}
	}

	cb, err := knownhosts.New(knownHostsPath)
	if err != nil {
		return log.E("ssh.Connect", "failed to load known_hosts", err)
	}
	hostKeyCallback = cb

	config := &ssh.ClientConfig{
		User:            c.user,
		Auth:            authMethods,
		HostKeyCallback: hostKeyCallback,
		Timeout:         c.timeout,
	}

	addr := fmt.Sprintf("%s:%d", c.host, c.port)

	// Connect with context timeout
	var d net.Dialer
	conn, err := d.DialContext(ctx, "tcp", addr)
	if err != nil {
		return log.E("ssh.Connect", fmt.Sprintf("dial %s", addr), err)
	}

	sshConn, chans, reqs, err := ssh.NewClientConn(conn, addr, config)
	if err != nil {
		// conn is closed by NewClientConn on error
		return log.E("ssh.Connect", fmt.Sprintf("ssh connect %s", addr), err)
	}

	c.client = ssh.NewClient(sshConn, chans, reqs)
	return nil
}

// Close closes the SSH connection.
func (c *SSHClient) Close() error {
	c.mu.Lock()
	defer c.mu.Unlock()

	if c.client != nil {
		err := c.client.Close()
		c.client = nil
		return err
	}
	return nil
}

// Run executes a command on the remote host.
func (c *SSHClient) Run(ctx context.Context, cmd string) (stdout, stderr string, exitCode int, err error) {
	if err := c.Connect(ctx); err != nil {
		return "", "", -1, err
	}

	session, err := c.client.NewSession()
	if err != nil {
		return "", "", -1, log.E("ssh.Run", "new session", err)
	}
	defer func() { _ = session.Close() }()

	var stdoutBuf, stderrBuf bytes.Buffer
	session.Stdout = &stdoutBuf
	session.Stderr = &stderrBuf

	// Apply become if needed
	if c.become {
		becomeUser := c.becomeUser
		if becomeUser == "" {
			becomeUser = "root"
		}
		// Escape single quotes in the command
		escapedCmd := strings.ReplaceAll(cmd, "'", "'\\''")
		if c.becomePass != "" {
			// Use sudo with password via stdin (-S flag)
			// We launch a goroutine to write the password to stdin
			cmd = fmt.Sprintf("sudo -S -u %s bash -c '%s'", becomeUser, escapedCmd)
			stdin, err := session.StdinPipe()
			if err != nil {
				return "", "", -1, log.E("ssh.Run", "stdin pipe", err)
			}
			go func() {
				defer func() { _ = stdin.Close() }()
				_, _ = io.WriteString(stdin, c.becomePass+"\n")
			}()
		} else if c.password != "" {
			// Try using connection password for sudo
			cmd = fmt.Sprintf("sudo -S -u %s bash -c '%s'", becomeUser, escapedCmd)
			stdin, err := session.StdinPipe()
			if err != nil {
				return "", "", -1, log.E("ssh.Run", "stdin pipe", err)
			}
			go func() {
				defer func() { _ = stdin.Close() }()
				_, _ = io.WriteString(stdin, c.password+"\n")
			}()
		} else {
			// Try passwordless sudo
			cmd = fmt.Sprintf("sudo -n -u %s bash -c '%s'", becomeUser, escapedCmd)
		}
	}

	// Run with context
	done := make(chan error, 1)
	go func() {
		done <- session.Run(cmd)
	}()

	select {
	case <-ctx.Done():
		_ = session.Signal(ssh.SIGKILL)
		return "", "", -1, ctx.Err()
	case err := <-done:
		exitCode = 0
		if err != nil {
			if exitErr, ok := err.(*ssh.ExitError); ok {
				exitCode = exitErr.ExitStatus()
			} else {
				return stdoutBuf.String(), stderrBuf.String(), -1, err
			}
		}
		return stdoutBuf.String(), stderrBuf.String(), exitCode, nil
	}
}

// RunScript runs a script on the remote host.
func (c *SSHClient) RunScript(ctx context.Context, script string) (stdout, stderr string, exitCode int, err error) {
	// Escape the script for heredoc
	cmd := fmt.Sprintf("bash <<'ANSIBLE_SCRIPT_EOF'\n%s\nANSIBLE_SCRIPT_EOF", script)
	return c.Run(ctx, cmd)
}

// Upload copies a file to the remote host.
func (c *SSHClient) Upload(ctx context.Context, local io.Reader, remote string, mode os.FileMode) error {
	if err := c.Connect(ctx); err != nil {
		return err
	}

	// Read content
	content, err := io.ReadAll(local)
	if err != nil {
		return log.E("ssh.Upload", "read content", err)
	}

	// Create parent directory
	dir := filepath.Dir(remote)
	dirCmd := fmt.Sprintf("mkdir -p %q", dir)
	if c.become {
		dirCmd = fmt.Sprintf("sudo mkdir -p %q", dir)
	}
	if _, _, _, err := c.Run(ctx, dirCmd); err != nil {
		return log.E("ssh.Upload", "create parent dir", err)
	}

	// Use cat to write the file (simpler than SCP)
	writeCmd := fmt.Sprintf("cat > %q && chmod %o %q", remote, mode, remote)

	// If become is needed, we construct a command that reads password then content from stdin
	// But we need to be careful with handling stdin for sudo + cat.
	// We'll use a session with piped stdin.

	session2, err := c.client.NewSession()
	if err != nil {
|
||||
return log.E("ssh.Upload", "new session for write", err)
|
||||
}
|
||||
defer func() { _ = session2.Close() }()
|
||||
|
||||
stdin, err := session2.StdinPipe()
|
||||
if err != nil {
|
||||
return log.E("ssh.Upload", "stdin pipe", err)
|
||||
}
|
||||
|
||||
var stderrBuf bytes.Buffer
|
||||
session2.Stderr = &stderrBuf
|
||||
|
||||
if c.become {
|
||||
becomeUser := c.becomeUser
|
||||
if becomeUser == "" {
|
||||
becomeUser = "root"
|
||||
}
|
||||
|
||||
pass := c.becomePass
|
||||
if pass == "" {
|
||||
pass = c.password
|
||||
}
|
||||
|
||||
if pass != "" {
|
||||
// Use sudo -S with password from stdin
|
||||
writeCmd = fmt.Sprintf("sudo -S -u %s bash -c 'cat > %q && chmod %o %q'",
|
||||
becomeUser, remote, mode, remote)
|
||||
} else {
|
||||
// Use passwordless sudo (sudo -n) to avoid consuming file content as password
|
||||
writeCmd = fmt.Sprintf("sudo -n -u %s bash -c 'cat > %q && chmod %o %q'",
|
||||
becomeUser, remote, mode, remote)
|
||||
}
|
||||
|
||||
if err := session2.Start(writeCmd); err != nil {
|
||||
return log.E("ssh.Upload", "start write", err)
|
||||
}
|
||||
|
||||
go func() {
|
||||
defer func() { _ = stdin.Close() }()
|
||||
if pass != "" {
|
||||
_, _ = io.WriteString(stdin, pass+"\n")
|
||||
}
|
||||
_, _ = stdin.Write(content)
|
||||
}()
|
||||
} else {
|
||||
// Normal write
|
||||
if err := session2.Start(writeCmd); err != nil {
|
||||
return log.E("ssh.Upload", "start write", err)
|
||||
}
|
||||
|
||||
go func() {
|
||||
defer func() { _ = stdin.Close() }()
|
||||
_, _ = stdin.Write(content)
|
||||
}()
|
||||
}
|
||||
|
||||
if err := session2.Wait(); err != nil {
|
||||
return log.E("ssh.Upload", fmt.Sprintf("write failed (stderr: %s)", stderrBuf.String()), err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Download copies a file from the remote host.
|
||||
func (c *SSHClient) Download(ctx context.Context, remote string) ([]byte, error) {
|
||||
if err := c.Connect(ctx); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
cmd := fmt.Sprintf("cat %q", remote)
|
||||
|
||||
stdout, stderr, exitCode, err := c.Run(ctx, cmd)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if exitCode != 0 {
|
||||
return nil, log.E("ssh.Download", fmt.Sprintf("cat failed: %s", stderr), nil)
|
||||
}
|
||||
|
||||
return []byte(stdout), nil
|
||||
}
|
||||
|
||||
// FileExists checks if a file exists on the remote host.
|
||||
func (c *SSHClient) FileExists(ctx context.Context, path string) (bool, error) {
|
||||
cmd := fmt.Sprintf("test -e %q && echo yes || echo no", path)
|
||||
stdout, _, exitCode, err := c.Run(ctx, cmd)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
if exitCode != 0 {
|
||||
// test command failed but didn't error - file doesn't exist
|
||||
return false, nil
|
||||
}
|
||||
return strings.TrimSpace(stdout) == "yes", nil
|
||||
}
|
||||
|
||||
// Stat returns file info from the remote host.
|
||||
func (c *SSHClient) Stat(ctx context.Context, path string) (map[string]any, error) {
|
||||
// Simple approach - get basic file info
|
||||
cmd := fmt.Sprintf(`
|
||||
if [ -e %q ]; then
|
||||
if [ -d %q ]; then
|
||||
echo "exists=true isdir=true"
|
||||
else
|
||||
echo "exists=true isdir=false"
|
||||
fi
|
||||
else
|
||||
echo "exists=false"
|
||||
fi
|
||||
`, path, path)
|
||||
|
||||
stdout, _, _, err := c.Run(ctx, cmd)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
result := make(map[string]any)
|
||||
parts := strings.Fields(strings.TrimSpace(stdout))
|
||||
for _, part := range parts {
|
||||
kv := strings.SplitN(part, "=", 2)
|
||||
if len(kv) == 2 {
|
||||
result[kv[0]] = kv[1] == "true"
|
||||
}
|
||||
}
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// SetBecome enables privilege escalation.
|
||||
func (c *SSHClient) SetBecome(become bool, user, password string) {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
c.become = become
|
||||
if user != "" {
|
||||
c.becomeUser = user
|
||||
}
|
||||
if password != "" {
|
||||
c.becomePass = password
|
||||
}
|
||||
}
|
||||
|
|
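The single-quote escaping that Run applies before wrapping a command in `sudo ... bash -c '...'` is easy to get wrong, so here is a minimal, self-contained sketch of just that step. The helper name `escapeSingleQuotes` is hypothetical, extracted for illustration; it performs the same `strings.ReplaceAll` as the code above.

```go
package main

import (
	"fmt"
	"strings"
)

// escapeSingleQuotes mirrors the escaping used by Run: each ' in the
// command becomes '\'' (close quote, escaped quote, reopen quote), so the
// whole command can be safely wrapped in single quotes.
func escapeSingleQuotes(cmd string) string {
	return strings.ReplaceAll(cmd, "'", `'\''`)
}

func main() {
	cmd := "echo 'hello world'"
	wrapped := fmt.Sprintf("bash -c '%s'", escapeSingleQuotes(cmd))
	fmt.Println(wrapped)
}
```

The wrapped string round-trips through the remote shell unchanged, which is why Run can pass arbitrary commands through sudo without a quoting layer per argument.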
@@ -1,36 +0,0 @@
package ansible

import (
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

func TestNewSSHClient(t *testing.T) {
	cfg := SSHConfig{
		Host: "localhost",
		Port: 2222,
		User: "root",
	}

	client, err := NewSSHClient(cfg)
	assert.NoError(t, err)
	assert.NotNil(t, client)
	assert.Equal(t, "localhost", client.host)
	assert.Equal(t, 2222, client.port)
	assert.Equal(t, "root", client.user)
	assert.Equal(t, 30*time.Second, client.timeout)
}

func TestSSHConfig_Defaults(t *testing.T) {
	cfg := SSHConfig{
		Host: "localhost",
	}

	client, err := NewSSHClient(cfg)
	assert.NoError(t, err)
	assert.Equal(t, 22, client.port)
	assert.Equal(t, "root", client.user)
	assert.Equal(t, 30*time.Second, client.timeout)
}
@@ -1,258 +0,0 @@
package ansible

import (
	"time"
)

// Playbook represents an Ansible playbook.
type Playbook struct {
	Plays []Play `yaml:",inline"`
}

// Play represents a single play in a playbook.
type Play struct {
	Name           string            `yaml:"name"`
	Hosts          string            `yaml:"hosts"`
	Connection     string            `yaml:"connection,omitempty"`
	Become         bool              `yaml:"become,omitempty"`
	BecomeUser     string            `yaml:"become_user,omitempty"`
	GatherFacts    *bool             `yaml:"gather_facts,omitempty"`
	Vars           map[string]any    `yaml:"vars,omitempty"`
	PreTasks       []Task            `yaml:"pre_tasks,omitempty"`
	Tasks          []Task            `yaml:"tasks,omitempty"`
	PostTasks      []Task            `yaml:"post_tasks,omitempty"`
	Roles          []RoleRef         `yaml:"roles,omitempty"`
	Handlers       []Task            `yaml:"handlers,omitempty"`
	Tags           []string          `yaml:"tags,omitempty"`
	Environment    map[string]string `yaml:"environment,omitempty"`
	Serial         any               `yaml:"serial,omitempty"` // int or string
	MaxFailPercent int               `yaml:"max_fail_percentage,omitempty"`
}

// RoleRef represents a role reference in a play.
type RoleRef struct {
	Role      string         `yaml:"role,omitempty"`
	Name      string         `yaml:"name,omitempty"` // Alternative to role
	TasksFrom string         `yaml:"tasks_from,omitempty"`
	Vars      map[string]any `yaml:"vars,omitempty"`
	When      any            `yaml:"when,omitempty"`
	Tags      []string       `yaml:"tags,omitempty"`
}

// UnmarshalYAML handles both string and struct role refs.
func (r *RoleRef) UnmarshalYAML(unmarshal func(any) error) error {
	// Try a bare string first.
	var s string
	if err := unmarshal(&s); err == nil {
		r.Role = s
		return nil
	}

	// Fall back to the struct form, using a type alias to avoid recursion.
	type rawRoleRef RoleRef
	var raw rawRoleRef
	if err := unmarshal(&raw); err != nil {
		return err
	}
	*r = RoleRef(raw)
	if r.Role == "" && r.Name != "" {
		r.Role = r.Name
	}
	return nil
}
// Task represents an Ansible task.
type Task struct {
	Name         string            `yaml:"name,omitempty"`
	Module       string            `yaml:"-"` // Derived from the module key
	Args         map[string]any    `yaml:"-"` // Module arguments
	Register     string            `yaml:"register,omitempty"`
	When         any               `yaml:"when,omitempty"` // string or []string
	Loop         any               `yaml:"loop,omitempty"` // string or []any
	LoopControl  *LoopControl      `yaml:"loop_control,omitempty"`
	Vars         map[string]any    `yaml:"vars,omitempty"`
	Environment  map[string]string `yaml:"environment,omitempty"`
	ChangedWhen  any               `yaml:"changed_when,omitempty"`
	FailedWhen   any               `yaml:"failed_when,omitempty"`
	IgnoreErrors bool              `yaml:"ignore_errors,omitempty"`
	NoLog        bool              `yaml:"no_log,omitempty"`
	Become       *bool             `yaml:"become,omitempty"`
	BecomeUser   string            `yaml:"become_user,omitempty"`
	Delegate     string            `yaml:"delegate_to,omitempty"`
	RunOnce      bool              `yaml:"run_once,omitempty"`
	Tags         []string          `yaml:"tags,omitempty"`
	Block        []Task            `yaml:"block,omitempty"`
	Rescue       []Task            `yaml:"rescue,omitempty"`
	Always       []Task            `yaml:"always,omitempty"`
	Notify       any               `yaml:"notify,omitempty"` // string or []string
	Retries      int               `yaml:"retries,omitempty"`
	Delay        int               `yaml:"delay,omitempty"`
	Until        string            `yaml:"until,omitempty"`

	// Include/import directives
	IncludeTasks string `yaml:"include_tasks,omitempty"`
	ImportTasks  string `yaml:"import_tasks,omitempty"`
	IncludeRole  *struct {
		Name      string         `yaml:"name"`
		TasksFrom string         `yaml:"tasks_from,omitempty"`
		Vars      map[string]any `yaml:"vars,omitempty"`
	} `yaml:"include_role,omitempty"`
	ImportRole *struct {
		Name      string         `yaml:"name"`
		TasksFrom string         `yaml:"tasks_from,omitempty"`
		Vars      map[string]any `yaml:"vars,omitempty"`
	} `yaml:"import_role,omitempty"`

	// Raw YAML for module extraction
	raw map[string]any
}

// LoopControl controls loop behavior.
type LoopControl struct {
	LoopVar  string `yaml:"loop_var,omitempty"`
	IndexVar string `yaml:"index_var,omitempty"`
	Label    string `yaml:"label,omitempty"`
	Pause    int    `yaml:"pause,omitempty"`
	Extended bool   `yaml:"extended,omitempty"`
}

// TaskResult holds the result of executing a task.
type TaskResult struct {
	Changed  bool           `json:"changed"`
	Failed   bool           `json:"failed"`
	Skipped  bool           `json:"skipped"`
	Msg      string         `json:"msg,omitempty"`
	Stdout   string         `json:"stdout,omitempty"`
	Stderr   string         `json:"stderr,omitempty"`
	RC       int            `json:"rc,omitempty"`
	Results  []TaskResult   `json:"results,omitempty"` // For loops
	Data     map[string]any `json:"data,omitempty"`    // Module-specific data
	Duration time.Duration  `json:"duration,omitempty"`
}

// Inventory represents Ansible inventory.
type Inventory struct {
	All *InventoryGroup `yaml:"all"`
}

// InventoryGroup represents a group in inventory.
type InventoryGroup struct {
	Hosts    map[string]*Host           `yaml:"hosts,omitempty"`
	Children map[string]*InventoryGroup `yaml:"children,omitempty"`
	Vars     map[string]any             `yaml:"vars,omitempty"`
}

// Host represents a host in inventory.
type Host struct {
	AnsibleHost              string `yaml:"ansible_host,omitempty"`
	AnsiblePort              int    `yaml:"ansible_port,omitempty"`
	AnsibleUser              string `yaml:"ansible_user,omitempty"`
	AnsiblePassword          string `yaml:"ansible_password,omitempty"`
	AnsibleSSHPrivateKeyFile string `yaml:"ansible_ssh_private_key_file,omitempty"`
	AnsibleConnection        string `yaml:"ansible_connection,omitempty"`
	AnsibleBecomePassword    string `yaml:"ansible_become_password,omitempty"`

	// Custom vars
	Vars map[string]any `yaml:",inline"`
}

// Facts holds gathered facts about a host.
type Facts struct {
	Hostname     string `json:"ansible_hostname"`
	FQDN         string `json:"ansible_fqdn"`
	OS           string `json:"ansible_os_family"`
	Distribution string `json:"ansible_distribution"`
	Version      string `json:"ansible_distribution_version"`
	Architecture string `json:"ansible_architecture"`
	Kernel       string `json:"ansible_kernel"`
	Memory       int64  `json:"ansible_memtotal_mb"`
	CPUs         int    `json:"ansible_processor_vcpus"`
	IPv4         string `json:"ansible_default_ipv4_address"`
}

// KnownModules lists the Ansible modules recognised by this package.
var KnownModules = []string{
	// Builtin
	"ansible.builtin.shell",
	"ansible.builtin.command",
	"ansible.builtin.raw",
	"ansible.builtin.script",
	"ansible.builtin.copy",
	"ansible.builtin.template",
	"ansible.builtin.file",
	"ansible.builtin.lineinfile",
	"ansible.builtin.blockinfile",
	"ansible.builtin.stat",
	"ansible.builtin.slurp",
	"ansible.builtin.fetch",
	"ansible.builtin.get_url",
	"ansible.builtin.uri",
	"ansible.builtin.apt",
	"ansible.builtin.apt_key",
	"ansible.builtin.apt_repository",
	"ansible.builtin.yum",
	"ansible.builtin.dnf",
	"ansible.builtin.package",
	"ansible.builtin.pip",
	"ansible.builtin.service",
	"ansible.builtin.systemd",
	"ansible.builtin.user",
	"ansible.builtin.group",
	"ansible.builtin.cron",
	"ansible.builtin.git",
	"ansible.builtin.unarchive",
	"ansible.builtin.archive",
	"ansible.builtin.debug",
	"ansible.builtin.fail",
	"ansible.builtin.assert",
	"ansible.builtin.pause",
	"ansible.builtin.wait_for",
	"ansible.builtin.set_fact",
	"ansible.builtin.include_vars",
	"ansible.builtin.add_host",
	"ansible.builtin.group_by",
	"ansible.builtin.meta",
	"ansible.builtin.setup",

	// Short forms (legacy)
	"shell",
	"command",
	"raw",
	"script",
	"copy",
	"template",
	"file",
	"lineinfile",
	"blockinfile",
	"stat",
	"slurp",
	"fetch",
	"get_url",
	"uri",
	"apt",
	"apt_key",
	"apt_repository",
	"yum",
	"dnf",
	"package",
	"pip",
	"service",
	"systemd",
	"user",
	"group",
	"cron",
	"git",
	"unarchive",
	"archive",
	"debug",
	"fail",
	"assert",
	"pause",
	"wait_for",
	"set_fact",
	"include_vars",
	"add_host",
	"group_by",
	"meta",
	"setup",
}
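KnownModules lists every builtin module twice - once fully qualified and once in its legacy short form. A sketch of how the two spellings can be normalised to one key; `shortName` is a hypothetical helper for illustration, not part of the package above.

```go
package main

import (
	"fmt"
	"strings"
)

// shortName reduces a fully-qualified builtin module name to its legacy
// short form; short names pass through unchanged.
func shortName(module string) string {
	return strings.TrimPrefix(module, "ansible.builtin.")
}

func main() {
	fmt.Println(shortName("ansible.builtin.apt")) // apt
	fmt.Println(shortName("shell"))               // shell
}
```

Normalising at lookup time means a task may use either spelling while the executor keeps a single dispatch table.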
455 pkg/auth/auth.go

@@ -1,455 +0,0 @@
// Package auth implements OpenPGP challenge-response authentication with
// support for both online (HTTP) and air-gapped (file-based) transport.
//
// Ported from dAppServer's mod-auth/lethean.service.ts.
//
// Authentication Flow (Online):
//
//  1. Client sends public key to server
//  2. Server generates a random nonce, encrypts it with client's public key
//  3. Client decrypts the nonce and signs it with their private key
//  4. Server verifies the signature, creates a session token
//
// Authentication Flow (Air-Gapped / Courier):
//
// Same crypto but challenge/response are exchanged via files on a Medium.
//
// Storage Layout (via Medium):
//
//	users/
//	  {userID}.pub   PGP public key (armored)
//	  {userID}.key   PGP private key (armored, password-encrypted)
//	  {userID}.rev   Revocation certificate (placeholder)
//	  {userID}.json  User metadata (encrypted with user's public key)
//	  {userID}.lthn  LTHN password hash
package auth

import (
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"sync"
	"time"

	coreerr "forge.lthn.ai/core/go/pkg/framework/core"

	"forge.lthn.ai/core/go/pkg/crypt/lthn"
	"forge.lthn.ai/core/go/pkg/crypt/pgp"
	"forge.lthn.ai/core/go/pkg/io"
)

// Default durations for challenge and session lifetimes.
const (
	DefaultChallengeTTL = 5 * time.Minute
	DefaultSessionTTL   = 24 * time.Hour
	nonceBytes          = 32
)

// protectedUsers lists usernames that cannot be deleted.
// The "server" user holds the server keypair; deleting it would
// permanently destroy all joining data and require a full rebuild.
var protectedUsers = map[string]bool{
	"server": true,
}

// User represents a registered user with PGP credentials.
type User struct {
	PublicKey    string    `json:"public_key"`
	KeyID        string    `json:"key_id"`
	Fingerprint  string    `json:"fingerprint"`
	PasswordHash string    `json:"password_hash"` // LTHN hash
	Created      time.Time `json:"created"`
	LastLogin    time.Time `json:"last_login"`
}

// Challenge is a PGP-encrypted nonce sent to a client during authentication.
type Challenge struct {
	Nonce     []byte    `json:"nonce"`
	Encrypted string    `json:"encrypted"` // PGP-encrypted nonce (armored)
	ExpiresAt time.Time `json:"expires_at"`
}

// Session represents an authenticated session.
type Session struct {
	Token     string    `json:"token"`
	UserID    string    `json:"user_id"`
	ExpiresAt time.Time `json:"expires_at"`
}

// Option configures an Authenticator.
type Option func(*Authenticator)

// WithChallengeTTL sets the lifetime of a challenge before it expires.
func WithChallengeTTL(d time.Duration) Option {
	return func(a *Authenticator) {
		a.challengeTTL = d
	}
}

// WithSessionTTL sets the lifetime of a session before it expires.
func WithSessionTTL(d time.Duration) Option {
	return func(a *Authenticator) {
		a.sessionTTL = d
	}
}

// Authenticator manages PGP-based challenge-response authentication.
// All user data and keys are persisted through an io.Medium, which may
// be backed by disk, memory (MockMedium), or any other storage backend.
type Authenticator struct {
	medium       io.Medium
	sessions     map[string]*Session
	challenges   map[string]*Challenge // userID -> pending challenge
	mu           sync.RWMutex
	challengeTTL time.Duration
	sessionTTL   time.Duration
}

// New creates an Authenticator that persists user data via the given Medium.
func New(m io.Medium, opts ...Option) *Authenticator {
	a := &Authenticator{
		medium:       m,
		sessions:     make(map[string]*Session),
		challenges:   make(map[string]*Challenge),
		challengeTTL: DefaultChallengeTTL,
		sessionTTL:   DefaultSessionTTL,
	}
	for _, opt := range opts {
		opt(a)
	}
	return a
}

// userPath returns the storage path for a user artifact.
func userPath(userID, ext string) string {
	return "users/" + userID + ext
}

// Register creates a new user account. It hashes the username with LTHN to
// produce a userID, generates a PGP keypair (protected by the given password),
// and persists the public key, private key, revocation placeholder, password
// hash, and encrypted metadata via the Medium.
func (a *Authenticator) Register(username, password string) (*User, error) {
	const op = "auth.Register"

	userID := lthn.Hash(username)

	// Check if user already exists
	if a.medium.IsFile(userPath(userID, ".pub")) {
		return nil, coreerr.E(op, "user already exists", nil)
	}

	// Ensure users directory exists
	if err := a.medium.EnsureDir("users"); err != nil {
		return nil, coreerr.E(op, "failed to create users directory", err)
	}

	// Generate PGP keypair
	kp, err := pgp.CreateKeyPair(userID, userID+"@auth.local", password)
	if err != nil {
		return nil, coreerr.E(op, "failed to create PGP keypair", err)
	}

	// Store public key
	if err := a.medium.Write(userPath(userID, ".pub"), kp.PublicKey); err != nil {
		return nil, coreerr.E(op, "failed to write public key", err)
	}

	// Store private key (already encrypted by PGP if password is non-empty)
	if err := a.medium.Write(userPath(userID, ".key"), kp.PrivateKey); err != nil {
		return nil, coreerr.E(op, "failed to write private key", err)
	}

	// Store revocation certificate placeholder
	if err := a.medium.Write(userPath(userID, ".rev"), "REVOCATION_PLACEHOLDER"); err != nil {
		return nil, coreerr.E(op, "failed to write revocation certificate", err)
	}

	// Store LTHN password hash
	passwordHash := lthn.Hash(password)
	if err := a.medium.Write(userPath(userID, ".lthn"), passwordHash); err != nil {
		return nil, coreerr.E(op, "failed to write password hash", err)
	}

	// Build user metadata
	now := time.Now()
	user := &User{
		PublicKey:    kp.PublicKey,
		KeyID:        userID,
		Fingerprint:  lthn.Hash(kp.PublicKey),
		PasswordHash: passwordHash,
		Created:      now,
		LastLogin:    time.Time{},
	}

	// Encrypt metadata with the user's public key and store
	metaJSON, err := json.Marshal(user)
	if err != nil {
		return nil, coreerr.E(op, "failed to marshal user metadata", err)
	}

	encMeta, err := pgp.Encrypt(metaJSON, kp.PublicKey)
	if err != nil {
		return nil, coreerr.E(op, "failed to encrypt user metadata", err)
	}

	if err := a.medium.Write(userPath(userID, ".json"), string(encMeta)); err != nil {
		return nil, coreerr.E(op, "failed to write user metadata", err)
	}

	return user, nil
}

// CreateChallenge generates a cryptographic challenge for the given user.
// A random nonce is created and encrypted with the user's PGP public key.
// The client must decrypt the nonce and sign it to prove key ownership.
func (a *Authenticator) CreateChallenge(userID string) (*Challenge, error) {
	const op = "auth.CreateChallenge"

	// Read user's public key
	pubKey, err := a.medium.Read(userPath(userID, ".pub"))
	if err != nil {
		return nil, coreerr.E(op, "user not found", err)
	}

	// Generate random nonce
	nonce := make([]byte, nonceBytes)
	if _, err := rand.Read(nonce); err != nil {
		return nil, coreerr.E(op, "failed to generate nonce", err)
	}

	// Encrypt nonce with user's public key
	encrypted, err := pgp.Encrypt(nonce, pubKey)
	if err != nil {
		return nil, coreerr.E(op, "failed to encrypt nonce", err)
	}

	challenge := &Challenge{
		Nonce:     nonce,
		Encrypted: string(encrypted),
		ExpiresAt: time.Now().Add(a.challengeTTL),
	}

	a.mu.Lock()
	a.challenges[userID] = challenge
	a.mu.Unlock()

	return challenge, nil
}

// ValidateResponse verifies a signed nonce from the client. The client must
// have decrypted the challenge nonce and signed it with their private key.
// On success, a new session is created and returned.
func (a *Authenticator) ValidateResponse(userID string, signedNonce []byte) (*Session, error) {
	const op = "auth.ValidateResponse"

	a.mu.Lock()
	challenge, exists := a.challenges[userID]
	if exists {
		delete(a.challenges, userID)
	}
	a.mu.Unlock()

	if !exists {
		return nil, coreerr.E(op, "no pending challenge for user", nil)
	}

	// Check challenge expiry
	if time.Now().After(challenge.ExpiresAt) {
		return nil, coreerr.E(op, "challenge expired", nil)
	}

	// Read user's public key
	pubKey, err := a.medium.Read(userPath(userID, ".pub"))
	if err != nil {
		return nil, coreerr.E(op, "user not found", err)
	}

	// Verify signature over the original nonce
	if err := pgp.Verify(challenge.Nonce, signedNonce, pubKey); err != nil {
		return nil, coreerr.E(op, "signature verification failed", err)
	}

	return a.createSession(userID)
}

// ValidateSession checks whether a token maps to a valid, non-expired session.
func (a *Authenticator) ValidateSession(token string) (*Session, error) {
	const op = "auth.ValidateSession"

	a.mu.RLock()
	session, exists := a.sessions[token]
	a.mu.RUnlock()

	if !exists {
		return nil, coreerr.E(op, "session not found", nil)
	}

	if time.Now().After(session.ExpiresAt) {
		a.mu.Lock()
		delete(a.sessions, token)
		a.mu.Unlock()
		return nil, coreerr.E(op, "session expired", nil)
	}

	return session, nil
}

// RefreshSession extends the expiry of an existing valid session.
func (a *Authenticator) RefreshSession(token string) (*Session, error) {
	const op = "auth.RefreshSession"

	a.mu.Lock()
	defer a.mu.Unlock()

	session, exists := a.sessions[token]
	if !exists {
		return nil, coreerr.E(op, "session not found", nil)
	}

	if time.Now().After(session.ExpiresAt) {
		delete(a.sessions, token)
		return nil, coreerr.E(op, "session expired", nil)
	}

	session.ExpiresAt = time.Now().Add(a.sessionTTL)
	return session, nil
}

// RevokeSession removes a session, invalidating the token immediately.
func (a *Authenticator) RevokeSession(token string) error {
	const op = "auth.RevokeSession"

	a.mu.Lock()
	defer a.mu.Unlock()

	if _, exists := a.sessions[token]; !exists {
		return coreerr.E(op, "session not found", nil)
	}

	delete(a.sessions, token)
	return nil
}

// DeleteUser removes a user and all associated keys from storage.
// The "server" user is protected and cannot be deleted (mirroring the
// original TypeScript implementation's safeguard).
func (a *Authenticator) DeleteUser(userID string) error {
	const op = "auth.DeleteUser"

	// Protect special users
	if protectedUsers[userID] {
		return coreerr.E(op, "cannot delete protected user", nil)
	}

	// Check user exists
	if !a.medium.IsFile(userPath(userID, ".pub")) {
		return coreerr.E(op, "user not found", nil)
	}

	// Remove all artifacts
	extensions := []string{".pub", ".key", ".rev", ".json", ".lthn"}
	for _, ext := range extensions {
		p := userPath(userID, ext)
		if a.medium.IsFile(p) {
			if err := a.medium.Delete(p); err != nil {
				return coreerr.E(op, "failed to delete "+ext, err)
			}
		}
	}

	// Revoke any active sessions for this user
	a.mu.Lock()
	for token, session := range a.sessions {
		if session.UserID == userID {
			delete(a.sessions, token)
		}
	}
	a.mu.Unlock()

	return nil
}

// Login performs password-based authentication as a convenience method.
// It verifies the password against the stored LTHN hash and, on success,
// creates a new session. This bypasses the PGP challenge-response flow.
func (a *Authenticator) Login(userID, password string) (*Session, error) {
	const op = "auth.Login"

	// Read stored password hash
	storedHash, err := a.medium.Read(userPath(userID, ".lthn"))
	if err != nil {
		return nil, coreerr.E(op, "user not found", err)
	}

	// Verify password
	if !lthn.Verify(password, storedHash) {
		return nil, coreerr.E(op, "invalid password", nil)
	}

	return a.createSession(userID)
}

// WriteChallengeFile writes an encrypted challenge to a file for air-gapped
// (courier) transport. The challenge is created and then its encrypted nonce
// is written to the specified path on the Medium.
func (a *Authenticator) WriteChallengeFile(userID, path string) error {
	const op = "auth.WriteChallengeFile"

	challenge, err := a.CreateChallenge(userID)
	if err != nil {
		return coreerr.E(op, "failed to create challenge", err)
	}

	data, err := json.Marshal(challenge)
	if err != nil {
		return coreerr.E(op, "failed to marshal challenge", err)
	}

	if err := a.medium.Write(path, string(data)); err != nil {
		return coreerr.E(op, "failed to write challenge file", err)
	}

	return nil
}

// ReadResponseFile reads a signed response from a file and validates it,
// completing the air-gapped authentication flow. The file must contain the
// raw PGP signature bytes (armored).
func (a *Authenticator) ReadResponseFile(userID, path string) (*Session, error) {
	const op = "auth.ReadResponseFile"

	content, err := a.medium.Read(path)
	if err != nil {
		return nil, coreerr.E(op, "failed to read response file", err)
	}

	session, err := a.ValidateResponse(userID, []byte(content))
	if err != nil {
		return nil, coreerr.E(op, "failed to validate response", err)
	}

	return session, nil
}

// createSession generates a cryptographically random session token and
// stores the session in the in-memory session map.
func (a *Authenticator) createSession(userID string) (*Session, error) {
	tokenBytes := make([]byte, 32)
	if _, err := rand.Read(tokenBytes); err != nil {
		return nil, fmt.Errorf("auth: failed to generate session token: %w", err)
	}

	session := &Session{
		Token:     hex.EncodeToString(tokenBytes),
		UserID:    userID,
		ExpiresAt: time.Now().Add(a.sessionTTL),
	}

	a.mu.Lock()
	a.sessions[session.Token] = session
	a.mu.Unlock()

	return session, nil
}
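createSession's token scheme - 32 bytes from crypto/rand, hex-encoded - can be exercised on its own with the standard library. The helper name `newToken` below is hypothetical, pulled out for illustration; it mirrors what createSession does before storing the session.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newToken mirrors createSession's token generation: 32 random bytes,
// hex-encoded to a 64-character session token.
func newToken() (string, error) {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

func main() {
	tok, err := newToken()
	if err != nil {
		panic(err)
	}
	fmt.Println(len(tok)) // 64
}
```

Because the token is random rather than derived from the userID, revoking it (deleting the map entry) leaks nothing about the user, and two logins by the same user yield independent tokens.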
@@ -1,581 +0,0 @@
package auth

import (
	"encoding/json"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"forge.lthn.ai/core/go/pkg/crypt/lthn"
	"forge.lthn.ai/core/go/pkg/crypt/pgp"
	"forge.lthn.ai/core/go/pkg/io"
)

// helper creates a fresh Authenticator backed by MockMedium.
func newTestAuth(opts ...Option) (*Authenticator, *io.MockMedium) {
	m := io.NewMockMedium()
	a := New(m, opts...)
	return a, m
}

// --- Register ---

func TestRegister_Good(t *testing.T) {
	a, m := newTestAuth()

	user, err := a.Register("alice", "hunter2")
	require.NoError(t, err)
	require.NotNil(t, user)

	userID := lthn.Hash("alice")

	// Verify public key is stored
	assert.True(t, m.IsFile(userPath(userID, ".pub")))
	assert.True(t, m.IsFile(userPath(userID, ".key")))
	assert.True(t, m.IsFile(userPath(userID, ".rev")))
	assert.True(t, m.IsFile(userPath(userID, ".json")))
	assert.True(t, m.IsFile(userPath(userID, ".lthn")))

	// Verify user fields
	assert.NotEmpty(t, user.PublicKey)
	assert.Equal(t, userID, user.KeyID)
	assert.NotEmpty(t, user.Fingerprint)
	assert.Equal(t, lthn.Hash("hunter2"), user.PasswordHash)
	assert.False(t, user.Created.IsZero())
}

func TestRegister_Bad(t *testing.T) {
	a, _ := newTestAuth()

	// Register first time succeeds
	_, err := a.Register("bob", "pass1")
	require.NoError(t, err)

	// Duplicate registration should fail
	_, err = a.Register("bob", "pass2")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "user already exists")
}

func TestRegister_Ugly(t *testing.T) {
	a, _ := newTestAuth()

	// Empty username/password should still work (PGP allows it)
	user, err := a.Register("", "")
	require.NoError(t, err)
	require.NotNil(t, user)
}

// --- CreateChallenge ---

func TestCreateChallenge_Good(t *testing.T) {
	a, _ := newTestAuth()

	user, err := a.Register("charlie", "pass")
	require.NoError(t, err)

	challenge, err := a.CreateChallenge(user.KeyID)
	require.NoError(t, err)
	require.NotNil(t, challenge)

	assert.Len(t, challenge.Nonce, nonceBytes)
	assert.NotEmpty(t, challenge.Encrypted)
	assert.True(t, challenge.ExpiresAt.After(time.Now()))
}

func TestCreateChallenge_Bad(t *testing.T) {
	a, _ := newTestAuth()

	// Challenge for non-existent user
	_, err := a.CreateChallenge("nonexistent-user-id")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "user not found")
}

func TestCreateChallenge_Ugly(t *testing.T) {
	a, _ := newTestAuth()

	// Empty userID
	_, err := a.CreateChallenge("")
	assert.Error(t, err)
}

// --- ValidateResponse (full challenge-response flow) ---

func TestValidateResponse_Good(t *testing.T) {
	a, m := newTestAuth()

	// Register user
	_, err := a.Register("dave", "password123")
	require.NoError(t, err)

	userID := lthn.Hash("dave")

	// Create challenge
	challenge, err := a.CreateChallenge(userID)
	require.NoError(t, err)

	// Client-side: decrypt nonce, then sign it
	privKey, err := m.Read(userPath(userID, ".key"))
	require.NoError(t, err)

	decryptedNonce, err := pgp.Decrypt([]byte(challenge.Encrypted), privKey, "password123")
	require.NoError(t, err)
	assert.Equal(t, challenge.Nonce, decryptedNonce)

	signedNonce, err := pgp.Sign(decryptedNonce, privKey, "password123")
	require.NoError(t, err)

	// Validate response
	session, err := a.ValidateResponse(userID, signedNonce)
	require.NoError(t, err)
	require.NotNil(t, session)

	assert.NotEmpty(t, session.Token)
	assert.Equal(t, userID, session.UserID)
	assert.True(t, session.ExpiresAt.After(time.Now()))
}

func TestValidateResponse_Bad(t *testing.T) {
	a, _ := newTestAuth()

	_, err := a.Register("eve", "pass")
	require.NoError(t, err)
	userID := lthn.Hash("eve")

	// No pending challenge
	_, err = a.ValidateResponse(userID, []byte("fake-signature"))
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "no pending challenge")
}

func TestValidateResponse_Ugly(t *testing.T) {
	a, m := newTestAuth(WithChallengeTTL(1 * time.Millisecond))

	_, err := a.Register("frank", "pass")
	require.NoError(t, err)
	userID := lthn.Hash("frank")

	// Create challenge and let it expire
	challenge, err := a.CreateChallenge(userID)
	require.NoError(t, err)

	time.Sleep(5 * time.Millisecond)

	// Sign with valid key but expired challenge
	privKey, err := m.Read(userPath(userID, ".key"))
	require.NoError(t, err)

	signedNonce, err := pgp.Sign(challenge.Nonce, privKey, "pass")
	require.NoError(t, err)

	_, err = a.ValidateResponse(userID, signedNonce)
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "challenge expired")
}

// --- ValidateSession ---

func TestValidateSession_Good(t *testing.T) {
	a, _ := newTestAuth()

	_, err := a.Register("grace", "pass")
	require.NoError(t, err)
	userID := lthn.Hash("grace")

	session, err := a.Login(userID, "pass")
	require.NoError(t, err)

	validated, err := a.ValidateSession(session.Token)
	require.NoError(t, err)
	assert.Equal(t, session.Token, validated.Token)
	assert.Equal(t, userID, validated.UserID)
}

func TestValidateSession_Bad(t *testing.T) {
	a, _ := newTestAuth()

	_, err := a.ValidateSession("nonexistent-token")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "session not found")
}

func TestValidateSession_Ugly(t *testing.T) {
	a, _ := newTestAuth(WithSessionTTL(1 * time.Millisecond))

	_, err := a.Register("heidi", "pass")
	require.NoError(t, err)
	userID := lthn.Hash("heidi")

	session, err := a.Login(userID, "pass")
	require.NoError(t, err)

	time.Sleep(5 * time.Millisecond)

	_, err = a.ValidateSession(session.Token)
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "session expired")
}

// --- RefreshSession ---

func TestRefreshSession_Good(t *testing.T) {
	a, _ := newTestAuth(WithSessionTTL(1 * time.Hour))

	_, err := a.Register("ivan", "pass")
	require.NoError(t, err)
	userID := lthn.Hash("ivan")

	session, err := a.Login(userID, "pass")
	require.NoError(t, err)

	originalExpiry := session.ExpiresAt

	// Small delay to ensure time moves forward
	time.Sleep(2 * time.Millisecond)

	refreshed, err := a.RefreshSession(session.Token)
	require.NoError(t, err)
	assert.True(t, refreshed.ExpiresAt.After(originalExpiry))
}

func TestRefreshSession_Bad(t *testing.T) {
	a, _ := newTestAuth()

	_, err := a.RefreshSession("nonexistent-token")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "session not found")
}

func TestRefreshSession_Ugly(t *testing.T) {
	a, _ := newTestAuth(WithSessionTTL(1 * time.Millisecond))

	_, err := a.Register("judy", "pass")
	require.NoError(t, err)
	userID := lthn.Hash("judy")

	session, err := a.Login(userID, "pass")
	require.NoError(t, err)

	time.Sleep(5 * time.Millisecond)

	_, err = a.RefreshSession(session.Token)
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "session expired")
}

// --- RevokeSession ---

func TestRevokeSession_Good(t *testing.T) {
	a, _ := newTestAuth()

	_, err := a.Register("karl", "pass")
	require.NoError(t, err)
	userID := lthn.Hash("karl")

	session, err := a.Login(userID, "pass")
	require.NoError(t, err)

	err = a.RevokeSession(session.Token)
	require.NoError(t, err)

	// Token should no longer be valid
	_, err = a.ValidateSession(session.Token)
	assert.Error(t, err)
}

func TestRevokeSession_Bad(t *testing.T) {
	a, _ := newTestAuth()

	err := a.RevokeSession("nonexistent-token")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "session not found")
}

func TestRevokeSession_Ugly(t *testing.T) {
	a, _ := newTestAuth()

	// Revoke empty token
	err := a.RevokeSession("")
	assert.Error(t, err)
}

// --- DeleteUser ---

func TestDeleteUser_Good(t *testing.T) {
	a, m := newTestAuth()

	_, err := a.Register("larry", "pass")
	require.NoError(t, err)
	userID := lthn.Hash("larry")

	// Also create a session that should be cleaned up
	_, err = a.Login(userID, "pass")
	require.NoError(t, err)

	err = a.DeleteUser(userID)
	require.NoError(t, err)

	// All files should be gone
	assert.False(t, m.IsFile(userPath(userID, ".pub")))
	assert.False(t, m.IsFile(userPath(userID, ".key")))
	assert.False(t, m.IsFile(userPath(userID, ".rev")))
	assert.False(t, m.IsFile(userPath(userID, ".json")))
	assert.False(t, m.IsFile(userPath(userID, ".lthn")))

	// Session should be gone
	a.mu.RLock()
	sessionCount := 0
	for _, s := range a.sessions {
		if s.UserID == userID {
			sessionCount++
		}
	}
	a.mu.RUnlock()
	assert.Equal(t, 0, sessionCount)
}

func TestDeleteUser_Bad(t *testing.T) {
	a, _ := newTestAuth()

	// Protected user "server" cannot be deleted
	err := a.DeleteUser("server")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "cannot delete protected user")
}

func TestDeleteUser_Ugly(t *testing.T) {
	a, _ := newTestAuth()

	// Non-existent user
	err := a.DeleteUser("nonexistent-user-id")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "user not found")
}

// --- Login ---

func TestLogin_Good(t *testing.T) {
	a, _ := newTestAuth()

	_, err := a.Register("mallory", "secret")
	require.NoError(t, err)
	userID := lthn.Hash("mallory")

	session, err := a.Login(userID, "secret")
	require.NoError(t, err)
	require.NotNil(t, session)

	assert.NotEmpty(t, session.Token)
	assert.Equal(t, userID, session.UserID)
	assert.True(t, session.ExpiresAt.After(time.Now()))
}

func TestLogin_Bad(t *testing.T) {
	a, _ := newTestAuth()

	_, err := a.Register("nancy", "correct-password")
	require.NoError(t, err)
	userID := lthn.Hash("nancy")

	// Wrong password
	_, err = a.Login(userID, "wrong-password")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "invalid password")
}

func TestLogin_Ugly(t *testing.T) {
	a, _ := newTestAuth()

	// Login for non-existent user
	_, err := a.Login("nonexistent-user-id", "pass")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "user not found")
}

// --- WriteChallengeFile / ReadResponseFile (Air-Gapped) ---

func TestAirGappedFlow_Good(t *testing.T) {
	a, m := newTestAuth()

	_, err := a.Register("oscar", "airgap-pass")
	require.NoError(t, err)
	userID := lthn.Hash("oscar")

	// Write challenge to file
	challengePath := "transfer/challenge.json"
	err = a.WriteChallengeFile(userID, challengePath)
	require.NoError(t, err)
	assert.True(t, m.IsFile(challengePath))

	// Read challenge file to get the encrypted nonce (simulating courier)
	challengeData, err := m.Read(challengePath)
	require.NoError(t, err)

	var challenge Challenge
	err = json.Unmarshal([]byte(challengeData), &challenge)
	require.NoError(t, err)

	// Client-side: decrypt nonce and sign it
	privKey, err := m.Read(userPath(userID, ".key"))
	require.NoError(t, err)

	decryptedNonce, err := pgp.Decrypt([]byte(challenge.Encrypted), privKey, "airgap-pass")
	require.NoError(t, err)

	signedNonce, err := pgp.Sign(decryptedNonce, privKey, "airgap-pass")
	require.NoError(t, err)

	// Write signed response to file
	responsePath := "transfer/response.sig"
	err = m.Write(responsePath, string(signedNonce))
	require.NoError(t, err)

	// Server reads response file
	session, err := a.ReadResponseFile(userID, responsePath)
	require.NoError(t, err)
	require.NotNil(t, session)

	assert.NotEmpty(t, session.Token)
	assert.Equal(t, userID, session.UserID)
}

func TestWriteChallengeFile_Bad(t *testing.T) {
	a, _ := newTestAuth()

	// Challenge for non-existent user
	err := a.WriteChallengeFile("nonexistent-user", "challenge.json")
	assert.Error(t, err)
}

func TestReadResponseFile_Bad(t *testing.T) {
	a, _ := newTestAuth()

	// Response file does not exist
	_, err := a.ReadResponseFile("some-user", "nonexistent-file.sig")
	assert.Error(t, err)
}

func TestReadResponseFile_Ugly(t *testing.T) {
	a, m := newTestAuth()

	_, err := a.Register("peggy", "pass")
	require.NoError(t, err)
	userID := lthn.Hash("peggy")

	// Create a challenge
	_, err = a.CreateChallenge(userID)
	require.NoError(t, err)

	// Write garbage to response file
	responsePath := "transfer/bad-response.sig"
	err = m.Write(responsePath, "not-a-valid-signature")
	require.NoError(t, err)

	_, err = a.ReadResponseFile(userID, responsePath)
	assert.Error(t, err)
}

// --- Options ---

func TestWithChallengeTTL_Good(t *testing.T) {
	ttl := 30 * time.Second
	a, _ := newTestAuth(WithChallengeTTL(ttl))
	assert.Equal(t, ttl, a.challengeTTL)
}

func TestWithSessionTTL_Good(t *testing.T) {
	ttl := 2 * time.Hour
	a, _ := newTestAuth(WithSessionTTL(ttl))
	assert.Equal(t, ttl, a.sessionTTL)
}

// --- Full Round-Trip (Online Flow) ---

func TestFullRoundTrip_Good(t *testing.T) {
	a, m := newTestAuth()

	// 1. Register
	user, err := a.Register("quinn", "roundtrip-pass")
	require.NoError(t, err)
	require.NotNil(t, user)

	userID := lthn.Hash("quinn")

	// 2. Create challenge
	challenge, err := a.CreateChallenge(userID)
	require.NoError(t, err)

	// 3. Client decrypts + signs
	privKey, err := m.Read(userPath(userID, ".key"))
	require.NoError(t, err)

	nonce, err := pgp.Decrypt([]byte(challenge.Encrypted), privKey, "roundtrip-pass")
	require.NoError(t, err)

	sig, err := pgp.Sign(nonce, privKey, "roundtrip-pass")
	require.NoError(t, err)

	// 4. Server validates, issues session
	session, err := a.ValidateResponse(userID, sig)
	require.NoError(t, err)
	require.NotNil(t, session)

	// 5. Validate session
	validated, err := a.ValidateSession(session.Token)
	require.NoError(t, err)
	assert.Equal(t, session.Token, validated.Token)

	// 6. Refresh session
	refreshed, err := a.RefreshSession(session.Token)
	require.NoError(t, err)
	assert.Equal(t, session.Token, refreshed.Token)

	// 7. Revoke session
	err = a.RevokeSession(session.Token)
	require.NoError(t, err)

	// 8. Session should be invalid now
	_, err = a.ValidateSession(session.Token)
	assert.Error(t, err)
}

// --- Concurrent Access ---

func TestConcurrentSessions_Good(t *testing.T) {
	a, _ := newTestAuth()

	_, err := a.Register("ruth", "pass")
	require.NoError(t, err)
	userID := lthn.Hash("ruth")

	// Create multiple sessions concurrently
	const n = 10
	sessions := make(chan *Session, n)
	errs := make(chan error, n)

	for i := 0; i < n; i++ {
		go func() {
			s, err := a.Login(userID, "pass")
			if err != nil {
				errs <- err
				return
			}
			sessions <- s
		}()
	}

	for i := 0; i < n; i++ {
		select {
		case s := <-sessions:
			require.NotNil(t, s)
			// Validate each session
			_, err := a.ValidateSession(s.Token)
			assert.NoError(t, err)
		case err := <-errs:
			t.Fatalf("concurrent login failed: %v", err)
		}
	}
}
@ -1,297 +0,0 @@
// Package build provides project type detection and cross-compilation for the Core build system.
package build

import (
	"archive/tar"
	"archive/zip"
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"path/filepath"
	"strings"

	"github.com/Snider/Borg/pkg/compress"
	io_interface "forge.lthn.ai/core/go/pkg/io"
)

// ArchiveFormat specifies the compression format for archives.
type ArchiveFormat string

const (
	// ArchiveFormatGzip uses tar.gz (gzip compression) - widely compatible.
	ArchiveFormatGzip ArchiveFormat = "gz"
	// ArchiveFormatXZ uses tar.xz (xz/LZMA2 compression) - better compression ratio.
	ArchiveFormatXZ ArchiveFormat = "xz"
	// ArchiveFormatZip uses zip - for Windows.
	ArchiveFormatZip ArchiveFormat = "zip"
)

// Archive creates an archive for a single artifact using gzip compression.
// Uses tar.gz for linux/darwin and zip for windows.
// The archive is created alongside the binary (e.g., dist/myapp_linux_amd64.tar.gz).
// Returns a new Artifact with Path pointing to the archive.
func Archive(fs io_interface.Medium, artifact Artifact) (Artifact, error) {
	return ArchiveWithFormat(fs, artifact, ArchiveFormatGzip)
}

// ArchiveXZ creates an archive for a single artifact using xz compression.
// Uses tar.xz for linux/darwin and zip for windows.
// Returns a new Artifact with Path pointing to the archive.
func ArchiveXZ(fs io_interface.Medium, artifact Artifact) (Artifact, error) {
	return ArchiveWithFormat(fs, artifact, ArchiveFormatXZ)
}

// ArchiveWithFormat creates an archive for a single artifact with the specified format.
// Uses tar.gz or tar.xz for linux/darwin and zip for windows.
// The archive is created alongside the binary (e.g., dist/myapp_linux_amd64.tar.xz).
// Returns a new Artifact with Path pointing to the archive.
func ArchiveWithFormat(fs io_interface.Medium, artifact Artifact, format ArchiveFormat) (Artifact, error) {
	if artifact.Path == "" {
		return Artifact{}, fmt.Errorf("build.Archive: artifact path is empty")
	}

	// Verify the source file exists
	info, err := fs.Stat(artifact.Path)
	if err != nil {
		return Artifact{}, fmt.Errorf("build.Archive: source file not found: %w", err)
	}
	if info.IsDir() {
		return Artifact{}, fmt.Errorf("build.Archive: source path is a directory, expected file")
	}

	// Determine archive type based on OS and format
	var archivePath string
	var archiveFunc func(fs io_interface.Medium, src, dst string) error

	if artifact.OS == "windows" {
		archivePath = archiveFilename(artifact, ".zip")
		archiveFunc = createZipArchive
	} else {
		switch format {
		case ArchiveFormatXZ:
			archivePath = archiveFilename(artifact, ".tar.xz")
			archiveFunc = createTarXzArchive
		default:
			archivePath = archiveFilename(artifact, ".tar.gz")
			archiveFunc = createTarGzArchive
		}
	}

	// Create the archive
	if err := archiveFunc(fs, artifact.Path, archivePath); err != nil {
		return Artifact{}, fmt.Errorf("build.Archive: failed to create archive: %w", err)
	}

	return Artifact{
		Path:     archivePath,
		OS:       artifact.OS,
		Arch:     artifact.Arch,
		Checksum: artifact.Checksum,
	}, nil
}

// ArchiveAll archives all artifacts using gzip compression.
// Returns a slice of new artifacts pointing to the archives.
func ArchiveAll(fs io_interface.Medium, artifacts []Artifact) ([]Artifact, error) {
	return ArchiveAllWithFormat(fs, artifacts, ArchiveFormatGzip)
}

// ArchiveAllXZ archives all artifacts using xz compression.
// Returns a slice of new artifacts pointing to the archives.
func ArchiveAllXZ(fs io_interface.Medium, artifacts []Artifact) ([]Artifact, error) {
	return ArchiveAllWithFormat(fs, artifacts, ArchiveFormatXZ)
}

// ArchiveAllWithFormat archives all artifacts with the specified format.
// Returns a slice of new artifacts pointing to the archives.
func ArchiveAllWithFormat(fs io_interface.Medium, artifacts []Artifact, format ArchiveFormat) ([]Artifact, error) {
	if len(artifacts) == 0 {
		return nil, nil
	}

	var archived []Artifact
	for _, artifact := range artifacts {
		arch, err := ArchiveWithFormat(fs, artifact, format)
		if err != nil {
			return archived, fmt.Errorf("build.ArchiveAll: failed to archive %s: %w", artifact.Path, err)
		}
		archived = append(archived, arch)
	}

	return archived, nil
}

// archiveFilename generates the archive filename based on the artifact and extension.
// Format: dist/myapp_linux_amd64.tar.gz (binary name taken from artifact path).
func archiveFilename(artifact Artifact, ext string) string {
	// Get the directory containing the binary (e.g., dist/linux_amd64)
	dir := filepath.Dir(artifact.Path)
	// Go up one level to the output directory (e.g., dist)
	outputDir := filepath.Dir(dir)

	// Get the binary name without extension
	binaryName := filepath.Base(artifact.Path)
	binaryName = strings.TrimSuffix(binaryName, ".exe")

	// Construct archive name: myapp_linux_amd64.tar.gz
	archiveName := fmt.Sprintf("%s_%s_%s%s", binaryName, artifact.OS, artifact.Arch, ext)

	return filepath.Join(outputDir, archiveName)
}

// createTarXzArchive creates a tar.xz archive containing a single file.
// Uses Borg's compress package for xz compression.
func createTarXzArchive(fs io_interface.Medium, src, dst string) error {
	// Open the source file
	srcFile, err := fs.Open(src)
	if err != nil {
		return fmt.Errorf("failed to open source file: %w", err)
	}
	defer func() { _ = srcFile.Close() }()

	srcInfo, err := srcFile.Stat()
	if err != nil {
		return fmt.Errorf("failed to stat source file: %w", err)
	}

	// Create tar archive in memory
	var tarBuf bytes.Buffer
	tarWriter := tar.NewWriter(&tarBuf)

	// Create tar header
	header, err := tar.FileInfoHeader(srcInfo, "")
	if err != nil {
		return fmt.Errorf("failed to create tar header: %w", err)
	}
	header.Name = filepath.Base(src)

	if err := tarWriter.WriteHeader(header); err != nil {
		return fmt.Errorf("failed to write tar header: %w", err)
	}

	if _, err := io.Copy(tarWriter, srcFile); err != nil {
		return fmt.Errorf("failed to write file content to tar: %w", err)
	}

	if err := tarWriter.Close(); err != nil {
		return fmt.Errorf("failed to close tar writer: %w", err)
	}

	// Compress with xz using Borg
	xzData, err := compress.Compress(tarBuf.Bytes(), "xz")
	if err != nil {
		return fmt.Errorf("failed to compress with xz: %w", err)
	}

	// Write to destination file
	dstFile, err := fs.Create(dst)
	if err != nil {
		return fmt.Errorf("failed to create archive file: %w", err)
	}
	defer func() { _ = dstFile.Close() }()

	if _, err := dstFile.Write(xzData); err != nil {
		return fmt.Errorf("failed to write archive file: %w", err)
	}

	return nil
}

// createTarGzArchive creates a tar.gz archive containing a single file.
func createTarGzArchive(fs io_interface.Medium, src, dst string) error {
	// Open the source file
	srcFile, err := fs.Open(src)
	if err != nil {
		return fmt.Errorf("failed to open source file: %w", err)
	}
	defer func() { _ = srcFile.Close() }()

	srcInfo, err := srcFile.Stat()
	if err != nil {
		return fmt.Errorf("failed to stat source file: %w", err)
	}

	// Create the destination file
	dstFile, err := fs.Create(dst)
	if err != nil {
		return fmt.Errorf("failed to create archive file: %w", err)
	}
	defer func() { _ = dstFile.Close() }()

	// Create gzip writer
	gzWriter := gzip.NewWriter(dstFile)
	defer func() { _ = gzWriter.Close() }()

	// Create tar writer
	tarWriter := tar.NewWriter(gzWriter)
	defer func() { _ = tarWriter.Close() }()

	// Create tar header
	header, err := tar.FileInfoHeader(srcInfo, "")
	if err != nil {
		return fmt.Errorf("failed to create tar header: %w", err)
	}
	// Use just the filename, not the full path
	header.Name = filepath.Base(src)

	// Write header
	if err := tarWriter.WriteHeader(header); err != nil {
		return fmt.Errorf("failed to write tar header: %w", err)
	}

	// Write file content
	if _, err := io.Copy(tarWriter, srcFile); err != nil {
		return fmt.Errorf("failed to write file content to tar: %w", err)
	}

	return nil
}

// createZipArchive creates a zip archive containing a single file.
func createZipArchive(fs io_interface.Medium, src, dst string) error {
	// Open the source file
	srcFile, err := fs.Open(src)
	if err != nil {
		return fmt.Errorf("failed to open source file: %w", err)
	}
	defer func() { _ = srcFile.Close() }()

	srcInfo, err := srcFile.Stat()
	if err != nil {
		return fmt.Errorf("failed to stat source file: %w", err)
	}

	// Create the destination file
	dstFile, err := fs.Create(dst)
	if err != nil {
		return fmt.Errorf("failed to create archive file: %w", err)
	}
	defer func() { _ = dstFile.Close() }()

	// Create zip writer
	zipWriter := zip.NewWriter(dstFile)
	defer func() { _ = zipWriter.Close() }()

	// Create zip header
	header, err := zip.FileInfoHeader(srcInfo)
	if err != nil {
		return fmt.Errorf("failed to create zip header: %w", err)
	}
	// Use just the filename, not the full path
	header.Name = filepath.Base(src)
	header.Method = zip.Deflate

	// Create file in archive
	writer, err := zipWriter.CreateHeader(header)
	if err != nil {
		return fmt.Errorf("failed to create zip entry: %w", err)
	}

	// Write file content
	if _, err := io.Copy(writer, srcFile); err != nil {
		return fmt.Errorf("failed to write file content to zip: %w", err)
	}

	return nil
}
@ -1,397 +0,0 @@
package build

import (
	"archive/tar"
	"archive/zip"
	"bytes"
	"compress/gzip"
	"io"
	"os"
	"path/filepath"
	"testing"

	"github.com/Snider/Borg/pkg/compress"
	io_interface "forge.lthn.ai/core/go/pkg/io"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// setupArchiveTestFile creates a test binary file in a temp directory with the standard structure.
// Returns the path to the binary and the output directory.
func setupArchiveTestFile(t *testing.T, name, os_, arch string) (binaryPath string, outputDir string) {
	t.Helper()

	outputDir = t.TempDir()

	// Create platform directory: dist/os_arch
	platformDir := filepath.Join(outputDir, os_+"_"+arch)
	err := os.MkdirAll(platformDir, 0755)
	require.NoError(t, err)

	// Create test binary
	binaryPath = filepath.Join(platformDir, name)
	content := []byte("#!/bin/bash\necho 'Hello, World!'\n")
	err = os.WriteFile(binaryPath, content, 0755)
	require.NoError(t, err)

	return binaryPath, outputDir
}

func TestArchive_Good(t *testing.T) {
	fs := io_interface.Local
	t.Run("creates tar.gz for linux", func(t *testing.T) {
		binaryPath, outputDir := setupArchiveTestFile(t, "myapp", "linux", "amd64")

		artifact := Artifact{
			Path: binaryPath,
			OS:   "linux",
			Arch: "amd64",
		}

		result, err := Archive(fs, artifact)
		require.NoError(t, err)

		// Verify archive was created
		expectedPath := filepath.Join(outputDir, "myapp_linux_amd64.tar.gz")
		assert.Equal(t, expectedPath, result.Path)
		assert.FileExists(t, result.Path)

		// Verify OS and Arch are preserved
		assert.Equal(t, "linux", result.OS)
		assert.Equal(t, "amd64", result.Arch)

		// Verify archive content
		verifyTarGzContent(t, result.Path, "myapp")
	})

	t.Run("creates tar.gz for darwin", func(t *testing.T) {
		binaryPath, outputDir := setupArchiveTestFile(t, "myapp", "darwin", "arm64")

		artifact := Artifact{
			Path: binaryPath,
			OS:   "darwin",
			Arch: "arm64",
		}

		result, err := Archive(fs, artifact)
		require.NoError(t, err)

		expectedPath := filepath.Join(outputDir, "myapp_darwin_arm64.tar.gz")
		assert.Equal(t, expectedPath, result.Path)
		assert.FileExists(t, result.Path)

		verifyTarGzContent(t, result.Path, "myapp")
	})

	t.Run("creates zip for windows", func(t *testing.T) {
		binaryPath, outputDir := setupArchiveTestFile(t, "myapp.exe", "windows", "amd64")

		artifact := Artifact{
			Path: binaryPath,
			OS:   "windows",
			Arch: "amd64",
		}

		result, err := Archive(fs, artifact)
		require.NoError(t, err)

		// Windows archives should strip .exe from archive name
		expectedPath := filepath.Join(outputDir, "myapp_windows_amd64.zip")
		assert.Equal(t, expectedPath, result.Path)
		assert.FileExists(t, result.Path)

		verifyZipContent(t, result.Path, "myapp.exe")
	})

	t.Run("preserves checksum field", func(t *testing.T) {
		binaryPath, _ := setupArchiveTestFile(t, "myapp", "linux", "amd64")

		artifact := Artifact{
			Path:     binaryPath,
			OS:       "linux",
			Arch:     "amd64",
			Checksum: "abc123",
		}

		result, err := Archive(fs, artifact)
		require.NoError(t, err)
		assert.Equal(t, "abc123", result.Checksum)
	})

	t.Run("creates tar.xz for linux with ArchiveXZ", func(t *testing.T) {
		binaryPath, outputDir := setupArchiveTestFile(t, "myapp", "linux", "amd64")

		artifact := Artifact{
			Path: binaryPath,
			OS:   "linux",
			Arch: "amd64",
		}

		result, err := ArchiveXZ(fs, artifact)
		require.NoError(t, err)

		expectedPath := filepath.Join(outputDir, "myapp_linux_amd64.tar.xz")
		assert.Equal(t, expectedPath, result.Path)
		assert.FileExists(t, result.Path)

		verifyTarXzContent(t, result.Path, "myapp")
	})

	t.Run("creates tar.xz for darwin with ArchiveWithFormat", func(t *testing.T) {
		binaryPath, outputDir := setupArchiveTestFile(t, "myapp", "darwin", "arm64")

		artifact := Artifact{
			Path: binaryPath,
			OS:   "darwin",
			Arch: "arm64",
		}

		result, err := ArchiveWithFormat(fs, artifact, ArchiveFormatXZ)
		require.NoError(t, err)

		expectedPath := filepath.Join(outputDir, "myapp_darwin_arm64.tar.xz")
		assert.Equal(t, expectedPath, result.Path)
		assert.FileExists(t, result.Path)

		verifyTarXzContent(t, result.Path, "myapp")
	})

	t.Run("windows still uses zip even with xz format", func(t *testing.T) {
		binaryPath, outputDir := setupArchiveTestFile(t, "myapp.exe", "windows", "amd64")
|
||||
|
||||
artifact := Artifact{
|
||||
Path: binaryPath,
|
||||
OS: "windows",
|
||||
Arch: "amd64",
|
||||
}
|
||||
|
||||
result, err := ArchiveWithFormat(fs, artifact, ArchiveFormatXZ)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Windows should still get .zip regardless of format
|
||||
expectedPath := filepath.Join(outputDir, "myapp_windows_amd64.zip")
|
||||
assert.Equal(t, expectedPath, result.Path)
|
||||
assert.FileExists(t, result.Path)
|
||||
|
||||
verifyZipContent(t, result.Path, "myapp.exe")
|
||||
})
|
||||
}
|
||||
|
||||
func TestArchive_Bad(t *testing.T) {
|
||||
fs := io_interface.Local
|
||||
t.Run("returns error for empty path", func(t *testing.T) {
|
||||
artifact := Artifact{
|
||||
Path: "",
|
||||
OS: "linux",
|
||||
Arch: "amd64",
|
||||
}
|
||||
|
||||
result, err := Archive(fs, artifact)
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "artifact path is empty")
|
||||
assert.Empty(t, result.Path)
|
||||
})
|
||||
|
||||
t.Run("returns error for non-existent file", func(t *testing.T) {
|
||||
artifact := Artifact{
|
||||
Path: "/nonexistent/path/binary",
|
||||
OS: "linux",
|
||||
Arch: "amd64",
|
||||
}
|
||||
|
||||
result, err := Archive(fs, artifact)
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "source file not found")
|
||||
assert.Empty(t, result.Path)
|
||||
})
|
||||
|
||||
t.Run("returns error for directory path", func(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
|
||||
artifact := Artifact{
|
||||
Path: dir,
|
||||
OS: "linux",
|
||||
Arch: "amd64",
|
||||
}
|
||||
|
||||
result, err := Archive(fs, artifact)
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "source path is a directory")
|
||||
assert.Empty(t, result.Path)
|
||||
})
|
||||
}
|
||||
|
||||
func TestArchiveAll_Good(t *testing.T) {
|
||||
fs := io_interface.Local
|
||||
t.Run("archives multiple artifacts", func(t *testing.T) {
|
||||
outputDir := t.TempDir()
|
||||
|
||||
// Create multiple binaries
|
||||
var artifacts []Artifact
|
||||
targets := []struct {
|
||||
os_ string
|
||||
arch string
|
||||
}{
|
||||
{"linux", "amd64"},
|
||||
{"linux", "arm64"},
|
||||
{"darwin", "arm64"},
|
||||
{"windows", "amd64"},
|
||||
}
|
||||
|
||||
for _, target := range targets {
|
||||
platformDir := filepath.Join(outputDir, target.os_+"_"+target.arch)
|
||||
err := os.MkdirAll(platformDir, 0755)
|
||||
require.NoError(t, err)
|
||||
|
||||
name := "myapp"
|
||||
if target.os_ == "windows" {
|
||||
name = "myapp.exe"
|
||||
}
|
||||
|
||||
binaryPath := filepath.Join(platformDir, name)
|
||||
err = os.WriteFile(binaryPath, []byte("binary content"), 0755)
|
||||
require.NoError(t, err)
|
||||
|
||||
artifacts = append(artifacts, Artifact{
|
||||
Path: binaryPath,
|
||||
OS: target.os_,
|
||||
Arch: target.arch,
|
||||
})
|
||||
}
|
||||
|
||||
results, err := ArchiveAll(fs, artifacts)
|
||||
require.NoError(t, err)
|
||||
require.Len(t, results, 4)
|
||||
|
||||
// Verify all archives were created
|
||||
for i, result := range results {
|
||||
assert.FileExists(t, result.Path)
|
||||
assert.Equal(t, artifacts[i].OS, result.OS)
|
||||
assert.Equal(t, artifacts[i].Arch, result.Arch)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("returns nil for empty slice", func(t *testing.T) {
|
||||
results, err := ArchiveAll(fs, []Artifact{})
|
||||
assert.NoError(t, err)
|
||||
assert.Nil(t, results)
|
||||
})
|
||||
|
||||
t.Run("returns nil for nil slice", func(t *testing.T) {
|
||||
results, err := ArchiveAll(fs, nil)
|
||||
assert.NoError(t, err)
|
||||
assert.Nil(t, results)
|
||||
})
|
||||
}
|
||||
|
||||
func TestArchiveAll_Bad(t *testing.T) {
|
||||
fs := io_interface.Local
|
||||
t.Run("returns partial results on error", func(t *testing.T) {
|
||||
binaryPath, _ := setupArchiveTestFile(t, "myapp", "linux", "amd64")
|
||||
|
||||
artifacts := []Artifact{
|
||||
{Path: binaryPath, OS: "linux", Arch: "amd64"},
|
||||
{Path: "/nonexistent/binary", OS: "linux", Arch: "arm64"}, // This will fail
|
||||
}
|
||||
|
||||
results, err := ArchiveAll(fs, artifacts)
|
||||
assert.Error(t, err)
|
||||
// Should have the first successful result
|
||||
assert.Len(t, results, 1)
|
||||
assert.FileExists(t, results[0].Path)
|
||||
})
|
||||
}
|
||||
|
||||
func TestArchiveFilename_Good(t *testing.T) {
|
||||
t.Run("generates correct tar.gz filename", func(t *testing.T) {
|
||||
artifact := Artifact{
|
||||
Path: "/output/linux_amd64/myapp",
|
||||
OS: "linux",
|
||||
Arch: "amd64",
|
||||
}
|
||||
|
||||
filename := archiveFilename(artifact, ".tar.gz")
|
||||
assert.Equal(t, "/output/myapp_linux_amd64.tar.gz", filename)
|
||||
})
|
||||
|
||||
t.Run("generates correct zip filename", func(t *testing.T) {
|
||||
artifact := Artifact{
|
||||
Path: "/output/windows_amd64/myapp.exe",
|
||||
OS: "windows",
|
||||
Arch: "amd64",
|
||||
}
|
||||
|
||||
filename := archiveFilename(artifact, ".zip")
|
||||
assert.Equal(t, "/output/myapp_windows_amd64.zip", filename)
|
||||
})
|
||||
|
||||
t.Run("handles nested output directories", func(t *testing.T) {
|
||||
artifact := Artifact{
|
||||
Path: "/project/dist/linux_arm64/cli",
|
||||
OS: "linux",
|
||||
Arch: "arm64",
|
||||
}
|
||||
|
||||
filename := archiveFilename(artifact, ".tar.gz")
|
||||
assert.Equal(t, "/project/dist/cli_linux_arm64.tar.gz", filename)
|
||||
})
|
||||
}
|
||||
|
||||
// verifyTarGzContent opens a tar.gz file and verifies it contains the expected file.
|
||||
func verifyTarGzContent(t *testing.T, archivePath, expectedName string) {
|
||||
t.Helper()
|
||||
|
||||
file, err := os.Open(archivePath)
|
||||
require.NoError(t, err)
|
||||
defer func() { _ = file.Close() }()
|
||||
|
||||
gzReader, err := gzip.NewReader(file)
|
||||
require.NoError(t, err)
|
||||
defer func() { _ = gzReader.Close() }()
|
||||
|
||||
tarReader := tar.NewReader(gzReader)
|
||||
|
||||
header, err := tarReader.Next()
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, expectedName, header.Name)
|
||||
|
||||
// Verify there's only one file
|
||||
_, err = tarReader.Next()
|
||||
assert.Equal(t, io.EOF, err)
|
||||
}
|
||||
|
||||
// verifyZipContent opens a zip file and verifies it contains the expected file.
|
||||
func verifyZipContent(t *testing.T, archivePath, expectedName string) {
|
||||
t.Helper()
|
||||
|
||||
reader, err := zip.OpenReader(archivePath)
|
||||
require.NoError(t, err)
|
||||
defer func() { _ = reader.Close() }()
|
||||
|
||||
require.Len(t, reader.File, 1)
|
||||
assert.Equal(t, expectedName, reader.File[0].Name)
|
||||
}
|
||||
|
||||
// verifyTarXzContent opens a tar.xz file and verifies it contains the expected file.
|
||||
func verifyTarXzContent(t *testing.T, archivePath, expectedName string) {
|
||||
t.Helper()
|
||||
|
||||
// Read the xz-compressed file
|
||||
xzData, err := os.ReadFile(archivePath)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Decompress with Borg
|
||||
tarData, err := compress.Decompress(xzData)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Read tar archive
|
||||
tarReader := tar.NewReader(bytes.NewReader(tarData))
|
||||
|
||||
header, err := tarReader.Next()
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, expectedName, header.Name)
|
||||
|
||||
// Verify there's only one file
|
||||
_, err = tarReader.Next()
|
||||
assert.Equal(t, io.EOF, err)
|
||||
}
|
||||
|
|
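The filename assertions in `TestArchiveFilename_Good` encode a simple convention: the archive lands in the parent of the per-platform `<os>_<arch>` directory, named `<binary>_<os>_<arch><ext>`, with any `.exe` suffix stripped. A minimal stdlib-only sketch of that convention (`archiveName` is an illustrative helper, not the package's actual `archiveFilename`):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// archiveName sketches the naming rule the tests assert: strip ".exe",
// step up out of the <os>_<arch> directory, and append _<os>_<arch><ext>.
func archiveName(binaryPath, goos, goarch, ext string) string {
	name := strings.TrimSuffix(filepath.Base(binaryPath), ".exe")
	outDir := filepath.Dir(filepath.Dir(binaryPath)) // parent of the platform dir
	return filepath.Join(outDir, name+"_"+goos+"_"+goarch+ext)
}

func main() {
	fmt.Println(archiveName("/output/linux_amd64/myapp", "linux", "amd64", ".tar.gz"))
	// → /output/myapp_linux_amd64.tar.gz
	fmt.Println(archiveName("/output/windows_amd64/myapp.exe", "windows", "amd64", ".zip"))
	// → /output/myapp_windows_amd64.zip
}
```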
@ -1,90 +0,0 @@

// Package build provides project type detection and cross-compilation for the Core build system.
// It supports Go, Wails, Node.js, and PHP projects with automatic detection based on
// marker files (go.mod, wails.json, package.json, composer.json).
package build

import (
	"context"

	"forge.lthn.ai/core/go/pkg/io"
)

// ProjectType represents a detected project type.
type ProjectType string

// Project type constants for build detection.
const (
	// ProjectTypeGo indicates a standard Go project with go.mod.
	ProjectTypeGo ProjectType = "go"
	// ProjectTypeWails indicates a Wails desktop application.
	ProjectTypeWails ProjectType = "wails"
	// ProjectTypeNode indicates a Node.js project with package.json.
	ProjectTypeNode ProjectType = "node"
	// ProjectTypePHP indicates a PHP/Laravel project with composer.json.
	ProjectTypePHP ProjectType = "php"
	// ProjectTypeCPP indicates a C++ project with CMakeLists.txt.
	ProjectTypeCPP ProjectType = "cpp"
	// ProjectTypeDocker indicates a Docker-based project with Dockerfile.
	ProjectTypeDocker ProjectType = "docker"
	// ProjectTypeLinuxKit indicates a LinuxKit VM configuration.
	ProjectTypeLinuxKit ProjectType = "linuxkit"
	// ProjectTypeTaskfile indicates a project using Taskfile automation.
	ProjectTypeTaskfile ProjectType = "taskfile"
)

// Target represents a build target platform.
type Target struct {
	OS   string
	Arch string
}

// String returns the target in GOOS/GOARCH format.
func (t Target) String() string {
	return t.OS + "/" + t.Arch
}

// Artifact represents a build output file.
type Artifact struct {
	Path     string
	OS       string
	Arch     string
	Checksum string
}

// Config holds build configuration.
type Config struct {
	// FS is the medium used for file operations.
	FS io.Medium
	// ProjectDir is the root directory of the project.
	ProjectDir string
	// OutputDir is where build artifacts are placed.
	OutputDir string
	// Name is the output binary name.
	Name string
	// Version is the build version string.
	Version string
	// LDFlags are additional linker flags.
	LDFlags []string

	// Docker-specific config
	Dockerfile string            // Path to Dockerfile (default: Dockerfile)
	Registry   string            // Container registry (default: ghcr.io)
	Image      string            // Image name (owner/repo format)
	Tags       []string          // Additional tags to apply
	BuildArgs  map[string]string // Docker build arguments
	Push       bool              // Whether to push after build

	// LinuxKit-specific config
	LinuxKitConfig string   // Path to LinuxKit YAML config
	Formats        []string // Output formats (iso, qcow2, raw, vmdk)
}

// Builder defines the interface for project-specific build implementations.
type Builder interface {
	// Name returns the builder's identifier.
	Name() string
	// Detect checks if this builder can handle the project in the given directory.
	Detect(fs io.Medium, dir string) (bool, error)
	// Build compiles the project for the specified targets.
	Build(ctx context.Context, cfg *Config, targets []Target) ([]Artifact, error)
}
@ -1,144 +0,0 @@

// Package buildcmd provides project build commands with auto-detection.
package buildcmd

import (
	"embed"

	"forge.lthn.ai/core/go/pkg/cli"
	"forge.lthn.ai/core/go/pkg/i18n"
	"github.com/spf13/cobra"
)

func init() {
	cli.RegisterCommands(AddBuildCommands)
}

// Style aliases from shared package
var (
	buildHeaderStyle  = cli.TitleStyle
	buildTargetStyle  = cli.ValueStyle
	buildSuccessStyle = cli.SuccessStyle
	buildErrorStyle   = cli.ErrorStyle
	buildDimStyle     = cli.DimStyle
)

//go:embed all:tmpl/gui
var guiTemplate embed.FS

// Flags for the main build command
var (
	buildType  string
	ciMode     bool
	targets    string
	outputDir  string
	doArchive  bool
	doChecksum bool
	verbose    bool

	// Docker/LinuxKit specific flags
	configPath string
	format     string
	push       bool
	imageName  string

	// Signing flags
	noSign   bool
	notarize bool

	// from-path subcommand
	fromPath string

	// pwa subcommand
	pwaURL string

	// sdk subcommand
	sdkSpec    string
	sdkLang    string
	sdkVersion string
	sdkDryRun  bool
)

var buildCmd = &cobra.Command{
	Use:   "build",
	Short: i18n.T("cmd.build.short"),
	Long:  i18n.T("cmd.build.long"),
	RunE: func(cmd *cobra.Command, args []string) error {
		return runProjectBuild(cmd.Context(), buildType, ciMode, targets, outputDir, doArchive, doChecksum, configPath, format, push, imageName, noSign, notarize, verbose)
	},
}

var fromPathCmd = &cobra.Command{
	Use:   "from-path",
	Short: i18n.T("cmd.build.from_path.short"),
	RunE: func(cmd *cobra.Command, args []string) error {
		if fromPath == "" {
			return errPathRequired
		}
		return runBuild(fromPath)
	},
}

var pwaCmd = &cobra.Command{
	Use:   "pwa",
	Short: i18n.T("cmd.build.pwa.short"),
	RunE: func(cmd *cobra.Command, args []string) error {
		if pwaURL == "" {
			return errURLRequired
		}
		return runPwaBuild(pwaURL)
	},
}

var sdkBuildCmd = &cobra.Command{
	Use:   "sdk",
	Short: i18n.T("cmd.build.sdk.short"),
	Long:  i18n.T("cmd.build.sdk.long"),
	RunE: func(cmd *cobra.Command, args []string) error {
		return runBuildSDK(sdkSpec, sdkLang, sdkVersion, sdkDryRun)
	},
}

func initBuildFlags() {
	// Main build command flags
	buildCmd.Flags().StringVar(&buildType, "type", "", i18n.T("cmd.build.flag.type"))
	buildCmd.Flags().BoolVar(&ciMode, "ci", false, i18n.T("cmd.build.flag.ci"))
	buildCmd.Flags().BoolVarP(&verbose, "verbose", "v", false, i18n.T("common.flag.verbose"))
	buildCmd.Flags().StringVar(&targets, "targets", "", i18n.T("cmd.build.flag.targets"))
	buildCmd.Flags().StringVar(&outputDir, "output", "", i18n.T("cmd.build.flag.output"))
	buildCmd.Flags().BoolVar(&doArchive, "archive", true, i18n.T("cmd.build.flag.archive"))
	buildCmd.Flags().BoolVar(&doChecksum, "checksum", true, i18n.T("cmd.build.flag.checksum"))

	// Docker/LinuxKit specific
	buildCmd.Flags().StringVar(&configPath, "config", "", i18n.T("cmd.build.flag.config"))
	buildCmd.Flags().StringVar(&format, "format", "", i18n.T("cmd.build.flag.format"))
	buildCmd.Flags().BoolVar(&push, "push", false, i18n.T("cmd.build.flag.push"))
	buildCmd.Flags().StringVar(&imageName, "image", "", i18n.T("cmd.build.flag.image"))

	// Signing flags
	buildCmd.Flags().BoolVar(&noSign, "no-sign", false, i18n.T("cmd.build.flag.no_sign"))
	buildCmd.Flags().BoolVar(&notarize, "notarize", false, i18n.T("cmd.build.flag.notarize"))

	// from-path subcommand flags
	fromPathCmd.Flags().StringVar(&fromPath, "path", "", i18n.T("cmd.build.from_path.flag.path"))

	// pwa subcommand flags
	pwaCmd.Flags().StringVar(&pwaURL, "url", "", i18n.T("cmd.build.pwa.flag.url"))

	// sdk subcommand flags
	sdkBuildCmd.Flags().StringVar(&sdkSpec, "spec", "", i18n.T("common.flag.spec"))
	sdkBuildCmd.Flags().StringVar(&sdkLang, "lang", "", i18n.T("cmd.build.sdk.flag.lang"))
	sdkBuildCmd.Flags().StringVar(&sdkVersion, "version", "", i18n.T("cmd.build.sdk.flag.version"))
	sdkBuildCmd.Flags().BoolVar(&sdkDryRun, "dry-run", false, i18n.T("cmd.build.sdk.flag.dry_run"))

	// Add subcommands
	buildCmd.AddCommand(fromPathCmd)
	buildCmd.AddCommand(pwaCmd)
	buildCmd.AddCommand(sdkBuildCmd)
}

// AddBuildCommands registers the 'build' command and all subcommands.
func AddBuildCommands(root *cobra.Command) {
	initBuildFlags()
	AddReleaseCommand(buildCmd)
	root.AddCommand(buildCmd)
}
@ -1,21 +0,0 @@

// Package buildcmd provides project build commands with auto-detection.
//
// Supports building:
//   - Go projects (standard and cross-compilation)
//   - Wails desktop applications
//   - Docker images
//   - LinuxKit VM images
//   - Taskfile-based projects
//
// Configuration via .core/build.yaml or command-line flags.
//
// Subcommands:
//   - build: Auto-detect and build the current project
//   - build from-path: Build from a local static web app directory
//   - build pwa: Build from a live PWA URL
//   - build sdk: Generate API SDKs from OpenAPI spec
package buildcmd

// Note: The AddBuildCommands function is defined in cmd_build.go
// This file exists for documentation purposes and maintains the original
// package documentation from commands.go.
@ -1,392 +0,0 @@
|
|||
// cmd_project.go implements the main project build logic.
|
||||
//
|
||||
// This handles auto-detection of project types (Go, Wails, Docker, LinuxKit, Taskfile)
|
||||
// and orchestrates the build process including signing, archiving, and checksums.
|
||||
|
||||
package buildcmd
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"strings"
|
||||
|
||||
"forge.lthn.ai/core/go/pkg/build"
|
||||
"forge.lthn.ai/core/go/pkg/build/builders"
|
||||
"forge.lthn.ai/core/go/pkg/build/signing"
|
||||
"forge.lthn.ai/core/go/pkg/i18n"
|
||||
"forge.lthn.ai/core/go/pkg/io"
|
||||
)
|
||||
|
||||
// runProjectBuild handles the main `core build` command with auto-detection.
|
||||
func runProjectBuild(ctx context.Context, buildType string, ciMode bool, targetsFlag string, outputDir string, doArchive bool, doChecksum bool, configPath string, format string, push bool, imageName string, noSign bool, notarize bool, verbose bool) error {
|
||||
// Use local filesystem as the default medium
|
||||
fs := io.Local
|
||||
|
||||
// Get current working directory as project root
|
||||
projectDir, err := os.Getwd()
|
||||
if err != nil {
|
||||
return fmt.Errorf("%s: %w", i18n.T("common.error.failed", map[string]any{"Action": "get working directory"}), err)
|
||||
}
|
||||
|
||||
// Load configuration from .core/build.yaml (or defaults)
|
||||
buildCfg, err := build.LoadConfig(fs, projectDir)
|
||||
if err != nil {
|
||||
return fmt.Errorf("%s: %w", i18n.T("common.error.failed", map[string]any{"Action": "load config"}), err)
|
||||
}
|
||||
|
||||
// Detect project type if not specified
|
||||
var projectType build.ProjectType
|
||||
if buildType != "" {
|
||||
projectType = build.ProjectType(buildType)
|
||||
} else {
|
||||
projectType, err = build.PrimaryType(fs, projectDir)
|
||||
if err != nil {
|
||||
return fmt.Errorf("%s: %w", i18n.T("common.error.failed", map[string]any{"Action": "detect project type"}), err)
|
||||
}
|
||||
if projectType == "" {
|
||||
return fmt.Errorf("%s", i18n.T("cmd.build.error.no_project_type", map[string]interface{}{"Dir": projectDir}))
|
||||
}
|
||||
}
|
||||
|
||||
// Determine targets
|
||||
var buildTargets []build.Target
|
||||
if targetsFlag != "" {
|
||||
// Parse from command line
|
||||
buildTargets, err = parseTargets(targetsFlag)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
} else if len(buildCfg.Targets) > 0 {
|
||||
// Use config targets
|
||||
buildTargets = buildCfg.ToTargets()
|
||||
} else {
|
||||
// Fall back to current OS/arch
|
||||
buildTargets = []build.Target{
|
||||
{OS: runtime.GOOS, Arch: runtime.GOARCH},
|
||||
}
|
||||
}
|
||||
|
||||
// Determine output directory
|
||||
if outputDir == "" {
|
||||
outputDir = "dist"
|
||||
}
|
||||
if !filepath.IsAbs(outputDir) {
|
||||
outputDir = filepath.Join(projectDir, outputDir)
|
||||
}
|
||||
outputDir = filepath.Clean(outputDir)
|
||||
|
||||
// Ensure config path is absolute if provided
|
||||
if configPath != "" && !filepath.IsAbs(configPath) {
|
||||
configPath = filepath.Join(projectDir, configPath)
|
||||
}
|
||||
|
||||
// Determine binary name
|
||||
binaryName := buildCfg.Project.Binary
|
||||
if binaryName == "" {
|
||||
binaryName = buildCfg.Project.Name
|
||||
}
|
||||
if binaryName == "" {
|
||||
binaryName = filepath.Base(projectDir)
|
||||
}
|
||||
|
||||
// Print build info (verbose mode only)
|
||||
if verbose && !ciMode {
|
||||
fmt.Printf("%s %s\n", buildHeaderStyle.Render(i18n.T("cmd.build.label.build")), i18n.T("cmd.build.building_project"))
|
||||
fmt.Printf(" %s %s\n", i18n.T("cmd.build.label.type"), buildTargetStyle.Render(string(projectType)))
|
||||
fmt.Printf(" %s %s\n", i18n.T("cmd.build.label.output"), buildTargetStyle.Render(outputDir))
|
||||
fmt.Printf(" %s %s\n", i18n.T("cmd.build.label.binary"), buildTargetStyle.Render(binaryName))
|
||||
fmt.Printf(" %s %s\n", i18n.T("cmd.build.label.targets"), buildTargetStyle.Render(formatTargets(buildTargets)))
|
||||
fmt.Println()
|
||||
}
|
||||
|
||||
// Get the appropriate builder
|
||||
builder, err := getBuilder(projectType)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Create build config for the builder
|
||||
cfg := &build.Config{
|
||||
FS: fs,
|
||||
ProjectDir: projectDir,
|
||||
OutputDir: outputDir,
|
||||
Name: binaryName,
|
||||
Version: buildCfg.Project.Name, // Could be enhanced with git describe
|
||||
LDFlags: buildCfg.Build.LDFlags,
|
||||
// Docker/LinuxKit specific
|
||||
Dockerfile: configPath, // Reuse for Dockerfile path
|
||||
LinuxKitConfig: configPath,
|
||||
Push: push,
|
||||
Image: imageName,
|
||||
}
|
||||
|
||||
// Parse formats for LinuxKit
|
||||
if format != "" {
|
||||
cfg.Formats = strings.Split(format, ",")
|
||||
}
|
||||
|
||||
// Execute build
|
||||
artifacts, err := builder.Build(ctx, cfg, buildTargets)
|
||||
if err != nil {
|
||||
if !ciMode {
|
||||
fmt.Printf("%s %v\n", buildErrorStyle.Render(i18n.T("common.label.error")), err)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
if verbose && !ciMode {
|
||||
fmt.Printf("%s %s\n", buildSuccessStyle.Render(i18n.T("common.label.success")), i18n.T("cmd.build.built_artifacts", map[string]interface{}{"Count": len(artifacts)}))
|
||||
fmt.Println()
|
||||
for _, artifact := range artifacts {
|
||||
relPath, err := filepath.Rel(projectDir, artifact.Path)
|
||||
if err != nil {
|
||||
relPath = artifact.Path
|
||||
}
|
||||
fmt.Printf(" %s %s %s\n",
|
||||
buildSuccessStyle.Render("*"),
|
||||
buildTargetStyle.Render(relPath),
|
||||
buildDimStyle.Render(fmt.Sprintf("(%s/%s)", artifact.OS, artifact.Arch)),
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
// Sign macOS binaries if enabled
|
||||
signCfg := buildCfg.Sign
|
||||
if notarize {
|
||||
signCfg.MacOS.Notarize = true
|
||||
}
|
||||
if noSign {
|
||||
signCfg.Enabled = false
|
||||
}
|
||||
|
||||
if signCfg.Enabled && runtime.GOOS == "darwin" {
|
||||
if verbose && !ciMode {
|
||||
fmt.Println()
|
||||
fmt.Printf("%s %s\n", buildHeaderStyle.Render(i18n.T("cmd.build.label.sign")), i18n.T("cmd.build.signing_binaries"))
|
||||
}
|
||||
|
||||
// Convert build.Artifact to signing.Artifact
|
||||
signingArtifacts := make([]signing.Artifact, len(artifacts))
|
||||
for i, a := range artifacts {
|
||||
signingArtifacts[i] = signing.Artifact{Path: a.Path, OS: a.OS, Arch: a.Arch}
|
||||
}
|
||||
|
||||
if err := signing.SignBinaries(ctx, fs, signCfg, signingArtifacts); err != nil {
|
||||
if !ciMode {
|
||||
fmt.Printf("%s %s: %v\n", buildErrorStyle.Render(i18n.T("common.label.error")), i18n.T("cmd.build.error.signing_failed"), err)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
if signCfg.MacOS.Notarize {
|
||||
if err := signing.NotarizeBinaries(ctx, fs, signCfg, signingArtifacts); err != nil {
|
||||
if !ciMode {
|
||||
fmt.Printf("%s %s: %v\n", buildErrorStyle.Render(i18n.T("common.label.error")), i18n.T("cmd.build.error.notarization_failed"), err)
|
||||
}
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Archive artifacts if enabled
|
||||
var archivedArtifacts []build.Artifact
|
||||
if doArchive && len(artifacts) > 0 {
|
||||
if verbose && !ciMode {
|
||||
fmt.Println()
|
||||
fmt.Printf("%s %s\n", buildHeaderStyle.Render(i18n.T("cmd.build.label.archive")), i18n.T("cmd.build.creating_archives"))
|
||||
}
|
||||
|
||||
archivedArtifacts, err = build.ArchiveAll(fs, artifacts)
|
||||
if err != nil {
|
||||
if !ciMode {
|
||||
fmt.Printf("%s %s: %v\n", buildErrorStyle.Render(i18n.T("common.label.error")), i18n.T("cmd.build.error.archive_failed"), err)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
if verbose && !ciMode {
|
||||
for _, artifact := range archivedArtifacts {
|
||||
relPath, err := filepath.Rel(projectDir, artifact.Path)
|
||||
if err != nil {
|
||||
relPath = artifact.Path
|
||||
}
|
||||
fmt.Printf(" %s %s %s\n",
|
||||
buildSuccessStyle.Render("*"),
|
||||
buildTargetStyle.Render(relPath),
|
||||
buildDimStyle.Render(fmt.Sprintf("(%s/%s)", artifact.OS, artifact.Arch)),
|
||||
)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Compute checksums if enabled
|
||||
var checksummedArtifacts []build.Artifact
|
||||
if doChecksum && len(archivedArtifacts) > 0 {
|
||||
checksummedArtifacts, err = computeAndWriteChecksums(ctx, projectDir, outputDir, archivedArtifacts, signCfg, ciMode, verbose)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
} else if doChecksum && len(artifacts) > 0 && !doArchive {
|
||||
// Checksum raw binaries if archiving is disabled
|
||||
checksummedArtifacts, err = computeAndWriteChecksums(ctx, projectDir, outputDir, artifacts, signCfg, ciMode, verbose)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Output results
|
||||
if ciMode {
|
||||
// Determine which artifacts to output (prefer checksummed > archived > raw)
|
||||
var outputArtifacts []build.Artifact
|
||||
if len(checksummedArtifacts) > 0 {
|
||||
outputArtifacts = checksummedArtifacts
|
||||
} else if len(archivedArtifacts) > 0 {
|
||||
outputArtifacts = archivedArtifacts
|
||||
} else {
|
||||
outputArtifacts = artifacts
|
||||
}
|
||||
|
||||
// JSON output for CI
|
||||
output, err := json.MarshalIndent(outputArtifacts, "", " ")
|
||||
if err != nil {
|
||||
return fmt.Errorf("%s: %w", i18n.T("common.error.failed", map[string]any{"Action": "marshal artifacts"}), err)
|
||||
}
|
||||
fmt.Println(string(output))
|
||||
} else if !verbose {
|
||||
// Minimal output: just success with artifact count
|
||||
fmt.Printf("%s %s %s\n",
|
||||
buildSuccessStyle.Render(i18n.T("common.label.success")),
|
||||
i18n.T("cmd.build.built_artifacts", map[string]interface{}{"Count": len(artifacts)}),
|
||||
buildDimStyle.Render(fmt.Sprintf("(%s)", outputDir)),
|
||||
)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// computeAndWriteChecksums computes checksums for artifacts and writes CHECKSUMS.txt.
|
||||
func computeAndWriteChecksums(ctx context.Context, projectDir, outputDir string, artifacts []build.Artifact, signCfg signing.SignConfig, ciMode bool, verbose bool) ([]build.Artifact, error) {
|
||||
fs := io.Local
|
||||
if verbose && !ciMode {
|
||||
fmt.Println()
|
||||
fmt.Printf("%s %s\n", buildHeaderStyle.Render(i18n.T("cmd.build.label.checksum")), i18n.T("cmd.build.computing_checksums"))
|
||||
}
|
||||
|
||||
checksummedArtifacts, err := build.ChecksumAll(fs, artifacts)
|
||||
if err != nil {
|
||||
if !ciMode {
|
||||
fmt.Printf("%s %s: %v\n", buildErrorStyle.Render(i18n.T("common.label.error")), i18n.T("cmd.build.error.checksum_failed"), err)
|
||||
}
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Write CHECKSUMS.txt
|
||||
checksumPath := filepath.Join(outputDir, "CHECKSUMS.txt")
|
||||
if err := build.WriteChecksumFile(fs, checksummedArtifacts, checksumPath); err != nil {
|
||||
if !ciMode {
|
||||
fmt.Printf("%s %s: %v\n", buildErrorStyle.Render(i18n.T("common.label.error")), i18n.T("common.error.failed", map[string]any{"Action": "write CHECKSUMS.txt"}), err)
|
||||
}
|
		return nil, err
	}

	// Sign checksums with GPG
	if signCfg.Enabled {
		if err := signing.SignChecksums(ctx, fs, signCfg, checksumPath); err != nil {
			if !ciMode {
				fmt.Printf("%s %s: %v\n", buildErrorStyle.Render(i18n.T("common.label.error")), i18n.T("cmd.build.error.gpg_signing_failed"), err)
			}
			return nil, err
		}
	}

	if verbose && !ciMode {
		for _, artifact := range checksummedArtifacts {
			relPath, err := filepath.Rel(projectDir, artifact.Path)
			if err != nil {
				relPath = artifact.Path
			}
			fmt.Printf("  %s %s\n",
				buildSuccessStyle.Render("*"),
				buildTargetStyle.Render(relPath),
			)
			fmt.Printf("    %s\n", buildDimStyle.Render(artifact.Checksum))
		}

		relChecksumPath, err := filepath.Rel(projectDir, checksumPath)
		if err != nil {
			relChecksumPath = checksumPath
		}
		fmt.Printf("  %s %s\n",
			buildSuccessStyle.Render("*"),
			buildTargetStyle.Render(relChecksumPath),
		)
	}

	return checksummedArtifacts, nil
}

// parseTargets parses a comma-separated list of OS/arch pairs.
func parseTargets(targetsFlag string) ([]build.Target, error) {
	parts := strings.Split(targetsFlag, ",")
	var targets []build.Target

	for _, part := range parts {
		part = strings.TrimSpace(part)
		if part == "" {
			continue
		}

		osArch := strings.Split(part, "/")
		if len(osArch) != 2 {
			return nil, fmt.Errorf("%s", i18n.T("cmd.build.error.invalid_target", map[string]interface{}{"Target": part}))
		}

		targets = append(targets, build.Target{
			OS:   strings.TrimSpace(osArch[0]),
			Arch: strings.TrimSpace(osArch[1]),
		})
	}

	if len(targets) == 0 {
		return nil, fmt.Errorf("%s", i18n.T("cmd.build.error.no_targets"))
	}

	return targets, nil
}

// formatTargets returns a human-readable string of targets.
func formatTargets(targets []build.Target) string {
	var parts []string
	for _, t := range targets {
		parts = append(parts, t.String())
	}
	return strings.Join(parts, ", ")
}

// getBuilder returns the appropriate builder for the project type.
func getBuilder(projectType build.ProjectType) (build.Builder, error) {
	switch projectType {
	case build.ProjectTypeWails:
		return builders.NewWailsBuilder(), nil
	case build.ProjectTypeGo:
		return builders.NewGoBuilder(), nil
	case build.ProjectTypeDocker:
		return builders.NewDockerBuilder(), nil
	case build.ProjectTypeLinuxKit:
		return builders.NewLinuxKitBuilder(), nil
	case build.ProjectTypeTaskfile:
		return builders.NewTaskfileBuilder(), nil
	case build.ProjectTypeCPP:
		return builders.NewCPPBuilder(), nil
	case build.ProjectTypeNode:
		return nil, fmt.Errorf("%s", i18n.T("cmd.build.error.node_not_implemented"))
	case build.ProjectTypePHP:
		return nil, fmt.Errorf("%s", i18n.T("cmd.build.error.php_not_implemented"))
	default:
		return nil, fmt.Errorf("%s: %s", i18n.T("cmd.build.error.unsupported_type"), projectType)
	}
}
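The `os/arch,os/arch` flag convention that parseTargets implements can be sketched as a standalone program. The `Target` type and the plain-text error messages below are simplified stand-ins for `build.Target` and the i18n keys used in the deleted code:

```go
package main

import (
	"fmt"
	"strings"
)

// Target is a hypothetical stand-in for build.Target.
type Target struct {
	OS   string
	Arch string
}

// parseTargets splits a comma-separated "os/arch,os/arch" flag value,
// skipping empty entries and rejecting malformed pairs, following the
// same rules as the deleted parseTargets above.
func parseTargets(flag string) ([]Target, error) {
	var targets []Target
	for _, part := range strings.Split(flag, ",") {
		part = strings.TrimSpace(part)
		if part == "" {
			continue
		}
		osArch := strings.Split(part, "/")
		if len(osArch) != 2 {
			return nil, fmt.Errorf("invalid target %q, expected os/arch", part)
		}
		targets = append(targets, Target{
			OS:   strings.TrimSpace(osArch[0]),
			Arch: strings.TrimSpace(osArch[1]),
		})
	}
	if len(targets) == 0 {
		return nil, fmt.Errorf("no targets specified")
	}
	return targets, nil
}

func main() {
	targets, err := parseTargets("linux/amd64, darwin/arm64")
	if err != nil {
		panic(err)
	}
	for _, t := range targets {
		fmt.Printf("%s/%s\n", t.OS, t.Arch)
	}
}
```

A value such as `"bogus"` (no slash) fails the `len(osArch) != 2` check, while `"linux/amd64,"` parses cleanly because the trailing empty entry is skipped.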
@@ -1,324 +0,0 @@
// cmd_pwa.go implements PWA and legacy GUI build functionality.
//
// Supports building desktop applications from:
// - Local static web application directories
// - Live PWA URLs (downloads and packages)

package buildcmd

import (
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
	"os/exec"
	"path/filepath"
	"strings"

	"forge.lthn.ai/core/go/pkg/i18n"
	"github.com/leaanthony/debme"
	"github.com/leaanthony/gosod"
	"golang.org/x/net/html"
)

// Error sentinels for build commands
var (
	errPathRequired = errors.New("the --path flag is required")
	errURLRequired  = errors.New("the --url flag is required")
)

// runPwaBuild downloads a PWA from URL and builds it.
func runPwaBuild(pwaURL string) error {
	fmt.Printf("%s %s\n", i18n.T("cmd.build.pwa.starting"), pwaURL)

	tempDir, err := os.MkdirTemp("", "core-pwa-build-*")
	if err != nil {
		return fmt.Errorf("%s: %w", i18n.T("common.error.failed", map[string]any{"Action": "create temporary directory"}), err)
	}
	// defer os.RemoveAll(tempDir) // Keep temp dir for debugging
	fmt.Printf("%s %s\n", i18n.T("cmd.build.pwa.downloading_to"), tempDir)

	if err := downloadPWA(pwaURL, tempDir); err != nil {
		return fmt.Errorf("%s: %w", i18n.T("common.error.failed", map[string]any{"Action": "download PWA"}), err)
	}

	return runBuild(tempDir)
}

// downloadPWA fetches a PWA from a URL and saves assets locally.
func downloadPWA(baseURL, destDir string) error {
	// Fetch the main HTML page
	resp, err := http.Get(baseURL)
	if err != nil {
		return fmt.Errorf("%s %s: %w", i18n.T("common.error.failed", map[string]any{"Action": "fetch URL"}), baseURL, err)
	}
	defer func() { _ = resp.Body.Close() }()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return fmt.Errorf("%s: %w", i18n.T("common.error.failed", map[string]any{"Action": "read response body"}), err)
	}

	// Find the manifest URL from the HTML
	manifestURL, err := findManifestURL(string(body), baseURL)
	if err != nil {
		// If no manifest, it's not a PWA, but we can still try to package it as a simple site.
		fmt.Printf("%s %s\n", i18n.T("common.label.warning"), i18n.T("cmd.build.pwa.no_manifest"))
		if err := os.WriteFile(filepath.Join(destDir, "index.html"), body, 0644); err != nil {
			return fmt.Errorf("%s: %w", i18n.T("common.error.failed", map[string]any{"Action": "write index.html"}), err)
		}
		return nil
	}

	fmt.Printf("%s %s\n", i18n.T("cmd.build.pwa.found_manifest"), manifestURL)

	// Fetch and parse the manifest
	manifest, err := fetchManifest(manifestURL)
	if err != nil {
		return fmt.Errorf("%s: %w", i18n.T("common.error.failed", map[string]any{"Action": "fetch or parse manifest"}), err)
	}

	// Download all assets listed in the manifest
	assets := collectAssets(manifest, manifestURL)
	for _, assetURL := range assets {
		if err := downloadAsset(assetURL, destDir); err != nil {
			fmt.Printf("%s %s %s: %v\n", i18n.T("common.label.warning"), i18n.T("common.error.failed", map[string]any{"Action": "download asset"}), assetURL, err)
		}
	}

	// Also save the root index.html
	if err := os.WriteFile(filepath.Join(destDir, "index.html"), body, 0644); err != nil {
		return fmt.Errorf("%s: %w", i18n.T("common.error.failed", map[string]any{"Action": "write index.html"}), err)
	}

	fmt.Println(i18n.T("cmd.build.pwa.download_complete"))
	return nil
}

// findManifestURL extracts the manifest URL from HTML content.
func findManifestURL(htmlContent, baseURL string) (string, error) {
	doc, err := html.Parse(strings.NewReader(htmlContent))
	if err != nil {
		return "", err
	}

	var manifestPath string
	var f func(*html.Node)
	f = func(n *html.Node) {
		if n.Type == html.ElementNode && n.Data == "link" {
			var rel, href string
			for _, a := range n.Attr {
				if a.Key == "rel" {
					rel = a.Val
				}
				if a.Key == "href" {
					href = a.Val
				}
			}
			if rel == "manifest" && href != "" {
				manifestPath = href
				return
			}
		}
		for c := n.FirstChild; c != nil; c = c.NextSibling {
			f(c)
		}
	}
	f(doc)

	if manifestPath == "" {
		return "", fmt.Errorf("%s", i18n.T("cmd.build.pwa.error.no_manifest_tag"))
	}

	base, err := url.Parse(baseURL)
	if err != nil {
		return "", err
	}

	manifestURL, err := base.Parse(manifestPath)
	if err != nil {
		return "", err
	}

	return manifestURL.String(), nil
}
// fetchManifest downloads and parses a PWA manifest.
func fetchManifest(manifestURL string) (map[string]interface{}, error) {
	resp, err := http.Get(manifestURL)
	if err != nil {
		return nil, err
	}
	defer func() { _ = resp.Body.Close() }()

	var manifest map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&manifest); err != nil {
		return nil, err
	}
	return manifest, nil
}

// collectAssets extracts asset URLs from a PWA manifest.
func collectAssets(manifest map[string]interface{}, manifestURL string) []string {
	var assets []string
	base, _ := url.Parse(manifestURL)

	// Add start_url
	if startURL, ok := manifest["start_url"].(string); ok {
		if resolved, err := base.Parse(startURL); err == nil {
			assets = append(assets, resolved.String())
		}
	}

	// Add icons
	if icons, ok := manifest["icons"].([]interface{}); ok {
		for _, icon := range icons {
			if iconMap, ok := icon.(map[string]interface{}); ok {
				if src, ok := iconMap["src"].(string); ok {
					if resolved, err := base.Parse(src); err == nil {
						assets = append(assets, resolved.String())
					}
				}
			}
		}
	}

	return assets
}
// downloadAsset fetches a single asset and saves it locally.
func downloadAsset(assetURL, destDir string) error {
	resp, err := http.Get(assetURL)
	if err != nil {
		return err
	}
	defer func() { _ = resp.Body.Close() }()

	u, err := url.Parse(assetURL)
	if err != nil {
		return err
	}

	path := filepath.Join(destDir, filepath.FromSlash(u.Path))
	if err := os.MkdirAll(filepath.Dir(path), os.ModePerm); err != nil {
		return err
	}

	out, err := os.Create(path)
	if err != nil {
		return err
	}
	defer func() { _ = out.Close() }()

	_, err = io.Copy(out, resp.Body)
	return err
}

// runBuild builds a desktop application from a local directory.
func runBuild(fromPath string) error {
	fmt.Printf("%s %s\n", i18n.T("cmd.build.from_path.starting"), fromPath)

	info, err := os.Stat(fromPath)
	if err != nil {
		return fmt.Errorf("%s: %w", i18n.T("cmd.build.from_path.error.invalid_path"), err)
	}
	if !info.IsDir() {
		return fmt.Errorf("%s", i18n.T("cmd.build.from_path.error.must_be_directory"))
	}

	buildDir := ".core/build/app"
	htmlDir := filepath.Join(buildDir, "html")
	appName := filepath.Base(fromPath)
	if strings.HasPrefix(appName, "core-pwa-build-") {
		appName = "pwa-app"
	}
	outputExe := appName

	if err := os.RemoveAll(buildDir); err != nil {
		return fmt.Errorf("%s: %w", i18n.T("common.error.failed", map[string]any{"Action": "clean build directory"}), err)
	}

	// 1. Generate the project from the embedded template
	fmt.Println(i18n.T("cmd.build.from_path.generating_template"))
	templateFS, err := debme.FS(guiTemplate, "tmpl/gui")
	if err != nil {
		return fmt.Errorf("%s: %w", i18n.T("common.error.failed", map[string]any{"Action": "anchor template filesystem"}), err)
	}
	sod := gosod.New(templateFS)
	if sod == nil {
		return fmt.Errorf("%s", i18n.T("common.error.failed", map[string]any{"Action": "create new sod instance"}))
	}

	templateData := map[string]string{"AppName": appName}
	if err := sod.Extract(buildDir, templateData); err != nil {
		return fmt.Errorf("%s: %w", i18n.T("common.error.failed", map[string]any{"Action": "extract template"}), err)
	}

	// 2. Copy the user's web app files
	fmt.Println(i18n.T("cmd.build.from_path.copying_files"))
	if err := copyDir(fromPath, htmlDir); err != nil {
		return fmt.Errorf("%s: %w", i18n.T("common.error.failed", map[string]any{"Action": "copy application files"}), err)
	}

	// 3. Compile the application
	fmt.Println(i18n.T("cmd.build.from_path.compiling"))

	// Run go mod tidy
	cmd := exec.Command("go", "mod", "tidy")
	cmd.Dir = buildDir
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("%s: %w", i18n.T("cmd.build.from_path.error.go_mod_tidy"), err)
	}

	// Run go build
	cmd = exec.Command("go", "build", "-o", outputExe)
	cmd.Dir = buildDir
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("%s: %w", i18n.T("cmd.build.from_path.error.go_build"), err)
	}

	fmt.Printf("\n%s %s/%s\n", i18n.T("cmd.build.from_path.success"), buildDir, outputExe)
	return nil
}

// copyDir recursively copies a directory from src to dst.
func copyDir(src, dst string) error {
	return filepath.Walk(src, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}

		relPath, err := filepath.Rel(src, path)
		if err != nil {
			return err
		}

		dstPath := filepath.Join(dst, relPath)

		if info.IsDir() {
			return os.MkdirAll(dstPath, info.Mode())
		}

		srcFile, err := os.Open(path)
		if err != nil {
			return err
		}
		defer func() { _ = srcFile.Close() }()

		dstFile, err := os.Create(dstPath)
		if err != nil {
			return err
		}
		defer func() { _ = dstFile.Close() }()

		_, err = io.Copy(dstFile, srcFile)
		return err
	})
}
@@ -1,111 +0,0 @@
// cmd_release.go implements the release command: build + archive + publish in one step.

package buildcmd

import (
	"context"
	"os"

	"forge.lthn.ai/core/go/pkg/cli"
	"forge.lthn.ai/core/go/pkg/framework/core"
	"forge.lthn.ai/core/go/pkg/i18n"
	"forge.lthn.ai/core/go/pkg/release"
)

// Flag variables for release command
var (
	releaseVersion     string
	releaseDraft       bool
	releasePrerelease  bool
	releaseGoForLaunch bool
)

var releaseCmd = &cli.Command{
	Use:   "release",
	Short: i18n.T("cmd.build.release.short"),
	Long:  i18n.T("cmd.build.release.long"),
	RunE: func(cmd *cli.Command, args []string) error {
		return runRelease(cmd.Context(), !releaseGoForLaunch, releaseVersion, releaseDraft, releasePrerelease)
	},
}

func init() {
	releaseCmd.Flags().BoolVar(&releaseGoForLaunch, "we-are-go-for-launch", false, i18n.T("cmd.build.release.flag.go_for_launch"))
	releaseCmd.Flags().StringVar(&releaseVersion, "version", "", i18n.T("cmd.build.release.flag.version"))
	releaseCmd.Flags().BoolVar(&releaseDraft, "draft", false, i18n.T("cmd.build.release.flag.draft"))
	releaseCmd.Flags().BoolVar(&releasePrerelease, "prerelease", false, i18n.T("cmd.build.release.flag.prerelease"))
}

// AddReleaseCommand adds the release subcommand to the build command.
func AddReleaseCommand(buildCmd *cli.Command) {
	buildCmd.AddCommand(releaseCmd)
}

// runRelease executes the full release workflow: build + archive + checksum + publish.
func runRelease(ctx context.Context, dryRun bool, version string, draft, prerelease bool) error {
	// Get current directory
	projectDir, err := os.Getwd()
	if err != nil {
		return core.E("release", "get working directory", err)
	}

	// Check for release config
	if !release.ConfigExists(projectDir) {
		cli.Print("%s %s\n",
			buildErrorStyle.Render(i18n.Label("error")),
			i18n.T("cmd.build.release.error.no_config"),
		)
		cli.Print("  %s\n", buildDimStyle.Render(i18n.T("cmd.build.release.hint.create_config")))
		return core.E("release", "config not found", nil)
	}

	// Load configuration
	cfg, err := release.LoadConfig(projectDir)
	if err != nil {
		return core.E("release", "load config", err)
	}

	// Apply CLI overrides
	if version != "" {
		cfg.SetVersion(version)
	}

	// Apply draft/prerelease overrides to all publishers
	if draft || prerelease {
		for i := range cfg.Publishers {
			if draft {
				cfg.Publishers[i].Draft = true
			}
			if prerelease {
				cfg.Publishers[i].Prerelease = true
			}
		}
	}

	// Print header
	cli.Print("%s %s\n", buildHeaderStyle.Render(i18n.T("cmd.build.release.label.release")), i18n.T("cmd.build.release.building_and_publishing"))
	if dryRun {
		cli.Print("  %s\n", buildDimStyle.Render(i18n.T("cmd.build.release.dry_run_hint")))
	}
	cli.Blank()

	// Run full release (build + archive + checksum + publish)
	rel, err := release.Run(ctx, cfg, dryRun)
	if err != nil {
		return err
	}

	// Print summary
	cli.Blank()
	cli.Print("%s %s\n", buildSuccessStyle.Render(i18n.T("i18n.done.pass")), i18n.T("cmd.build.release.completed"))
	cli.Print("  %s %s\n", i18n.Label("version"), buildTargetStyle.Render(rel.Version))
	cli.Print("  %s %d\n", i18n.T("cmd.build.release.label.artifacts"), len(rel.Artifacts))

	if !dryRun {
		for _, pub := range cfg.Publishers {
			cli.Print("  %s %s\n", i18n.T("cmd.build.release.label.published"), buildTargetStyle.Render(pub.Type))
		}
	}

	return nil
}
@@ -1,82 +0,0 @@
// cmd_sdk.go implements SDK generation from OpenAPI specifications.
//
// Generates typed API clients for TypeScript, Python, Go, and PHP
// from OpenAPI/Swagger specifications.

package buildcmd

import (
	"context"
	"fmt"
	"os"
	"strings"

	"forge.lthn.ai/core/go/pkg/i18n"
	"forge.lthn.ai/core/go/pkg/sdk"
)

// runBuildSDK handles the `core build sdk` command.
func runBuildSDK(specPath, lang, version string, dryRun bool) error {
	ctx := context.Background()

	projectDir, err := os.Getwd()
	if err != nil {
		return fmt.Errorf("%s: %w", i18n.T("common.error.failed", map[string]any{"Action": "get working directory"}), err)
	}

	// Load config
	config := sdk.DefaultConfig()
	if specPath != "" {
		config.Spec = specPath
	}

	s := sdk.New(projectDir, config)
	if version != "" {
		s.SetVersion(version)
	}

	fmt.Printf("%s %s\n", buildHeaderStyle.Render(i18n.T("cmd.build.sdk.label")), i18n.T("cmd.build.sdk.generating"))
	if dryRun {
		fmt.Printf("  %s\n", buildDimStyle.Render(i18n.T("cmd.build.sdk.dry_run_mode")))
	}
	fmt.Println()

	// Detect spec
	detectedSpec, err := s.DetectSpec()
	if err != nil {
		fmt.Printf("%s %v\n", buildErrorStyle.Render(i18n.T("common.label.error")), err)
		return err
	}
	fmt.Printf("  %s %s\n", i18n.T("common.label.spec"), buildTargetStyle.Render(detectedSpec))

	if dryRun {
		if lang != "" {
			fmt.Printf("  %s %s\n", i18n.T("cmd.build.sdk.language_label"), buildTargetStyle.Render(lang))
		} else {
			fmt.Printf("  %s %s\n", i18n.T("cmd.build.sdk.languages_label"), buildTargetStyle.Render(strings.Join(config.Languages, ", ")))
		}
		fmt.Println()
		fmt.Printf("%s %s\n", buildSuccessStyle.Render(i18n.T("cmd.build.label.ok")), i18n.T("cmd.build.sdk.would_generate"))
		return nil
	}

	if lang != "" {
		// Generate single language
		if err := s.GenerateLanguage(ctx, lang); err != nil {
			fmt.Printf("%s %v\n", buildErrorStyle.Render(i18n.T("common.label.error")), err)
			return err
		}
		fmt.Printf("  %s %s\n", i18n.T("cmd.build.sdk.generated_label"), buildTargetStyle.Render(lang))
	} else {
		// Generate all
		if err := s.Generate(ctx); err != nil {
			fmt.Printf("%s %v\n", buildErrorStyle.Render(i18n.T("common.label.error")), err)
			return err
		}
		fmt.Printf("  %s %s\n", i18n.T("cmd.build.sdk.generated_label"), buildTargetStyle.Render(strings.Join(config.Languages, ", ")))
	}

	fmt.Println()
	fmt.Printf("%s %s\n", buildSuccessStyle.Render(i18n.T("common.label.success")), i18n.T("cmd.build.sdk.complete"))
	return nil
}
@@ -1,7 +0,0 @@
module {{.AppName}}

go 1.21

require (
	github.com/wailsapp/wails/v3 v3.0.0-alpha.8
)

@@ -1 +0,0 @@
// This file ensures the 'html' directory is correctly embedded by the Go compiler.

@@ -1,25 +0,0 @@
package main

import (
	"embed"
	"log"

	"github.com/wailsapp/wails/v3/pkg/application"
)

//go:embed all:html
var assets embed.FS

func main() {
	app := application.New(application.Options{
		Name:        "{{.AppName}}",
		Description: "A web application enclaved by Core.",
		Assets: application.AssetOptions{
			FS: assets,
		},
	})

	if err := app.Run(); err != nil {
		log.Fatal(err)
	}
}
@@ -1,253 +0,0 @@
// Package builders provides build implementations for different project types.
package builders

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
	"strings"

	"forge.lthn.ai/core/go/pkg/build"
	"forge.lthn.ai/core/go/pkg/io"
)

// CPPBuilder implements the Builder interface for C++ projects using CMake + Conan.
// It wraps the Makefile-based build system from the .core/build submodule.
type CPPBuilder struct{}

// NewCPPBuilder creates a new CPPBuilder instance.
func NewCPPBuilder() *CPPBuilder {
	return &CPPBuilder{}
}

// Name returns the builder's identifier.
func (b *CPPBuilder) Name() string {
	return "cpp"
}

// Detect checks if this builder can handle the project in the given directory.
func (b *CPPBuilder) Detect(fs io.Medium, dir string) (bool, error) {
	return build.IsCPPProject(fs, dir), nil
}

// Build compiles the C++ project using Make targets.
// The build flow is: make configure → make build → make package.
// Cross-compilation is handled via Conan profiles specified in .core/build.yaml.
func (b *CPPBuilder) Build(ctx context.Context, cfg *build.Config, targets []build.Target) ([]build.Artifact, error) {
	if cfg == nil {
		return nil, fmt.Errorf("builders.CPPBuilder.Build: config is nil")
	}

	// Validate make is available
	if err := b.validateMake(); err != nil {
		return nil, err
	}

	// For C++ projects, the Makefile handles everything.
	// We don't iterate per-target like Go — the Makefile's configure + build
	// produces binaries for the host platform, and cross-compilation uses
	// named Conan profiles (e.g., make gcc-linux-armv8).
	if len(targets) == 0 {
		// Default to host platform
		targets = []build.Target{{OS: runtime.GOOS, Arch: runtime.GOARCH}}
	}

	var artifacts []build.Artifact

	for _, target := range targets {
		built, err := b.buildTarget(ctx, cfg, target)
		if err != nil {
			return artifacts, fmt.Errorf("builders.CPPBuilder.Build: %w", err)
		}
		artifacts = append(artifacts, built...)
	}

	return artifacts, nil
}

// buildTarget compiles for a single target platform.
func (b *CPPBuilder) buildTarget(ctx context.Context, cfg *build.Config, target build.Target) ([]build.Artifact, error) {
	// Determine if this is a cross-compile or host build
	isHostBuild := target.OS == runtime.GOOS && target.Arch == runtime.GOARCH

	if isHostBuild {
		return b.buildHost(ctx, cfg, target)
	}

	return b.buildCross(ctx, cfg, target)
}

// buildHost runs the standard make configure → make build → make package flow.
func (b *CPPBuilder) buildHost(ctx context.Context, cfg *build.Config, target build.Target) ([]build.Artifact, error) {
	fmt.Printf("Building C++ project for %s/%s (host)\n", target.OS, target.Arch)

	// Step 1: Configure (runs conan install + cmake configure)
	if err := b.runMake(ctx, cfg.ProjectDir, "configure"); err != nil {
		return nil, fmt.Errorf("configure failed: %w", err)
	}

	// Step 2: Build
	if err := b.runMake(ctx, cfg.ProjectDir, "build"); err != nil {
		return nil, fmt.Errorf("build failed: %w", err)
	}

	// Step 3: Package
	if err := b.runMake(ctx, cfg.ProjectDir, "package"); err != nil {
		return nil, fmt.Errorf("package failed: %w", err)
	}

	// Discover artifacts from build/packages/
	return b.findArtifacts(cfg.FS, cfg.ProjectDir, target)
}

// buildCross runs a cross-compilation using a Conan profile name.
// The Makefile supports profile targets like: make gcc-linux-armv8
func (b *CPPBuilder) buildCross(ctx context.Context, cfg *build.Config, target build.Target) ([]build.Artifact, error) {
	// Map target to a Conan profile name
	profile := b.targetToProfile(target)
	if profile == "" {
		return nil, fmt.Errorf("no Conan profile mapped for target %s/%s", target.OS, target.Arch)
	}

	fmt.Printf("Building C++ project for %s/%s (cross: %s)\n", target.OS, target.Arch, profile)

	// The Makefile exposes each profile as a top-level target
	if err := b.runMake(ctx, cfg.ProjectDir, profile); err != nil {
		return nil, fmt.Errorf("cross-compile for %s failed: %w", profile, err)
	}

	return b.findArtifacts(cfg.FS, cfg.ProjectDir, target)
}

// runMake executes a make target in the project directory.
func (b *CPPBuilder) runMake(ctx context.Context, projectDir string, target string) error {
	cmd := exec.CommandContext(ctx, "make", target)
	cmd.Dir = projectDir
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.Env = os.Environ()

	if err := cmd.Run(); err != nil {
		return fmt.Errorf("make %s: %w", target, err)
	}
	return nil
}

// findArtifacts searches for built packages in build/packages/.
func (b *CPPBuilder) findArtifacts(fs io.Medium, projectDir string, target build.Target) ([]build.Artifact, error) {
	packagesDir := filepath.Join(projectDir, "build", "packages")

	if !fs.IsDir(packagesDir) {
		// Fall back to searching build/release/src/ for raw binaries
		return b.findBinaries(fs, projectDir, target)
	}

	entries, err := fs.List(packagesDir)
	if err != nil {
		return nil, fmt.Errorf("failed to list packages directory: %w", err)
	}

	var artifacts []build.Artifact
	for _, entry := range entries {
		if entry.IsDir() {
			continue
		}

		name := entry.Name()
		// Skip checksum files and hidden files
		if strings.HasSuffix(name, ".sha256") || strings.HasPrefix(name, ".") {
			continue
		}

		artifacts = append(artifacts, build.Artifact{
			Path: filepath.Join(packagesDir, name),
			OS:   target.OS,
			Arch: target.Arch,
		})
	}

	return artifacts, nil
}

// findBinaries searches for compiled binaries in build/release/src/.
func (b *CPPBuilder) findBinaries(fs io.Medium, projectDir string, target build.Target) ([]build.Artifact, error) {
	binDir := filepath.Join(projectDir, "build", "release", "src")

	if !fs.IsDir(binDir) {
		return nil, fmt.Errorf("no build output found in %s", binDir)
	}

	entries, err := fs.List(binDir)
	if err != nil {
		return nil, fmt.Errorf("failed to list build directory: %w", err)
	}

	var artifacts []build.Artifact
	for _, entry := range entries {
		if entry.IsDir() {
			continue
		}

		name := entry.Name()
		// Skip non-executable files (libraries, cmake files, etc.)
		if strings.HasSuffix(name, ".a") || strings.HasSuffix(name, ".o") ||
			strings.HasSuffix(name, ".cmake") || strings.HasPrefix(name, ".") {
			continue
		}

		fullPath := filepath.Join(binDir, name)

		// On Unix, check if file is executable
		if target.OS != "windows" {
			info, err := os.Stat(fullPath)
			if err != nil {
				continue
			}
			if info.Mode()&0111 == 0 {
				continue
			}
		}

		artifacts = append(artifacts, build.Artifact{
			Path: fullPath,
			OS:   target.OS,
			Arch: target.Arch,
		})
	}

	return artifacts, nil
}

// targetToProfile maps a build target to a Conan cross-compilation profile name.
// Profile names match those in .core/build/cmake/profiles/.
func (b *CPPBuilder) targetToProfile(target build.Target) string {
	key := target.OS + "/" + target.Arch
	profiles := map[string]string{
		"linux/amd64":    "gcc-linux-x86_64",
		"linux/x86_64":   "gcc-linux-x86_64",
		"linux/arm64":    "gcc-linux-armv8",
		"linux/armv8":    "gcc-linux-armv8",
		"darwin/arm64":   "apple-clang-armv8",
		"darwin/armv8":   "apple-clang-armv8",
		"darwin/amd64":   "apple-clang-x86_64",
		"darwin/x86_64":  "apple-clang-x86_64",
		"windows/amd64":  "msvc-194-x86_64",
		"windows/x86_64": "msvc-194-x86_64",
	}

	return profiles[key]
}
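targetToProfile is a plain map lookup keyed on "os/arch": an unmapped target yields the string zero value "", which buildCross reports as an error rather than guessing. A trimmed-down sketch of that idiom (the profile table here is a small invented subset):

```go
package main

import "fmt"

// profileFor looks up a Conan-style profile name for an os/arch pair.
// A miss returns "", the zero value for a map[string]string lookup,
// which the caller treats as "no profile mapped".
func profileFor(osName, arch string) string {
	profiles := map[string]string{
		"linux/amd64":  "gcc-linux-x86_64",
		"linux/arm64":  "gcc-linux-armv8",
		"darwin/arm64": "apple-clang-armv8",
	}
	return profiles[osName+"/"+arch]
}

func main() {
	fmt.Println(profileFor("linux", "arm64"))
	// An unmapped pair falls through to the zero value.
	fmt.Printf("%q\n", profileFor("plan9", "mips"))
}
```

Relying on the zero value keeps the lookup to one line; the alternative comma-ok form (`p, ok := profiles[key]`) is only needed when "" could itself be a valid profile name.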
// validateMake checks if make is available.
func (b *CPPBuilder) validateMake() error {
	if _, err := exec.LookPath("make"); err != nil {
		return fmt.Errorf("cpp: make not found. Install build-essential (Linux) or Xcode Command Line Tools (macOS)")
	}
	return nil
}

// Ensure CPPBuilder implements the Builder interface.
var _ build.Builder = (*CPPBuilder)(nil)

@@ -1,149 +0,0 @@
package builders

import (
	"os"
	"path/filepath"
	"testing"

	"forge.lthn.ai/core/go/pkg/build"
	"forge.lthn.ai/core/go/pkg/io"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestCPPBuilder_Name_Good(t *testing.T) {
	builder := NewCPPBuilder()
	assert.Equal(t, "cpp", builder.Name())
}

func TestCPPBuilder_Detect_Good(t *testing.T) {
	fs := io.Local

	t.Run("detects C++ project with CMakeLists.txt", func(t *testing.T) {
		dir := t.TempDir()
		err := os.WriteFile(filepath.Join(dir, "CMakeLists.txt"), []byte("cmake_minimum_required(VERSION 3.16)"), 0644)
		require.NoError(t, err)

		builder := NewCPPBuilder()
		detected, err := builder.Detect(fs, dir)
		assert.NoError(t, err)
		assert.True(t, detected)
	})

	t.Run("returns false for non-C++ project", func(t *testing.T) {
		dir := t.TempDir()
		err := os.WriteFile(filepath.Join(dir, "go.mod"), []byte("module test"), 0644)
		require.NoError(t, err)

		builder := NewCPPBuilder()
		detected, err := builder.Detect(fs, dir)
		assert.NoError(t, err)
		assert.False(t, detected)
	})

	t.Run("returns false for empty directory", func(t *testing.T) {
		dir := t.TempDir()

		builder := NewCPPBuilder()
		detected, err := builder.Detect(fs, dir)
		assert.NoError(t, err)
		assert.False(t, detected)
	})
}

func TestCPPBuilder_Build_Bad(t *testing.T) {
	t.Run("returns error for nil config", func(t *testing.T) {
		builder := NewCPPBuilder()
		artifacts, err := builder.Build(nil, nil, []build.Target{{OS: "linux", Arch: "amd64"}})
		assert.Error(t, err)
		assert.Nil(t, artifacts)
		assert.Contains(t, err.Error(), "config is nil")
	})
}

func TestCPPBuilder_TargetToProfile_Good(t *testing.T) {
	builder := NewCPPBuilder()

	tests := []struct {
		os, arch string
		expected string
	}{
		{"linux", "amd64", "gcc-linux-x86_64"},
		{"linux", "x86_64", "gcc-linux-x86_64"},
		{"linux", "arm64", "gcc-linux-armv8"},
		{"darwin", "arm64", "apple-clang-armv8"},
		{"darwin", "amd64", "apple-clang-x86_64"},
		{"windows", "amd64", "msvc-194-x86_64"},
	}

	for _, tt := range tests {
		t.Run(tt.os+"/"+tt.arch, func(t *testing.T) {
			profile := builder.targetToProfile(build.Target{OS: tt.os, Arch: tt.arch})
			assert.Equal(t, tt.expected, profile)
		})
	}
}

func TestCPPBuilder_TargetToProfile_Bad(t *testing.T) {
	builder := NewCPPBuilder()

	t.Run("returns empty for unknown target", func(t *testing.T) {
		profile := builder.targetToProfile(build.Target{OS: "plan9", Arch: "mips"})
		assert.Empty(t, profile)
	})
}

func TestCPPBuilder_FindArtifacts_Good(t *testing.T) {
	fs := io.Local

	t.Run("finds packages in build/packages", func(t *testing.T) {
		dir := t.TempDir()
		packagesDir := filepath.Join(dir, "build", "packages")
		require.NoError(t, os.MkdirAll(packagesDir, 0755))

		// Create mock package files
		require.NoError(t, os.WriteFile(filepath.Join(packagesDir, "test-1.0-linux-x86_64.tar.xz"), []byte("pkg"), 0644))
		require.NoError(t, os.WriteFile(filepath.Join(packagesDir, "test-1.0-linux-x86_64.tar.xz.sha256"), []byte("checksum"), 0644))
		require.NoError(t, os.WriteFile(filepath.Join(packagesDir, "test-1.0-linux-x86_64.rpm"), []byte("rpm"), 0644))

		builder := NewCPPBuilder()
		target := build.Target{OS: "linux", Arch: "amd64"}
		artifacts, err := builder.findArtifacts(fs, dir, target)
		require.NoError(t, err)

		// Should find tar.xz and rpm but not sha256
		assert.Len(t, artifacts, 2)
		for _, a := range artifacts {
			assert.Equal(t, "linux", a.OS)
			assert.Equal(t, "amd64", a.Arch)
			assert.False(t, filepath.Ext(a.Path) == ".sha256")
		}
	})

	t.Run("falls back to binaries in build/release/src", func(t *testing.T) {
		dir := t.TempDir()
		binDir := filepath.Join(dir, "build", "release", "src")
		require.NoError(t, os.MkdirAll(binDir, 0755))

		// Create mock binary (executable)
		binPath := filepath.Join(binDir, "test-daemon")
|
||||
require.NoError(t, os.WriteFile(binPath, []byte("binary"), 0755))
|
||||
|
||||
// Create a library (should be skipped)
|
||||
require.NoError(t, os.WriteFile(filepath.Join(binDir, "libcrypto.a"), []byte("lib"), 0644))
|
||||
|
||||
builder := NewCPPBuilder()
|
||||
target := build.Target{OS: "linux", Arch: "amd64"}
|
||||
artifacts, err := builder.findArtifacts(fs, dir, target)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Should find the executable but not the library
|
||||
assert.Len(t, artifacts, 1)
|
||||
assert.Contains(t, artifacts[0].Path, "test-daemon")
|
||||
})
|
||||
}
|
||||
|
||||
func TestCPPBuilder_Interface_Good(t *testing.T) {
|
||||
var _ build.Builder = (*CPPBuilder)(nil)
|
||||
var _ build.Builder = NewCPPBuilder()
|
||||
}
|
||||
@ -1,215 +0,0 @@

// Package builders provides build implementations for different project types.
package builders

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"

	"forge.lthn.ai/core/go/pkg/build"
	"forge.lthn.ai/core/go/pkg/io"
)

// DockerBuilder builds Docker images.
type DockerBuilder struct{}

// NewDockerBuilder creates a new Docker builder.
func NewDockerBuilder() *DockerBuilder {
	return &DockerBuilder{}
}

// Name returns the builder's identifier.
func (b *DockerBuilder) Name() string {
	return "docker"
}

// Detect checks if a Dockerfile exists in the directory.
func (b *DockerBuilder) Detect(fs io.Medium, dir string) (bool, error) {
	dockerfilePath := filepath.Join(dir, "Dockerfile")
	if fs.IsFile(dockerfilePath) {
		return true, nil
	}
	return false, nil
}

// Build builds Docker images for the specified targets.
func (b *DockerBuilder) Build(ctx context.Context, cfg *build.Config, targets []build.Target) ([]build.Artifact, error) {
	// Validate docker CLI is available
	if err := b.validateDockerCli(); err != nil {
		return nil, err
	}

	// Ensure buildx is available
	if err := b.ensureBuildx(ctx); err != nil {
		return nil, err
	}

	// Determine Dockerfile path
	dockerfile := cfg.Dockerfile
	if dockerfile == "" {
		dockerfile = filepath.Join(cfg.ProjectDir, "Dockerfile")
	}

	// Validate Dockerfile exists
	if !cfg.FS.IsFile(dockerfile) {
		return nil, fmt.Errorf("docker.Build: Dockerfile not found: %s", dockerfile)
	}

	// Determine image name
	imageName := cfg.Image
	if imageName == "" {
		imageName = cfg.Name
	}
	if imageName == "" {
		imageName = filepath.Base(cfg.ProjectDir)
	}

	// Build platform string from targets
	var platforms []string
	for _, t := range targets {
		platforms = append(platforms, fmt.Sprintf("%s/%s", t.OS, t.Arch))
	}

	// If no targets specified, use current platform
	if len(platforms) == 0 {
		platforms = []string{"linux/amd64"}
	}

	// Determine registry
	registry := cfg.Registry
	if registry == "" {
		registry = "ghcr.io"
	}

	// Determine tags
	tags := cfg.Tags
	if len(tags) == 0 {
		tags = []string{"latest"}
		if cfg.Version != "" {
			tags = append(tags, cfg.Version)
		}
	}

	// Build full image references
	var imageRefs []string
	for _, tag := range tags {
		// Expand version template
		expandedTag := strings.ReplaceAll(tag, "{{.Version}}", cfg.Version)
		expandedTag = strings.ReplaceAll(expandedTag, "{{Version}}", cfg.Version)

		if registry != "" {
			imageRefs = append(imageRefs, fmt.Sprintf("%s/%s:%s", registry, imageName, expandedTag))
		} else {
			imageRefs = append(imageRefs, fmt.Sprintf("%s:%s", imageName, expandedTag))
		}
	}

	// Build the docker buildx command
	args := []string{"buildx", "build"}

	// Multi-platform support
	args = append(args, "--platform", strings.Join(platforms, ","))

	// Add all tags
	for _, ref := range imageRefs {
		args = append(args, "-t", ref)
	}

	// Dockerfile path
	args = append(args, "-f", dockerfile)

	// Build arguments
	for k, v := range cfg.BuildArgs {
		expandedValue := strings.ReplaceAll(v, "{{.Version}}", cfg.Version)
		expandedValue = strings.ReplaceAll(expandedValue, "{{Version}}", cfg.Version)
		args = append(args, "--build-arg", fmt.Sprintf("%s=%s", k, expandedValue))
	}

	// Always add VERSION build arg if version is set
	if cfg.Version != "" {
		args = append(args, "--build-arg", fmt.Sprintf("VERSION=%s", cfg.Version))
	}

	// Output to local docker images or push
	if cfg.Push {
		args = append(args, "--push")
	} else {
		// For multi-platform builds without push, we need to load or output somewhere
		if len(platforms) == 1 {
			args = append(args, "--load")
		} else {
			// Multi-platform builds can't use --load, output to tarball
			outputPath := filepath.Join(cfg.OutputDir, fmt.Sprintf("%s.tar", imageName))
			args = append(args, "--output", fmt.Sprintf("type=oci,dest=%s", outputPath))
		}
	}

	// Build context (project directory)
	args = append(args, cfg.ProjectDir)

	// Create output directory
	if err := cfg.FS.EnsureDir(cfg.OutputDir); err != nil {
		return nil, fmt.Errorf("docker.Build: failed to create output directory: %w", err)
	}

	// Execute build
	cmd := exec.CommandContext(ctx, "docker", args...)
	cmd.Dir = cfg.ProjectDir
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	fmt.Printf("Building Docker image: %s\n", imageName)
	fmt.Printf("  Platforms: %s\n", strings.Join(platforms, ", "))
	fmt.Printf("  Tags: %s\n", strings.Join(imageRefs, ", "))

	if err := cmd.Run(); err != nil {
		return nil, fmt.Errorf("docker.Build: buildx build failed: %w", err)
	}

	// Create artifacts for each platform
	var artifacts []build.Artifact
	for _, t := range targets {
		artifacts = append(artifacts, build.Artifact{
			Path: imageRefs[0], // Primary image reference
			OS:   t.OS,
			Arch: t.Arch,
		})
	}

	return artifacts, nil
}

// validateDockerCli checks if the docker CLI is available.
func (b *DockerBuilder) validateDockerCli() error {
	cmd := exec.Command("docker", "--version")
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("docker: docker CLI not found. Install it from https://docs.docker.com/get-docker/")
	}
	return nil
}

// ensureBuildx ensures docker buildx is available and has a builder.
func (b *DockerBuilder) ensureBuildx(ctx context.Context) error {
	// Check if buildx is available
	cmd := exec.CommandContext(ctx, "docker", "buildx", "version")
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("docker: buildx is not available. Install it from https://docs.docker.com/buildx/working-with-buildx/")
	}

	// Check if we have a builder, create one if not
	cmd = exec.CommandContext(ctx, "docker", "buildx", "inspect", "--bootstrap")
	if err := cmd.Run(); err != nil {
		// Try to create a builder
		cmd = exec.CommandContext(ctx, "docker", "buildx", "create", "--use", "--bootstrap")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("docker: failed to create buildx builder: %w", err)
		}
	}

	return nil
}
@ -1,129 +0,0 @@

// Package builders provides build implementations for different project types.
package builders

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"

	"forge.lthn.ai/core/go/pkg/build"
	"forge.lthn.ai/core/go/pkg/io"
)

// GoBuilder implements the Builder interface for Go projects.
type GoBuilder struct{}

// NewGoBuilder creates a new GoBuilder instance.
func NewGoBuilder() *GoBuilder {
	return &GoBuilder{}
}

// Name returns the builder's identifier.
func (b *GoBuilder) Name() string {
	return "go"
}

// Detect checks if this builder can handle the project in the given directory.
// Uses IsGoProject from the build package which checks for go.mod or wails.json.
func (b *GoBuilder) Detect(fs io.Medium, dir string) (bool, error) {
	return build.IsGoProject(fs, dir), nil
}

// Build compiles the Go project for the specified targets.
// It sets GOOS, GOARCH, and CGO_ENABLED environment variables,
// applies ldflags and trimpath, and runs go build.
func (b *GoBuilder) Build(ctx context.Context, cfg *build.Config, targets []build.Target) ([]build.Artifact, error) {
	if cfg == nil {
		return nil, fmt.Errorf("builders.GoBuilder.Build: config is nil")
	}

	if len(targets) == 0 {
		return nil, fmt.Errorf("builders.GoBuilder.Build: no targets specified")
	}

	// Ensure output directory exists
	if err := cfg.FS.EnsureDir(cfg.OutputDir); err != nil {
		return nil, fmt.Errorf("builders.GoBuilder.Build: failed to create output directory: %w", err)
	}

	var artifacts []build.Artifact

	for _, target := range targets {
		artifact, err := b.buildTarget(ctx, cfg, target)
		if err != nil {
			return artifacts, fmt.Errorf("builders.GoBuilder.Build: failed to build %s: %w", target.String(), err)
		}
		artifacts = append(artifacts, artifact)
	}

	return artifacts, nil
}

// buildTarget compiles for a single target platform.
func (b *GoBuilder) buildTarget(ctx context.Context, cfg *build.Config, target build.Target) (build.Artifact, error) {
	// Determine output binary name
	binaryName := cfg.Name
	if binaryName == "" {
		binaryName = filepath.Base(cfg.ProjectDir)
	}

	// Add .exe extension for Windows
	if target.OS == "windows" && !strings.HasSuffix(binaryName, ".exe") {
		binaryName += ".exe"
	}

	// Create platform-specific output path: output/os_arch/binary
	platformDir := filepath.Join(cfg.OutputDir, fmt.Sprintf("%s_%s", target.OS, target.Arch))
	if err := cfg.FS.EnsureDir(platformDir); err != nil {
		return build.Artifact{}, fmt.Errorf("failed to create platform directory: %w", err)
	}

	outputPath := filepath.Join(platformDir, binaryName)

	// Build the go build arguments
	args := []string{"build"}

	// Add trimpath flag
	args = append(args, "-trimpath")

	// Add ldflags if specified
	if len(cfg.LDFlags) > 0 {
		ldflags := strings.Join(cfg.LDFlags, " ")
		args = append(args, "-ldflags", ldflags)
	}

	// Add output path
	args = append(args, "-o", outputPath)

	// Add the project directory as the build target (current directory)
	args = append(args, ".")

	// Create the command
	cmd := exec.CommandContext(ctx, "go", args...)
	cmd.Dir = cfg.ProjectDir

	// Set up environment
	env := os.Environ()
	env = append(env, fmt.Sprintf("GOOS=%s", target.OS))
	env = append(env, fmt.Sprintf("GOARCH=%s", target.Arch))
	env = append(env, "CGO_ENABLED=0") // CGO disabled by default for cross-compilation
	cmd.Env = env

	// Capture output for error messages
	output, err := cmd.CombinedOutput()
	if err != nil {
		return build.Artifact{}, fmt.Errorf("go build failed: %w\nOutput: %s", err, string(output))
	}

	return build.Artifact{
		Path: outputPath,
		OS:   target.OS,
		Arch: target.Arch,
	}, nil
}

// Ensure GoBuilder implements the Builder interface.
var _ build.Builder = (*GoBuilder)(nil)
@ -1,398 +0,0 @@

package builders

import (
	"context"
	"os"
	"path/filepath"
	"runtime"
	"testing"

	"forge.lthn.ai/core/go/pkg/build"
	"forge.lthn.ai/core/go/pkg/io"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// setupGoTestProject creates a minimal Go project for testing.
func setupGoTestProject(t *testing.T) string {
	t.Helper()
	dir := t.TempDir()

	// Create a minimal go.mod
	goMod := `module testproject

go 1.21
`
	err := os.WriteFile(filepath.Join(dir, "go.mod"), []byte(goMod), 0644)
	require.NoError(t, err)

	// Create a minimal main.go
	mainGo := `package main

func main() {
	println("hello")
}
`
	err = os.WriteFile(filepath.Join(dir, "main.go"), []byte(mainGo), 0644)
	require.NoError(t, err)

	return dir
}

func TestGoBuilder_Name_Good(t *testing.T) {
	builder := NewGoBuilder()
	assert.Equal(t, "go", builder.Name())
}

func TestGoBuilder_Detect_Good(t *testing.T) {
	fs := io.Local
	t.Run("detects Go project with go.mod", func(t *testing.T) {
		dir := t.TempDir()
		err := os.WriteFile(filepath.Join(dir, "go.mod"), []byte("module test"), 0644)
		require.NoError(t, err)

		builder := NewGoBuilder()
		detected, err := builder.Detect(fs, dir)
		assert.NoError(t, err)
		assert.True(t, detected)
	})

	t.Run("detects Wails project", func(t *testing.T) {
		dir := t.TempDir()
		err := os.WriteFile(filepath.Join(dir, "wails.json"), []byte("{}"), 0644)
		require.NoError(t, err)

		builder := NewGoBuilder()
		detected, err := builder.Detect(fs, dir)
		assert.NoError(t, err)
		assert.True(t, detected)
	})

	t.Run("returns false for non-Go project", func(t *testing.T) {
		dir := t.TempDir()
		// Create a Node.js project instead
		err := os.WriteFile(filepath.Join(dir, "package.json"), []byte("{}"), 0644)
		require.NoError(t, err)

		builder := NewGoBuilder()
		detected, err := builder.Detect(fs, dir)
		assert.NoError(t, err)
		assert.False(t, detected)
	})

	t.Run("returns false for empty directory", func(t *testing.T) {
		dir := t.TempDir()

		builder := NewGoBuilder()
		detected, err := builder.Detect(fs, dir)
		assert.NoError(t, err)
		assert.False(t, detected)
	})
}

func TestGoBuilder_Build_Good(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping integration test in short mode")
	}

	t.Run("builds for current platform", func(t *testing.T) {
		projectDir := setupGoTestProject(t)
		outputDir := t.TempDir()

		builder := NewGoBuilder()
		cfg := &build.Config{
			FS:         io.Local,
			ProjectDir: projectDir,
			OutputDir:  outputDir,
			Name:       "testbinary",
		}
		targets := []build.Target{
			{OS: runtime.GOOS, Arch: runtime.GOARCH},
		}

		artifacts, err := builder.Build(context.Background(), cfg, targets)
		require.NoError(t, err)
		require.Len(t, artifacts, 1)

		// Verify artifact properties
		artifact := artifacts[0]
		assert.Equal(t, runtime.GOOS, artifact.OS)
		assert.Equal(t, runtime.GOARCH, artifact.Arch)

		// Verify binary was created
		assert.FileExists(t, artifact.Path)

		// Verify the path is in the expected location
		expectedName := "testbinary"
		if runtime.GOOS == "windows" {
			expectedName += ".exe"
		}
		assert.Contains(t, artifact.Path, expectedName)
	})

	t.Run("builds multiple targets", func(t *testing.T) {
		projectDir := setupGoTestProject(t)
		outputDir := t.TempDir()

		builder := NewGoBuilder()
		cfg := &build.Config{
			FS:         io.Local,
			ProjectDir: projectDir,
			OutputDir:  outputDir,
			Name:       "multitest",
		}
		targets := []build.Target{
			{OS: "linux", Arch: "amd64"},
			{OS: "linux", Arch: "arm64"},
		}

		artifacts, err := builder.Build(context.Background(), cfg, targets)
		require.NoError(t, err)
		require.Len(t, artifacts, 2)

		// Verify both artifacts were created
		for i, artifact := range artifacts {
			assert.Equal(t, targets[i].OS, artifact.OS)
			assert.Equal(t, targets[i].Arch, artifact.Arch)
			assert.FileExists(t, artifact.Path)
		}
	})

	t.Run("adds .exe extension for Windows", func(t *testing.T) {
		projectDir := setupGoTestProject(t)
		outputDir := t.TempDir()

		builder := NewGoBuilder()
		cfg := &build.Config{
			FS:         io.Local,
			ProjectDir: projectDir,
			OutputDir:  outputDir,
			Name:       "wintest",
		}
		targets := []build.Target{
			{OS: "windows", Arch: "amd64"},
		}

		artifacts, err := builder.Build(context.Background(), cfg, targets)
		require.NoError(t, err)
		require.Len(t, artifacts, 1)

		// Verify .exe extension
		assert.True(t, filepath.Ext(artifacts[0].Path) == ".exe")
		assert.FileExists(t, artifacts[0].Path)
	})

	t.Run("uses directory name when Name not specified", func(t *testing.T) {
		projectDir := setupGoTestProject(t)
		outputDir := t.TempDir()

		builder := NewGoBuilder()
		cfg := &build.Config{
			FS:         io.Local,
			ProjectDir: projectDir,
			OutputDir:  outputDir,
			Name:       "", // Empty name
		}
		targets := []build.Target{
			{OS: runtime.GOOS, Arch: runtime.GOARCH},
		}

		artifacts, err := builder.Build(context.Background(), cfg, targets)
		require.NoError(t, err)
		require.Len(t, artifacts, 1)

		// Binary should use the project directory base name
		baseName := filepath.Base(projectDir)
		if runtime.GOOS == "windows" {
			baseName += ".exe"
		}
		assert.Contains(t, artifacts[0].Path, baseName)
	})

	t.Run("applies ldflags", func(t *testing.T) {
		projectDir := setupGoTestProject(t)
		outputDir := t.TempDir()

		builder := NewGoBuilder()
		cfg := &build.Config{
			FS:         io.Local,
			ProjectDir: projectDir,
			OutputDir:  outputDir,
			Name:       "ldflagstest",
			LDFlags:    []string{"-s", "-w"}, // Strip debug info
		}
		targets := []build.Target{
			{OS: runtime.GOOS, Arch: runtime.GOARCH},
		}

		artifacts, err := builder.Build(context.Background(), cfg, targets)
		require.NoError(t, err)
		require.Len(t, artifacts, 1)
		assert.FileExists(t, artifacts[0].Path)
	})

	t.Run("creates output directory if missing", func(t *testing.T) {
		projectDir := setupGoTestProject(t)
		outputDir := filepath.Join(t.TempDir(), "nested", "output")

		builder := NewGoBuilder()
		cfg := &build.Config{
			FS:         io.Local,
			ProjectDir: projectDir,
			OutputDir:  outputDir,
			Name:       "nestedtest",
		}
		targets := []build.Target{
			{OS: runtime.GOOS, Arch: runtime.GOARCH},
		}

		artifacts, err := builder.Build(context.Background(), cfg, targets)
		require.NoError(t, err)
		require.Len(t, artifacts, 1)
		assert.FileExists(t, artifacts[0].Path)
		assert.DirExists(t, outputDir)
	})
}

func TestGoBuilder_Build_Bad(t *testing.T) {
	t.Run("returns error for nil config", func(t *testing.T) {
		builder := NewGoBuilder()

		artifacts, err := builder.Build(context.Background(), nil, []build.Target{{OS: "linux", Arch: "amd64"}})
		assert.Error(t, err)
		assert.Nil(t, artifacts)
		assert.Contains(t, err.Error(), "config is nil")
	})

	t.Run("returns error for empty targets", func(t *testing.T) {
		projectDir := setupGoTestProject(t)

		builder := NewGoBuilder()
		cfg := &build.Config{
			FS:         io.Local,
			ProjectDir: projectDir,
			OutputDir:  t.TempDir(),
			Name:       "test",
		}

		artifacts, err := builder.Build(context.Background(), cfg, []build.Target{})
		assert.Error(t, err)
		assert.Nil(t, artifacts)
		assert.Contains(t, err.Error(), "no targets specified")
	})

	t.Run("returns error for invalid project directory", func(t *testing.T) {
		if testing.Short() {
			t.Skip("skipping integration test in short mode")
		}

		builder := NewGoBuilder()
		cfg := &build.Config{
			FS:         io.Local,
			ProjectDir: "/nonexistent/path",
			OutputDir:  t.TempDir(),
			Name:       "test",
		}
		targets := []build.Target{
			{OS: runtime.GOOS, Arch: runtime.GOARCH},
		}

		artifacts, err := builder.Build(context.Background(), cfg, targets)
		assert.Error(t, err)
		assert.Empty(t, artifacts)
	})

	t.Run("returns error for invalid Go code", func(t *testing.T) {
		if testing.Short() {
			t.Skip("skipping integration test in short mode")
		}

		dir := t.TempDir()

		// Create go.mod
		err := os.WriteFile(filepath.Join(dir, "go.mod"), []byte("module test\n\ngo 1.21"), 0644)
		require.NoError(t, err)

		// Create invalid Go code
		err = os.WriteFile(filepath.Join(dir, "main.go"), []byte("this is not valid go code"), 0644)
		require.NoError(t, err)

		builder := NewGoBuilder()
		cfg := &build.Config{
			FS:         io.Local,
			ProjectDir: dir,
			OutputDir:  t.TempDir(),
			Name:       "test",
		}
		targets := []build.Target{
			{OS: runtime.GOOS, Arch: runtime.GOARCH},
		}

		artifacts, err := builder.Build(context.Background(), cfg, targets)
		assert.Error(t, err)
		assert.Contains(t, err.Error(), "go build failed")
		assert.Empty(t, artifacts)
	})

	t.Run("returns partial artifacts on partial failure", func(t *testing.T) {
		if testing.Short() {
			t.Skip("skipping integration test in short mode")
		}

		// Create a project that will fail on one target
		// Using an invalid arch for linux
		projectDir := setupGoTestProject(t)
		outputDir := t.TempDir()

		builder := NewGoBuilder()
		cfg := &build.Config{
			FS:         io.Local,
			ProjectDir: projectDir,
			OutputDir:  outputDir,
			Name:       "partialtest",
		}
		targets := []build.Target{
			{OS: runtime.GOOS, Arch: runtime.GOARCH}, // This should succeed
			{OS: "linux", Arch: "invalid_arch"},      // This should fail
		}

		artifacts, err := builder.Build(context.Background(), cfg, targets)
		// Should return error for the failed build
		assert.Error(t, err)
		// Should have the successful artifact
		assert.Len(t, artifacts, 1)
	})

	t.Run("respects context cancellation", func(t *testing.T) {
		if testing.Short() {
			t.Skip("skipping integration test in short mode")
		}

		projectDir := setupGoTestProject(t)

		builder := NewGoBuilder()
		cfg := &build.Config{
			FS:         io.Local,
			ProjectDir: projectDir,
			OutputDir:  t.TempDir(),
			Name:       "canceltest",
		}
		targets := []build.Target{
			{OS: runtime.GOOS, Arch: runtime.GOARCH},
		}

		// Create an already cancelled context
		ctx, cancel := context.WithCancel(context.Background())
		cancel()

		artifacts, err := builder.Build(ctx, cfg, targets)
		assert.Error(t, err)
		assert.Empty(t, artifacts)
	})
}

func TestGoBuilder_Interface_Good(t *testing.T) {
	// Verify GoBuilder implements Builder interface
	var _ build.Builder = (*GoBuilder)(nil)
	var _ build.Builder = NewGoBuilder()
}
@ -1,270 +0,0 @@

// Package builders provides build implementations for different project types.
package builders

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"

	"forge.lthn.ai/core/go/pkg/build"
	"forge.lthn.ai/core/go/pkg/io"
)

// LinuxKitBuilder builds LinuxKit images.
type LinuxKitBuilder struct{}

// NewLinuxKitBuilder creates a new LinuxKit builder.
func NewLinuxKitBuilder() *LinuxKitBuilder {
	return &LinuxKitBuilder{}
}

// Name returns the builder's identifier.
func (b *LinuxKitBuilder) Name() string {
	return "linuxkit"
}

// Detect checks if a linuxkit.yml or .yml config exists in the directory.
func (b *LinuxKitBuilder) Detect(fs io.Medium, dir string) (bool, error) {
	// Check for linuxkit.yml
	if fs.IsFile(filepath.Join(dir, "linuxkit.yml")) {
		return true, nil
	}
	// Check for .core/linuxkit/
	lkDir := filepath.Join(dir, ".core", "linuxkit")
	if fs.IsDir(lkDir) {
		entries, err := fs.List(lkDir)
		if err == nil {
			for _, entry := range entries {
				if !entry.IsDir() && strings.HasSuffix(entry.Name(), ".yml") {
					return true, nil
				}
			}
		}
	}
	return false, nil
}

// Build builds LinuxKit images for the specified targets.
func (b *LinuxKitBuilder) Build(ctx context.Context, cfg *build.Config, targets []build.Target) ([]build.Artifact, error) {
	// Validate linuxkit CLI is available
	if err := b.validateLinuxKitCli(); err != nil {
		return nil, err
	}

	// Determine config file path
	configPath := cfg.LinuxKitConfig
	if configPath == "" {
		// Auto-detect
		if cfg.FS.IsFile(filepath.Join(cfg.ProjectDir, "linuxkit.yml")) {
			configPath = filepath.Join(cfg.ProjectDir, "linuxkit.yml")
		} else {
			// Look in .core/linuxkit/
			lkDir := filepath.Join(cfg.ProjectDir, ".core", "linuxkit")
			if cfg.FS.IsDir(lkDir) {
				entries, err := cfg.FS.List(lkDir)
				if err == nil {
					for _, entry := range entries {
						if !entry.IsDir() && strings.HasSuffix(entry.Name(), ".yml") {
							configPath = filepath.Join(lkDir, entry.Name())
							break
						}
					}
				}
			}
		}
	}

	if configPath == "" {
		return nil, fmt.Errorf("linuxkit.Build: no LinuxKit config file found. Specify with --config or create linuxkit.yml")
	}

	// Validate config file exists
	if !cfg.FS.IsFile(configPath) {
		return nil, fmt.Errorf("linuxkit.Build: config file not found: %s", configPath)
	}

	// Determine output formats
	formats := cfg.Formats
	if len(formats) == 0 {
		formats = []string{"qcow2-bios"} // Default to QEMU-compatible format
	}

	// Create output directory
	outputDir := cfg.OutputDir
	if outputDir == "" {
		outputDir = filepath.Join(cfg.ProjectDir, "dist")
	}
	if err := cfg.FS.EnsureDir(outputDir); err != nil {
		return nil, fmt.Errorf("linuxkit.Build: failed to create output directory: %w", err)
	}

	// Determine base name from config file or project name
	baseName := cfg.Name
	if baseName == "" {
		baseName = strings.TrimSuffix(filepath.Base(configPath), ".yml")
	}

	// If no targets, default to linux/amd64
	if len(targets) == 0 {
		targets = []build.Target{{OS: "linux", Arch: "amd64"}}
	}

	var artifacts []build.Artifact

	// Build for each target and format
	for _, target := range targets {
		// LinuxKit only supports Linux
		if target.OS != "linux" {
			fmt.Printf("Skipping %s/%s (LinuxKit only supports Linux)\n", target.OS, target.Arch)
			continue
		}

		for _, format := range formats {
			outputName := fmt.Sprintf("%s-%s", baseName, target.Arch)

			args := b.buildLinuxKitArgs(configPath, format, outputName, outputDir, target.Arch)

			cmd := exec.CommandContext(ctx, "linuxkit", args...)
			cmd.Dir = cfg.ProjectDir
			cmd.Stdout = os.Stdout
			cmd.Stderr = os.Stderr

			fmt.Printf("Building LinuxKit image: %s (%s, %s)\n", outputName, format, target.Arch)

			if err := cmd.Run(); err != nil {
				return nil, fmt.Errorf("linuxkit.Build: build failed for %s/%s: %w", target.Arch, format, err)
			}

			// Determine the actual output file path
			artifactPath := b.getArtifactPath(outputDir, outputName, format)

			// Verify the artifact was created
			if !cfg.FS.Exists(artifactPath) {
				// Try alternate naming conventions
				artifactPath = b.findArtifact(cfg.FS, outputDir, outputName, format)
				if artifactPath == "" {
					return nil, fmt.Errorf("linuxkit.Build: artifact not found after build: expected %s", b.getArtifactPath(outputDir, outputName, format))
				}
			}

			artifacts = append(artifacts, build.Artifact{
				Path: artifactPath,
				OS:   target.OS,
				Arch: target.Arch,
			})
		}
	}

	return artifacts, nil
}

// buildLinuxKitArgs builds the arguments for linuxkit build command.
func (b *LinuxKitBuilder) buildLinuxKitArgs(configPath, format, outputName, outputDir, arch string) []string {
	args := []string{"build"}

	// Output format
	args = append(args, "--format", format)

	// Output name
	args = append(args, "--name", outputName)

	// Output directory
	args = append(args, "--dir", outputDir)

	// Architecture (if not amd64)
	if arch != "amd64" {
		args = append(args, "--arch", arch)
	}

	// Config file
	args = append(args, configPath)

	return args
}

// getArtifactPath returns the expected path of the built artifact.
func (b *LinuxKitBuilder) getArtifactPath(outputDir, outputName, format string) string {
	ext := b.getFormatExtension(format)
	return filepath.Join(outputDir, outputName+ext)
}

// findArtifact searches for the built artifact with various naming conventions.
func (b *LinuxKitBuilder) findArtifact(fs io.Medium, outputDir, outputName, format string) string {
	// LinuxKit can create files with different suffixes
	extensions := []string{
		b.getFormatExtension(format),
		"-bios" + b.getFormatExtension(format),
		"-efi" + b.getFormatExtension(format),
	}

	for _, ext := range extensions {
		path := filepath.Join(outputDir, outputName+ext)
		if fs.Exists(path) {
			return path
		}
	}

	// Try to find any file matching the output name
	entries, err := fs.List(outputDir)
	if err == nil {
		for _, entry := range entries {
			if strings.HasPrefix(entry.Name(), outputName) {
|
||||
match := filepath.Join(outputDir, entry.Name())
|
||||
// Return first match that looks like an image
|
||||
ext := filepath.Ext(match)
|
||||
if ext == ".iso" || ext == ".qcow2" || ext == ".raw" || ext == ".vmdk" || ext == ".vhd" {
|
||||
return match
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return ""
|
||||
}
|
||||
|
||||
// getFormatExtension returns the file extension for a LinuxKit output format.
|
||||
func (b *LinuxKitBuilder) getFormatExtension(format string) string {
|
||||
switch format {
|
||||
case "iso", "iso-bios", "iso-efi":
|
||||
return ".iso"
|
||||
case "raw", "raw-bios", "raw-efi":
|
||||
return ".raw"
|
||||
case "qcow2", "qcow2-bios", "qcow2-efi":
|
||||
return ".qcow2"
|
||||
case "vmdk":
|
||||
return ".vmdk"
|
||||
case "vhd":
|
||||
return ".vhd"
|
||||
case "gcp":
|
||||
return ".img.tar.gz"
|
||||
case "aws":
|
||||
return ".raw"
|
||||
default:
|
||||
return "." + strings.TrimSuffix(format, "-bios")
|
||||
}
|
||||
}
|
||||
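The `default` branch of `getFormatExtension` leans on `strings.TrimSuffix`: an unknown BIOS variant like `"tar-bios"` maps to `".tar"`, while a format without the suffix passes through unchanged. A minimal standalone sketch (the helper below is a trimmed-down stand-in for the method above, not the builder's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// formatExtension mirrors only the default branch of the builder's switch:
// "." plus the format name with any trailing "-bios" dropped.
func formatExtension(format string) string {
	return "." + strings.TrimSuffix(format, "-bios")
}

func main() {
	fmt.Println(formatExtension("tar-bios"))    // unknown BIOS variant -> .tar
	fmt.Println(formatExtension("dynamic-vhd")) // no suffix -> .dynamic-vhd
}
```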
// validateLinuxKitCli checks if the linuxkit CLI is available.
func (b *LinuxKitBuilder) validateLinuxKitCli() error {
	// Check PATH first
	if _, err := exec.LookPath("linuxkit"); err == nil {
		return nil
	}

	// Check common locations
	paths := []string{
		"/usr/local/bin/linuxkit",
		"/opt/homebrew/bin/linuxkit",
	}

	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			return nil
		}
	}

	return fmt.Errorf("linuxkit: linuxkit CLI not found. Install with: brew install linuxkit (macOS) or see https://github.com/linuxkit/linuxkit")
}
@@ -1,275 +0,0 @@
// Package builders provides build implementations for different project types.
package builders

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"

	"forge.lthn.ai/core/go/pkg/build"
	"forge.lthn.ai/core/go/pkg/io"
)

// TaskfileBuilder builds projects using Taskfile (https://taskfile.dev/).
// This is a generic builder that can handle any project type that has a Taskfile.
type TaskfileBuilder struct{}

// NewTaskfileBuilder creates a new Taskfile builder.
func NewTaskfileBuilder() *TaskfileBuilder {
	return &TaskfileBuilder{}
}

// Name returns the builder's identifier.
func (b *TaskfileBuilder) Name() string {
	return "taskfile"
}

// Detect checks if a Taskfile exists in the directory.
func (b *TaskfileBuilder) Detect(fs io.Medium, dir string) (bool, error) {
	// Check for Taskfile.yml, Taskfile.yaml, or Taskfile
	taskfiles := []string{
		"Taskfile.yml",
		"Taskfile.yaml",
		"Taskfile",
		"taskfile.yml",
		"taskfile.yaml",
	}

	for _, tf := range taskfiles {
		if fs.IsFile(filepath.Join(dir, tf)) {
			return true, nil
		}
	}
	return false, nil
}

// Build runs the Taskfile build task for each target platform.
func (b *TaskfileBuilder) Build(ctx context.Context, cfg *build.Config, targets []build.Target) ([]build.Artifact, error) {
	// Validate that the task CLI is available
	if err := b.validateTaskCli(); err != nil {
		return nil, err
	}

	// Create the output directory
	outputDir := cfg.OutputDir
	if outputDir == "" {
		outputDir = filepath.Join(cfg.ProjectDir, "dist")
	}
	if err := cfg.FS.EnsureDir(outputDir); err != nil {
		return nil, fmt.Errorf("taskfile.Build: failed to create output directory: %w", err)
	}

	var artifacts []build.Artifact

	// If no targets are specified, just run the build task once
	if len(targets) == 0 {
		if err := b.runTask(ctx, cfg, "", ""); err != nil {
			return nil, err
		}

		// Try to find artifacts in the output directory
		found := b.findArtifacts(cfg.FS, outputDir)
		artifacts = append(artifacts, found...)
	} else {
		// Run the build task for each target
		for _, target := range targets {
			if err := b.runTask(ctx, cfg, target.OS, target.Arch); err != nil {
				return nil, err
			}

			// Try to find artifacts for this target
			found := b.findArtifactsForTarget(cfg.FS, outputDir, target)
			artifacts = append(artifacts, found...)
		}
	}

	return artifacts, nil
}

// runTask executes the Taskfile build task.
func (b *TaskfileBuilder) runTask(ctx context.Context, cfg *build.Config, goos, goarch string) error {
	// Build the task command
	args := []string{"build"}

	// Pass variables if targets are specified
	if goos != "" {
		args = append(args, fmt.Sprintf("GOOS=%s", goos))
	}
	if goarch != "" {
		args = append(args, fmt.Sprintf("GOARCH=%s", goarch))
	}
	if cfg.OutputDir != "" {
		args = append(args, fmt.Sprintf("OUTPUT_DIR=%s", cfg.OutputDir))
	}
	if cfg.Name != "" {
		args = append(args, fmt.Sprintf("NAME=%s", cfg.Name))
	}
	if cfg.Version != "" {
		args = append(args, fmt.Sprintf("VERSION=%s", cfg.Version))
	}

	cmd := exec.CommandContext(ctx, "task", args...)
	cmd.Dir = cfg.ProjectDir
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	// Set environment variables
	cmd.Env = os.Environ()
	if goos != "" {
		cmd.Env = append(cmd.Env, fmt.Sprintf("GOOS=%s", goos))
	}
	if goarch != "" {
		cmd.Env = append(cmd.Env, fmt.Sprintf("GOARCH=%s", goarch))
	}
	if cfg.OutputDir != "" {
		cmd.Env = append(cmd.Env, fmt.Sprintf("OUTPUT_DIR=%s", cfg.OutputDir))
	}
	if cfg.Name != "" {
		cmd.Env = append(cmd.Env, fmt.Sprintf("NAME=%s", cfg.Name))
	}
	if cfg.Version != "" {
		cmd.Env = append(cmd.Env, fmt.Sprintf("VERSION=%s", cfg.Version))
	}

	if goos != "" && goarch != "" {
		fmt.Printf("Running task build for %s/%s\n", goos, goarch)
	} else {
		fmt.Println("Running task build")
	}

	if err := cmd.Run(); err != nil {
		return fmt.Errorf("taskfile.Build: task build failed: %w", err)
	}

	return nil
}

// findArtifacts searches for built artifacts in the output directory.
func (b *TaskfileBuilder) findArtifacts(fs io.Medium, outputDir string) []build.Artifact {
	var artifacts []build.Artifact

	entries, err := fs.List(outputDir)
	if err != nil {
		return artifacts
	}

	for _, entry := range entries {
		if entry.IsDir() {
			continue
		}

		// Skip common non-artifact files
		name := entry.Name()
		if strings.HasPrefix(name, ".") || name == "CHECKSUMS.txt" {
			continue
		}

		artifacts = append(artifacts, build.Artifact{
			Path: filepath.Join(outputDir, name),
			OS:   "",
			Arch: "",
		})
	}

	return artifacts
}

// findArtifactsForTarget searches for built artifacts for a specific target.
func (b *TaskfileBuilder) findArtifactsForTarget(fs io.Medium, outputDir string, target build.Target) []build.Artifact {
	var artifacts []build.Artifact

	// 1. Look for a platform-specific subdirectory: output/os_arch/
	platformSubdir := filepath.Join(outputDir, fmt.Sprintf("%s_%s", target.OS, target.Arch))
	if fs.IsDir(platformSubdir) {
		entries, _ := fs.List(platformSubdir)
		for _, entry := range entries {
			if entry.IsDir() {
				// Handle .app bundles on macOS
				if target.OS == "darwin" && strings.HasSuffix(entry.Name(), ".app") {
					artifacts = append(artifacts, build.Artifact{
						Path: filepath.Join(platformSubdir, entry.Name()),
						OS:   target.OS,
						Arch: target.Arch,
					})
				}
				continue
			}
			// Skip hidden files
			if strings.HasPrefix(entry.Name(), ".") {
				continue
			}
			artifacts = append(artifacts, build.Artifact{
				Path: filepath.Join(platformSubdir, entry.Name()),
				OS:   target.OS,
				Arch: target.Arch,
			})
		}
		if len(artifacts) > 0 {
			return artifacts
		}
	}

	// 2. Look for files matching the target pattern in the root output dir
	patterns := []string{
		fmt.Sprintf("*-%s-%s*", target.OS, target.Arch),
		fmt.Sprintf("*_%s_%s*", target.OS, target.Arch),
		fmt.Sprintf("*-%s*", target.Arch),
	}

	for _, pattern := range patterns {
		entries, _ := fs.List(outputDir)
		for _, entry := range entries {
			match := entry.Name()
			// Simple glob matching
			if b.matchPattern(match, pattern) {
				fullPath := filepath.Join(outputDir, match)
				if fs.IsDir(fullPath) {
					continue
				}

				artifacts = append(artifacts, build.Artifact{
					Path: fullPath,
					OS:   target.OS,
					Arch: target.Arch,
				})
			}
		}

		if len(artifacts) > 0 {
			break // Found matches, stop looking
		}
	}

	return artifacts
}

// matchPattern implements glob matching for Taskfile artifacts.
func (b *TaskfileBuilder) matchPattern(name, pattern string) bool {
	matched, _ := filepath.Match(pattern, name)
	return matched
}
// validateTaskCli checks if the task CLI is available.
func (b *TaskfileBuilder) validateTaskCli() error {
	// Check PATH first
	if _, err := exec.LookPath("task"); err == nil {
		return nil
	}

	// Check common locations
	paths := []string{
		"/usr/local/bin/task",
		"/opt/homebrew/bin/task",
	}

	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			return nil
		}
	}

	return fmt.Errorf("taskfile: task CLI not found. Install with: brew install go-task (macOS), go install github.com/go-task/task/v3/cmd/task@latest, or see https://taskfile.dev/installation/")
}
@@ -1,247 +0,0 @@
// Package builders provides build implementations for different project types.
package builders

import (
	"context"
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"

	"forge.lthn.ai/core/go/pkg/build"
	"forge.lthn.ai/core/go/pkg/io"
)

// WailsBuilder implements the Builder interface for Wails v3 projects.
type WailsBuilder struct{}

// NewWailsBuilder creates a new WailsBuilder instance.
func NewWailsBuilder() *WailsBuilder {
	return &WailsBuilder{}
}

// Name returns the builder's identifier.
func (b *WailsBuilder) Name() string {
	return "wails"
}

// Detect checks if this builder can handle the project in the given directory.
// Uses IsWailsProject from the build package, which checks for wails.json.
func (b *WailsBuilder) Detect(fs io.Medium, dir string) (bool, error) {
	return build.IsWailsProject(fs, dir), nil
}

// Build compiles the Wails project for the specified targets.
// It detects the Wails version and chooses the appropriate build strategy:
//   - Wails v3: delegates to Taskfile (error if missing)
//   - Wails v2: uses the 'wails build' command
func (b *WailsBuilder) Build(ctx context.Context, cfg *build.Config, targets []build.Target) ([]build.Artifact, error) {
	if cfg == nil {
		return nil, fmt.Errorf("builders.WailsBuilder.Build: config is nil")
	}

	if len(targets) == 0 {
		return nil, fmt.Errorf("builders.WailsBuilder.Build: no targets specified")
	}

	// Detect the Wails version
	isV3 := b.isWailsV3(cfg.FS, cfg.ProjectDir)

	if isV3 {
		// Wails v3 strategy: delegate to Taskfile
		taskBuilder := NewTaskfileBuilder()
		if detected, _ := taskBuilder.Detect(cfg.FS, cfg.ProjectDir); detected {
			return taskBuilder.Build(ctx, cfg, targets)
		}
		return nil, fmt.Errorf("wails v3 projects require a Taskfile for building")
	}

	// Wails v2 strategy: use 'wails build'
	// Ensure the output directory exists
	if err := cfg.FS.EnsureDir(cfg.OutputDir); err != nil {
		return nil, fmt.Errorf("builders.WailsBuilder.Build: failed to create output directory: %w", err)
	}

	// Note: Wails v2 handles frontend installation/building automatically via wails.json config

	var artifacts []build.Artifact

	for _, target := range targets {
		artifact, err := b.buildV2Target(ctx, cfg, target)
		if err != nil {
			return artifacts, fmt.Errorf("builders.WailsBuilder.Build: failed to build %s: %w", target.String(), err)
		}
		artifacts = append(artifacts, artifact)
	}

	return artifacts, nil
}

// isWailsV3 checks if the project uses Wails v3 by inspecting go.mod.
func (b *WailsBuilder) isWailsV3(fs io.Medium, dir string) bool {
	goModPath := filepath.Join(dir, "go.mod")
	content, err := fs.Read(goModPath)
	if err != nil {
		return false
	}
	return strings.Contains(content, "github.com/wailsapp/wails/v3")
}

// buildV2Target compiles for a single target platform using wails (v2).
func (b *WailsBuilder) buildV2Target(ctx context.Context, cfg *build.Config, target build.Target) (build.Artifact, error) {
	// Determine the output binary name
	binaryName := cfg.Name
	if binaryName == "" {
		binaryName = filepath.Base(cfg.ProjectDir)
	}

	// Build the wails build arguments
	args := []string{"build"}

	// Platform
	args = append(args, "-platform", fmt.Sprintf("%s/%s", target.OS, target.Arch))

	// Wails v2 is opinionated about its output directory (build/bin), so rather
	// than fight it with -o we let the build run there and copy the artifact
	// into cfg.OutputDir afterwards.

	// Create the command
	cmd := exec.CommandContext(ctx, "wails", args...)
	cmd.Dir = cfg.ProjectDir

	// Capture output for error messages
	output, err := cmd.CombinedOutput()
	if err != nil {
		return build.Artifact{}, fmt.Errorf("wails build failed: %w\nOutput: %s", err, string(output))
	}

	// Wails v2 typically outputs to build/bin; find the artifact there and
	// copy it into our desired output directory.
	wailsOutputDir := filepath.Join(cfg.ProjectDir, "build", "bin")

	sourcePath, err := b.findArtifact(cfg.FS, wailsOutputDir, binaryName, target)
	if err != nil {
		return build.Artifact{}, fmt.Errorf("failed to find Wails v2 build artifact: %w", err)
	}

	// Create a platform-specific directory in our output dir
	platformDir := filepath.Join(cfg.OutputDir, fmt.Sprintf("%s_%s", target.OS, target.Arch))
	if err := cfg.FS.EnsureDir(platformDir); err != nil {
		return build.Artifact{}, fmt.Errorf("failed to create output dir: %w", err)
	}

	destPath := filepath.Join(platformDir, filepath.Base(sourcePath))

	// Simple copy using the medium
	content, err := cfg.FS.Read(sourcePath)
	if err != nil {
		return build.Artifact{}, err
	}
	if err := cfg.FS.Write(destPath, content); err != nil {
		return build.Artifact{}, err
	}

	return build.Artifact{
		Path: destPath,
		OS:   target.OS,
		Arch: target.Arch,
	}, nil
}

// findArtifact locates the built artifact based on the target platform.
func (b *WailsBuilder) findArtifact(fs io.Medium, platformDir, binaryName string, target build.Target) (string, error) {
	var candidates []string

	switch target.OS {
	case "windows":
		// Look for the NSIS installer first, then a plain exe
		candidates = []string{
			filepath.Join(platformDir, binaryName+"-installer.exe"),
			filepath.Join(platformDir, binaryName+".exe"),
			filepath.Join(platformDir, binaryName+"-amd64-installer.exe"),
		}
	case "darwin":
		// Look for a .dmg, then a .app bundle, then a plain binary
		candidates = []string{
			filepath.Join(platformDir, binaryName+".dmg"),
			filepath.Join(platformDir, binaryName+".app"),
			filepath.Join(platformDir, binaryName),
		}
	default:
		// Linux and others: look for a plain binary
		candidates = []string{
			filepath.Join(platformDir, binaryName),
		}
	}

	// Try each candidate
	for _, candidate := range candidates {
		if fs.Exists(candidate) {
			return candidate, nil
		}
	}

	// If no specific candidate was found, try to find any executable or package in the directory
	entries, err := fs.List(platformDir)
	if err != nil {
		return "", fmt.Errorf("failed to read platform directory: %w", err)
	}

	for _, entry := range entries {
		name := entry.Name()
		// Skip common non-artifact files
		if strings.HasSuffix(name, ".go") || strings.HasSuffix(name, ".json") {
			continue
		}

		path := filepath.Join(platformDir, name)
		info, err := entry.Info()
		if err != nil {
			continue
		}

		// On Unix, check if it's executable; on Windows, check for .exe
		if target.OS == "windows" {
			if strings.HasSuffix(name, ".exe") {
				return path, nil
			}
		} else if info.Mode()&0111 != 0 || entry.IsDir() {
			// Executable file or directory (.app bundle)
			return path, nil
		}
	}

	return "", fmt.Errorf("no artifact found in %s", platformDir)
}

// detectPackageManager detects the frontend package manager based on lock files.
// Returns "bun", "pnpm", "yarn", or "npm" (default).
func detectPackageManager(fs io.Medium, dir string) string {
	// Check in priority order: bun, pnpm, yarn, npm
	lockFiles := []struct {
		file    string
		manager string
	}{
		{"bun.lockb", "bun"},
		{"pnpm-lock.yaml", "pnpm"},
		{"yarn.lock", "yarn"},
		{"package-lock.json", "npm"},
	}

	for _, lf := range lockFiles {
		if fs.IsFile(filepath.Join(dir, lf.file)) {
			return lf.manager
		}
	}

	// Default to npm if no lock file is found
	return "npm"
}

// Ensure WailsBuilder implements the Builder interface.
var _ build.Builder = (*WailsBuilder)(nil)
@@ -1,416 +0,0 @@
package builders

import (
	"context"
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
	"testing"

	"forge.lthn.ai/core/go/pkg/build"
	"forge.lthn.ai/core/go/pkg/io"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// setupWailsTestProject creates a minimal Wails project structure for testing.
func setupWailsTestProject(t *testing.T) string {
	t.Helper()
	dir := t.TempDir()

	// Create wails.json
	wailsJSON := `{
	"name": "testapp",
	"outputfilename": "testapp"
}`
	err := os.WriteFile(filepath.Join(dir, "wails.json"), []byte(wailsJSON), 0644)
	require.NoError(t, err)

	// Create a minimal go.mod
	goMod := `module testapp

go 1.21

require github.com/wailsapp/wails/v3 v3.0.0
`
	err = os.WriteFile(filepath.Join(dir, "go.mod"), []byte(goMod), 0644)
	require.NoError(t, err)

	// Create a minimal main.go
	mainGo := `package main

func main() {
	println("hello wails")
}
`
	err = os.WriteFile(filepath.Join(dir, "main.go"), []byte(mainGo), 0644)
	require.NoError(t, err)

	// Create a minimal Taskfile.yml
	taskfile := `version: '3'
tasks:
  build:
    cmds:
      - mkdir -p {{.OUTPUT_DIR}}/{{.GOOS}}_{{.GOARCH}}
      - touch {{.OUTPUT_DIR}}/{{.GOOS}}_{{.GOARCH}}/testapp
`
	err = os.WriteFile(filepath.Join(dir, "Taskfile.yml"), []byte(taskfile), 0644)
	require.NoError(t, err)

	return dir
}

// setupWailsV2TestProject creates a Wails v2 project structure.
func setupWailsV2TestProject(t *testing.T) string {
	t.Helper()
	dir := t.TempDir()

	// wails.json
	err := os.WriteFile(filepath.Join(dir, "wails.json"), []byte("{}"), 0644)
	require.NoError(t, err)

	// go.mod with v2
	goMod := `module testapp
go 1.21
require github.com/wailsapp/wails/v2 v2.8.0
`
	err = os.WriteFile(filepath.Join(dir, "go.mod"), []byte(goMod), 0644)
	require.NoError(t, err)

	return dir
}

func TestWailsBuilder_Build_Taskfile_Good(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping integration test in short mode")
	}

	// Check if task is available
	if _, err := exec.LookPath("task"); err != nil {
		t.Skip("task not installed, skipping test")
	}

	t.Run("delegates to Taskfile if present", func(t *testing.T) {
		fs := io.Local
		projectDir := setupWailsTestProject(t)
		outputDir := t.TempDir()

		// Create a Taskfile that just touches a file
		taskfile := `version: '3'
tasks:
  build:
    cmds:
      - mkdir -p {{.OUTPUT_DIR}}/{{.GOOS}}_{{.GOARCH}}
      - touch {{.OUTPUT_DIR}}/{{.GOOS}}_{{.GOARCH}}/testapp
`
		err := os.WriteFile(filepath.Join(projectDir, "Taskfile.yml"), []byte(taskfile), 0644)
		require.NoError(t, err)

		builder := NewWailsBuilder()
		cfg := &build.Config{
			FS:         fs,
			ProjectDir: projectDir,
			OutputDir:  outputDir,
			Name:       "testapp",
		}
		targets := []build.Target{
			{OS: runtime.GOOS, Arch: runtime.GOARCH},
		}

		artifacts, err := builder.Build(context.Background(), cfg, targets)
		require.NoError(t, err)
		assert.NotEmpty(t, artifacts)
	})
}

func TestWailsBuilder_Name_Good(t *testing.T) {
	builder := NewWailsBuilder()
	assert.Equal(t, "wails", builder.Name())
}

func TestWailsBuilder_Build_V2_Good(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping integration test in short mode")
	}

	if _, err := exec.LookPath("wails"); err != nil {
		t.Skip("wails not installed, skipping integration test")
	}

	t.Run("builds v2 project", func(t *testing.T) {
		fs := io.Local
		projectDir := setupWailsV2TestProject(t)
		outputDir := t.TempDir()

		builder := NewWailsBuilder()
		cfg := &build.Config{
			FS:         fs,
			ProjectDir: projectDir,
			OutputDir:  outputDir,
			Name:       "testapp",
		}
		targets := []build.Target{
			{OS: runtime.GOOS, Arch: runtime.GOARCH},
		}

		// This will likely fail in a real run because we can't easily mock the
		// full wails v2 build process (which needs a valid project with main.go
		// etc.), but it validates that we attempt to run the command; an error
		// is expected.
		_, _ = builder.Build(context.Background(), cfg, targets)
	})
}

func TestWailsBuilder_Detect_Good(t *testing.T) {
	fs := io.Local
	t.Run("detects Wails project with wails.json", func(t *testing.T) {
		dir := t.TempDir()
		err := os.WriteFile(filepath.Join(dir, "wails.json"), []byte("{}"), 0644)
		require.NoError(t, err)

		builder := NewWailsBuilder()
		detected, err := builder.Detect(fs, dir)
		assert.NoError(t, err)
		assert.True(t, detected)
	})

	t.Run("returns false for Go-only project", func(t *testing.T) {
		dir := t.TempDir()
		err := os.WriteFile(filepath.Join(dir, "go.mod"), []byte("module test"), 0644)
		require.NoError(t, err)

		builder := NewWailsBuilder()
		detected, err := builder.Detect(fs, dir)
		assert.NoError(t, err)
		assert.False(t, detected)
	})

	t.Run("returns false for Node.js project", func(t *testing.T) {
		dir := t.TempDir()
		err := os.WriteFile(filepath.Join(dir, "package.json"), []byte("{}"), 0644)
		require.NoError(t, err)

		builder := NewWailsBuilder()
		detected, err := builder.Detect(fs, dir)
		assert.NoError(t, err)
		assert.False(t, detected)
	})

	t.Run("returns false for empty directory", func(t *testing.T) {
		dir := t.TempDir()

		builder := NewWailsBuilder()
		detected, err := builder.Detect(fs, dir)
		assert.NoError(t, err)
		assert.False(t, detected)
	})
}

func TestDetectPackageManager_Good(t *testing.T) {
	fs := io.Local
	t.Run("detects bun from bun.lockb", func(t *testing.T) {
		dir := t.TempDir()
		err := os.WriteFile(filepath.Join(dir, "bun.lockb"), []byte(""), 0644)
		require.NoError(t, err)

		result := detectPackageManager(fs, dir)
		assert.Equal(t, "bun", result)
	})

	t.Run("detects pnpm from pnpm-lock.yaml", func(t *testing.T) {
		dir := t.TempDir()
		err := os.WriteFile(filepath.Join(dir, "pnpm-lock.yaml"), []byte(""), 0644)
		require.NoError(t, err)

		result := detectPackageManager(fs, dir)
		assert.Equal(t, "pnpm", result)
	})

	t.Run("detects yarn from yarn.lock", func(t *testing.T) {
		dir := t.TempDir()
		err := os.WriteFile(filepath.Join(dir, "yarn.lock"), []byte(""), 0644)
		require.NoError(t, err)

		result := detectPackageManager(fs, dir)
		assert.Equal(t, "yarn", result)
	})

	t.Run("detects npm from package-lock.json", func(t *testing.T) {
		dir := t.TempDir()
		err := os.WriteFile(filepath.Join(dir, "package-lock.json"), []byte(""), 0644)
		require.NoError(t, err)

		result := detectPackageManager(fs, dir)
		assert.Equal(t, "npm", result)
	})

	t.Run("defaults to npm when no lock file", func(t *testing.T) {
		dir := t.TempDir()

		result := detectPackageManager(fs, dir)
		assert.Equal(t, "npm", result)
	})

	t.Run("prefers bun over other lock files", func(t *testing.T) {
		dir := t.TempDir()
		// Create multiple lock files
		require.NoError(t, os.WriteFile(filepath.Join(dir, "bun.lockb"), []byte(""), 0644))
		require.NoError(t, os.WriteFile(filepath.Join(dir, "yarn.lock"), []byte(""), 0644))
		require.NoError(t, os.WriteFile(filepath.Join(dir, "package-lock.json"), []byte(""), 0644))

		result := detectPackageManager(fs, dir)
		assert.Equal(t, "bun", result)
	})

	t.Run("prefers pnpm over yarn and npm", func(t *testing.T) {
		dir := t.TempDir()
		// Create multiple lock files (no bun)
		require.NoError(t, os.WriteFile(filepath.Join(dir, "pnpm-lock.yaml"), []byte(""), 0644))
		require.NoError(t, os.WriteFile(filepath.Join(dir, "yarn.lock"), []byte(""), 0644))
		require.NoError(t, os.WriteFile(filepath.Join(dir, "package-lock.json"), []byte(""), 0644))

		result := detectPackageManager(fs, dir)
		assert.Equal(t, "pnpm", result)
	})

	t.Run("prefers yarn over npm", func(t *testing.T) {
		dir := t.TempDir()
		// Create multiple lock files (no bun or pnpm)
		require.NoError(t, os.WriteFile(filepath.Join(dir, "yarn.lock"), []byte(""), 0644))
		require.NoError(t, os.WriteFile(filepath.Join(dir, "package-lock.json"), []byte(""), 0644))

		result := detectPackageManager(fs, dir)
		assert.Equal(t, "yarn", result)
	})
}

func TestWailsBuilder_Build_Bad(t *testing.T) {
	t.Run("returns error for nil config", func(t *testing.T) {
		builder := NewWailsBuilder()

		artifacts, err := builder.Build(context.Background(), nil, []build.Target{{OS: "linux", Arch: "amd64"}})
		assert.Error(t, err)
		assert.Nil(t, artifacts)
		assert.Contains(t, err.Error(), "config is nil")
	})

	t.Run("returns error for empty targets", func(t *testing.T) {
		projectDir := setupWailsTestProject(t)

		builder := NewWailsBuilder()
		cfg := &build.Config{
			FS:         io.Local,
			ProjectDir: projectDir,
			OutputDir:  t.TempDir(),
			Name:       "test",
		}

		artifacts, err := builder.Build(context.Background(), cfg, []build.Target{})
		assert.Error(t, err)
		assert.Nil(t, artifacts)
		assert.Contains(t, err.Error(), "no targets specified")
	})
}

func TestWailsBuilder_Build_Good(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping integration test in short mode")
	}

	// Check if wails3 is available in PATH
	if _, err := exec.LookPath("wails3"); err != nil {
		t.Skip("wails3 not installed, skipping integration test")
	}

	t.Run("builds for current platform", func(t *testing.T) {
		projectDir := setupWailsTestProject(t)
		outputDir := t.TempDir()

		builder := NewWailsBuilder()
		cfg := &build.Config{
			FS:         io.Local,
			ProjectDir: projectDir,
			OutputDir:  outputDir,
			Name:       "testapp",
		}
		targets := []build.Target{
			{OS: runtime.GOOS, Arch: runtime.GOARCH},
		}

		artifacts, err := builder.Build(context.Background(), cfg, targets)
		require.NoError(t, err)
		require.Len(t, artifacts, 1)

		// Verify artifact properties
		artifact := artifacts[0]
		assert.Equal(t, runtime.GOOS, artifact.OS)
		assert.Equal(t, runtime.GOARCH, artifact.Arch)
	})
}

func TestWailsBuilder_Interface_Good(t *testing.T) {
	// Verify WailsBuilder implements the Builder interface
	var _ build.Builder = (*WailsBuilder)(nil)
	var _ build.Builder = NewWailsBuilder()
}

func TestWailsBuilder_Ugly(t *testing.T) {
	t.Run("handles nonexistent frontend directory gracefully", func(t *testing.T) {
		if testing.Short() {
			t.Skip("skipping integration test in short mode")
		}

		// Create a Wails project without a frontend directory
		dir := t.TempDir()
		err := os.WriteFile(filepath.Join(dir, "wails.json"), []byte("{}"), 0644)
		require.NoError(t, err)

		builder := NewWailsBuilder()
		cfg := &build.Config{
			FS:         io.Local,
			ProjectDir: dir,
			OutputDir:  t.TempDir(),
			Name:       "test",
		}
		targets := []build.Target{
			{OS: runtime.GOOS, Arch: runtime.GOARCH},
		}

		// This will fail because wails3 isn't set up, but it shouldn't panic
		// due to missing frontend directory
|
||||
_, err = builder.Build(context.Background(), cfg, targets)
|
||||
// We expect an error (wails3 build will fail), but not a panic
|
||||
// The error should be about wails3 build, not about frontend
|
||||
if err != nil {
|
||||
assert.NotContains(t, err.Error(), "frontend dependencies")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("handles context cancellation", func(t *testing.T) {
|
||||
if testing.Short() {
|
||||
t.Skip("skipping integration test in short mode")
|
||||
}
|
||||
|
||||
projectDir := setupWailsTestProject(t)
|
||||
|
||||
builder := NewWailsBuilder()
|
||||
cfg := &build.Config{
|
||||
FS: io.Local,
|
||||
ProjectDir: projectDir,
|
||||
OutputDir: t.TempDir(),
|
||||
Name: "canceltest",
|
||||
}
|
||||
targets := []build.Target{
|
||||
{OS: runtime.GOOS, Arch: runtime.GOARCH},
|
||||
}
|
||||
|
||||
// Create an already cancelled context
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
cancel()
|
||||
|
||||
artifacts, err := builder.Build(ctx, cfg, targets)
|
||||
assert.Error(t, err)
|
||||
assert.Empty(t, artifacts)
|
||||
})
|
||||
}
|
||||
|
|
@ -1,97 +0,0 @@
// Package build provides project type detection and cross-compilation for the Core build system.
package build

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"path/filepath"
	"sort"
	"strings"

	io_interface "forge.lthn.ai/core/go/pkg/io"
)

// Checksum computes SHA256 for an artifact and returns the artifact with the Checksum field filled.
func Checksum(fs io_interface.Medium, artifact Artifact) (Artifact, error) {
	if artifact.Path == "" {
		return Artifact{}, fmt.Errorf("build.Checksum: artifact path is empty")
	}

	// Open the file
	file, err := fs.Open(artifact.Path)
	if err != nil {
		return Artifact{}, fmt.Errorf("build.Checksum: failed to open file: %w", err)
	}
	defer func() { _ = file.Close() }()

	// Compute SHA256 hash
	hasher := sha256.New()
	if _, err := io.Copy(hasher, file); err != nil {
		return Artifact{}, fmt.Errorf("build.Checksum: failed to hash file: %w", err)
	}

	checksum := hex.EncodeToString(hasher.Sum(nil))

	return Artifact{
		Path:     artifact.Path,
		OS:       artifact.OS,
		Arch:     artifact.Arch,
		Checksum: checksum,
	}, nil
}

// ChecksumAll computes checksums for all artifacts.
// Returns a slice of artifacts with their Checksum fields filled.
func ChecksumAll(fs io_interface.Medium, artifacts []Artifact) ([]Artifact, error) {
	if len(artifacts) == 0 {
		return nil, nil
	}

	var checksummed []Artifact
	for _, artifact := range artifacts {
		cs, err := Checksum(fs, artifact)
		if err != nil {
			return checksummed, fmt.Errorf("build.ChecksumAll: failed to checksum %s: %w", artifact.Path, err)
		}
		checksummed = append(checksummed, cs)
	}

	return checksummed, nil
}

// WriteChecksumFile writes a CHECKSUMS.txt file with the format:
//
//	sha256hash filename1
//	sha256hash filename2
//
// The artifacts should have their Checksum fields filled (call ChecksumAll first).
// Filenames are relative to the output directory (just the basename).
func WriteChecksumFile(fs io_interface.Medium, artifacts []Artifact, path string) error {
	if len(artifacts) == 0 {
		return nil
	}

	// Build the content
	var lines []string
	for _, artifact := range artifacts {
		if artifact.Checksum == "" {
			return fmt.Errorf("build.WriteChecksumFile: artifact %s has no checksum", artifact.Path)
		}
		filename := filepath.Base(artifact.Path)
		lines = append(lines, fmt.Sprintf("%s %s", artifact.Checksum, filename))
	}

	// Sort lines for consistent output
	sort.Strings(lines)

	content := strings.Join(lines, "\n") + "\n"

	// Write the file using the medium (which handles directory creation in Write)
	if err := fs.Write(path, content); err != nil {
		return fmt.Errorf("build.WriteChecksumFile: failed to write file: %w", err)
	}

	return nil
}
@ -1,282 +0,0 @@
package build

import (
	"os"
	"path/filepath"
	"strings"
	"testing"

	"forge.lthn.ai/core/go/pkg/io"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// setupChecksumTestFile creates a test file with known content.
func setupChecksumTestFile(t *testing.T, content string) string {
	t.Helper()

	dir := t.TempDir()
	path := filepath.Join(dir, "testfile")
	err := os.WriteFile(path, []byte(content), 0644)
	require.NoError(t, err)

	return path
}

func TestChecksum_Good(t *testing.T) {
	fs := io.Local
	t.Run("computes SHA256 checksum", func(t *testing.T) {
		// Known SHA256 of "Hello, World!\n"
		path := setupChecksumTestFile(t, "Hello, World!\n")
		expectedChecksum := "c98c24b677eff44860afea6f493bbaec5bb1c4cbb209c6fc2bbb47f66ff2ad31"

		artifact := Artifact{
			Path: path,
			OS:   "linux",
			Arch: "amd64",
		}

		result, err := Checksum(fs, artifact)
		require.NoError(t, err)
		assert.Equal(t, expectedChecksum, result.Checksum)
	})

	t.Run("preserves artifact fields", func(t *testing.T) {
		path := setupChecksumTestFile(t, "test content")

		artifact := Artifact{
			Path: path,
			OS:   "darwin",
			Arch: "arm64",
		}

		result, err := Checksum(fs, artifact)
		require.NoError(t, err)

		assert.Equal(t, path, result.Path)
		assert.Equal(t, "darwin", result.OS)
		assert.Equal(t, "arm64", result.Arch)
		assert.NotEmpty(t, result.Checksum)
	})

	t.Run("produces 64 character hex string", func(t *testing.T) {
		path := setupChecksumTestFile(t, "any content")

		artifact := Artifact{Path: path, OS: "linux", Arch: "amd64"}

		result, err := Checksum(fs, artifact)
		require.NoError(t, err)

		// SHA256 produces 32 bytes = 64 hex characters
		assert.Len(t, result.Checksum, 64)
	})

	t.Run("different content produces different checksums", func(t *testing.T) {
		path1 := setupChecksumTestFile(t, "content one")
		path2 := setupChecksumTestFile(t, "content two")

		result1, err := Checksum(fs, Artifact{Path: path1, OS: "linux", Arch: "amd64"})
		require.NoError(t, err)

		result2, err := Checksum(fs, Artifact{Path: path2, OS: "linux", Arch: "amd64"})
		require.NoError(t, err)

		assert.NotEqual(t, result1.Checksum, result2.Checksum)
	})

	t.Run("same content produces same checksum", func(t *testing.T) {
		content := "identical content"
		path1 := setupChecksumTestFile(t, content)
		path2 := setupChecksumTestFile(t, content)

		result1, err := Checksum(fs, Artifact{Path: path1, OS: "linux", Arch: "amd64"})
		require.NoError(t, err)

		result2, err := Checksum(fs, Artifact{Path: path2, OS: "linux", Arch: "amd64"})
		require.NoError(t, err)

		assert.Equal(t, result1.Checksum, result2.Checksum)
	})
}

func TestChecksum_Bad(t *testing.T) {
	fs := io.Local
	t.Run("returns error for empty path", func(t *testing.T) {
		artifact := Artifact{
			Path: "",
			OS:   "linux",
			Arch: "amd64",
		}

		result, err := Checksum(fs, artifact)
		assert.Error(t, err)
		assert.Contains(t, err.Error(), "artifact path is empty")
		assert.Empty(t, result.Checksum)
	})

	t.Run("returns error for non-existent file", func(t *testing.T) {
		artifact := Artifact{
			Path: "/nonexistent/path/file",
			OS:   "linux",
			Arch: "amd64",
		}

		result, err := Checksum(fs, artifact)
		assert.Error(t, err)
		assert.Contains(t, err.Error(), "failed to open file")
		assert.Empty(t, result.Checksum)
	})
}

func TestChecksumAll_Good(t *testing.T) {
	fs := io.Local
	t.Run("checksums multiple artifacts", func(t *testing.T) {
		paths := []string{
			setupChecksumTestFile(t, "content one"),
			setupChecksumTestFile(t, "content two"),
			setupChecksumTestFile(t, "content three"),
		}

		artifacts := []Artifact{
			{Path: paths[0], OS: "linux", Arch: "amd64"},
			{Path: paths[1], OS: "darwin", Arch: "arm64"},
			{Path: paths[2], OS: "windows", Arch: "amd64"},
		}

		results, err := ChecksumAll(fs, artifacts)
		require.NoError(t, err)
		require.Len(t, results, 3)

		for i, result := range results {
			assert.Equal(t, artifacts[i].Path, result.Path)
			assert.Equal(t, artifacts[i].OS, result.OS)
			assert.Equal(t, artifacts[i].Arch, result.Arch)
			assert.NotEmpty(t, result.Checksum)
		}
	})

	t.Run("returns nil for empty slice", func(t *testing.T) {
		results, err := ChecksumAll(fs, []Artifact{})
		assert.NoError(t, err)
		assert.Nil(t, results)
	})

	t.Run("returns nil for nil slice", func(t *testing.T) {
		results, err := ChecksumAll(fs, nil)
		assert.NoError(t, err)
		assert.Nil(t, results)
	})
}

func TestChecksumAll_Bad(t *testing.T) {
	fs := io.Local
	t.Run("returns partial results on error", func(t *testing.T) {
		path := setupChecksumTestFile(t, "valid content")

		artifacts := []Artifact{
			{Path: path, OS: "linux", Arch: "amd64"},
			{Path: "/nonexistent/file", OS: "linux", Arch: "arm64"}, // This will fail
		}

		results, err := ChecksumAll(fs, artifacts)
		assert.Error(t, err)
		// Should have the first successful result
		assert.Len(t, results, 1)
		assert.NotEmpty(t, results[0].Checksum)
	})
}

func TestWriteChecksumFile_Good(t *testing.T) {
	fs := io.Local
	t.Run("writes checksum file with correct format", func(t *testing.T) {
		dir := t.TempDir()
		checksumPath := filepath.Join(dir, "CHECKSUMS.txt")

		artifacts := []Artifact{
			{Path: "/output/app_linux_amd64.tar.gz", Checksum: "abc123def456", OS: "linux", Arch: "amd64"},
			{Path: "/output/app_darwin_arm64.tar.gz", Checksum: "789xyz000111", OS: "darwin", Arch: "arm64"},
		}

		err := WriteChecksumFile(fs, artifacts, checksumPath)
		require.NoError(t, err)

		// Read and verify content
		content, err := os.ReadFile(checksumPath)
		require.NoError(t, err)

		lines := strings.Split(strings.TrimSpace(string(content)), "\n")
		require.Len(t, lines, 2)

		// Lines should be sorted alphabetically
		assert.Equal(t, "789xyz000111 app_darwin_arm64.tar.gz", lines[0])
		assert.Equal(t, "abc123def456 app_linux_amd64.tar.gz", lines[1])
	})

	t.Run("creates parent directories", func(t *testing.T) {
		dir := t.TempDir()
		checksumPath := filepath.Join(dir, "nested", "deep", "CHECKSUMS.txt")

		artifacts := []Artifact{
			{Path: "/output/app.tar.gz", Checksum: "abc123", OS: "linux", Arch: "amd64"},
		}

		err := WriteChecksumFile(fs, artifacts, checksumPath)
		require.NoError(t, err)
		assert.FileExists(t, checksumPath)
	})

	t.Run("does nothing for empty artifacts", func(t *testing.T) {
		dir := t.TempDir()
		checksumPath := filepath.Join(dir, "CHECKSUMS.txt")

		err := WriteChecksumFile(fs, []Artifact{}, checksumPath)
		require.NoError(t, err)

		// File should not exist
		_, err = os.Stat(checksumPath)
		assert.True(t, os.IsNotExist(err))
	})

	t.Run("does nothing for nil artifacts", func(t *testing.T) {
		dir := t.TempDir()
		checksumPath := filepath.Join(dir, "CHECKSUMS.txt")

		err := WriteChecksumFile(fs, nil, checksumPath)
		require.NoError(t, err)
	})

	t.Run("uses only basename for filenames", func(t *testing.T) {
		dir := t.TempDir()
		checksumPath := filepath.Join(dir, "CHECKSUMS.txt")

		artifacts := []Artifact{
			{Path: "/some/deep/nested/path/myapp_linux_amd64.tar.gz", Checksum: "checksum123", OS: "linux", Arch: "amd64"},
		}

		err := WriteChecksumFile(fs, artifacts, checksumPath)
		require.NoError(t, err)

		content, err := os.ReadFile(checksumPath)
		require.NoError(t, err)

		// Should only contain the basename
		assert.Contains(t, string(content), "myapp_linux_amd64.tar.gz")
		assert.NotContains(t, string(content), "/some/deep/nested/path/")
	})
}

func TestWriteChecksumFile_Bad(t *testing.T) {
	fs := io.Local
	t.Run("returns error for artifact without checksum", func(t *testing.T) {
		dir := t.TempDir()
		checksumPath := filepath.Join(dir, "CHECKSUMS.txt")

		artifacts := []Artifact{
			{Path: "/output/app.tar.gz", Checksum: "", OS: "linux", Arch: "amd64"}, // No checksum
		}

		err := WriteChecksumFile(fs, artifacts, checksumPath)
		assert.Error(t, err)
		assert.Contains(t, err.Error(), "has no checksum")
	})
}
@ -1,169 +0,0 @@
// Package build provides project type detection and cross-compilation for the Core build system.
// This file handles configuration loading from .core/build.yaml files.
package build

import (
	"fmt"
	"os"
	"path/filepath"

	"forge.lthn.ai/core/go/pkg/build/signing"
	"forge.lthn.ai/core/go/pkg/io"
	"gopkg.in/yaml.v3"
)

// ConfigFileName is the name of the build configuration file.
const ConfigFileName = "build.yaml"

// ConfigDir is the directory where build configuration is stored.
const ConfigDir = ".core"

// BuildConfig holds the complete build configuration loaded from .core/build.yaml.
// This is distinct from Config, which holds runtime build parameters.
type BuildConfig struct {
	// Version is the config file format version.
	Version int `yaml:"version"`
	// Project contains project metadata.
	Project Project `yaml:"project"`
	// Build contains build settings.
	Build Build `yaml:"build"`
	// Targets defines the build targets.
	Targets []TargetConfig `yaml:"targets"`
	// Sign contains code signing configuration.
	Sign signing.SignConfig `yaml:"sign,omitempty"`
}

// Project holds project metadata.
type Project struct {
	// Name is the project name.
	Name string `yaml:"name"`
	// Description is a brief description of the project.
	Description string `yaml:"description"`
	// Main is the path to the main package (e.g., ./cmd/core).
	Main string `yaml:"main"`
	// Binary is the output binary name.
	Binary string `yaml:"binary"`
}

// Build holds build-time settings.
type Build struct {
	// CGO enables CGO for the build.
	CGO bool `yaml:"cgo"`
	// Flags are additional build flags (e.g., ["-trimpath"]).
	Flags []string `yaml:"flags"`
	// LDFlags are linker flags (e.g., ["-s", "-w"]).
	LDFlags []string `yaml:"ldflags"`
	// Env are additional environment variables.
	Env []string `yaml:"env"`
}

// TargetConfig defines a build target in the config file.
// This is separate from Target to allow for additional config-specific fields.
type TargetConfig struct {
	// OS is the target operating system (e.g., "linux", "darwin", "windows").
	OS string `yaml:"os"`
	// Arch is the target architecture (e.g., "amd64", "arm64").
	Arch string `yaml:"arch"`
}

// LoadConfig loads build configuration from the .core/build.yaml file in the given directory.
// If the config file does not exist, it returns DefaultConfig().
// Returns an error if the file exists but cannot be parsed.
func LoadConfig(fs io.Medium, dir string) (*BuildConfig, error) {
	configPath := filepath.Join(dir, ConfigDir, ConfigFileName)

	content, err := fs.Read(configPath)
	if err != nil {
		if os.IsNotExist(err) {
			return DefaultConfig(), nil
		}
		return nil, fmt.Errorf("build.LoadConfig: failed to read config file: %w", err)
	}

	var cfg BuildConfig
	data := []byte(content)
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return nil, fmt.Errorf("build.LoadConfig: failed to parse config file: %w", err)
	}

	// Apply defaults for any missing fields
	applyDefaults(&cfg)

	return &cfg, nil
}

// DefaultConfig returns sensible defaults for Go projects.
func DefaultConfig() *BuildConfig {
	return &BuildConfig{
		Version: 1,
		Project: Project{
			Name:   "",
			Main:   ".",
			Binary: "",
		},
		Build: Build{
			CGO:     false,
			Flags:   []string{"-trimpath"},
			LDFlags: []string{"-s", "-w"},
			Env:     []string{},
		},
		Targets: []TargetConfig{
			{OS: "linux", Arch: "amd64"},
			{OS: "linux", Arch: "arm64"},
			{OS: "darwin", Arch: "arm64"},
			{OS: "windows", Arch: "amd64"},
		},
		Sign: signing.DefaultSignConfig(),
	}
}

// applyDefaults fills in default values for any empty fields in the config.
func applyDefaults(cfg *BuildConfig) {
	defaults := DefaultConfig()

	if cfg.Version == 0 {
		cfg.Version = defaults.Version
	}

	if cfg.Project.Main == "" {
		cfg.Project.Main = defaults.Project.Main
	}

	if cfg.Build.Flags == nil {
		cfg.Build.Flags = defaults.Build.Flags
	}

	if cfg.Build.LDFlags == nil {
		cfg.Build.LDFlags = defaults.Build.LDFlags
	}

	if cfg.Build.Env == nil {
		cfg.Build.Env = defaults.Build.Env
	}

	if len(cfg.Targets) == 0 {
		cfg.Targets = defaults.Targets
	}

	// Expand environment variables in sign config
	cfg.Sign.ExpandEnv()
}

// ConfigPath returns the path to the build config file for a given directory.
func ConfigPath(dir string) string {
	return filepath.Join(dir, ConfigDir, ConfigFileName)
}

// ConfigExists checks if a build config file exists in the given directory.
func ConfigExists(fs io.Medium, dir string) bool {
	return fileExists(fs, ConfigPath(dir))
}

// ToTargets converts the TargetConfig slice to a Target slice for use with builders.
func (cfg *BuildConfig) ToTargets() []Target {
	targets := make([]Target, len(cfg.Targets))
	for i, t := range cfg.Targets {
		targets[i] = Target(t)
	}
	return targets
}
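Assembled from the struct tags the loader above parses, a complete `.core/build.yaml` might look like the following. This is an illustrative sketch, not a file from the repository; any field left out falls back to `DefaultConfig()` via `applyDefaults`.

```yaml
version: 1
project:
  name: myapp
  description: A test application
  main: ./cmd/myapp
  binary: myapp
build:
  cgo: false
  flags: [-trimpath]
  ldflags: [-s, -w]
targets:
  - os: linux
    arch: amd64
  - os: darwin
    arch: arm64
sign:
  enabled: false
```

Note that explicitly empty lists (`flags: []`) are preserved rather than replaced with defaults, because `applyDefaults` only substitutes `nil` slices.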
@ -1,324 +0,0 @@
package build

import (
	"os"
	"path/filepath"
	"testing"

	"forge.lthn.ai/core/go/pkg/io"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// setupConfigTestDir creates a temp directory with optional .core/build.yaml content.
func setupConfigTestDir(t *testing.T, configContent string) string {
	t.Helper()
	dir := t.TempDir()

	if configContent != "" {
		coreDir := filepath.Join(dir, ConfigDir)
		err := os.MkdirAll(coreDir, 0755)
		require.NoError(t, err)

		configPath := filepath.Join(coreDir, ConfigFileName)
		err = os.WriteFile(configPath, []byte(configContent), 0644)
		require.NoError(t, err)
	}

	return dir
}

func TestLoadConfig_Good(t *testing.T) {
	fs := io.Local
	t.Run("loads valid config", func(t *testing.T) {
		content := `
version: 1
project:
  name: myapp
  description: A test application
  main: ./cmd/myapp
  binary: myapp
build:
  cgo: true
  flags:
    - -trimpath
    - -race
  ldflags:
    - -s
    - -w
  env:
    - FOO=bar
targets:
  - os: linux
    arch: amd64
  - os: darwin
    arch: arm64
`
		dir := setupConfigTestDir(t, content)

		cfg, err := LoadConfig(fs, dir)
		require.NoError(t, err)
		require.NotNil(t, cfg)

		assert.Equal(t, 1, cfg.Version)
		assert.Equal(t, "myapp", cfg.Project.Name)
		assert.Equal(t, "A test application", cfg.Project.Description)
		assert.Equal(t, "./cmd/myapp", cfg.Project.Main)
		assert.Equal(t, "myapp", cfg.Project.Binary)
		assert.True(t, cfg.Build.CGO)
		assert.Equal(t, []string{"-trimpath", "-race"}, cfg.Build.Flags)
		assert.Equal(t, []string{"-s", "-w"}, cfg.Build.LDFlags)
		assert.Equal(t, []string{"FOO=bar"}, cfg.Build.Env)
		assert.Len(t, cfg.Targets, 2)
		assert.Equal(t, "linux", cfg.Targets[0].OS)
		assert.Equal(t, "amd64", cfg.Targets[0].Arch)
		assert.Equal(t, "darwin", cfg.Targets[1].OS)
		assert.Equal(t, "arm64", cfg.Targets[1].Arch)
	})

	t.Run("returns defaults when config file missing", func(t *testing.T) {
		dir := t.TempDir()

		cfg, err := LoadConfig(fs, dir)
		require.NoError(t, err)
		require.NotNil(t, cfg)

		defaults := DefaultConfig()
		assert.Equal(t, defaults.Version, cfg.Version)
		assert.Equal(t, defaults.Project.Main, cfg.Project.Main)
		assert.Equal(t, defaults.Build.CGO, cfg.Build.CGO)
		assert.Equal(t, defaults.Build.Flags, cfg.Build.Flags)
		assert.Equal(t, defaults.Build.LDFlags, cfg.Build.LDFlags)
		assert.Equal(t, defaults.Targets, cfg.Targets)
	})

	t.Run("applies defaults for missing fields", func(t *testing.T) {
		content := `
version: 2
project:
  name: partial
`
		dir := setupConfigTestDir(t, content)

		cfg, err := LoadConfig(fs, dir)
		require.NoError(t, err)
		require.NotNil(t, cfg)

		// Explicit values preserved
		assert.Equal(t, 2, cfg.Version)
		assert.Equal(t, "partial", cfg.Project.Name)

		// Defaults applied
		defaults := DefaultConfig()
		assert.Equal(t, defaults.Project.Main, cfg.Project.Main)
		assert.Equal(t, defaults.Build.Flags, cfg.Build.Flags)
		assert.Equal(t, defaults.Build.LDFlags, cfg.Build.LDFlags)
		assert.Equal(t, defaults.Targets, cfg.Targets)
	})

	t.Run("preserves empty arrays when explicitly set", func(t *testing.T) {
		content := `
version: 1
project:
  name: noflags
build:
  flags: []
  ldflags: []
targets:
  - os: linux
    arch: amd64
`
		dir := setupConfigTestDir(t, content)

		cfg, err := LoadConfig(fs, dir)
		require.NoError(t, err)
		require.NotNil(t, cfg)

		// Empty arrays are preserved (not replaced with defaults)
		assert.Empty(t, cfg.Build.Flags)
		assert.Empty(t, cfg.Build.LDFlags)
		// Targets explicitly set
		assert.Len(t, cfg.Targets, 1)
	})
}

func TestLoadConfig_Bad(t *testing.T) {
	fs := io.Local
	t.Run("returns error for invalid YAML", func(t *testing.T) {
		content := `
version: 1
project:
  name: [invalid yaml
`
		dir := setupConfigTestDir(t, content)

		cfg, err := LoadConfig(fs, dir)
		assert.Error(t, err)
		assert.Nil(t, cfg)
		assert.Contains(t, err.Error(), "failed to parse config file")
	})

	t.Run("returns error for unreadable file", func(t *testing.T) {
		dir := t.TempDir()
		coreDir := filepath.Join(dir, ConfigDir)
		err := os.MkdirAll(coreDir, 0755)
		require.NoError(t, err)

		// Create config as a directory instead of a file
		configPath := filepath.Join(coreDir, ConfigFileName)
		err = os.Mkdir(configPath, 0755)
		require.NoError(t, err)

		cfg, err := LoadConfig(fs, dir)
		assert.Error(t, err)
		assert.Nil(t, cfg)
		assert.Contains(t, err.Error(), "failed to read config file")
	})
}

func TestDefaultConfig_Good(t *testing.T) {
	t.Run("returns sensible defaults", func(t *testing.T) {
		cfg := DefaultConfig()

		assert.Equal(t, 1, cfg.Version)
		assert.Equal(t, ".", cfg.Project.Main)
		assert.Empty(t, cfg.Project.Name)
		assert.Empty(t, cfg.Project.Binary)
		assert.False(t, cfg.Build.CGO)
		assert.Contains(t, cfg.Build.Flags, "-trimpath")
		assert.Contains(t, cfg.Build.LDFlags, "-s")
		assert.Contains(t, cfg.Build.LDFlags, "-w")
		assert.Empty(t, cfg.Build.Env)

		// Default targets cover common platforms
		assert.Len(t, cfg.Targets, 4)
		hasLinuxAmd64 := false
		hasDarwinArm64 := false
		hasWindowsAmd64 := false
		for _, tgt := range cfg.Targets {
			if tgt.OS == "linux" && tgt.Arch == "amd64" {
				hasLinuxAmd64 = true
			}
			if tgt.OS == "darwin" && tgt.Arch == "arm64" {
				hasDarwinArm64 = true
			}
			if tgt.OS == "windows" && tgt.Arch == "amd64" {
				hasWindowsAmd64 = true
			}
		}
		assert.True(t, hasLinuxAmd64)
		assert.True(t, hasDarwinArm64)
		assert.True(t, hasWindowsAmd64)
	})
}

func TestConfigPath_Good(t *testing.T) {
	t.Run("returns correct path", func(t *testing.T) {
		path := ConfigPath("/project/root")
		assert.Equal(t, "/project/root/.core/build.yaml", path)
	})
}

func TestConfigExists_Good(t *testing.T) {
	fs := io.Local
	t.Run("returns true when config exists", func(t *testing.T) {
		dir := setupConfigTestDir(t, "version: 1")
		assert.True(t, ConfigExists(fs, dir))
	})

	t.Run("returns false when config missing", func(t *testing.T) {
		dir := t.TempDir()
		assert.False(t, ConfigExists(fs, dir))
	})

	t.Run("returns false when .core dir missing", func(t *testing.T) {
		dir := t.TempDir()
		assert.False(t, ConfigExists(fs, dir))
	})
}

func TestLoadConfig_Good_SignConfig(t *testing.T) {
	tmpDir := t.TempDir()
	coreDir := filepath.Join(tmpDir, ".core")
	_ = os.MkdirAll(coreDir, 0755)

	configContent := `version: 1
sign:
  enabled: true
  gpg:
    key: "ABCD1234"
  macos:
    identity: "Developer ID Application: Test"
    notarize: true
`
	_ = os.WriteFile(filepath.Join(coreDir, "build.yaml"), []byte(configContent), 0644)

	cfg, err := LoadConfig(io.Local, tmpDir)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}

	if !cfg.Sign.Enabled {
		t.Error("expected Sign.Enabled to be true")
	}
	if cfg.Sign.GPG.Key != "ABCD1234" {
		t.Errorf("expected GPG.Key 'ABCD1234', got %q", cfg.Sign.GPG.Key)
	}
	if cfg.Sign.MacOS.Identity != "Developer ID Application: Test" {
		t.Errorf("expected MacOS.Identity, got %q", cfg.Sign.MacOS.Identity)
	}
	if !cfg.Sign.MacOS.Notarize {
		t.Error("expected MacOS.Notarize to be true")
	}
}

func TestBuildConfig_ToTargets_Good(t *testing.T) {
	t.Run("converts TargetConfig to Target", func(t *testing.T) {
		cfg := &BuildConfig{
			Targets: []TargetConfig{
				{OS: "linux", Arch: "amd64"},
				{OS: "darwin", Arch: "arm64"},
				{OS: "windows", Arch: "386"},
			},
		}

		targets := cfg.ToTargets()
		require.Len(t, targets, 3)

		assert.Equal(t, Target{OS: "linux", Arch: "amd64"}, targets[0])
		assert.Equal(t, Target{OS: "darwin", Arch: "arm64"}, targets[1])
		assert.Equal(t, Target{OS: "windows", Arch: "386"}, targets[2])
	})

	t.Run("returns empty slice for no targets", func(t *testing.T) {
		cfg := &BuildConfig{
			Targets: []TargetConfig{},
		}

		targets := cfg.ToTargets()
		assert.Empty(t, targets)
	})
}

// TestLoadConfig_Testdata tests loading from the testdata fixture.
func TestLoadConfig_Testdata(t *testing.T) {
	fs := io.Local
	abs, err := filepath.Abs("testdata/config-project")
	require.NoError(t, err)

	t.Run("loads config-project fixture", func(t *testing.T) {
		cfg, err := LoadConfig(fs, abs)
		require.NoError(t, err)
		require.NotNil(t, cfg)

		assert.Equal(t, 1, cfg.Version)
		assert.Equal(t, "example-cli", cfg.Project.Name)
		assert.Equal(t, "An example CLI application", cfg.Project.Description)
		assert.Equal(t, "./cmd/example", cfg.Project.Main)
		assert.Equal(t, "example", cfg.Project.Binary)
		assert.False(t, cfg.Build.CGO)
		assert.Equal(t, []string{"-trimpath"}, cfg.Build.Flags)
		assert.Equal(t, []string{"-s", "-w"}, cfg.Build.LDFlags)
		assert.Len(t, cfg.Targets, 3)
	})
}
@@ -1,94 +0,0 @@
package build

import (
	"path/filepath"
	"slices"

	"forge.lthn.ai/core/go/pkg/io"
)

// Marker files for project type detection.
const (
	markerGoMod       = "go.mod"
	markerWails       = "wails.json"
	markerNodePackage = "package.json"
	markerComposer    = "composer.json"
)

// projectMarker maps a marker file to its project type.
type projectMarker struct {
	file        string
	projectType ProjectType
}

// markers defines the detection order. More specific types come first.
// Wails projects have both wails.json and go.mod, so wails is checked first.
var markers = []projectMarker{
	{markerWails, ProjectTypeWails},
	{markerGoMod, ProjectTypeGo},
	{markerNodePackage, ProjectTypeNode},
	{markerComposer, ProjectTypePHP},
}

// Discover detects project types in the given directory by checking for marker files.
// Returns a slice of detected project types, ordered by priority (most specific first).
// For example, a Wails project returns [wails, go] since it has both wails.json and go.mod.
func Discover(fs io.Medium, dir string) ([]ProjectType, error) {
	var detected []ProjectType

	for _, m := range markers {
		path := filepath.Join(dir, m.file)
		if fileExists(fs, path) {
			// Avoid duplicates (shouldn't happen with current markers, but defensive)
			if !slices.Contains(detected, m.projectType) {
				detected = append(detected, m.projectType)
			}
		}
	}

	return detected, nil
}

// PrimaryType returns the most specific project type detected in the directory.
// Returns empty string if no project type is detected.
func PrimaryType(fs io.Medium, dir string) (ProjectType, error) {
	types, err := Discover(fs, dir)
	if err != nil {
		return "", err
	}
	if len(types) == 0 {
		return "", nil
	}
	return types[0], nil
}

// IsGoProject checks if the directory contains a Go project (go.mod or wails.json).
func IsGoProject(fs io.Medium, dir string) bool {
	return fileExists(fs, filepath.Join(dir, markerGoMod)) ||
		fileExists(fs, filepath.Join(dir, markerWails))
}

// IsWailsProject checks if the directory contains a Wails project.
func IsWailsProject(fs io.Medium, dir string) bool {
	return fileExists(fs, filepath.Join(dir, markerWails))
}

// IsNodeProject checks if the directory contains a Node.js project.
func IsNodeProject(fs io.Medium, dir string) bool {
	return fileExists(fs, filepath.Join(dir, markerNodePackage))
}

// IsPHPProject checks if the directory contains a PHP project.
func IsPHPProject(fs io.Medium, dir string) bool {
	return fileExists(fs, filepath.Join(dir, markerComposer))
}

// IsCPPProject checks if the directory contains a C++ project (CMakeLists.txt).
func IsCPPProject(fs io.Medium, dir string) bool {
	return fileExists(fs, filepath.Join(dir, "CMakeLists.txt"))
}

// fileExists checks if a file exists and is not a directory.
func fileExists(fs io.Medium, path string) bool {
	return fs.IsFile(path)
}
@@ -1,228 +0,0 @@
package build

import (
	"os"
	"path/filepath"
	"testing"

	"forge.lthn.ai/core/go/pkg/io"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// setupTestDir creates a temporary directory with the specified marker files.
func setupTestDir(t *testing.T, markers ...string) string {
	t.Helper()
	dir := t.TempDir()
	for _, m := range markers {
		path := filepath.Join(dir, m)
		err := os.WriteFile(path, []byte("{}"), 0644)
		require.NoError(t, err)
	}
	return dir
}

func TestDiscover_Good(t *testing.T) {
	fs := io.Local
	t.Run("detects Go project", func(t *testing.T) {
		dir := setupTestDir(t, "go.mod")
		types, err := Discover(fs, dir)
		assert.NoError(t, err)
		assert.Equal(t, []ProjectType{ProjectTypeGo}, types)
	})

	t.Run("detects Wails project with priority over Go", func(t *testing.T) {
		dir := setupTestDir(t, "wails.json", "go.mod")
		types, err := Discover(fs, dir)
		assert.NoError(t, err)
		assert.Equal(t, []ProjectType{ProjectTypeWails, ProjectTypeGo}, types)
	})

	t.Run("detects Node.js project", func(t *testing.T) {
		dir := setupTestDir(t, "package.json")
		types, err := Discover(fs, dir)
		assert.NoError(t, err)
		assert.Equal(t, []ProjectType{ProjectTypeNode}, types)
	})

	t.Run("detects PHP project", func(t *testing.T) {
		dir := setupTestDir(t, "composer.json")
		types, err := Discover(fs, dir)
		assert.NoError(t, err)
		assert.Equal(t, []ProjectType{ProjectTypePHP}, types)
	})

	t.Run("detects multiple project types", func(t *testing.T) {
		dir := setupTestDir(t, "go.mod", "package.json")
		types, err := Discover(fs, dir)
		assert.NoError(t, err)
		assert.Equal(t, []ProjectType{ProjectTypeGo, ProjectTypeNode}, types)
	})

	t.Run("empty directory returns empty slice", func(t *testing.T) {
		dir := t.TempDir()
		types, err := Discover(fs, dir)
		assert.NoError(t, err)
		assert.Empty(t, types)
	})
}

func TestDiscover_Bad(t *testing.T) {
	fs := io.Local
	t.Run("non-existent directory returns empty slice", func(t *testing.T) {
		types, err := Discover(fs, "/non/existent/path")
		assert.NoError(t, err) // os.Stat fails silently in fileExists
		assert.Empty(t, types)
	})

	t.Run("directory marker is ignored", func(t *testing.T) {
		dir := t.TempDir()
		// Create go.mod as a directory instead of a file
		err := os.Mkdir(filepath.Join(dir, "go.mod"), 0755)
		require.NoError(t, err)

		types, err := Discover(fs, dir)
		assert.NoError(t, err)
		assert.Empty(t, types)
	})
}

func TestPrimaryType_Good(t *testing.T) {
	fs := io.Local
	t.Run("returns wails for wails project", func(t *testing.T) {
		dir := setupTestDir(t, "wails.json", "go.mod")
		primary, err := PrimaryType(fs, dir)
		assert.NoError(t, err)
		assert.Equal(t, ProjectTypeWails, primary)
	})

	t.Run("returns go for go-only project", func(t *testing.T) {
		dir := setupTestDir(t, "go.mod")
		primary, err := PrimaryType(fs, dir)
		assert.NoError(t, err)
		assert.Equal(t, ProjectTypeGo, primary)
	})

	t.Run("returns empty string for empty directory", func(t *testing.T) {
		dir := t.TempDir()
		primary, err := PrimaryType(fs, dir)
		assert.NoError(t, err)
		assert.Empty(t, primary)
	})
}

func TestIsGoProject_Good(t *testing.T) {
	fs := io.Local
	t.Run("true with go.mod", func(t *testing.T) {
		dir := setupTestDir(t, "go.mod")
		assert.True(t, IsGoProject(fs, dir))
	})

	t.Run("true with wails.json", func(t *testing.T) {
		dir := setupTestDir(t, "wails.json")
		assert.True(t, IsGoProject(fs, dir))
	})

	t.Run("false without markers", func(t *testing.T) {
		dir := t.TempDir()
		assert.False(t, IsGoProject(fs, dir))
	})
}

func TestIsWailsProject_Good(t *testing.T) {
	fs := io.Local
	t.Run("true with wails.json", func(t *testing.T) {
		dir := setupTestDir(t, "wails.json")
		assert.True(t, IsWailsProject(fs, dir))
	})

	t.Run("false with only go.mod", func(t *testing.T) {
		dir := setupTestDir(t, "go.mod")
		assert.False(t, IsWailsProject(fs, dir))
	})
}

func TestIsNodeProject_Good(t *testing.T) {
	fs := io.Local
	t.Run("true with package.json", func(t *testing.T) {
		dir := setupTestDir(t, "package.json")
		assert.True(t, IsNodeProject(fs, dir))
	})

	t.Run("false without package.json", func(t *testing.T) {
		dir := t.TempDir()
		assert.False(t, IsNodeProject(fs, dir))
	})
}

func TestIsPHPProject_Good(t *testing.T) {
	fs := io.Local
	t.Run("true with composer.json", func(t *testing.T) {
		dir := setupTestDir(t, "composer.json")
		assert.True(t, IsPHPProject(fs, dir))
	})

	t.Run("false without composer.json", func(t *testing.T) {
		dir := t.TempDir()
		assert.False(t, IsPHPProject(fs, dir))
	})
}

func TestTarget_Good(t *testing.T) {
	target := Target{OS: "linux", Arch: "amd64"}
	assert.Equal(t, "linux/amd64", target.String())
}

func TestFileExists_Good(t *testing.T) {
	fs := io.Local
	t.Run("returns true for existing file", func(t *testing.T) {
		dir := t.TempDir()
		path := filepath.Join(dir, "test.txt")
		err := os.WriteFile(path, []byte("content"), 0644)
		require.NoError(t, err)
		assert.True(t, fileExists(fs, path))
	})

	t.Run("returns false for directory", func(t *testing.T) {
		dir := t.TempDir()
		assert.False(t, fileExists(fs, dir))
	})

	t.Run("returns false for non-existent path", func(t *testing.T) {
		assert.False(t, fileExists(fs, "/non/existent/file"))
	})
}

// TestDiscover_Testdata tests discovery using the testdata fixtures.
// These serve as integration tests with realistic project structures.
func TestDiscover_Testdata(t *testing.T) {
	fs := io.Local
	testdataDir, err := filepath.Abs("testdata")
	require.NoError(t, err)

	tests := []struct {
		name     string
		dir      string
		expected []ProjectType
	}{
		{"go-project", "go-project", []ProjectType{ProjectTypeGo}},
		{"wails-project", "wails-project", []ProjectType{ProjectTypeWails, ProjectTypeGo}},
		{"node-project", "node-project", []ProjectType{ProjectTypeNode}},
		{"php-project", "php-project", []ProjectType{ProjectTypePHP}},
		{"multi-project", "multi-project", []ProjectType{ProjectTypeGo, ProjectTypeNode}},
		{"empty-project", "empty-project", []ProjectType{}},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			dir := filepath.Join(testdataDir, tt.dir)
			types, err := Discover(fs, dir)
			assert.NoError(t, err)
			if len(tt.expected) == 0 {
				assert.Empty(t, types)
			} else {
				assert.Equal(t, tt.expected, types)
			}
		})
	}
}
@@ -1,103 +0,0 @@
package signing

import (
	"context"
	"fmt"
	"os/exec"
	"runtime"

	"forge.lthn.ai/core/go/pkg/io"
)

// MacOSSigner signs binaries using macOS codesign.
type MacOSSigner struct {
	config MacOSConfig
}

// Compile-time interface check.
var _ Signer = (*MacOSSigner)(nil)

// NewMacOSSigner creates a new macOS signer.
func NewMacOSSigner(cfg MacOSConfig) *MacOSSigner {
	return &MacOSSigner{config: cfg}
}

// Name returns "codesign".
func (s *MacOSSigner) Name() string {
	return "codesign"
}

// Available checks if running on macOS with codesign and identity configured.
func (s *MacOSSigner) Available() bool {
	if runtime.GOOS != "darwin" {
		return false
	}
	if s.config.Identity == "" {
		return false
	}
	_, err := exec.LookPath("codesign")
	return err == nil
}

// Sign codesigns a binary with hardened runtime.
func (s *MacOSSigner) Sign(ctx context.Context, fs io.Medium, binary string) error {
	if !s.Available() {
		return fmt.Errorf("codesign.Sign: codesign not available")
	}

	cmd := exec.CommandContext(ctx, "codesign",
		"--sign", s.config.Identity,
		"--timestamp",
		"--options", "runtime", // Hardened runtime for notarization
		"--force",
		binary,
	)

	output, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("codesign.Sign: %w\nOutput: %s", err, string(output))
	}

	return nil
}

// Notarize submits the binary to Apple for notarization and staples the ticket.
// This blocks until Apple responds (typically 1-5 minutes).
func (s *MacOSSigner) Notarize(ctx context.Context, fs io.Medium, binary string) error {
	if s.config.AppleID == "" || s.config.TeamID == "" || s.config.AppPassword == "" {
		return fmt.Errorf("codesign.Notarize: missing Apple credentials (apple_id, team_id, app_password)")
	}

	// Create ZIP for submission
	zipPath := binary + ".zip"
	zipCmd := exec.CommandContext(ctx, "zip", "-j", zipPath, binary)
	if output, err := zipCmd.CombinedOutput(); err != nil {
		return fmt.Errorf("codesign.Notarize: failed to create zip: %w\nOutput: %s", err, string(output))
	}
	defer func() { _ = fs.Delete(zipPath) }()

	// Submit to Apple and wait
	submitCmd := exec.CommandContext(ctx, "xcrun", "notarytool", "submit",
		zipPath,
		"--apple-id", s.config.AppleID,
		"--team-id", s.config.TeamID,
		"--password", s.config.AppPassword,
		"--wait",
	)
	if output, err := submitCmd.CombinedOutput(); err != nil {
		return fmt.Errorf("codesign.Notarize: notarization failed: %w\nOutput: %s", err, string(output))
	}

	// Staple the ticket
	stapleCmd := exec.CommandContext(ctx, "xcrun", "stapler", "staple", binary)
	if output, err := stapleCmd.CombinedOutput(); err != nil {
		return fmt.Errorf("codesign.Notarize: failed to staple: %w\nOutput: %s", err, string(output))
	}

	return nil
}

// ShouldNotarize returns true if notarization is enabled.
func (s *MacOSSigner) ShouldNotarize() bool {
	return s.config.Notarize
}
@@ -1,62 +0,0 @@
package signing

import (
	"context"
	"runtime"
	"testing"

	"forge.lthn.ai/core/go/pkg/io"
	"github.com/stretchr/testify/assert"
)

func TestMacOSSigner_Good_Name(t *testing.T) {
	s := NewMacOSSigner(MacOSConfig{Identity: "Developer ID Application: Test"})
	assert.Equal(t, "codesign", s.Name())
}

func TestMacOSSigner_Good_Available(t *testing.T) {
	s := NewMacOSSigner(MacOSConfig{Identity: "Developer ID Application: Test"})

	if runtime.GOOS == "darwin" {
		// Just verify it doesn't panic
		_ = s.Available()
	} else {
		assert.False(t, s.Available())
	}
}

func TestMacOSSigner_Bad_NoIdentity(t *testing.T) {
	s := NewMacOSSigner(MacOSConfig{})
	assert.False(t, s.Available())
}

func TestMacOSSigner_Sign_Bad(t *testing.T) {
	t.Run("fails when not available", func(t *testing.T) {
		if runtime.GOOS == "darwin" {
			t.Skip("skipping on macOS")
		}
		fs := io.Local
		s := NewMacOSSigner(MacOSConfig{Identity: "test"})
		err := s.Sign(context.Background(), fs, "test")
		assert.Error(t, err)
		assert.Contains(t, err.Error(), "not available")
	})
}

func TestMacOSSigner_Notarize_Bad(t *testing.T) {
	fs := io.Local
	t.Run("fails with missing credentials", func(t *testing.T) {
		s := NewMacOSSigner(MacOSConfig{})
		err := s.Notarize(context.Background(), fs, "test")
		assert.Error(t, err)
		assert.Contains(t, err.Error(), "missing Apple credentials")
	})
}

func TestMacOSSigner_ShouldNotarize(t *testing.T) {
	s := NewMacOSSigner(MacOSConfig{Notarize: true})
	assert.True(t, s.ShouldNotarize())

	s2 := NewMacOSSigner(MacOSConfig{Notarize: false})
	assert.False(t, s2.ShouldNotarize())
}
@@ -1,59 +0,0 @@
package signing

import (
	"context"
	"fmt"
	"os/exec"

	"forge.lthn.ai/core/go/pkg/io"
)

// GPGSigner signs files using GPG.
type GPGSigner struct {
	KeyID string
}

// Compile-time interface check.
var _ Signer = (*GPGSigner)(nil)

// NewGPGSigner creates a new GPG signer.
func NewGPGSigner(keyID string) *GPGSigner {
	return &GPGSigner{KeyID: keyID}
}

// Name returns "gpg".
func (s *GPGSigner) Name() string {
	return "gpg"
}

// Available checks if gpg is installed and key is configured.
func (s *GPGSigner) Available() bool {
	if s.KeyID == "" {
		return false
	}
	_, err := exec.LookPath("gpg")
	return err == nil
}

// Sign creates a detached ASCII-armored signature.
// For file.txt, creates file.txt.asc
func (s *GPGSigner) Sign(ctx context.Context, fs io.Medium, file string) error {
	if !s.Available() {
		return fmt.Errorf("gpg.Sign: gpg not available or key not configured")
	}

	cmd := exec.CommandContext(ctx, "gpg",
		"--detach-sign",
		"--armor",
		"--local-user", s.KeyID,
		"--output", file+".asc",
		file,
	)

	output, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("gpg.Sign: %w\nOutput: %s", err, string(output))
	}

	return nil
}
@@ -1,34 +0,0 @@
package signing

import (
	"context"
	"testing"

	"forge.lthn.ai/core/go/pkg/io"
	"github.com/stretchr/testify/assert"
)

func TestGPGSigner_Good_Name(t *testing.T) {
	s := NewGPGSigner("ABCD1234")
	assert.Equal(t, "gpg", s.Name())
}

func TestGPGSigner_Good_Available(t *testing.T) {
	s := NewGPGSigner("ABCD1234")
	_ = s.Available()
}

func TestGPGSigner_Bad_NoKey(t *testing.T) {
	s := NewGPGSigner("")
	assert.False(t, s.Available())
}

func TestGPGSigner_Sign_Bad(t *testing.T) {
	fs := io.Local
	t.Run("fails when no key", func(t *testing.T) {
		s := NewGPGSigner("")
		err := s.Sign(context.Background(), fs, "test.txt")
		assert.Error(t, err)
		assert.Contains(t, err.Error(), "not available or key not configured")
	})
}
@@ -1,96 +0,0 @@
package signing

import (
	"context"
	"fmt"
	"runtime"

	"forge.lthn.ai/core/go/pkg/io"
)

// Artifact represents a build output that can be signed.
// This mirrors build.Artifact to avoid import cycles.
type Artifact struct {
	Path string
	OS   string
	Arch string
}

// SignBinaries signs macOS binaries in the artifacts list.
// Only signs darwin binaries when running on macOS with a configured identity.
func SignBinaries(ctx context.Context, fs io.Medium, cfg SignConfig, artifacts []Artifact) error {
	if !cfg.Enabled {
		return nil
	}

	// Only sign on macOS
	if runtime.GOOS != "darwin" {
		return nil
	}

	signer := NewMacOSSigner(cfg.MacOS)
	if !signer.Available() {
		return nil // Silently skip if not configured
	}

	for _, artifact := range artifacts {
		if artifact.OS != "darwin" {
			continue
		}

		fmt.Printf("  Signing %s...\n", artifact.Path)
		if err := signer.Sign(ctx, fs, artifact.Path); err != nil {
			return fmt.Errorf("failed to sign %s: %w", artifact.Path, err)
		}
	}

	return nil
}

// NotarizeBinaries notarizes macOS binaries if enabled.
func NotarizeBinaries(ctx context.Context, fs io.Medium, cfg SignConfig, artifacts []Artifact) error {
	if !cfg.Enabled || !cfg.MacOS.Notarize {
		return nil
	}

	if runtime.GOOS != "darwin" {
		return nil
	}

	signer := NewMacOSSigner(cfg.MacOS)
	if !signer.Available() {
		return fmt.Errorf("notarization requested but codesign not available")
	}

	for _, artifact := range artifacts {
		if artifact.OS != "darwin" {
			continue
		}

		fmt.Printf("  Notarizing %s (this may take a few minutes)...\n", artifact.Path)
		if err := signer.Notarize(ctx, fs, artifact.Path); err != nil {
			return fmt.Errorf("failed to notarize %s: %w", artifact.Path, err)
		}
	}

	return nil
}

// SignChecksums signs the checksums file with GPG.
func SignChecksums(ctx context.Context, fs io.Medium, cfg SignConfig, checksumFile string) error {
	if !cfg.Enabled {
		return nil
	}

	signer := NewGPGSigner(cfg.GPG.Key)
	if !signer.Available() {
		return nil // Silently skip if not configured
	}

	fmt.Printf("  Signing %s with GPG...\n", checksumFile)
	if err := signer.Sign(ctx, fs, checksumFile); err != nil {
		return fmt.Errorf("failed to sign checksums: %w", err)
	}

	return nil
}
@@ -1,83 +0,0 @@
// Package signing provides code signing for build artifacts.
package signing

import (
	"context"
	"os"
	"strings"

	"forge.lthn.ai/core/go/pkg/io"
)

// Signer defines the interface for code signing implementations.
type Signer interface {
	// Name returns the signer's identifier.
	Name() string
	// Available checks if this signer can be used.
	Available() bool
	// Sign signs the artifact at the given path.
	Sign(ctx context.Context, fs io.Medium, path string) error
}

// SignConfig holds signing configuration from .core/build.yaml.
type SignConfig struct {
	Enabled bool          `yaml:"enabled"`
	GPG     GPGConfig     `yaml:"gpg,omitempty"`
	MacOS   MacOSConfig   `yaml:"macos,omitempty"`
	Windows WindowsConfig `yaml:"windows,omitempty"`
}

// GPGConfig holds GPG signing configuration.
type GPGConfig struct {
	Key string `yaml:"key"` // Key ID or fingerprint, supports $ENV
}

// MacOSConfig holds macOS codesign configuration.
type MacOSConfig struct {
	Identity    string `yaml:"identity"`     // Developer ID Application: ...
	Notarize    bool   `yaml:"notarize"`     // Submit to Apple for notarization
	AppleID     string `yaml:"apple_id"`     // Apple account email
	TeamID      string `yaml:"team_id"`      // Team ID
	AppPassword string `yaml:"app_password"` // App-specific password
}

// WindowsConfig holds Windows signtool configuration (placeholder).
type WindowsConfig struct {
	Certificate string `yaml:"certificate"` // Path to .pfx
	Password    string `yaml:"password"`    // Certificate password
}

// DefaultSignConfig returns sensible defaults.
func DefaultSignConfig() SignConfig {
	return SignConfig{
		Enabled: true,
		GPG: GPGConfig{
			Key: os.Getenv("GPG_KEY_ID"),
		},
		MacOS: MacOSConfig{
			Identity:    os.Getenv("CODESIGN_IDENTITY"),
			AppleID:     os.Getenv("APPLE_ID"),
			TeamID:      os.Getenv("APPLE_TEAM_ID"),
			AppPassword: os.Getenv("APPLE_APP_PASSWORD"),
		},
	}
}

// ExpandEnv expands environment variables in config values.
func (c *SignConfig) ExpandEnv() {
	c.GPG.Key = expandEnv(c.GPG.Key)
	c.MacOS.Identity = expandEnv(c.MacOS.Identity)
	c.MacOS.AppleID = expandEnv(c.MacOS.AppleID)
	c.MacOS.TeamID = expandEnv(c.MacOS.TeamID)
	c.MacOS.AppPassword = expandEnv(c.MacOS.AppPassword)
	c.Windows.Certificate = expandEnv(c.Windows.Certificate)
	c.Windows.Password = expandEnv(c.Windows.Password)
}

// expandEnv expands $VAR or ${VAR} in a string.
func expandEnv(s string) string {
	if strings.HasPrefix(s, "$") {
		return os.ExpandEnv(s)
	}
	return s
}
@@ -1,162 +0,0 @@
package signing

import (
	"context"
	"runtime"
	"testing"

	"forge.lthn.ai/core/go/pkg/io"
	"github.com/stretchr/testify/assert"
)

func TestSignBinaries_Good_SkipsNonDarwin(t *testing.T) {
	ctx := context.Background()
	fs := io.Local
	cfg := SignConfig{
		Enabled: true,
		MacOS: MacOSConfig{
			Identity: "Developer ID Application: Test",
		},
	}

	// Create fake artifact for linux
	artifacts := []Artifact{
		{Path: "/tmp/test-binary", OS: "linux", Arch: "amd64"},
	}

	// Should not error even though binary doesn't exist (skips non-darwin)
	err := SignBinaries(ctx, fs, cfg, artifacts)
	if err != nil {
		t.Errorf("unexpected error: %v", err)
	}
}

func TestSignBinaries_Good_DisabledConfig(t *testing.T) {
	ctx := context.Background()
	fs := io.Local
	cfg := SignConfig{
		Enabled: false,
	}

	artifacts := []Artifact{
		{Path: "/tmp/test-binary", OS: "darwin", Arch: "arm64"},
	}

	err := SignBinaries(ctx, fs, cfg, artifacts)
	if err != nil {
		t.Errorf("unexpected error: %v", err)
	}
}

func TestSignBinaries_Good_SkipsOnNonMacOS(t *testing.T) {
	if runtime.GOOS == "darwin" {
		t.Skip("Skipping on macOS - this tests non-macOS behavior")
	}

	ctx := context.Background()
	fs := io.Local
	cfg := SignConfig{
		Enabled: true,
		MacOS: MacOSConfig{
			Identity: "Developer ID Application: Test",
		},
	}

	artifacts := []Artifact{
		{Path: "/tmp/test-binary", OS: "darwin", Arch: "arm64"},
	}

	err := SignBinaries(ctx, fs, cfg, artifacts)
	if err != nil {
		t.Errorf("unexpected error: %v", err)
	}
}

func TestNotarizeBinaries_Good_DisabledConfig(t *testing.T) {
	ctx := context.Background()
	fs := io.Local
	cfg := SignConfig{
		Enabled: false,
	}

	artifacts := []Artifact{
		{Path: "/tmp/test-binary", OS: "darwin", Arch: "arm64"},
	}

	err := NotarizeBinaries(ctx, fs, cfg, artifacts)
	if err != nil {
		t.Errorf("unexpected error: %v", err)
	}
}

func TestNotarizeBinaries_Good_NotarizeDisabled(t *testing.T) {
	ctx := context.Background()
	fs := io.Local
	cfg := SignConfig{
		Enabled: true,
		MacOS: MacOSConfig{
			Notarize: false,
		},
	}

	artifacts := []Artifact{
		{Path: "/tmp/test-binary", OS: "darwin", Arch: "arm64"},
	}

	err := NotarizeBinaries(ctx, fs, cfg, artifacts)
	if err != nil {
		t.Errorf("unexpected error: %v", err)
	}
}

func TestSignChecksums_Good_SkipsNoKey(t *testing.T) {
	ctx := context.Background()
	fs := io.Local
	cfg := SignConfig{
		Enabled: true,
		GPG: GPGConfig{
			Key: "", // No key configured
		},
	}

	// Should silently skip when no key
	err := SignChecksums(ctx, fs, cfg, "/tmp/CHECKSUMS.txt")
	if err != nil {
		t.Errorf("unexpected error: %v", err)
	}
}

func TestSignChecksums_Good_Disabled(t *testing.T) {
	ctx := context.Background()
	fs := io.Local
	cfg := SignConfig{
		Enabled: false,
	}

	err := SignChecksums(ctx, fs, cfg, "/tmp/CHECKSUMS.txt")
	if err != nil {
		t.Errorf("unexpected error: %v", err)
	}
}

func TestDefaultSignConfig(t *testing.T) {
	cfg := DefaultSignConfig()
	assert.True(t, cfg.Enabled)
}

func TestSignConfig_ExpandEnv(t *testing.T) {
	t.Setenv("TEST_KEY", "ABC")
	cfg := SignConfig{
		GPG: GPGConfig{Key: "$TEST_KEY"},
	}
	cfg.ExpandEnv()
	assert.Equal(t, "ABC", cfg.GPG.Key)
}

func TestWindowsSigner_Good(t *testing.T) {
	fs := io.Local
	s := NewWindowsSigner(WindowsConfig{})
	assert.Equal(t, "signtool", s.Name())
	assert.False(t, s.Available())
	assert.NoError(t, s.Sign(context.Background(), fs, "test.exe"))
}
@@ -1,36 +0,0 @@
package signing

import (
	"context"

	"forge.lthn.ai/core/go/pkg/io"
)

// WindowsSigner signs binaries using Windows signtool (placeholder).
type WindowsSigner struct {
	config WindowsConfig
}

// Compile-time interface check.
var _ Signer = (*WindowsSigner)(nil)

// NewWindowsSigner creates a new Windows signer.
func NewWindowsSigner(cfg WindowsConfig) *WindowsSigner {
	return &WindowsSigner{config: cfg}
}

// Name returns "signtool".
func (s *WindowsSigner) Name() string {
	return "signtool"
}

// Available returns false (not yet implemented).
func (s *WindowsSigner) Available() bool {
	return false
}

// Sign is a placeholder that does nothing.
func (s *WindowsSigner) Sign(ctx context.Context, fs io.Medium, binary string) error {
	// TODO: Implement Windows signing
	return nil
}
@@ -1,25 +0,0 @@
# Example build configuration for Core build system
version: 1

project:
  name: example-cli
  description: An example CLI application
  main: ./cmd/example
  binary: example

build:
  cgo: false
  flags:
    - -trimpath
  ldflags:
    - -s
    - -w
  env: []

targets:
  - os: linux
    arch: amd64
  - os: darwin
    arch: arm64
  - os: windows
    arch: amd64
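Each entry under `targets:` in the fixture above maps to a `Target` value whose `String()` renders the conventional `os/arch` form, as asserted by `TestTarget_Good` earlier in this diff. A small runnable sketch; the `Target` type is re-declared here for self-containment, with field names taken from the deleted `pkg/build` code:

```go
package main

import "fmt"

// Target mirrors the build.Target type exercised by the tests in this diff.
type Target struct {
	OS   string
	Arch string
}

// String renders the conventional os/arch form, e.g. "linux/amd64".
func (t Target) String() string { return t.OS + "/" + t.Arch }

func main() {
	// The three targets from the example build.yaml above.
	targets := []Target{
		{OS: "linux", Arch: "amd64"},
		{OS: "darwin", Arch: "arm64"},
		{OS: "windows", Arch: "amd64"},
	}
	for _, t := range targets {
		fmt.Println(t) // fmt uses the Stringer implementation
	}
}
```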
@ -1,2 +0,0 @@
cmake_minimum_required(VERSION 3.16)
project(TestCPP)

0 pkg/build/testdata/empty-project/.gitkeep vendored

3 pkg/build/testdata/go-project/go.mod vendored
@ -1,3 +0,0 @@
module example.com/go-project

go 1.21

3 pkg/build/testdata/multi-project/go.mod vendored
@ -1,3 +0,0 @@
module example.com/multi-project

go 1.21

@ -1,4 +0,0 @@
{
  "name": "multi-project",
  "version": "1.0.0"
}

4 pkg/build/testdata/node-project/package.json vendored
@ -1,4 +0,0 @@
{
  "name": "node-project",
  "version": "1.0.0"
}

4 pkg/build/testdata/php-project/composer.json vendored
@ -1,4 +0,0 @@
{
  "name": "vendor/php-project",
  "type": "library"
}

3 pkg/build/testdata/wails-project/go.mod vendored
@ -1,3 +0,0 @@
module example.com/wails-project

go 1.21

4 pkg/build/testdata/wails-project/wails.json vendored
@ -1,4 +0,0 @@
{
  "name": "wails-project",
  "outputfilename": "wails-project"
}
30 pkg/cache/cache.go vendored
@ -3,6 +3,7 @@ package cache

 import (
 	"encoding/json"
+	"errors"
 	"os"
 	"path/filepath"
 	"time"

@ -15,6 +16,7 @@ const DefaultTTL = 1 * time.Hour

 // Cache represents a file-based cache.
 type Cache struct {
+	medium  io.Medium
 	baseDir string
 	ttl     time.Duration
 }

@ -27,8 +29,13 @@ type Entry struct {
 }

 // New creates a new cache instance.
-// If baseDir is empty, uses .core/cache in current directory
-func New(baseDir string, ttl time.Duration) (*Cache, error) {
+// If medium is nil, uses io.Local (filesystem).
+// If baseDir is empty, uses .core/cache in current directory.
+func New(medium io.Medium, baseDir string, ttl time.Duration) (*Cache, error) {
+	if medium == nil {
+		medium = io.Local
+	}
+
 	if baseDir == "" {
 		// Use .core/cache in current working directory
 		cwd, err := os.Getwd()

@ -43,11 +50,12 @@ func New(baseDir string, ttl time.Duration) (*Cache, error) {
 	}

 	// Ensure cache directory exists
-	if err := io.Local.EnsureDir(baseDir); err != nil {
+	if err := medium.EnsureDir(baseDir); err != nil {
 		return nil, err
 	}

 	return &Cache{
+		medium:  medium,
 		baseDir: baseDir,
 		ttl:     ttl,
 	}, nil

@ -62,9 +70,9 @@ func (c *Cache) Path(key string) string {
 func (c *Cache) Get(key string, dest interface{}) (bool, error) {
 	path := c.Path(key)

-	dataStr, err := io.Local.Read(path)
+	dataStr, err := c.medium.Read(path)
 	if err != nil {
-		if os.IsNotExist(err) {
+		if errors.Is(err, os.ErrNotExist) {
 			return false, nil
 		}
 		return false, err

@ -94,7 +102,7 @@ func (c *Cache) Set(key string, data interface{}) error {
 	path := c.Path(key)

 	// Ensure parent directory exists
-	if err := io.Local.EnsureDir(filepath.Dir(path)); err != nil {
+	if err := c.medium.EnsureDir(filepath.Dir(path)); err != nil {
 		return err
 	}

@ -115,14 +123,14 @@ func (c *Cache) Set(key string, data interface{}) error {
 		return err
 	}

-	return io.Local.Write(path, string(entryBytes))
+	return c.medium.Write(path, string(entryBytes))
 }

 // Delete removes an item from the cache.
 func (c *Cache) Delete(key string) error {
 	path := c.Path(key)
-	err := io.Local.Delete(path)
-	if os.IsNotExist(err) {
+	err := c.medium.Delete(path)
+	if errors.Is(err, os.ErrNotExist) {
 		return nil
 	}
 	return err

@ -130,14 +138,14 @@ func (c *Cache) Delete(key string) error {

 // Clear removes all cached items.
 func (c *Cache) Clear() error {
-	return io.Local.DeleteAll(c.baseDir)
+	return c.medium.DeleteAll(c.baseDir)
 }

 // Age returns how old a cached item is, or -1 if not cached.
 func (c *Cache) Age(key string) time.Duration {
 	path := c.Path(key)

-	dataStr, err := io.Local.Read(path)
+	dataStr, err := c.medium.Read(path)
 	if err != nil {
 		return -1
 	}
163 pkg/cli/ansi.go
@ -1,163 +0,0 @@
package cli

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"sync"
)

// ANSI escape codes
const (
	ansiReset     = "\033[0m"
	ansiBold      = "\033[1m"
	ansiDim       = "\033[2m"
	ansiItalic    = "\033[3m"
	ansiUnderline = "\033[4m"
)

var (
	colorEnabled   = true
	colorEnabledMu sync.RWMutex
)

func init() {
	// NO_COLOR standard: https://no-color.org/
	// If NO_COLOR is set (to any value, including empty), disable colors.
	if _, exists := os.LookupEnv("NO_COLOR"); exists {
		colorEnabled = false
		return
	}

	// TERM=dumb indicates a terminal without color support.
	if os.Getenv("TERM") == "dumb" {
		colorEnabled = false
	}
}

// ColorEnabled returns true if ANSI color output is enabled.
func ColorEnabled() bool {
	colorEnabledMu.RLock()
	defer colorEnabledMu.RUnlock()
	return colorEnabled
}

// SetColorEnabled enables or disables ANSI color output.
// This overrides the NO_COLOR environment variable check.
func SetColorEnabled(enabled bool) {
	colorEnabledMu.Lock()
	colorEnabled = enabled
	colorEnabledMu.Unlock()
}

// AnsiStyle represents terminal text styling.
// Use NewStyle() to create, chain methods, call Render().
type AnsiStyle struct {
	bold      bool
	dim       bool
	italic    bool
	underline bool
	fg        string
	bg        string
}

// NewStyle creates a new empty style.
func NewStyle() *AnsiStyle {
	return &AnsiStyle{}
}

// Bold enables bold text.
func (s *AnsiStyle) Bold() *AnsiStyle {
	s.bold = true
	return s
}

// Dim enables dim text.
func (s *AnsiStyle) Dim() *AnsiStyle {
	s.dim = true
	return s
}

// Italic enables italic text.
func (s *AnsiStyle) Italic() *AnsiStyle {
	s.italic = true
	return s
}

// Underline enables underlined text.
func (s *AnsiStyle) Underline() *AnsiStyle {
	s.underline = true
	return s
}

// Foreground sets foreground color from hex string.
func (s *AnsiStyle) Foreground(hex string) *AnsiStyle {
	s.fg = fgColorHex(hex)
	return s
}

// Background sets background color from hex string.
func (s *AnsiStyle) Background(hex string) *AnsiStyle {
	s.bg = bgColorHex(hex)
	return s
}

// Render applies the style to text.
// Returns plain text if NO_COLOR is set or colors are disabled.
func (s *AnsiStyle) Render(text string) string {
	if s == nil || !ColorEnabled() {
		return text
	}

	var codes []string
	if s.bold {
		codes = append(codes, ansiBold)
	}
	if s.dim {
		codes = append(codes, ansiDim)
	}
	if s.italic {
		codes = append(codes, ansiItalic)
	}
	if s.underline {
		codes = append(codes, ansiUnderline)
	}
	if s.fg != "" {
		codes = append(codes, s.fg)
	}
	if s.bg != "" {
		codes = append(codes, s.bg)
	}

	if len(codes) == 0 {
		return text
	}

	return strings.Join(codes, "") + text + ansiReset
}

// fgColorHex converts a hex string to an ANSI foreground color code.
func fgColorHex(hex string) string {
	r, g, b := hexToRGB(hex)
	return fmt.Sprintf("\033[38;2;%d;%d;%dm", r, g, b)
}

// bgColorHex converts a hex string to an ANSI background color code.
func bgColorHex(hex string) string {
	r, g, b := hexToRGB(hex)
	return fmt.Sprintf("\033[48;2;%d;%d;%dm", r, g, b)
}

// hexToRGB converts a hex string to RGB values.
func hexToRGB(hex string) (int, int, int) {
	hex = strings.TrimPrefix(hex, "#")
	if len(hex) != 6 {
		return 255, 255, 255
	}
	// Use 8-bit parsing since RGB values are 0-255, avoiding integer overflow on 32-bit systems.
	r, _ := strconv.ParseUint(hex[0:2], 16, 8)
	g, _ := strconv.ParseUint(hex[2:4], 16, 8)
	b, _ := strconv.ParseUint(hex[4:6], 16, 8)
	return int(r), int(g), int(b)
}
@ -1,97 +0,0 @@
package cli

import (
	"strings"
	"testing"
)

func TestAnsiStyle_Render(t *testing.T) {
	// Ensure colors are enabled for this test
	SetColorEnabled(true)
	defer SetColorEnabled(true) // Reset after test

	s := NewStyle().Bold().Foreground("#ff0000")
	got := s.Render("test")
	if got == "test" {
		t.Error("Expected styled output")
	}
	if !strings.Contains(got, "test") {
		t.Error("Output should contain text")
	}
	if !strings.Contains(got, "[1m") {
		t.Error("Output should contain bold code")
	}
}

func TestColorEnabled_Good(t *testing.T) {
	// Save original state
	original := ColorEnabled()
	defer SetColorEnabled(original)

	// Test enabling
	SetColorEnabled(true)
	if !ColorEnabled() {
		t.Error("ColorEnabled should return true")
	}

	// Test disabling
	SetColorEnabled(false)
	if ColorEnabled() {
		t.Error("ColorEnabled should return false")
	}
}

func TestRender_ColorDisabled_Good(t *testing.T) {
	// Save original state
	original := ColorEnabled()
	defer SetColorEnabled(original)

	// Disable colors
	SetColorEnabled(false)

	s := NewStyle().Bold().Foreground("#ff0000")
	got := s.Render("test")

	// Should return plain text without ANSI codes
	if got != "test" {
		t.Errorf("Expected plain 'test', got %q", got)
	}
}

func TestRender_ColorEnabled_Good(t *testing.T) {
	// Save original state
	original := ColorEnabled()
	defer SetColorEnabled(original)

	// Enable colors
	SetColorEnabled(true)

	s := NewStyle().Bold()
	got := s.Render("test")

	// Should contain ANSI codes
	if !strings.Contains(got, "\033[") {
		t.Error("Expected ANSI codes when colors enabled")
	}
}

func TestUseASCII_Good(t *testing.T) {
	// Save original state
	original := ColorEnabled()
	defer SetColorEnabled(original)

	// Enable first, then UseASCII should disable colors
	SetColorEnabled(true)
	UseASCII()
	if ColorEnabled() {
		t.Error("UseASCII should disable colors")
	}
}

func TestRender_NilStyle_Good(t *testing.T) {
	var s *AnsiStyle
	got := s.Render("test")
	if got != "test" {
		t.Errorf("Nil style should return plain text, got %q", got)
	}
}
151 pkg/cli/app.go
@ -1,151 +0,0 @@
package cli

import (
	"fmt"
	"os"
	"runtime/debug"

	"forge.lthn.ai/core/go/pkg/crypt/openpgp"
	"forge.lthn.ai/core/go/pkg/framework"
	"forge.lthn.ai/core/go/pkg/log"
	"forge.lthn.ai/core/go/pkg/workspace"
	"github.com/spf13/cobra"
)

const (
	// AppName is the CLI application name.
	AppName = "core"
)

// Build-time variables set via ldflags (SemVer 2.0.0):
//
//	go build -ldflags="-X forge.lthn.ai/core/go/pkg/cli.AppVersion=1.2.0 \
//	  -X forge.lthn.ai/core/go/pkg/cli.BuildCommit=df94c24 \
//	  -X forge.lthn.ai/core/go/pkg/cli.BuildDate=2026-02-06 \
//	  -X forge.lthn.ai/core/go/pkg/cli.BuildPreRelease=dev.8"
var (
	AppVersion      = "0.0.0"
	BuildCommit     = "unknown"
	BuildDate       = "unknown"
	BuildPreRelease = ""
)

// SemVer returns the full SemVer 2.0.0 version string.
//   - Release: 1.2.0
//   - Pre-release: 1.2.0-dev.8
//   - Full: 1.2.0-dev.8+df94c24.20260206
func SemVer() string {
	v := AppVersion
	if BuildPreRelease != "" {
		v += "-" + BuildPreRelease
	}
	if BuildCommit != "unknown" {
		v += "+" + BuildCommit
		if BuildDate != "unknown" {
			v += "." + BuildDate
		}
	}
	return v
}

// Main initialises and runs the CLI application.
// This is the main entry point for the CLI.
// Exits with code 1 on error or panic.
func Main() {
	// Recovery from panics
	defer func() {
		if r := recover(); r != nil {
			log.Error("recovered from panic", "error", r, "stack", string(debug.Stack()))
			Shutdown()
			Fatal(fmt.Errorf("panic: %v", r))
		}
	}()

	// Initialise CLI runtime with services
	if err := Init(Options{
		AppName: AppName,
		Version: SemVer(),
		Services: []framework.Option{
			framework.WithName("i18n", NewI18nService(I18nOptions{})),
			framework.WithName("log", NewLogService(log.Options{
				Level: log.LevelInfo,
			})),
			framework.WithName("crypt", openpgp.New),
			framework.WithName("workspace", workspace.New),
		},
	}); err != nil {
		Error(err.Error())
		os.Exit(1)
	}
	defer Shutdown()

	// Add completion command to the CLI's root
	RootCmd().AddCommand(completionCmd)

	if err := Execute(); err != nil {
		code := 1
		var exitErr *ExitError
		if As(err, &exitErr) {
			code = exitErr.Code
		}
		Error(err.Error())
		os.Exit(code)
	}
}

// completionCmd generates shell completion scripts.
var completionCmd = &cobra.Command{
	Use:   "completion [bash|zsh|fish|powershell]",
	Short: "Generate shell completion script",
	Long: `Generate shell completion script for the specified shell.

To load completions:

Bash:
  $ source <(core completion bash)

  # To load completions for each session, execute once:
  # Linux:
  $ core completion bash > /etc/bash_completion.d/core
  # macOS:
  $ core completion bash > $(brew --prefix)/etc/bash_completion.d/core

Zsh:
  # If shell completion is not already enabled in your environment,
  # you will need to enable it. You can execute the following once:
  $ echo "autoload -U compinit; compinit" >> ~/.zshrc

  # To load completions for each session, execute once:
  $ core completion zsh > "${fpath[1]}/_core"

  # You will need to start a new shell for this setup to take effect.

Fish:
  $ core completion fish | source

  # To load completions for each session, execute once:
  $ core completion fish > ~/.config/fish/completions/core.fish

PowerShell:
  PS> core completion powershell | Out-String | Invoke-Expression

  # To load completions for every new session, run:
  PS> core completion powershell > core.ps1
  # and source this file from your PowerShell profile.
`,
	DisableFlagsInUseLine: true,
	ValidArgs:             []string{"bash", "zsh", "fish", "powershell"},
	Args:                  cobra.MatchAll(cobra.ExactArgs(1), cobra.OnlyValidArgs),
	Run: func(cmd *cobra.Command, args []string) {
		switch args[0] {
		case "bash":
			_ = cmd.Root().GenBashCompletion(os.Stdout)
		case "zsh":
			_ = cmd.Root().GenZshCompletion(os.Stdout)
		case "fish":
			_ = cmd.Root().GenFishCompletion(os.Stdout, true)
		case "powershell":
			_ = cmd.Root().GenPowerShellCompletionWithDesc(os.Stdout)
		}
	},
}
@ -1,164 +0,0 @@
package cli

import (
	"bytes"
	"fmt"
	"runtime/debug"
	"sync"
	"testing"

	"github.com/stretchr/testify/assert"
)

// TestPanicRecovery_Good verifies that the panic recovery mechanism
// catches panics and calls the appropriate shutdown and error handling.
func TestPanicRecovery_Good(t *testing.T) {
	t.Run("recovery captures panic value and stack", func(t *testing.T) {
		var recovered any
		var capturedStack []byte
		var shutdownCalled bool

		// Simulate the panic recovery pattern from Main()
		func() {
			defer func() {
				if r := recover(); r != nil {
					recovered = r
					capturedStack = debug.Stack()
					shutdownCalled = true // simulates Shutdown() call
				}
			}()

			panic("test panic")
		}()

		assert.Equal(t, "test panic", recovered)
		assert.True(t, shutdownCalled, "Shutdown should be called after panic recovery")
		assert.NotEmpty(t, capturedStack, "Stack trace should be captured")
		assert.Contains(t, string(capturedStack), "TestPanicRecovery_Good")
	})

	t.Run("recovery handles error type panics", func(t *testing.T) {
		var recovered any

		func() {
			defer func() {
				if r := recover(); r != nil {
					recovered = r
				}
			}()

			panic(fmt.Errorf("error panic"))
		}()

		err, ok := recovered.(error)
		assert.True(t, ok, "Recovered value should be an error")
		assert.Equal(t, "error panic", err.Error())
	})

	t.Run("recovery handles nil panic gracefully", func(t *testing.T) {
		recoveryExecuted := false

		func() {
			defer func() {
				if r := recover(); r != nil {
					recoveryExecuted = true
				}
			}()

			// No panic occurs
		}()

		assert.False(t, recoveryExecuted, "Recovery block should not execute without panic")
	})
}

// TestPanicRecovery_Bad tests error conditions in panic recovery.
func TestPanicRecovery_Bad(t *testing.T) {
	t.Run("recovery handles concurrent panics", func(t *testing.T) {
		var wg sync.WaitGroup
		recoveryCount := 0
		var mu sync.Mutex

		for i := 0; i < 3; i++ {
			wg.Add(1)
			go func(id int) {
				defer wg.Done()
				defer func() {
					if r := recover(); r != nil {
						mu.Lock()
						recoveryCount++
						mu.Unlock()
					}
				}()

				panic(fmt.Sprintf("panic from goroutine %d", id))
			}(i)
		}

		wg.Wait()
		assert.Equal(t, 3, recoveryCount, "All goroutine panics should be recovered")
	})
}

// TestPanicRecovery_Ugly tests edge cases in panic recovery.
func TestPanicRecovery_Ugly(t *testing.T) {
	t.Run("recovery handles typed panic values", func(t *testing.T) {
		type customError struct {
			code int
			msg  string
		}

		var recovered any

		func() {
			defer func() {
				recovered = recover()
			}()

			panic(customError{code: 500, msg: "internal error"})
		}()

		ce, ok := recovered.(customError)
		assert.True(t, ok, "Should recover custom type")
		assert.Equal(t, 500, ce.code)
		assert.Equal(t, "internal error", ce.msg)
	})
}

// TestMainPanicRecoveryPattern verifies the exact pattern used in Main().
func TestMainPanicRecoveryPattern(t *testing.T) {
	t.Run("pattern logs error and calls shutdown", func(t *testing.T) {
		var logBuffer bytes.Buffer
		var shutdownCalled bool
		var fatalErr error

		// Mock implementations
		mockLogError := func(msg string, args ...any) {
			fmt.Fprintf(&logBuffer, msg, args...)
		}
		mockShutdown := func() {
			shutdownCalled = true
		}
		mockFatal := func(err error) {
			fatalErr = err
		}

		// Execute the pattern from Main()
		func() {
			defer func() {
				if r := recover(); r != nil {
					mockLogError("recovered from panic: %v", r)
					mockShutdown()
					mockFatal(fmt.Errorf("panic: %v", r))
				}
			}()

			panic("simulated crash")
		}()

		assert.Contains(t, logBuffer.String(), "recovered from panic: simulated crash")
		assert.True(t, shutdownCalled, "Shutdown must be called on panic")
		assert.NotNil(t, fatalErr, "Fatal must be called with error")
		assert.Equal(t, "panic: simulated crash", fatalErr.Error())
	})
}
@ -1,91 +0,0 @@
package cli

import "fmt"

// CheckBuilder provides fluent API for check results.
type CheckBuilder struct {
	name     string
	status   string
	style    *AnsiStyle
	icon     string
	duration string
}

// Check starts building a check result line.
//
//	cli.Check("audit").Pass()
//	cli.Check("fmt").Fail().Duration("2.3s")
//	cli.Check("test").Skip()
func Check(name string) *CheckBuilder {
	return &CheckBuilder{name: name}
}

// Pass marks the check as passed.
func (c *CheckBuilder) Pass() *CheckBuilder {
	c.status = "passed"
	c.style = SuccessStyle
	c.icon = Glyph(":check:")
	return c
}

// Fail marks the check as failed.
func (c *CheckBuilder) Fail() *CheckBuilder {
	c.status = "failed"
	c.style = ErrorStyle
	c.icon = Glyph(":cross:")
	return c
}

// Skip marks the check as skipped.
func (c *CheckBuilder) Skip() *CheckBuilder {
	c.status = "skipped"
	c.style = DimStyle
	c.icon = "-"
	return c
}

// Warn marks the check as warning.
func (c *CheckBuilder) Warn() *CheckBuilder {
	c.status = "warning"
	c.style = WarningStyle
	c.icon = Glyph(":warn:")
	return c
}

// Duration adds duration to the check result.
func (c *CheckBuilder) Duration(d string) *CheckBuilder {
	c.duration = d
	return c
}

// Message adds a custom message instead of status.
func (c *CheckBuilder) Message(msg string) *CheckBuilder {
	c.status = msg
	return c
}

// String returns the formatted check line.
func (c *CheckBuilder) String() string {
	icon := c.icon
	if c.style != nil {
		icon = c.style.Render(c.icon)
	}

	status := c.status
	if c.style != nil && c.status != "" {
		status = c.style.Render(c.status)
	}

	if c.duration != "" {
		return fmt.Sprintf("  %s %-20s %-10s %s", icon, c.name, status, DimStyle.Render(c.duration))
	}
	if status != "" {
		return fmt.Sprintf("  %s %s %s", icon, c.name, status)
	}
	return fmt.Sprintf("  %s %s", icon, c.name)
}

// Print outputs the check result.
func (c *CheckBuilder) Print() {
	fmt.Println(c.String())
}
@ -1,49 +0,0 @@
package cli

import "testing"

func TestCheckBuilder(t *testing.T) {
	UseASCII() // Deterministic output

	// Pass
	c := Check("foo").Pass()
	got := c.String()
	if got == "" {
		t.Error("Empty output for Pass")
	}

	// Fail
	c = Check("foo").Fail()
	got = c.String()
	if got == "" {
		t.Error("Empty output for Fail")
	}

	// Skip
	c = Check("foo").Skip()
	got = c.String()
	if got == "" {
		t.Error("Empty output for Skip")
	}

	// Warn
	c = Check("foo").Warn()
	got = c.String()
	if got == "" {
		t.Error("Empty output for Warn")
	}

	// Duration
	c = Check("foo").Pass().Duration("1s")
	got = c.String()
	if got == "" {
		t.Error("Empty output for Duration")
	}

	// Message
	c = Check("foo").Message("status")
	got = c.String()
	if got == "" {
		t.Error("Empty output for Message")
	}
}
@ -1,193 +0,0 @@
package cli

import (
	"github.com/spf13/cobra"
)

// ─────────────────────────────────────────────────────────────────────────────
// Command Type Re-export
// ─────────────────────────────────────────────────────────────────────────────

// Command is the cobra command type.
// Re-exported for convenience so packages don't need to import cobra directly.
type Command = cobra.Command

// ─────────────────────────────────────────────────────────────────────────────
// Command Builders
// ─────────────────────────────────────────────────────────────────────────────

// NewCommand creates a new command with a RunE handler.
// This is the standard way to create commands that may return errors.
//
//	cmd := cli.NewCommand("build", "Build the project", "", func(cmd *cli.Command, args []string) error {
//	    // Build logic
//	    return nil
//	})
func NewCommand(use, short, long string, run func(cmd *Command, args []string) error) *Command {
	cmd := &Command{
		Use:   use,
		Short: short,
		RunE:  run,
	}
	if long != "" {
		cmd.Long = long
	}
	return cmd
}

// NewGroup creates a new command group (no RunE).
// Use this for parent commands that only contain subcommands.
//
//	devCmd := cli.NewGroup("dev", "Development commands", "")
//	devCmd.AddCommand(buildCmd, testCmd)
func NewGroup(use, short, long string) *Command {
	cmd := &Command{
		Use:   use,
		Short: short,
	}
	if long != "" {
		cmd.Long = long
	}
	return cmd
}

// NewRun creates a new command with a simple Run handler (no error return).
// Use when the command cannot fail.
//
//	cmd := cli.NewRun("version", "Show version", "", func(cmd *cli.Command, args []string) {
//	    cli.Println("v1.0.0")
//	})
func NewRun(use, short, long string, run func(cmd *Command, args []string)) *Command {
	cmd := &Command{
		Use:   use,
		Short: short,
		Run:   run,
	}
	if long != "" {
		cmd.Long = long
	}
	return cmd
}

// ─────────────────────────────────────────────────────────────────────────────
// Flag Helpers
// ─────────────────────────────────────────────────────────────────────────────

// StringFlag adds a string flag to a command.
// The value will be stored in the provided pointer.
//
//	var output string
//	cli.StringFlag(cmd, &output, "output", "o", "", "Output file path")
func StringFlag(cmd *Command, ptr *string, name, short, def, usage string) {
	if short != "" {
		cmd.Flags().StringVarP(ptr, name, short, def, usage)
	} else {
		cmd.Flags().StringVar(ptr, name, def, usage)
	}
}

// BoolFlag adds a boolean flag to a command.
// The value will be stored in the provided pointer.
//
//	var verbose bool
//	cli.BoolFlag(cmd, &verbose, "verbose", "v", false, "Enable verbose output")
func BoolFlag(cmd *Command, ptr *bool, name, short string, def bool, usage string) {
	if short != "" {
		cmd.Flags().BoolVarP(ptr, name, short, def, usage)
	} else {
		cmd.Flags().BoolVar(ptr, name, def, usage)
	}
}

// IntFlag adds an integer flag to a command.
// The value will be stored in the provided pointer.
//
//	var count int
//	cli.IntFlag(cmd, &count, "count", "n", 10, "Number of items")
func IntFlag(cmd *Command, ptr *int, name, short string, def int, usage string) {
	if short != "" {
		cmd.Flags().IntVarP(ptr, name, short, def, usage)
	} else {
		cmd.Flags().IntVar(ptr, name, def, usage)
	}
}

// StringSliceFlag adds a string slice flag to a command.
// The value will be stored in the provided pointer.
//
//	var tags []string
//	cli.StringSliceFlag(cmd, &tags, "tag", "t", nil, "Tags to apply")
func StringSliceFlag(cmd *Command, ptr *[]string, name, short string, def []string, usage string) {
	if short != "" {
		cmd.Flags().StringSliceVarP(ptr, name, short, def, usage)
	} else {
		cmd.Flags().StringSliceVar(ptr, name, def, usage)
	}
}

// ─────────────────────────────────────────────────────────────────────────────
// Persistent Flag Helpers
// ─────────────────────────────────────────────────────────────────────────────

// PersistentStringFlag adds a persistent string flag (inherited by subcommands).
func PersistentStringFlag(cmd *Command, ptr *string, name, short, def, usage string) {
	if short != "" {
		cmd.PersistentFlags().StringVarP(ptr, name, short, def, usage)
	} else {
		cmd.PersistentFlags().StringVar(ptr, name, def, usage)
	}
}

// PersistentBoolFlag adds a persistent boolean flag (inherited by subcommands).
func PersistentBoolFlag(cmd *Command, ptr *bool, name, short string, def bool, usage string) {
	if short != "" {
		cmd.PersistentFlags().BoolVarP(ptr, name, short, def, usage)
	} else {
		cmd.PersistentFlags().BoolVar(ptr, name, def, usage)
	}
}

// ─────────────────────────────────────────────────────────────────────────────
// Command Configuration
// ─────────────────────────────────────────────────────────────────────────────

// WithArgs sets the Args validation function for a command.
// Returns the command for chaining.
//
//	cmd := cli.WithArgs(cli.NewCommand("build", "Build", "", run), cobra.ExactArgs(1))
func WithArgs(cmd *Command, args cobra.PositionalArgs) *Command {
	cmd.Args = args
	return cmd
}

// WithExample sets the Example field for a command.
// Returns the command for chaining.
func WithExample(cmd *Command, example string) *Command {
	cmd.Example = example
	return cmd
}

// ExactArgs returns a PositionalArgs that accepts exactly N arguments.
func ExactArgs(n int) cobra.PositionalArgs {
	return cobra.ExactArgs(n)
}

// MinimumNArgs returns a PositionalArgs that accepts minimum N arguments.
func MinimumNArgs(n int) cobra.PositionalArgs {
	return cobra.MinimumNArgs(n)
}

// MaximumNArgs returns a PositionalArgs that accepts maximum N arguments.
func MaximumNArgs(n int) cobra.PositionalArgs {
	return cobra.MaximumNArgs(n)
}

// NoArgs returns a PositionalArgs that accepts no arguments.
func NoArgs() cobra.PositionalArgs {
	return cobra.NoArgs
}

// ArbitraryArgs returns a PositionalArgs that accepts any arguments.
func ArbitraryArgs() cobra.PositionalArgs {
	return cobra.ArbitraryArgs
}
@@ -1,50 +0,0 @@

```go
// Package cli provides the CLI runtime and utilities.
package cli

import (
    "sync"

    "github.com/spf13/cobra"
)

// CommandRegistration is a function that adds commands to the root.
type CommandRegistration func(root *cobra.Command)

var (
    registeredCommands   []CommandRegistration
    registeredCommandsMu sync.Mutex
    commandsAttached     bool
)

// RegisterCommands registers a function that adds commands to the CLI.
// Call this in your package's init() to register commands.
//
//	func init() {
//		cli.RegisterCommands(AddCommands)
//	}
//
//	func AddCommands(root *cobra.Command) {
//		root.AddCommand(myCmd)
//	}
func RegisterCommands(fn CommandRegistration) {
    registeredCommandsMu.Lock()
    defer registeredCommandsMu.Unlock()
    registeredCommands = append(registeredCommands, fn)

    // If commands already attached (CLI already running), attach immediately
    if commandsAttached && instance != nil && instance.root != nil {
        fn(instance.root)
    }
}

// attachRegisteredCommands calls all registered command functions.
// Called by Init() after creating the root command.
func attachRegisteredCommands(root *cobra.Command) {
    registeredCommandsMu.Lock()
    defer registeredCommandsMu.Unlock()

    for _, fn := range registeredCommands {
        fn(root)
    }
    commandsAttached = true
}
```
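The registration pattern in the removed file above — init()-time registration with mutex-guarded late attach — can be exercised without cobra. This standalone sketch keeps the same behaviour but replaces the command tree with a plain string slice; all names here are invented for illustration:

```go
package main

import (
    "fmt"
    "sync"
)

// registration mimics CommandRegistration with a string-slice "root".
type registration func(root *[]string)

var (
    registered []registration
    mu         sync.Mutex
    attached   bool
    rootCmds   []string
)

// register mimics cli.RegisterCommands: safe to call from init(),
// before or after the root has been built.
func register(fn registration) {
    mu.Lock()
    defer mu.Unlock()
    registered = append(registered, fn)
    if attached {
        fn(&rootCmds) // root already built: attach immediately
    }
}

// attachAll mimics attachRegisteredCommands, called once at startup.
func attachAll() {
    mu.Lock()
    defer mu.Unlock()
    for _, fn := range registered {
        fn(&rootCmds)
    }
    attached = true
}

func main() {
    register(func(root *[]string) { *root = append(*root, "serve") })
    attachAll()
    // Late registration still lands on the root.
    register(func(root *[]string) { *root = append(*root, "version") })
    fmt.Println(rootCmds) // [serve version]
}
```

The late-attach branch is what lets plugins imported after `Init()` still contribute commands.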
@@ -1,446 +0,0 @@

```go
// Package cli provides the CLI runtime and utilities.
package cli

import (
    "context"
    "fmt"
    "net"
    "net/http"
    "os"
    "path/filepath"
    "strconv"
    "sync"
    "syscall"
    "time"

    "forge.lthn.ai/core/go/pkg/io"
    "golang.org/x/term"
)

// Mode represents the CLI execution mode.
type Mode int

const (
    // ModeInteractive indicates TTY attached with coloured output.
    ModeInteractive Mode = iota
    // ModePipe indicates stdout is piped, colours disabled.
    ModePipe
    // ModeDaemon indicates headless execution, log-only output.
    ModeDaemon
)

// String returns the string representation of the Mode.
func (m Mode) String() string {
    switch m {
    case ModeInteractive:
        return "interactive"
    case ModePipe:
        return "pipe"
    case ModeDaemon:
        return "daemon"
    default:
        return "unknown"
    }
}

// DetectMode determines the execution mode based on environment.
// Checks CORE_DAEMON env var first, then TTY status.
func DetectMode() Mode {
    if os.Getenv("CORE_DAEMON") == "1" {
        return ModeDaemon
    }
    if !IsTTY() {
        return ModePipe
    }
    return ModeInteractive
}

// IsTTY returns true if stdout is a terminal.
func IsTTY() bool {
    return term.IsTerminal(int(os.Stdout.Fd()))
}

// IsStdinTTY returns true if stdin is a terminal.
func IsStdinTTY() bool {
    return term.IsTerminal(int(os.Stdin.Fd()))
}

// IsStderrTTY returns true if stderr is a terminal.
func IsStderrTTY() bool {
    return term.IsTerminal(int(os.Stderr.Fd()))
}

// --- PID File Management ---

// PIDFile manages a process ID file for single-instance enforcement.
type PIDFile struct {
    path string
    mu   sync.Mutex
}

// NewPIDFile creates a PID file manager.
func NewPIDFile(path string) *PIDFile {
    return &PIDFile{path: path}
}

// Acquire writes the current PID to the file.
// Returns error if another instance is running.
func (p *PIDFile) Acquire() error {
    p.mu.Lock()
    defer p.mu.Unlock()

    // Check if PID file exists
    if data, err := io.Local.Read(p.path); err == nil {
        pid, err := strconv.Atoi(data)
        if err == nil && pid > 0 {
            // Check if process is still running
            if process, err := os.FindProcess(pid); err == nil {
                if err := process.Signal(syscall.Signal(0)); err == nil {
                    return fmt.Errorf("another instance is running (PID %d)", pid)
                }
            }
        }
        // Stale PID file, remove it
        _ = io.Local.Delete(p.path)
    }

    // Ensure directory exists
    if dir := filepath.Dir(p.path); dir != "." {
        if err := io.Local.EnsureDir(dir); err != nil {
            return fmt.Errorf("failed to create PID directory: %w", err)
        }
    }

    // Write current PID
    pid := os.Getpid()
    if err := io.Local.Write(p.path, strconv.Itoa(pid)); err != nil {
        return fmt.Errorf("failed to write PID file: %w", err)
    }

    return nil
}

// Release removes the PID file.
func (p *PIDFile) Release() error {
    p.mu.Lock()
    defer p.mu.Unlock()
    return io.Local.Delete(p.path)
}

// Path returns the PID file path.
func (p *PIDFile) Path() string {
    return p.path
}

// --- Health Check Server ---

// HealthServer provides a minimal HTTP health check endpoint.
type HealthServer struct {
    addr     string
    server   *http.Server
    listener net.Listener
    mu       sync.Mutex
    ready    bool
    checks   []HealthCheck
}

// HealthCheck is a function that returns nil if healthy.
type HealthCheck func() error

// NewHealthServer creates a health check server.
func NewHealthServer(addr string) *HealthServer {
    return &HealthServer{
        addr:  addr,
        ready: true,
    }
}

// AddCheck registers a health check function.
func (h *HealthServer) AddCheck(check HealthCheck) {
    h.mu.Lock()
    h.checks = append(h.checks, check)
    h.mu.Unlock()
}

// SetReady sets the readiness status.
func (h *HealthServer) SetReady(ready bool) {
    h.mu.Lock()
    h.ready = ready
    h.mu.Unlock()
}

// Start begins serving health check endpoints.
// Endpoints:
//   - /health - liveness probe (always 200 if server is up)
//   - /ready  - readiness probe (200 if ready, 503 if not)
func (h *HealthServer) Start() error {
    mux := http.NewServeMux()

    mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
        h.mu.Lock()
        checks := h.checks
        h.mu.Unlock()

        for _, check := range checks {
            if err := check(); err != nil {
                w.WriteHeader(http.StatusServiceUnavailable)
                _, _ = fmt.Fprintf(w, "unhealthy: %v\n", err)
                return
            }
        }

        w.WriteHeader(http.StatusOK)
        _, _ = fmt.Fprintln(w, "ok")
    })

    mux.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
        h.mu.Lock()
        ready := h.ready
        h.mu.Unlock()

        if !ready {
            w.WriteHeader(http.StatusServiceUnavailable)
            _, _ = fmt.Fprintln(w, "not ready")
            return
        }

        w.WriteHeader(http.StatusOK)
        _, _ = fmt.Fprintln(w, "ready")
    })

    listener, err := net.Listen("tcp", h.addr)
    if err != nil {
        return fmt.Errorf("failed to listen on %s: %w", h.addr, err)
    }

    h.listener = listener
    h.server = &http.Server{Handler: mux}

    go func() {
        if err := h.server.Serve(listener); err != http.ErrServerClosed {
            LogError(fmt.Sprintf("health server error: %v", err))
        }
    }()

    return nil
}

// Stop gracefully shuts down the health server.
func (h *HealthServer) Stop(ctx context.Context) error {
    if h.server == nil {
        return nil
    }
    return h.server.Shutdown(ctx)
}

// Addr returns the actual address the server is listening on.
// Useful when using port 0 for dynamic port assignment.
func (h *HealthServer) Addr() string {
    if h.listener != nil {
        return h.listener.Addr().String()
    }
    return h.addr
}

// --- Daemon Runner ---

// DaemonOptions configures daemon mode execution.
type DaemonOptions struct {
    // PIDFile path for single-instance enforcement.
    // Leave empty to skip PID file management.
    PIDFile string

    // ShutdownTimeout is the maximum time to wait for graceful shutdown.
    // Default: 30 seconds.
    ShutdownTimeout time.Duration

    // HealthAddr is the address for health check endpoints.
    // Example: ":8080", "127.0.0.1:9000"
    // Leave empty to disable health checks.
    HealthAddr string

    // HealthChecks are additional health check functions.
    HealthChecks []HealthCheck

    // OnReload is called when SIGHUP is received.
    // Use for config reloading. Leave nil to ignore SIGHUP.
    OnReload func() error
}

// Daemon manages daemon lifecycle.
type Daemon struct {
    opts    DaemonOptions
    pid     *PIDFile
    health  *HealthServer
    reload  chan struct{}
    running bool
    mu      sync.Mutex
}

// NewDaemon creates a daemon runner with the given options.
func NewDaemon(opts DaemonOptions) *Daemon {
    if opts.ShutdownTimeout == 0 {
        opts.ShutdownTimeout = 30 * time.Second
    }

    d := &Daemon{
        opts:   opts,
        reload: make(chan struct{}, 1),
    }

    if opts.PIDFile != "" {
        d.pid = NewPIDFile(opts.PIDFile)
    }

    if opts.HealthAddr != "" {
        d.health = NewHealthServer(opts.HealthAddr)
        for _, check := range opts.HealthChecks {
            d.health.AddCheck(check)
        }
    }

    return d
}

// Start initialises the daemon (PID file, health server).
// Call this after cli.Init().
func (d *Daemon) Start() error {
    d.mu.Lock()
    defer d.mu.Unlock()

    if d.running {
        return fmt.Errorf("daemon already running")
    }

    // Acquire PID file
    if d.pid != nil {
        if err := d.pid.Acquire(); err != nil {
            return err
        }
    }

    // Start health server
    if d.health != nil {
        if err := d.health.Start(); err != nil {
            if d.pid != nil {
                _ = d.pid.Release()
            }
            return err
        }
    }

    d.running = true
    return nil
}

// Run blocks until the context is cancelled or a signal is received.
// Handles graceful shutdown with the configured timeout.
func (d *Daemon) Run(ctx context.Context) error {
    d.mu.Lock()
    if !d.running {
        d.mu.Unlock()
        return fmt.Errorf("daemon not started - call Start() first")
    }
    d.mu.Unlock()

    // Wait for context cancellation (from signal handler)
    <-ctx.Done()

    return d.Stop()
}

// Stop performs graceful shutdown.
func (d *Daemon) Stop() error {
    d.mu.Lock()
    defer d.mu.Unlock()

    if !d.running {
        return nil
    }

    var errs []error

    // Create shutdown context with timeout
    shutdownCtx, cancel := context.WithTimeout(context.Background(), d.opts.ShutdownTimeout)
    defer cancel()

    // Stop health server
    if d.health != nil {
        d.health.SetReady(false)
        if err := d.health.Stop(shutdownCtx); err != nil {
            errs = append(errs, fmt.Errorf("health server: %w", err))
        }
    }

    // Release PID file
    if d.pid != nil {
        if err := d.pid.Release(); err != nil && !os.IsNotExist(err) {
            errs = append(errs, fmt.Errorf("pid file: %w", err))
        }
    }

    d.running = false

    if len(errs) > 0 {
        return fmt.Errorf("shutdown errors: %v", errs)
    }
    return nil
}

// SetReady sets the daemon readiness status for health checks.
func (d *Daemon) SetReady(ready bool) {
    if d.health != nil {
        d.health.SetReady(ready)
    }
}

// HealthAddr returns the health server address, or empty if disabled.
func (d *Daemon) HealthAddr() string {
    if d.health != nil {
        return d.health.Addr()
    }
    return ""
}

// --- Convenience Functions ---

// Run blocks until context is cancelled or signal received.
// Simple helper for daemon mode without advanced features.
//
//	cli.Init(cli.Options{AppName: "myapp"})
//	defer cli.Shutdown()
//	cli.Run(cli.Context())
func Run(ctx context.Context) error {
    mustInit()
    <-ctx.Done()
    return ctx.Err()
}

// RunWithTimeout wraps Run with a graceful shutdown timeout.
// The returned function should be deferred to replace cli.Shutdown().
//
//	cli.Init(cli.Options{AppName: "myapp"})
//	shutdown := cli.RunWithTimeout(30 * time.Second)
//	defer shutdown()
//	cli.Run(cli.Context())
func RunWithTimeout(timeout time.Duration) func() {
    return func() {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()

        // Create done channel for shutdown completion
        done := make(chan struct{})
        go func() {
            Shutdown()
            close(done)
        }()

        select {
        case <-done:
            // Clean shutdown
        case <-ctx.Done():
            // Timeout - force exit
            LogWarn("shutdown timeout exceeded, forcing exit")
        }
    }
}
```
@@ -1,254 +0,0 @@

```go
package cli

import (
    "context"
    "net/http"
    "testing"
    "time"

    "forge.lthn.ai/core/go/pkg/io"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

func TestDetectMode(t *testing.T) {
    t.Run("daemon mode from env", func(t *testing.T) {
        t.Setenv("CORE_DAEMON", "1")
        assert.Equal(t, ModeDaemon, DetectMode())
    })

    t.Run("mode string", func(t *testing.T) {
        assert.Equal(t, "interactive", ModeInteractive.String())
        assert.Equal(t, "pipe", ModePipe.String())
        assert.Equal(t, "daemon", ModeDaemon.String())
        assert.Equal(t, "unknown", Mode(99).String())
    })
}

func TestPIDFile(t *testing.T) {
    t.Run("acquire and release", func(t *testing.T) {
        m := io.NewMockMedium()
        pidPath := "/tmp/test.pid"

        pid := NewPIDFile(m, pidPath)

        // Acquire should succeed
        err := pid.Acquire()
        require.NoError(t, err)

        // File should exist with our PID
        data, err := m.Read(pidPath)
        require.NoError(t, err)
        assert.NotEmpty(t, data)

        // Release should remove file
        err = pid.Release()
        require.NoError(t, err)

        assert.False(t, m.Exists(pidPath))
    })

    t.Run("stale pid file", func(t *testing.T) {
        m := io.NewMockMedium()
        pidPath := "/tmp/stale.pid"

        // Write a stale PID (non-existent process)
        err := m.Write(pidPath, "999999999")
        require.NoError(t, err)

        pid := NewPIDFile(m, pidPath)

        // Should acquire successfully (stale PID removed)
        err = pid.Acquire()
        require.NoError(t, err)

        err = pid.Release()
        require.NoError(t, err)
    })

    t.Run("creates parent directory", func(t *testing.T) {
        m := io.NewMockMedium()
        pidPath := "/tmp/subdir/nested/test.pid"

        pid := NewPIDFile(m, pidPath)

        err := pid.Acquire()
        require.NoError(t, err)

        assert.True(t, m.Exists(pidPath))

        err = pid.Release()
        require.NoError(t, err)
    })

    t.Run("path getter", func(t *testing.T) {
        m := io.NewMockMedium()
        pid := NewPIDFile(m, "/tmp/test.pid")
        assert.Equal(t, "/tmp/test.pid", pid.Path())
    })
}

func TestHealthServer(t *testing.T) {
    t.Run("health and ready endpoints", func(t *testing.T) {
        hs := NewHealthServer("127.0.0.1:0") // Random port

        err := hs.Start()
        require.NoError(t, err)
        defer func() { _ = hs.Stop(context.Background()) }()

        addr := hs.Addr()
        require.NotEmpty(t, addr)

        // Health should be OK
        resp, err := http.Get("http://" + addr + "/health")
        require.NoError(t, err)
        assert.Equal(t, http.StatusOK, resp.StatusCode)
        _ = resp.Body.Close()

        // Ready should be OK by default
        resp, err = http.Get("http://" + addr + "/ready")
        require.NoError(t, err)
        assert.Equal(t, http.StatusOK, resp.StatusCode)
        _ = resp.Body.Close()

        // Set not ready
        hs.SetReady(false)

        resp, err = http.Get("http://" + addr + "/ready")
        require.NoError(t, err)
        assert.Equal(t, http.StatusServiceUnavailable, resp.StatusCode)
        _ = resp.Body.Close()
    })

    t.Run("with health checks", func(t *testing.T) {
        hs := NewHealthServer("127.0.0.1:0")

        healthy := true
        hs.AddCheck(func() error {
            if !healthy {
                return assert.AnError
            }
            return nil
        })

        err := hs.Start()
        require.NoError(t, err)
        defer func() { _ = hs.Stop(context.Background()) }()

        addr := hs.Addr()

        // Should be healthy
        resp, err := http.Get("http://" + addr + "/health")
        require.NoError(t, err)
        assert.Equal(t, http.StatusOK, resp.StatusCode)
        _ = resp.Body.Close()

        // Make unhealthy
        healthy = false

        resp, err = http.Get("http://" + addr + "/health")
        require.NoError(t, err)
        assert.Equal(t, http.StatusServiceUnavailable, resp.StatusCode)
        _ = resp.Body.Close()
    })
}

func TestDaemon(t *testing.T) {
    t.Run("start and stop", func(t *testing.T) {
        m := io.NewMockMedium()
        pidPath := "/tmp/test.pid"

        d := NewDaemon(DaemonOptions{
            Medium:          m,
            PIDFile:         pidPath,
            HealthAddr:      "127.0.0.1:0",
            ShutdownTimeout: 5 * time.Second,
        })

        err := d.Start()
        require.NoError(t, err)

        // Health server should be running
        addr := d.HealthAddr()
        require.NotEmpty(t, addr)

        resp, err := http.Get("http://" + addr + "/health")
        require.NoError(t, err)
        assert.Equal(t, http.StatusOK, resp.StatusCode)
        _ = resp.Body.Close()

        // Stop should succeed
        err = d.Stop()
        require.NoError(t, err)

        // PID file should be removed
        assert.False(t, m.Exists(pidPath))
    })

    t.Run("double start fails", func(t *testing.T) {
        d := NewDaemon(DaemonOptions{
            HealthAddr: "127.0.0.1:0",
        })

        err := d.Start()
        require.NoError(t, err)
        defer func() { _ = d.Stop() }()

        err = d.Start()
        assert.Error(t, err)
        assert.Contains(t, err.Error(), "already running")
    })

    t.Run("run without start fails", func(t *testing.T) {
        d := NewDaemon(DaemonOptions{})

        ctx, cancel := context.WithCancel(context.Background())
        cancel()

        err := d.Run(ctx)
        assert.Error(t, err)
        assert.Contains(t, err.Error(), "not started")
    })

    t.Run("set ready", func(t *testing.T) {
        d := NewDaemon(DaemonOptions{
            HealthAddr: "127.0.0.1:0",
        })

        err := d.Start()
        require.NoError(t, err)
        defer func() { _ = d.Stop() }()

        addr := d.HealthAddr()

        // Initially ready
        resp, _ := http.Get("http://" + addr + "/ready")
        assert.Equal(t, http.StatusOK, resp.StatusCode)
        _ = resp.Body.Close()

        // Set not ready
        d.SetReady(false)

        resp, _ = http.Get("http://" + addr + "/ready")
        assert.Equal(t, http.StatusServiceUnavailable, resp.StatusCode)
        _ = resp.Body.Close()
    })

    t.Run("no health addr returns empty", func(t *testing.T) {
        d := NewDaemon(DaemonOptions{})
        assert.Empty(t, d.HealthAddr())
    })

    t.Run("default shutdown timeout", func(t *testing.T) {
        d := NewDaemon(DaemonOptions{})
        assert.Equal(t, 30*time.Second, d.opts.ShutdownTimeout)
    })
}

func TestRunWithTimeout(t *testing.T) {
    t.Run("creates shutdown function", func(t *testing.T) {
        // Just test that it returns a function
        shutdown := RunWithTimeout(100 * time.Millisecond)
        assert.NotNil(t, shutdown)
    })
}
```
@@ -1,162 +0,0 @@

```go
package cli

import (
    "errors"
    "fmt"
    "os"

    "forge.lthn.ai/core/go/pkg/i18n"
)

// ─────────────────────────────────────────────────────────────────────────────
// Error Creation (replace fmt.Errorf)
// ─────────────────────────────────────────────────────────────────────────────

// Err creates a new error from a format string.
// This is a direct replacement for fmt.Errorf.
func Err(format string, args ...any) error {
    return fmt.Errorf(format, args...)
}

// Wrap wraps an error with a message.
// Returns nil if err is nil.
//
//	return cli.Wrap(err, "load config") // "load config: <original error>"
func Wrap(err error, msg string) error {
    if err == nil {
        return nil
    }
    return fmt.Errorf("%s: %w", msg, err)
}

// WrapVerb wraps an error using i18n grammar for "Failed to verb subject".
// Uses the i18n.ActionFailed function for proper grammar composition.
// Returns nil if err is nil.
//
//	return cli.WrapVerb(err, "load", "config") // "Failed to load config: <original error>"
func WrapVerb(err error, verb, subject string) error {
    if err == nil {
        return nil
    }
    msg := i18n.ActionFailed(verb, subject)
    return fmt.Errorf("%s: %w", msg, err)
}

// WrapAction wraps an error using i18n grammar for "Failed to verb".
// Uses the i18n.ActionFailed function for proper grammar composition.
// Returns nil if err is nil.
//
//	return cli.WrapAction(err, "connect") // "Failed to connect: <original error>"
func WrapAction(err error, verb string) error {
    if err == nil {
        return nil
    }
    msg := i18n.ActionFailed(verb, "")
    return fmt.Errorf("%s: %w", msg, err)
}

// ─────────────────────────────────────────────────────────────────────────────
// Error Helpers
// ─────────────────────────────────────────────────────────────────────────────

// Is reports whether any error in err's tree matches target.
// This is a re-export of errors.Is for convenience.
func Is(err, target error) bool {
    return errors.Is(err, target)
}

// As finds the first error in err's tree that matches target.
// This is a re-export of errors.As for convenience.
func As(err error, target any) bool {
    return errors.As(err, target)
}

// Join returns an error that wraps the given errors.
// This is a re-export of errors.Join for convenience.
func Join(errs ...error) error {
    return errors.Join(errs...)
}

// ExitError represents an error that should cause the CLI to exit with a specific code.
type ExitError struct {
    Code int
    Err  error
}

func (e *ExitError) Error() string {
    if e.Err == nil {
        return ""
    }
    return e.Err.Error()
}

func (e *ExitError) Unwrap() error {
    return e.Err
}

// Exit creates a new ExitError with the given code and error.
// Use this to return an error from a command with a specific exit code.
func Exit(code int, err error) error {
    if err == nil {
        return nil
    }
    return &ExitError{Code: code, Err: err}
}

// ─────────────────────────────────────────────────────────────────────────────
// Fatal Functions (Deprecated - return error from command instead)
// ─────────────────────────────────────────────────────────────────────────────

// Fatal prints an error message to stderr, logs it, and exits with code 1.
//
// Deprecated: return an error from the command instead.
func Fatal(err error) {
    if err != nil {
        LogError("Fatal error", "err", err)
        fmt.Fprintln(os.Stderr, ErrorStyle.Render(Glyph(":cross:")+" "+err.Error()))
        os.Exit(1)
    }
}

// Fatalf prints a formatted error message to stderr, logs it, and exits with code 1.
//
// Deprecated: return an error from the command instead.
func Fatalf(format string, args ...any) {
    msg := fmt.Sprintf(format, args...)
    LogError("Fatal error", "msg", msg)
    fmt.Fprintln(os.Stderr, ErrorStyle.Render(Glyph(":cross:")+" "+msg))
    os.Exit(1)
}

// FatalWrap prints a wrapped error message to stderr, logs it, and exits with code 1.
// Does nothing if err is nil.
//
// Deprecated: return an error from the command instead.
//
//	cli.FatalWrap(err, "load config") // Prints "✗ load config: <error>" and exits
func FatalWrap(err error, msg string) {
    if err == nil {
        return
    }
    LogError("Fatal error", "msg", msg, "err", err)
    fullMsg := fmt.Sprintf("%s: %v", msg, err)
    fmt.Fprintln(os.Stderr, ErrorStyle.Render(Glyph(":cross:")+" "+fullMsg))
    os.Exit(1)
}

// FatalWrapVerb prints a wrapped error using i18n grammar to stderr, logs it, and exits with code 1.
// Does nothing if err is nil.
//
// Deprecated: return an error from the command instead.
//
//	cli.FatalWrapVerb(err, "load", "config") // Prints "✗ Failed to load config: <error>" and exits
func FatalWrapVerb(err error, verb, subject string) {
    if err == nil {
        return
    }
    msg := i18n.ActionFailed(verb, subject)
    LogError("Fatal error", "msg", msg, "err", err, "verb", verb, "subject", subject)
    fullMsg := fmt.Sprintf("%s: %v", msg, err)
    fmt.Fprintln(os.Stderr, ErrorStyle.Render(Glyph(":cross:")+" "+fullMsg))
    os.Exit(1)
}
```
@@ -1,92 +0,0 @@

```go
package cli

import (
    "bytes"
    "unicode"
)

// GlyphTheme defines which symbols to use.
type GlyphTheme int

const (
    // ThemeUnicode uses standard Unicode symbols.
    ThemeUnicode GlyphTheme = iota
    // ThemeEmoji uses Emoji symbols.
    ThemeEmoji
    // ThemeASCII uses ASCII fallback symbols.
    ThemeASCII
)

var currentTheme = ThemeUnicode

// UseUnicode switches the glyph theme to Unicode.
func UseUnicode() { currentTheme = ThemeUnicode }

// UseEmoji switches the glyph theme to Emoji.
func UseEmoji() { currentTheme = ThemeEmoji }

// UseASCII switches the glyph theme to ASCII and disables colors.
func UseASCII() {
    currentTheme = ThemeASCII
    SetColorEnabled(false)
}

func glyphMap() map[string]string {
    switch currentTheme {
    case ThemeEmoji:
        return glyphMapEmoji
    case ThemeASCII:
        return glyphMapASCII
    default:
        return glyphMapUnicode
    }
}

// Glyph converts a shortcode (e.g. ":check:") to its symbol based on the current theme.
func Glyph(code string) string {
    if sym, ok := glyphMap()[code]; ok {
        return sym
    }
    return code
}

func compileGlyphs(x string) string {
    if x == "" {
        return ""
    }
    input := bytes.NewBufferString(x)
    output := bytes.NewBufferString("")

    for {
        r, _, err := input.ReadRune()
        if err != nil {
            break
        }
        if r == ':' {
            output.WriteString(replaceGlyph(input))
        } else {
            output.WriteRune(r)
        }
    }
    return output.String()
}

func replaceGlyph(input *bytes.Buffer) string {
    code := bytes.NewBufferString(":")
    for {
        r, _, err := input.ReadRune()
        if err != nil {
            return code.String()
        }
        if r == ':' && code.Len() == 1 {
            return code.String() + replaceGlyph(input)
        }
        code.WriteRune(r)
        if unicode.IsSpace(r) {
            return code.String()
        }
        if r == ':' {
            return Glyph(code.String())
        }
    }
}
```
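The shortcode scanner in the removed file above can be exercised in isolation: this self-contained version substitutes a hard-coded two-entry map for the theme tables (`glyphMapUnicode` and friends), which live elsewhere in the package, and renames the functions to avoid suggesting they are the package API:

```go
package main

import (
    "bytes"
    "fmt"
    "unicode"
)

// Stand-in for the theme glyph tables.
var glyphs = map[string]string{
    ":check:": "✓",
    ":cross:": "✗",
}

// glyph resolves a shortcode, passing unknown codes through unchanged.
func glyph(code string) string {
    if sym, ok := glyphs[code]; ok {
        return sym
    }
    return code
}

// compile scans the input rune by rune; ':' hands off to replace.
func compile(x string) string {
    input := bytes.NewBufferString(x)
    var output bytes.Buffer
    for {
        r, _, err := input.ReadRune()
        if err != nil {
            break
        }
        if r == ':' {
            output.WriteString(replace(input))
        } else {
            output.WriteRune(r)
        }
    }
    return output.String()
}

// replace consumes runes after an opening ':' until the shortcode is
// closed, broken by whitespace, or the input ends.
func replace(input *bytes.Buffer) string {
    code := bytes.NewBufferString(":")
    for {
        r, _, err := input.ReadRune()
        if err != nil {
            return code.String()
        }
        if r == ':' && code.Len() == 1 {
            return code.String() + replace(input) // "::" restarts the scan
        }
        code.WriteRune(r)
        if unicode.IsSpace(r) {
            return code.String() // whitespace: not a shortcode after all
        }
        if r == ':' {
            return glyph(code.String())
        }
    }
}

func main() {
    fmt.Println(compile("build :check: done, :cross: failed, :other: kept"))
    // → build ✓ done, ✗ failed, :other: kept
}
```

Note the whitespace check is what lets literal colons in prose (e.g. `"a : b"`) survive untouched.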
Some files were not shown because too many files have changed in this diff.