refactor: extract remaining pkg/ packages to standalone modules
- pkg/session → go-session
- pkg/ws → go-ws
- pkg/webview → go-webview
- pkg/workspace → go-io/workspace
- pkg/lab → lthn/lem/pkg/lab
- pkg/build deleted (empty dirs)
- lem-chat moved to lthn/LEM
- internal/core-ide + cmd/core-ide deleted (Wails artifacts, source in core/ide)
- internal/cmd deleted (Wails updater artifacts)
- Taskfile.yaml deleted (stale Wails duplicate)

pkg/ now contains only framework + log (stays).

Co-Authored-By: Virgil <virgil@lethean.io>
parent ef5c83c04e · commit e920397741
113 changed files with 28 additions and 20424 deletions
@ -1,15 +0,0 @@
# CodeRabbit Configuration
# Manual trigger only: @coderabbitai review

reviews:
  auto_review:
    enabled: false
  review_status: false

  path_instructions:
    - path: "cmd/**"
      instructions: "CLI command code - check for proper cobra usage and flag handling"
    - path: "pkg/**"
      instructions: "Library code - ensure good API design and documentation"
    - path: "internal/**"
      instructions: "Internal packages - check for proper encapsulation"
@ -1,10 +0,0 @@
# Gitleaks configuration for host-uk/core
# Test fixtures contain private keys for cryptographic testing — not real secrets.

[allowlist]
description = "Test fixture allowlist"
paths = [
  '''pkg/crypt/pgp/pgp_test\.go''',
  '''pkg/crypt/rsa/rsa_test\.go''',
  '''pkg/crypt/openpgp/test_util\.go''',
]
@ -1,35 +0,0 @@
# Contributing

Thank you for your interest in contributing!

## Requirements

- **Go Version**: 1.26 or higher is required.
- **Tools**: `golangci-lint` and `task` (Taskfile.dev) are recommended.

## Development Workflow

1. **Testing**: Ensure all tests pass before submitting changes.
   ```bash
   go test ./...
   ```
2. **Code Style**: All code must follow standard Go formatting.
   ```bash
   gofmt -w .
   go vet ./...
   ```
3. **Linting**: We use `golangci-lint` to maintain code quality.
   ```bash
   golangci-lint run ./...
   ```

## Commit Message Format

We follow the [Conventional Commits](https://www.conventionalcommits.org/) specification:

- `feat`: A new feature
- `fix`: A bug fix
- `docs`: Documentation changes
- `refactor`: A code change that neither fixes a bug nor adds a feature
- `chore`: Changes to the build process or auxiliary tools and libraries

Example: `feat: add new endpoint for health check`

## License

By contributing to this project, you agree that your contributions will be licensed under the **European Union Public Licence (EUPL-1.2)**.
GEMINI.md (55 lines)
@ -1,55 +0,0 @@
# GEMINI.md

This file provides guidance for agentic interactions within this repository, specifically for Gemini and other MCP-compliant agents.

## Agentic Context & MCP

This project is built with an **Agentic** design philosophy. It is not exclusive to any single LLM provider (like Claude).

- **MCP Support**: The system is designed to leverage the Model Context Protocol (MCP) to provide rich context and tools to agents.
- **Developer Image**: You are running within a standardized developer image (`host-uk/core` dev environment), ensuring consistent tooling and configuration.

## Core CLI (Agent Interface)

The `core` command is the primary interface for agents to manage the project. Agents should **always** prefer `core` commands over raw shell commands (like `go test`, `php artisan`, etc.).

### Key Commands for Agents

| Task | Command | Notes |
|------|---------|-------|
| **Health Check** | `core doctor` | Verify tools and environment |
| **Repo Status** | `core dev health` | Quick summary of all repos |
| **Work Status** | `core dev work --status` | Detailed dirty/ahead status |
| **Run Tests** | `core go test` | Run Go tests with correct flags |
| **Coverage** | `core go cov` | Generate coverage report |
| **Build** | `core build` | Build the project safely |
| **Search Code** | `core pkg search` | Find packages/repos |

## Project Architecture

Core is a Web3 framework written in Go using Wails v3.

### Core Framework

- **Services**: Managed via dependency injection (`ServiceFor[T]()`).
- **Lifecycle**: `OnStartup` and `OnShutdown` hooks.
- **IPC**: Message-passing system for service communication.
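The service pattern described above can be sketched in a few lines of Go. Only the `ServiceFor[T]()` name and the `OnStartup`/`OnShutdown` hooks come from the text; the registry internals, `Register`, and the `Logger` type are assumptions for illustration, not the framework's actual implementation:

```go
package main

import (
	"fmt"
	"reflect"
)

// registry maps a service's type to its instance. This is a minimal
// sketch of a type-keyed DI container, not the real framework code.
var registry = map[reflect.Type]any{}

// Register stores a service instance keyed by its concrete type.
func Register[T any](svc T) {
	registry[reflect.TypeOf((*T)(nil)).Elem()] = svc
}

// ServiceFor resolves a previously registered service by type,
// mirroring the ServiceFor[T]() shape mentioned above.
func ServiceFor[T any]() (T, bool) {
	v, ok := registry[reflect.TypeOf((*T)(nil)).Elem()]
	if !ok {
		var zero T
		return zero, false
	}
	return v.(T), true
}

// Startable mirrors the OnStartup lifecycle hook.
type Startable interface{ OnStartup() error }

// Logger is a hypothetical service used only for this example.
type Logger struct{ prefix string }

func (l *Logger) OnStartup() error { fmt.Println(l.prefix, "started"); return nil }
func (l *Logger) Log(msg string)   { fmt.Println(l.prefix, msg) }

func main() {
	Register(&Logger{prefix: "[core]"})
	if svc, ok := ServiceFor[*Logger](); ok {
		svc.OnStartup() // lifecycle hook fires before normal use
		svc.Log("hello")
	}
}
```

Keying the registry by `reflect.Type` keeps lookups type-safe at the call site while allowing heterogeneous services in one map.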
### Development Workflow

1. **Check State**: `core dev work --status`
2. **Make Changes**: Modify code, add tests.
3. **Verify**: `core go test` (or `core php test` for PHP components).
4. **Commit**: `core dev commit` (or standard git if automated).
5. **Push**: `core dev push` (handles multiple repos).

## Testing Standards

- **Suffix Pattern**:
  - `_Good`: Happy path
  - `_Bad`: Expected errors
  - `_Ugly`: Edge cases/panics
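The suffix convention above can be illustrated with a hypothetical `ParseConfig` helper (not part of the framework). In a real suite the three cases would live in `TestParseConfig_Good`, `TestParseConfig_Bad`, and `TestParseConfig_Ugly`; here they are collapsed into one runnable sketch:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// ParseConfig is a hypothetical key=value parser used only to
// illustrate the _Good/_Bad/_Ugly naming convention.
func ParseConfig(s string) (map[string]string, error) {
	if s == "" {
		return nil, errors.New("empty input")
	}
	out := map[string]string{}
	for _, line := range strings.Split(s, "\n") {
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			return nil, fmt.Errorf("malformed line %q", line)
		}
		out[strings.TrimSpace(k)] = strings.TrimSpace(v)
	}
	return out, nil
}

func main() {
	// _Good: happy path — would be TestParseConfig_Good.
	got, err := ParseConfig("host = localhost")
	fmt.Println(got["host"], err == nil)

	// _Bad: expected error — would be TestParseConfig_Bad.
	_, err = ParseConfig("")
	fmt.Println(err != nil)

	// _Ugly: malformed edge case — would be TestParseConfig_Ugly.
	_, err = ParseConfig("no-equals-sign")
	fmt.Println(err != nil)
}
```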
## Go Workspace

The project uses Go workspaces (`go.work`). Always run `core go work sync` after modifying modules.
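As a sketch, a `go.work` tying the main module to sibling checkouts of the extracted modules might look like the following; the relative paths are illustrative assumptions, not taken from this diff:

```
go 1.26.0

use (
	.          // forge.lthn.ai/core/go
	../go-io   // sibling checkout of an extracted module
	../go-log
)
```

After editing this file (or any module's go.mod), `core go work sync` keeps the workspace's dependency versions consistent.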
@ -1,6 +0,0 @@
version: '3'

tasks:
  build:
    cmds:
      - go build -o build/bin/core cmd/app/main.go

go.mod (26 changes)
@ -3,33 +3,22 @@ module forge.lthn.ai/core/go
go 1.26.0

require (
	forge.lthn.ai/Snider/Borg v0.2.1
	forge.lthn.ai/core/cli v0.1.0
	forge.lthn.ai/core/go-crypt v0.1.0
	forge.lthn.ai/core/go-devops v0.1.0
	forge.lthn.ai/core/go-io v0.0.1
	forge.lthn.ai/core/cli v0.1.1
	forge.lthn.ai/core/go-crypt v0.1.2
	forge.lthn.ai/core/go-devops v0.1.2
	forge.lthn.ai/core/go-i18n v0.1.1
	forge.lthn.ai/core/go-io v0.0.4
	forge.lthn.ai/core/go-log v0.0.1
	github.com/aws/aws-sdk-go-v2 v1.41.3
	github.com/aws/aws-sdk-go-v2/service/s3 v1.96.4
	github.com/gorilla/websocket v1.5.3
	github.com/spf13/viper v1.21.0
	github.com/stretchr/testify v1.11.1
	golang.org/x/crypto v0.48.0
	gopkg.in/yaml.v3 v3.0.1
	modernc.org/sqlite v1.46.1
)

require (
	github.com/ProtonMail/go-crypto v1.3.0 // indirect
	github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.6 // indirect
	github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.19 // indirect
	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.19 // indirect
	github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.20 // indirect
	github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.6 // indirect
	github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.11 // indirect
	github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.19 // indirect
	github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.19 // indirect
	github.com/aws/smithy-go v1.24.2 // indirect
	forge.lthn.ai/core/go-inference v0.1.1 // indirect
	github.com/ProtonMail/go-crypto v1.4.0 // indirect
	github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
	github.com/charmbracelet/bubbletea v1.3.10 // indirect
	github.com/charmbracelet/colorprofile v0.4.2 // indirect

@ -67,6 +56,7 @@ require (
	github.com/subosito/gotenv v1.6.0 // indirect
	github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
	go.yaml.in/yaml/v3 v3.0.4 // indirect
	golang.org/x/crypto v0.48.0 // indirect
	golang.org/x/exp v0.0.0-20260218203240-3dfff04db8fa // indirect
	golang.org/x/sys v0.41.0 // indirect
	golang.org/x/term v0.40.0 // indirect
go.sum (52 changes)
@ -1,39 +1,27 @@
forge.lthn.ai/Snider/Borg v0.2.1 h1:Uf/YtUJLL8jlxTCjvP4J+5GHe3LLeALGtbh7zj8d8Qc=
forge.lthn.ai/Snider/Borg v0.2.1/go.mod h1:MVfolb7F6/A2LOIijcbBhWImu5db5NSMcSjvShMoMCA=
forge.lthn.ai/core/cli v0.1.0 h1:2XRiEMVzUElnQlZnHYDyfKIKQVPcCzGuYHlnz55GjsM=
forge.lthn.ai/core/cli v0.1.0/go.mod h1:mZ7dzccfzo0BP2dE7Mwuw9dXuIowiEd1G5ZGMoLuxVc=
forge.lthn.ai/core/go-crypt v0.1.0 h1:92gwdQi7iAwktpvZhL/8Cu+QS6xKCtGP4FJfyInPGnw=
forge.lthn.ai/core/go-crypt v0.1.0/go.mod h1:zVAgx6ZiGtC+dbX4R/VKvEPqsEqjyuLl4gQZH9SXBUw=
forge.lthn.ai/core/go-devops v0.1.0 h1:xT3J//gilwVz15ju63xhg/Lz700cOYjqQkRWhTZDHLk=
forge.lthn.ai/core/go-devops v0.1.0/go.mod h1:V5/YaRsrDsYlSnCCJXKX7h1zSbaGyRdRQApPF5XwGAo=
forge.lthn.ai/core/go-io v0.0.1 h1:N/GCl6Asusfr4gs53JZixJVtqcnerQ6GcxSN8F8iJXY=
forge.lthn.ai/core/go-io v0.0.1/go.mod h1:l+gG/G5TMIOTG8G7y0dg4fh1a7Suy8wCYVwsz4duV7M=
forge.lthn.ai/core/cli v0.1.1 h1:AEefSo0ydYV1bZAbUgxsg1mi/llnC3+jqkjR/PyGdj4=
forge.lthn.ai/core/cli v0.1.1/go.mod h1:gST3hY7vyrnlpLYtkRAQU2FyPxJBxLD1xa4+/KPOhn8=
forge.lthn.ai/core/go-crypt v0.1.2 h1:MpVOX9wu0pBTw2+qsExZy2J5n6lo1LjgwrOMQmHTKgc=
forge.lthn.ai/core/go-crypt v0.1.2/go.mod h1:1nD3bQ2NyK5iM2aCd+mi/+TTWwHEp+P/qf9tXLAUPuw=
forge.lthn.ai/core/go-devops v0.1.2 h1:H3MgGxnfoydZVFZU2ZxvkIbmPMiKmAfUuGOohkbyQBc=
forge.lthn.ai/core/go-devops v0.1.2/go.mod h1:48QM3Qv94NbcGF0Y16k7Z8o/wCQXxKwNTrU3F3qUMlQ=
forge.lthn.ai/core/go-i18n v0.1.0 h1:F7JVSoVkZtzx9JfhpntM9z3iQm1vnuMUi/Zklhz8PCI=
forge.lthn.ai/core/go-i18n v0.1.0/go.mod h1:Q4xsrxuNCl/6NfMv1daria7t1RSiyy8ml+6jiPtUcBs=
forge.lthn.ai/core/go-i18n v0.1.1 h1:wxKLPAdITSqcdOqzgwb3yzUgMLdOFi3E5LdV9OBi7eg=
forge.lthn.ai/core/go-i18n v0.1.1/go.mod h1:AGdDRA+Bo67FsU2XGpZxHIGEo6sfos41k0zHoCJ6j4c=
forge.lthn.ai/core/go-inference v0.0.2 h1:aHjBkYyLKxLr9tbO4AvzzV/lsZueGq/jeo33SLh113k=
forge.lthn.ai/core/go-inference v0.0.2/go.mod h1:jfWz+IJX55wAH98+ic6FEqqGB6/P31CHlg7VY7pxREw=
forge.lthn.ai/core/go-inference v0.1.1 h1:uM3dtWitE4vvSCwA6CNPA2l0BRAjUNelENh7z58aecU=
forge.lthn.ai/core/go-inference v0.1.1/go.mod h1:jfWz+IJX55wAH98+ic6FEqqGB6/P31CHlg7VY7pxREw=
forge.lthn.ai/core/go-io v0.0.3 h1:TlhYpGTyjPgAlbEHyYrVSeUChZPhJXcLZ7D/8IbFqfI=
forge.lthn.ai/core/go-io v0.0.3/go.mod h1:ZlU9OQpsvNFNmTJoaHbFIkisZyc0eCq0p8znVWQLRf0=
forge.lthn.ai/core/go-io v0.0.4 h1:vXs3JTWquZKKG48Tik54DlzqP0WRJD9rnpn/D0GlRDk=
forge.lthn.ai/core/go-io v0.0.4/go.mod h1:ZlU9OQpsvNFNmTJoaHbFIkisZyc0eCq0p8znVWQLRf0=
forge.lthn.ai/core/go-log v0.0.1 h1:x/E6EfF9vixzqiLHQOl2KT25HyBcMc9qiBkomqVlpPg=
forge.lthn.ai/core/go-log v0.0.1/go.mod h1:r14MXKOD3LF/sI8XUJQhRk/SZHBE7jAFVuCfgkXoZPw=
github.com/ProtonMail/go-crypto v1.3.0 h1:ILq8+Sf5If5DCpHQp4PbZdS1J7HDFRXz/+xKBiRGFrw=
github.com/ProtonMail/go-crypto v1.3.0/go.mod h1:9whxjD8Rbs29b4XWbB8irEcE8KHMqaR2e7GWU1R+/PE=
github.com/aws/aws-sdk-go-v2 v1.41.3 h1:4kQ/fa22KjDt13QCy1+bYADvdgcxpfH18f0zP542kZA=
github.com/aws/aws-sdk-go-v2 v1.41.3/go.mod h1:mwsPRE8ceUUpiTgF7QmQIJ7lgsKUPQOUl3o72QBrE1o=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.6 h1:N4lRUXZpZ1KVEUn6hxtco/1d2lgYhNn1fHkkl8WhlyQ=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.6/go.mod h1:lyw7GFp3qENLh7kwzf7iMzAxDn+NzjXEAGjKS2UOKqI=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.19 h1:/sECfyq2JTifMI2JPyZ4bdRN77zJmr6SrS1eL3augIA=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.19/go.mod h1:dMf8A5oAqr9/oxOfLkC/c2LU/uMcALP0Rgn2BD5LWn0=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.19 h1:AWeJMk33GTBf6J20XJe6qZoRSJo0WfUhsMdUKhoODXE=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.19/go.mod h1:+GWrYoaAsV7/4pNHpwh1kiNLXkKaSoppxQq9lbH8Ejw=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.20 h1:qi3e/dmpdONhj1RyIZdi6DKKpDXS5Lb8ftr3p7cyHJc=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.20/go.mod h1:V1K+TeJVD5JOk3D9e5tsX2KUdL7BlB+FV6cBhdobN8c=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.6 h1:XAq62tBTJP/85lFD5oqOOe7YYgWxY9LvWq8plyDvDVg=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.6/go.mod h1:x0nZssQ3qZSnIcePWLvcoFisRXJzcTVvYpAAdYX8+GI=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.11 h1:BYf7XNsJMzl4mObARUBUib+j2tf0U//JAAtTnYqvqCw=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.11/go.mod h1:aEUS4WrNk/+FxkBZZa7tVgp4pGH+kFGW40Y8rCPqt5g=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.19 h1:X1Tow7suZk9UCJHE1Iw9GMZJJl0dAnKXXP1NaSDHwmw=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.19/go.mod h1:/rARO8psX+4sfjUQXp5LLifjUt8DuATZ31WptNJTyQA=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.19 h1:JnQeStZvPHFHeyky/7LbMlyQjUa+jIBj36OlWm0pzIk=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.19/go.mod h1:HGyasyHvYdFQeJhvDHfH7HXkHh57htcJGKDZ+7z+I24=
github.com/aws/aws-sdk-go-v2/service/s3 v1.96.4 h1:4ExZyubQ6LQQVuF2Qp9OsfEvsTdAWh5Gfwf6PgIdLdk=
github.com/aws/aws-sdk-go-v2/service/s3 v1.96.4/go.mod h1:NF3JcMGOiARAss1ld3WGORCw71+4ExDD2cbbdKS5PpA=
github.com/aws/smithy-go v1.24.2 h1:FzA3bu/nt/vDvmnkg+R8Xl46gmzEDam6mZ1hzmwXFng=
github.com/aws/smithy-go v1.24.2/go.mod h1:YE2RhdIuDbA5E5bTdciG9KrW3+TiEONeUWCqxX9i1Fc=
github.com/ProtonMail/go-crypto v1.4.0 h1:Zq/pbM3F5DFgJiMouxEdSVY44MVoQNEKp5d5QxIQceQ=
github.com/ProtonMail/go-crypto v1.4.0/go.mod h1:e1OaTyu5SYVrO9gKOEhTc+5UcXtTUa+P3uLudwcgPqo=
github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k=
github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8=
github.com/charmbracelet/bubbletea v1.3.10 h1:otUDHWMMzQSB0Pkc87rm691KZ3SWa4KUlvF9nRvCICw=
lem-chat/.gitignore (vendored)
@ -1 +0,0 @@
node_modules/

@ -1,34 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>LEM Chat</title>
  <style>
    * { margin: 0; padding: 0; box-sizing: border-box; }
    html, body { height: 100%; background: #111; }
    body {
      display: flex;
      align-items: center;
      justify-content: center;
      font-family: system-ui, -apple-system, sans-serif;
    }
    lem-chat {
      width: 720px;
      height: 85vh;
      max-height: 800px;
    }
    @media (max-width: 768px) {
      lem-chat { width: 100%; height: 100%; max-height: none; border-radius: 0; }
    }
  </style>
</head>
<body>
  <lem-chat
    endpoint="http://localhost:8090"
    model="local"
    max-tokens="2048"
  ></lem-chat>
  <script type="module" src="dist/lem-chat.js"></script>
</body>
</html>
lem-chat/package-lock.json (generated)

@ -1,515 +0,0 @@ (generated npm lockfile removed with the rest of lem-chat: name lem-chat 0.1.0, lockfileVersion 3, devDependencies esbuild 0.25.12 with its per-platform optional binary packages, and typescript 5.9.3)
@ -1,16 +0,0 @@
{
  "name": "lem-chat",
  "version": "0.1.0",
  "private": true,
  "license": "EUPL-1.2",
  "scripts": {
    "build": "esbuild src/lem-chat.ts --bundle --format=esm --outfile=dist/lem-chat.js",
    "watch": "esbuild src/lem-chat.ts --bundle --format=esm --outfile=dist/lem-chat.js --watch",
    "dev": "esbuild src/lem-chat.ts --bundle --format=esm --outfile=dist/lem-chat.js --watch --servedir=.",
    "typecheck": "tsc --noEmit"
  },
  "devDependencies": {
    "esbuild": "^0.25.0",
    "typescript": "^5.7.0"
  }
}
@ -1,195 +0,0 @@
import { chatStyles } from './styles';
import type { ChatMessage, ChatCompletionChunk, LemSendDetail } from './types';
import { LemMessages } from './lem-messages';
import { LemInput } from './lem-input';
import './lem-message';

export class LemChat extends HTMLElement {
  private shadow!: ShadowRoot;
  private messages!: LemMessages;
  private input!: LemInput;
  private statusEl!: HTMLDivElement;
  private history: ChatMessage[] = [];
  private abortController: AbortController | null = null;

  static get observedAttributes(): string[] {
    return ['endpoint', 'model', 'system-prompt', 'max-tokens', 'temperature'];
  }

  constructor() {
    super();
    this.shadow = this.attachShadow({ mode: 'open' });
  }

  connectedCallback(): void {
    const style = document.createElement('style');
    style.textContent = chatStyles;

    const header = document.createElement('div');
    header.className = 'header';

    this.statusEl = document.createElement('div');
    this.statusEl.className = 'header-status';

    const icon = document.createElement('div');
    icon.className = 'header-icon';
    icon.textContent = 'L';

    const title = document.createElement('div');
    title.className = 'header-title';
    title.textContent = 'LEM';

    const modelLabel = document.createElement('div');
    modelLabel.className = 'header-model';
    modelLabel.textContent = this.getAttribute('model') || 'local';

    header.appendChild(this.statusEl);
    header.appendChild(icon);
    header.appendChild(title);
    header.appendChild(modelLabel);

    this.messages = document.createElement('lem-messages') as LemMessages;
    this.input = document.createElement('lem-input') as LemInput;

    this.shadow.appendChild(style);
    this.shadow.appendChild(header);
    this.shadow.appendChild(this.messages);
    this.shadow.appendChild(this.input);

    this.addEventListener('lem-send', ((e: Event) => {
      const detail = (e as CustomEvent<LemSendDetail>).detail;
      this.handleSend(detail.text);
    }) as EventListener);

    const systemPrompt = this.getAttribute('system-prompt');
    if (systemPrompt) {
      this.history.push({ role: 'system', content: systemPrompt });
    }

    this.checkConnection();
    requestAnimationFrame(() => this.input.focus());
  }

  disconnectedCallback(): void {
    this.abortController?.abort();
  }

  get endpoint(): string {
    const attr = this.getAttribute('endpoint');
    if (!attr) return window.location.origin;
    return attr;
  }

  get model(): string {
    return this.getAttribute('model') || '';
  }

  get maxTokens(): number {
    const val = this.getAttribute('max-tokens');
    return val ? parseInt(val, 10) : 2048;
  }

  get temperature(): number {
    const val = this.getAttribute('temperature');
    return val ? parseFloat(val) : 0.7;
  }

  private async checkConnection(): Promise<void> {
    try {
      const resp = await fetch(`${this.endpoint}/v1/models`, {
        signal: AbortSignal.timeout(3000),
      });
      this.statusEl.classList.toggle('disconnected', !resp.ok);
    } catch {
      this.statusEl.classList.add('disconnected');
    }
  }

  private async handleSend(text: string): Promise<void> {
    this.messages.addMessage('user', text);
    this.history.push({ role: 'user', content: text });

    const assistantMsg = this.messages.addMessage('assistant');
    assistantMsg.streaming = true;
    this.input.disabled = true;

    this.abortController?.abort();
    this.abortController = new AbortController();

    let fullResponse = '';

    try {
      const response = await fetch(`${this.endpoint}/v1/chat/completions`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        signal: this.abortController.signal,
        body: JSON.stringify({
          model: this.model,
          messages: this.history,
          max_tokens: this.maxTokens,
          temperature: this.temperature,
          stream: true,
        }),
      });

      if (!response.ok) {
        throw new Error(`Server error: ${response.status}`);
      }
      if (!response.body) {
        throw new Error('No response body');
      }

      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      let buffer = '';

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        buffer += decoder.decode(value, { stream: true });
        const lines = buffer.split('\n');
        buffer = lines.pop() || '';

        for (const line of lines) {
          if (!line.startsWith('data: ')) continue;
          const data = line.slice(6).trim();
          if (data === '[DONE]') continue;

          try {
            const chunk: ChatCompletionChunk = JSON.parse(data);
            const delta = chunk.choices?.[0]?.delta;
            if (delta?.content) {
              fullResponse += delta.content;
              assistantMsg.appendToken(delta.content);
              this.messages.scrollToBottom();
            }
          } catch {
            // skip malformed chunks
          }
        }
      }
    } catch (err) {
      if (err instanceof Error && err.name === 'AbortError') {
        // user-initiated abort — ignore
      } else {
        const errorText =
          err instanceof Error ? err.message : 'Connection failed';
        if (!fullResponse) {
          assistantMsg.text = `\u26A0\uFE0F ${errorText}`;
        }
        this.statusEl.classList.add('disconnected');
      }
    } finally {
      assistantMsg.streaming = false;
      this.input.disabled = false;
      this.input.focus();
      this.abortController = null;
      if (fullResponse) {
        this.history.push({ role: 'assistant', content: fullResponse });
      }
    }
  }
}

customElements.define('lem-chat', LemChat);
@ -1,110 +0,0 @@
import { inputStyles } from './styles';
import type { LemSendDetail } from './types';

export class LemInput extends HTMLElement {
  private shadow!: ShadowRoot;
  private textarea!: HTMLTextAreaElement;
  private sendBtn!: HTMLButtonElement;
  private _disabled = false;

  constructor() {
    super();
    this.shadow = this.attachShadow({ mode: 'open' });
  }

  connectedCallback(): void {
    const style = document.createElement('style');
    style.textContent = inputStyles;

    const wrapper = document.createElement('div');
    wrapper.className = 'input-wrapper';

    this.textarea = document.createElement('textarea');
    this.textarea.rows = 1;
    this.textarea.placeholder = 'Message LEM...';

    this.sendBtn = document.createElement('button');
    this.sendBtn.className = 'send-btn';
    this.sendBtn.type = 'button';
    this.sendBtn.disabled = true;
    this.sendBtn.appendChild(this.createSendIcon());

    wrapper.appendChild(this.textarea);
    wrapper.appendChild(this.sendBtn);
    this.shadow.appendChild(style);
    this.shadow.appendChild(wrapper);

    this.textarea.addEventListener('input', () => {
      this.textarea.style.height = 'auto';
      this.textarea.style.height =
        Math.min(this.textarea.scrollHeight, 120) + 'px';
      this.sendBtn.disabled =
        this._disabled || this.textarea.value.trim() === '';
    });

    this.textarea.addEventListener('keydown', (e: KeyboardEvent) => {
      if (e.key === 'Enter' && !e.shiftKey) {
        e.preventDefault();
        this.submit();
      }
    });

    this.sendBtn.addEventListener('click', () => this.submit());
  }

  private createSendIcon(): SVGSVGElement {
    const ns = 'http://www.w3.org/2000/svg';
    const svg = document.createElementNS(ns, 'svg');
    svg.setAttribute('viewBox', '0 0 24 24');
    svg.setAttribute('fill', 'none');
    svg.setAttribute('stroke', 'currentColor');
    svg.setAttribute('stroke-width', '2');
    svg.setAttribute('stroke-linecap', 'round');
    svg.setAttribute('stroke-linejoin', 'round');
    svg.setAttribute('width', '16');
    svg.setAttribute('height', '16');
    const line = document.createElementNS(ns, 'line');
    line.setAttribute('x1', '22');
    line.setAttribute('y1', '2');
    line.setAttribute('x2', '11');
    line.setAttribute('y2', '13');
    const polygon = document.createElementNS(ns, 'polygon');
    polygon.setAttribute('points', '22 2 15 22 11 13 2 9 22 2');
    svg.appendChild(line);
    svg.appendChild(polygon);
    return svg;
  }

  private submit(): void {
    const text = this.textarea.value.trim();
    if (!text || this._disabled) return;
    this.dispatchEvent(
      new CustomEvent<LemSendDetail>('lem-send', {
        bubbles: true,
        composed: true,
        detail: { text },
      })
    );
    this.textarea.value = '';
    this.textarea.style.height = 'auto';
    this.sendBtn.disabled = true;
    this.textarea.focus();
  }

  get disabled(): boolean {
    return this._disabled;
  }

  set disabled(value: boolean) {
    this._disabled = value;
    this.textarea.disabled = value;
    this.sendBtn.disabled = value || this.textarea.value.trim() === '';
    this.textarea.placeholder = value ? 'LEM is thinking...' : 'Message LEM...';
  }

  override focus(): void {
    this.textarea?.focus();
  }
}

customElements.define('lem-input', LemInput);
@ -1,154 +0,0 @@
import { messageStyles } from './styles';
import { renderMarkdown } from './markdown';

interface ThinkSplit {
  think: string | null;
  response: string;
}

export class LemMessage extends HTMLElement {
  private shadow!: ShadowRoot;
  private thinkPanel!: HTMLDivElement;
  private thinkContent!: HTMLDivElement;
  private thinkLabel!: HTMLDivElement;
  private contentEl!: HTMLDivElement;
  private cursorEl: HTMLSpanElement | null = null;
  private _text = '';
  private _streaming = false;
  private _thinkCollapsed = false;

  constructor() {
    super();
    this.shadow = this.attachShadow({ mode: 'open' });
  }

  connectedCallback(): void {
    const role = this.getAttribute('role') || 'user';

    const style = document.createElement('style');
    style.textContent = messageStyles;

    const bubble = document.createElement('div');
    bubble.className = 'bubble';

    const roleEl = document.createElement('div');
    roleEl.className = 'role';
    roleEl.textContent = role === 'assistant' ? 'LEM' : 'You';

    this.thinkPanel = document.createElement('div');
    this.thinkPanel.className = 'think-panel';
    this.thinkPanel.style.display = 'none';

    this.thinkLabel = document.createElement('div');
    this.thinkLabel.className = 'think-label';
    this.thinkLabel.textContent = '\u25BC reasoning';
    this.thinkLabel.addEventListener('click', () => {
      this._thinkCollapsed = !this._thinkCollapsed;
      this.thinkPanel.classList.toggle('collapsed', this._thinkCollapsed);
      this.thinkLabel.textContent = this._thinkCollapsed
        ? '\u25B6 reasoning'
        : '\u25BC reasoning';
    });

    this.thinkContent = document.createElement('div');
    this.thinkContent.className = 'think-content';
    this.thinkPanel.appendChild(this.thinkLabel);
    this.thinkPanel.appendChild(this.thinkContent);

    this.contentEl = document.createElement('div');
    this.contentEl.className = 'content';

    bubble.appendChild(roleEl);
    if (role === 'assistant') {
      bubble.appendChild(this.thinkPanel);
    }
    bubble.appendChild(this.contentEl);

    this.shadow.appendChild(style);
    this.shadow.appendChild(bubble);

    if (this._text) {
      this.updateContent();
    }
  }

  get text(): string {
    return this._text;
  }

  set text(value: string) {
    this._text = value;
    this.updateContent();
  }

  get streaming(): boolean {
    return this._streaming;
  }

  set streaming(value: boolean) {
    this._streaming = value;
    this.updateContent();
  }

  appendToken(token: string): void {
    this._text += token;
    this.updateContent();
  }

  /**
   * Splits text into think/response portions and renders each.
   *
   * Safety: renderMarkdown() escapes all HTML entities (& < > ") before any
   * inline formatting is applied. The source is the local MLX model output,
   * not arbitrary user HTML. Shadow DOM provides additional isolation.
   */
  private updateContent(): void {
    if (!this.contentEl) return;
    const { think, response } = this.splitThink(this._text);

    if (think !== null && this.thinkPanel) {
      this.thinkPanel.style.display = '';
      this.thinkContent.textContent = think;
    }

    // renderMarkdown() escapes all HTML before formatting — safe for innerHTML
    // within Shadow DOM isolation, sourced from local MLX model only
    const responseHtml = renderMarkdown(response);
    this.contentEl.innerHTML = responseHtml;

    if (this._streaming) {
      if (!this.cursorEl) {
        this.cursorEl = document.createElement('span');
        this.cursorEl.className = 'cursor';
      }
      if (think !== null && !this._text.includes('</think>')) {
        this.thinkContent.appendChild(this.cursorEl);
      } else {
        const lastChild = this.contentEl.lastElementChild || this.contentEl;
        lastChild.appendChild(this.cursorEl);
      }
    }
  }

  private splitThink(text: string): ThinkSplit {
    const thinkStart = text.indexOf('<think>');
    if (thinkStart === -1) {
      return { think: null, response: text };
    }
    const afterOpen = thinkStart + '<think>'.length;
    const thinkEnd = text.indexOf('</think>', afterOpen);
    if (thinkEnd === -1) {
      return {
        think: text.slice(afterOpen).trim(),
        response: text.slice(0, thinkStart).trim(),
      };
    }
    const thinkText = text.slice(afterOpen, thinkEnd).trim();
    const beforeThink = text.slice(0, thinkStart).trim();
    const afterThink = text.slice(thinkEnd + '</think>'.length).trim();
    const response = [beforeThink, afterThink].filter(Boolean).join('\n');
    return { think: thinkText, response };
  }
}

customElements.define('lem-message', LemMessage);
@ -1,70 +0,0 @@
import { messagesStyles } from './styles';
import type { LemMessage } from './lem-message';

export class LemMessages extends HTMLElement {
  private shadow!: ShadowRoot;
  private container!: HTMLDivElement;
  private emptyEl!: HTMLDivElement;
  private shouldAutoScroll = true;

  constructor() {
    super();
    this.shadow = this.attachShadow({ mode: 'open' });
  }

  connectedCallback(): void {
    const style = document.createElement('style');
    style.textContent = messagesStyles;

    this.container = document.createElement('div');

    this.emptyEl = document.createElement('div');
    this.emptyEl.className = 'empty';
    const emptyIcon = document.createElement('div');
    emptyIcon.className = 'empty-icon';
    emptyIcon.textContent = '\u2728';
    const emptyText = document.createElement('div');
    emptyText.className = 'empty-text';
    emptyText.textContent = 'Start a conversation';
    this.emptyEl.appendChild(emptyIcon);
    this.emptyEl.appendChild(emptyText);

    this.shadow.appendChild(style);
    this.shadow.appendChild(this.emptyEl);
    this.shadow.appendChild(this.container);

    this.addEventListener('scroll', () => {
      const threshold = 60;
      this.shouldAutoScroll =
        this.scrollHeight - this.scrollTop - this.clientHeight < threshold;
    });
  }

  addMessage(role: string, text?: string): LemMessage {
    this.emptyEl.style.display = 'none';
    const msg = document.createElement('lem-message') as LemMessage;
    msg.setAttribute('role', role);
    this.container.appendChild(msg);
    if (text) {
      msg.text = text;
    }
    this.scrollToBottom();
    return msg;
  }

  scrollToBottom(): void {
    if (this.shouldAutoScroll) {
      requestAnimationFrame(() => {
        this.scrollTop = this.scrollHeight;
      });
    }
  }

  clear(): void {
    this.container.replaceChildren();
    this.emptyEl.style.display = '';
    this.shouldAutoScroll = true;
  }
}

customElements.define('lem-messages', LemMessages);
@ -1,80 +0,0 @@
function escapeHtml(text: string): string {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

function parseInline(text: string): string {
  let result = escapeHtml(text);
  result = result.replace(/`([^`]+)`/g, '<code>$1</code>');
  result = result.replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>');
  result = result.replace(/__(.+?)__/g, '<strong>$1</strong>');
  result = result.replace(/(?<!\w)\*([^*]+)\*(?!\w)/g, '<em>$1</em>');
  result = result.replace(/(?<!\w)_([^_]+)_(?!\w)/g, '<em>$1</em>');
  return result;
}

function wrapParagraph(lines: string[]): string {
  const joined = lines.join('<br>');
  if (joined.startsWith('<pre')) return joined;
  return `<p>${joined}</p>`;
}

export function renderMarkdown(text: string): string {
  const lines = text.split('\n');
  const output: string[] = [];
  let inCodeBlock = false;
  let codeLines: string[] = [];
  let codeLang = '';

  for (const line of lines) {
    if (line.trimStart().startsWith('```')) {
      if (!inCodeBlock) {
        inCodeBlock = true;
        codeLang = line.trimStart().slice(3).trim();
        codeLines = [];
      } else {
        const langAttr = codeLang ? ` data-lang="${escapeHtml(codeLang)}"` : '';
        output.push(`<pre${langAttr}><code>${escapeHtml(codeLines.join('\n'))}</code></pre>`);
        inCodeBlock = false;
        codeLines = [];
        codeLang = '';
      }
      continue;
    }
    if (inCodeBlock) {
      codeLines.push(line);
      continue;
    }
    if (line.trim() === '') {
      output.push('');
      continue;
    }
    output.push(parseInline(line));
  }

  if (inCodeBlock) {
    const langAttr = codeLang ? ` data-lang="${escapeHtml(codeLang)}"` : '';
    output.push(`<pre${langAttr}><code>${escapeHtml(codeLines.join('\n'))}</code></pre>`);
  }

  const paragraphs: string[] = [];
  let current: string[] = [];
  for (const line of output) {
    if (line === '') {
      if (current.length > 0) {
        paragraphs.push(wrapParagraph(current));
        current = [];
      }
    } else {
      current.push(line);
    }
  }
  if (current.length > 0) {
    paragraphs.push(wrapParagraph(current));
  }

  return paragraphs.join('');
}
@ -1,325 +0,0 @@
export const chatStyles = `
:host {
  display: flex;
  flex-direction: column;
  background: var(--lem-bg, #1a1a1e);
  color: var(--lem-text, #e0e0e0);
  font-family: var(--lem-font, system-ui, -apple-system, sans-serif);
  font-size: 14px;
  line-height: 1.5;
  border-radius: 12px;
  overflow: hidden;
  border: 1px solid rgba(255, 255, 255, 0.08);
}

.header {
  display: flex;
  align-items: center;
  gap: 10px;
  padding: 14px 18px;
  background: rgba(255, 255, 255, 0.03);
  border-bottom: 1px solid rgba(255, 255, 255, 0.06);
  flex-shrink: 0;
}

.header-icon {
  width: 28px;
  height: 28px;
  border-radius: 8px;
  background: var(--lem-accent, #5865f2);
  display: flex;
  align-items: center;
  justify-content: center;
  font-size: 14px;
  font-weight: 700;
  color: #fff;
}

.header-title {
  font-size: 15px;
  font-weight: 600;
  color: var(--lem-text, #e0e0e0);
}

.header-model {
  font-size: 11px;
  color: rgba(255, 255, 255, 0.35);
  margin-left: auto;
  font-family: ui-monospace, SFMono-Regular, Menlo, monospace;
}

.header-status {
  width: 8px;
  height: 8px;
  border-radius: 50%;
  background: #43b581;
  flex-shrink: 0;
}

.header-status.disconnected {
  background: #f04747;
}
`;

export const messagesStyles = `
:host {
  display: block;
  flex: 1;
  overflow-y: auto;
  overflow-x: hidden;
  padding: 16px 0;
  scroll-behavior: smooth;
}

:host::-webkit-scrollbar {
  width: 6px;
}

:host::-webkit-scrollbar-track {
  background: transparent;
}

:host::-webkit-scrollbar-thumb {
  background: rgba(255, 255, 255, 0.12);
  border-radius: 3px;
}

.empty {
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
  height: 100%;
  gap: 12px;
  color: rgba(255, 255, 255, 0.25);
}

.empty-icon {
  font-size: 36px;
  opacity: 0.4;
}

.empty-text {
  font-size: 14px;
}
`;

export const messageStyles = `
:host {
  display: block;
  padding: 6px 18px;
}

:host([role="user"]) .bubble {
  background: var(--lem-msg-user, #2a2a3e);
  margin-left: 40px;
  border-radius: 12px 12px 4px 12px;
}

:host([role="assistant"]) .bubble {
  background: var(--lem-msg-assistant, #1e1e2a);
  margin-right: 40px;
  border-radius: 12px 12px 12px 4px;
}

.bubble {
  padding: 10px 14px;
  word-wrap: break-word;
  overflow-wrap: break-word;
}

.role {
  font-size: 11px;
  font-weight: 600;
  text-transform: uppercase;
  letter-spacing: 0.5px;
  margin-bottom: 4px;
  color: rgba(255, 255, 255, 0.35);
}

:host([role="assistant"]) .role {
  color: var(--lem-accent, #5865f2);
}

.content {
  color: var(--lem-text, #e0e0e0);
  line-height: 1.6;
}

.content p {
  margin: 0 0 8px 0;
}

.content p:last-child {
  margin-bottom: 0;
}

.content strong {
  font-weight: 600;
  color: #fff;
}

.content em {
  font-style: italic;
  color: rgba(255, 255, 255, 0.8);
}

.content code {
  font-family: ui-monospace, SFMono-Regular, Menlo, monospace;
  font-size: 12px;
  background: rgba(0, 0, 0, 0.3);
  padding: 2px 5px;
  border-radius: 4px;
  color: #e8a0bf;
}

.content pre {
  margin: 8px 0;
  padding: 12px;
  background: rgba(0, 0, 0, 0.35);
  border-radius: 8px;
  overflow-x: auto;
  border: 1px solid rgba(255, 255, 255, 0.06);
}

.content pre code {
  background: none;
  padding: 0;
  font-size: 12px;
  color: #c9d1d9;
  line-height: 1.5;
}

.think-panel {
  margin: 6px 0 8px;
  padding: 8px 12px;
  background: rgba(88, 101, 242, 0.06);
  border-left: 2px solid rgba(88, 101, 242, 0.3);
  border-radius: 0 6px 6px 0;
  font-size: 12px;
  color: rgba(255, 255, 255, 0.45);
  line-height: 1.5;
  max-height: 200px;
  overflow-y: auto;
}

.think-panel::-webkit-scrollbar {
  width: 4px;
}

.think-panel::-webkit-scrollbar-thumb {
  background: rgba(255, 255, 255, 0.1);
  border-radius: 2px;
}

.think-label {
  font-size: 10px;
  font-weight: 600;
  text-transform: uppercase;
  letter-spacing: 0.5px;
  color: rgba(88, 101, 242, 0.5);
  margin-bottom: 4px;
  cursor: pointer;
  user-select: none;
}

.think-label:hover {
  color: rgba(88, 101, 242, 0.7);
}

.think-panel.collapsed .think-content {
  display: none;
}

.cursor {
  display: inline-block;
  width: 7px;
  height: 16px;
  background: var(--lem-accent, #5865f2);
  border-radius: 1px;
  animation: blink 0.8s step-end infinite;
  vertical-align: text-bottom;
  margin-left: 2px;
}

@keyframes blink {
  50% { opacity: 0; }
}
`;

export const inputStyles = `
:host {
  display: block;
  padding: 12px 16px 16px;
  border-top: 1px solid rgba(255, 255, 255, 0.06);
  flex-shrink: 0;
}

.input-wrapper {
  display: flex;
  align-items: flex-end;
  gap: 10px;
  background: rgba(255, 255, 255, 0.05);
  border: 1px solid rgba(255, 255, 255, 0.08);
  border-radius: 12px;
  padding: 8px 12px;
  transition: border-color 0.15s;
}

.input-wrapper:focus-within {
  border-color: var(--lem-accent, #5865f2);
}

textarea {
  flex: 1;
  background: none;
  border: none;
  outline: none;
  color: var(--lem-text, #e0e0e0);
  font-family: inherit;
  font-size: 14px;
  line-height: 1.5;
  resize: none;
  max-height: 120px;
  min-height: 22px;
  padding: 0;
}

textarea::placeholder {
  color: rgba(255, 255, 255, 0.25);
}

.send-btn {
  background: var(--lem-accent, #5865f2);
  border: none;
  border-radius: 8px;
  color: #fff;
  width: 32px;
  height: 32px;
  cursor: pointer;
  display: flex;
  align-items: center;
  justify-content: center;
  flex-shrink: 0;
  transition: opacity 0.15s, transform 0.1s;
}

.send-btn:hover {
  opacity: 0.85;
}

.send-btn:active {
  transform: scale(0.95);
}

.send-btn:disabled {
  opacity: 0.3;
  cursor: default;
  transform: none;
}

.send-btn svg {
  width: 16px;
  height: 16px;
}
`;
@ -1,31 +0,0 @@
export interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

export interface ChatCompletionRequest {
  model: string;
  messages: ChatMessage[];
  max_tokens: number;
  temperature: number;
  stream: boolean;
}

export interface ChatCompletionChunk {
  id: string;
  object: string;
  created: number;
  model: string;
  choices: Array<{
    delta: {
      role?: string;
      content?: string;
    };
    index: number;
    finish_reason: string | null;
  }>;
}

export interface LemSendDetail {
  text: string;
}
@ -1,14 +0,0 @@
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ESNext",
    "moduleResolution": "bundler",
    "strict": true,
    "noEmit": true,
    "declaration": false,
    "isolatedModules": true,
    "lib": ["ES2022", "DOM", "DOM.Iterable"],
    "skipLibCheck": true
  },
  "include": ["src/**/*.ts"]
}

mkdocs.yml
@ -1,104 +0,0 @@
site_name: Core Framework
site_url: https://core.help
site_description: 'A Web3 Framework for building Go desktop applications with Wails v3'
site_author: 'Snider'
repo_url: 'https://forge.lthn.ai/core/go'
repo_name: 'host-uk/core'

theme:
  name: material
  palette:
    - scheme: default
      primary: deep purple
      accent: purple
      toggle:
        icon: material/brightness-7
        name: Switch to dark mode
    - scheme: slate
      primary: deep purple
      accent: purple
      toggle:
        icon: material/brightness-4
        name: Switch to light mode
  features:
    - navigation.tabs
    - navigation.sections
    - navigation.expand
    - navigation.top
    - search.suggest
    - search.highlight
    - content.tabs.link
    - content.code.copy

markdown_extensions:
  - pymdownx.highlight:
      anchor_linenums: true
  - pymdownx.superfences
  - pymdownx.tabbed:
      alternate_style: true
  - admonition
  - pymdownx.details
  - attr_list
  - md_in_html

nav:
  - Home: index.md
  - User Documentation:
      - User Guide: user-guide.md
      - FAQ: faq.md
      - Troubleshooting: troubleshooting.md
      - Workflows: workflows.md
  - CLI Reference:
      - Overview: cmd/index.md
      - AI: cmd/ai/index.md
      - Build: cmd/build/index.md
      - CI: cmd/ci/index.md
      - Dev: cmd/dev/index.md
      - Go: cmd/go/index.md
      - PHP: cmd/php/index.md
      - SDK: cmd/sdk/index.md
      - Setup: cmd/setup/index.md
      - Doctor: cmd/doctor/index.md
      - Test: cmd/test/index.md
      - VM: cmd/vm/index.md
      - Pkg: cmd/pkg/index.md
      - Docs: cmd/docs/index.md
  - Getting Started:
      - Installation: getting-started/installation.md
      - Quick Start: getting-started/quickstart.md
      - Architecture: getting-started/architecture.md
  - Core Framework:
      - Overview: core/overview.md
      - Services: core/services.md
      - Lifecycle: core/lifecycle.md
      - IPC & Actions: core/ipc.md
  - Services:
      - Config: services/config.md
      - Display: services/display.md
      - WebView: services/webview.md
      - MCP: services/mcp.md
      - Crypt: services/crypt.md
      - I18n: services/i18n.md
      - IO: services/io.md
      - Workspace: services/workspace.md
      - Help: services/help.md
  - Extensions:
      - Plugin System: extensions/plugins.md
      - Module System: extensions/modules.md
  - GUI Application:
      - Overview: gui/overview.md
      - MCP Bridge: gui/mcp-bridge.md
  - API Reference:
      - Core: api/core.md
      - Display: api/display.md
  - Development:
      - Package Standards: pkg/PACKAGE_STANDARDS.md
      - Internationalization:
          - Overview: pkg/i18n/README.md
          - Grammar: pkg/i18n/GRAMMAR.md
          - Extending: pkg/i18n/EXTENDING.md
      - Claude Skill: skill/index.md
  - Reference:
      - Configuration: configuration.md
      - Migration: migration.md
      - Glossary: glossary.md

pkg/cache/cache.go (vendored)
@ -1,171 +0,0 @@
// Package cache provides a file-based cache for GitHub API responses.
package cache

import (
	"encoding/json"
	"errors"
	"os"
	"path/filepath"
	"time"

	"forge.lthn.ai/core/go-io"
)

// DefaultTTL is the default cache expiry time.
const DefaultTTL = 1 * time.Hour

// Cache represents a file-based cache.
type Cache struct {
	medium  io.Medium
	baseDir string
	ttl     time.Duration
}

// Entry represents a cached item with metadata.
type Entry struct {
	Data      json.RawMessage `json:"data"`
	CachedAt  time.Time       `json:"cached_at"`
	ExpiresAt time.Time       `json:"expires_at"`
}

// New creates a new cache instance.
// If medium is nil, uses io.Local (filesystem).
// If baseDir is empty, uses .core/cache in current directory.
func New(medium io.Medium, baseDir string, ttl time.Duration) (*Cache, error) {
	if medium == nil {
		medium = io.Local
	}

	if baseDir == "" {
		// Use .core/cache in current working directory
		cwd, err := os.Getwd()
		if err != nil {
			return nil, err
		}
		baseDir = filepath.Join(cwd, ".core", "cache")
	}

	if ttl == 0 {
		ttl = DefaultTTL
	}

	// Ensure cache directory exists
	if err := medium.EnsureDir(baseDir); err != nil {
		return nil, err
	}

	return &Cache{
		medium:  medium,
		baseDir: baseDir,
		ttl:     ttl,
	}, nil
}

// Path returns the full path for a cache key.
func (c *Cache) Path(key string) string {
	return filepath.Join(c.baseDir, key+".json")
}

// Get retrieves a cached item if it exists and hasn't expired.
func (c *Cache) Get(key string, dest any) (bool, error) {
	path := c.Path(key)

	dataStr, err := c.medium.Read(path)
	if err != nil {
		if errors.Is(err, os.ErrNotExist) {
			return false, nil
		}
		return false, err
	}

	var entry Entry
	if err := json.Unmarshal([]byte(dataStr), &entry); err != nil {
		// Invalid cache file, treat as miss
		return false, nil
	}

	// Check expiry
	if time.Now().After(entry.ExpiresAt) {
		return false, nil
	}
|
||||
|
||||
// Unmarshal the actual data
|
||||
if err := json.Unmarshal(entry.Data, dest); err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
return true, nil
|
||||
}
|
||||
|
||||
// Set stores an item in the cache.
|
||||
func (c *Cache) Set(key string, data any) error {
|
||||
path := c.Path(key)
|
||||
|
||||
// Ensure parent directory exists
|
||||
if err := c.medium.EnsureDir(filepath.Dir(path)); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Marshal the data
|
||||
dataBytes, err := json.Marshal(data)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
entry := Entry{
|
||||
Data: dataBytes,
|
||||
CachedAt: time.Now(),
|
||||
ExpiresAt: time.Now().Add(c.ttl),
|
||||
}
|
||||
|
||||
entryBytes, err := json.MarshalIndent(entry, "", " ")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return c.medium.Write(path, string(entryBytes))
|
||||
}
|
||||
|
||||
// Delete removes an item from the cache.
|
||||
func (c *Cache) Delete(key string) error {
|
||||
path := c.Path(key)
|
||||
err := c.medium.Delete(path)
|
||||
if errors.Is(err, os.ErrNotExist) {
|
||||
return nil
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// Clear removes all cached items.
|
||||
func (c *Cache) Clear() error {
|
||||
return c.medium.DeleteAll(c.baseDir)
|
||||
}
|
||||
|
||||
// Age returns how old a cached item is, or -1 if not cached.
|
||||
func (c *Cache) Age(key string) time.Duration {
|
||||
path := c.Path(key)
|
||||
|
||||
dataStr, err := c.medium.Read(path)
|
||||
if err != nil {
|
||||
return -1
|
||||
}
|
||||
|
||||
var entry Entry
|
||||
if err := json.Unmarshal([]byte(dataStr), &entry); err != nil {
|
||||
return -1
|
||||
}
|
||||
|
||||
return time.Since(entry.CachedAt)
|
||||
}
|
||||
|
||||
// GitHub-specific cache keys
|
||||
|
||||
// GitHubReposKey returns the cache key for an org's repo list.
|
||||
func GitHubReposKey(org string) string {
|
||||
return filepath.Join("github", org, "repos")
|
||||
}
|
||||
|
||||
// GitHubRepoKey returns the cache key for a specific repo's metadata.
|
||||
func GitHubRepoKey(org, repo string) string {
|
||||
return filepath.Join("github", org, repo, "meta")
|
||||
}
|
||||
pkg/cache/cache_test.go (vendored, 104 lines)
@@ -1,104 +0,0 @@
package cache_test

import (
    "testing"
    "time"

    "forge.lthn.ai/core/go/pkg/cache"
    "forge.lthn.ai/core/go-io"
)

func TestCache(t *testing.T) {
    m := io.NewMockMedium()
    // Use a path that MockMedium will understand
    baseDir := "/tmp/cache"
    c, err := cache.New(m, baseDir, 1*time.Minute)
    if err != nil {
        t.Fatalf("failed to create cache: %v", err)
    }

    key := "test-key"
    data := map[string]string{"foo": "bar"}

    // Test Set
    if err := c.Set(key, data); err != nil {
        t.Errorf("Set failed: %v", err)
    }

    // Test Get
    var retrieved map[string]string
    found, err := c.Get(key, &retrieved)
    if err != nil {
        t.Errorf("Get failed: %v", err)
    }
    if !found {
        t.Error("expected to find cached item")
    }
    if retrieved["foo"] != "bar" {
        t.Errorf("expected foo=bar, got %v", retrieved["foo"])
    }

    // Test Age
    age := c.Age(key)
    if age < 0 {
        t.Error("expected age >= 0")
    }

    // Test Delete
    if err := c.Delete(key); err != nil {
        t.Errorf("Delete failed: %v", err)
    }
    found, err = c.Get(key, &retrieved)
    if err != nil {
        t.Errorf("Get after delete returned an unexpected error: %v", err)
    }
    if found {
        t.Error("expected item to be deleted")
    }

    // Test Expiry
    cshort, err := cache.New(m, "/tmp/cache-short", 10*time.Millisecond)
    if err != nil {
        t.Fatalf("failed to create short-lived cache: %v", err)
    }
    if err := cshort.Set(key, data); err != nil {
        t.Fatalf("Set for expiry test failed: %v", err)
    }
    time.Sleep(50 * time.Millisecond)
    found, err = cshort.Get(key, &retrieved)
    if err != nil {
        t.Errorf("Get for expired item returned an unexpected error: %v", err)
    }
    if found {
        t.Error("expected item to be expired")
    }

    // Test Clear
    if err := c.Set("key1", data); err != nil {
        t.Fatalf("Set for clear test failed for key1: %v", err)
    }
    if err := c.Set("key2", data); err != nil {
        t.Fatalf("Set for clear test failed for key2: %v", err)
    }
    if err := c.Clear(); err != nil {
        t.Errorf("Clear failed: %v", err)
    }
    found, err = c.Get("key1", &retrieved)
    if err != nil {
        t.Errorf("Get after clear returned an unexpected error: %v", err)
    }
    if found {
        t.Error("expected key1 to be cleared")
    }
}

func TestCacheDefaults(t *testing.T) {
    // Test default Medium (io.Local) and default TTL
    c, err := cache.New(nil, "", 0)
    if err != nil {
        t.Fatalf("failed to create cache with defaults: %v", err)
    }
    if c == nil {
        t.Fatal("expected cache instance")
    }
}
@@ -1,212 +0,0 @@
// Package config provides layered configuration management for the Core framework.
//
// Configuration values are resolved in priority order: defaults -> file -> env -> flags.
// Values are stored in a YAML file at ~/.core/config.yaml by default.
//
// Keys use dot notation for nested access:
//
//	cfg.Set("dev.editor", "vim")
//	var editor string
//	cfg.Get("dev.editor", &editor)
package config

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
    "sync"

    coreerr "forge.lthn.ai/core/go-log"
    coreio "forge.lthn.ai/core/go-io"
    core "forge.lthn.ai/core/go/pkg/framework/core"
    "github.com/spf13/viper"
    "gopkg.in/yaml.v3"
)

// Config implements the core.Config interface with layered resolution.
// It uses viper as the underlying configuration engine.
type Config struct {
    mu     sync.RWMutex
    v      *viper.Viper
    medium coreio.Medium
    path   string
}

// Option is a functional option for configuring a Config instance.
type Option func(*Config)

// WithMedium sets the storage medium for configuration file operations.
func WithMedium(m coreio.Medium) Option {
    return func(c *Config) {
        c.medium = m
    }
}

// WithPath sets the path to the configuration file.
func WithPath(path string) Option {
    return func(c *Config) {
        c.path = path
    }
}

// WithEnvPrefix sets the prefix for environment variables.
func WithEnvPrefix(prefix string) Option {
    return func(c *Config) {
        c.v.SetEnvPrefix(prefix)
    }
}

// New creates a new Config instance with the given options.
// If no medium is provided, it defaults to io.Local.
// If no path is provided, it defaults to ~/.core/config.yaml.
func New(opts ...Option) (*Config, error) {
    c := &Config{
        v: viper.New(),
    }

    // Configure viper defaults
    c.v.SetEnvPrefix("CORE_CONFIG")
    c.v.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))

    for _, opt := range opts {
        opt(c)
    }

    if c.medium == nil {
        c.medium = coreio.Local
    }

    if c.path == "" {
        home, err := os.UserHomeDir()
        if err != nil {
            return nil, coreerr.E("config.New", "failed to determine home directory", err)
        }
        c.path = filepath.Join(home, ".core", "config.yaml")
    }

    c.v.AutomaticEnv()

    // Load existing config file if it exists
    if c.medium.Exists(c.path) {
        if err := c.LoadFile(c.medium, c.path); err != nil {
            return nil, coreerr.E("config.New", "failed to load config file", err)
        }
    }

    return c, nil
}

// LoadFile reads a configuration file from the given medium and path and merges it into the current config.
// It supports YAML and environment files (.env).
func (c *Config) LoadFile(m coreio.Medium, path string) error {
    c.mu.Lock()
    defer c.mu.Unlock()

    content, err := m.Read(path)
    if err != nil {
        return coreerr.E("config.LoadFile", "failed to read config file: "+path, err)
    }

    ext := filepath.Ext(path)
    if ext == "" && filepath.Base(path) == ".env" {
        c.v.SetConfigType("env")
    } else if ext != "" {
        c.v.SetConfigType(strings.TrimPrefix(ext, "."))
    } else {
        c.v.SetConfigType("yaml")
    }

    if err := c.v.MergeConfig(strings.NewReader(content)); err != nil {
        return coreerr.E("config.LoadFile", "failed to parse config file: "+path, err)
    }

    return nil
}

// Get retrieves a configuration value by dot-notation key and stores it in out.
// If key is empty, it unmarshals the entire configuration into out.
// The out parameter must be a pointer to the target type.
func (c *Config) Get(key string, out any) error {
    c.mu.RLock()
    defer c.mu.RUnlock()

    if key == "" {
        return c.v.Unmarshal(out)
    }

    if !c.v.IsSet(key) {
        return coreerr.E("config.Get", fmt.Sprintf("key not found: %s", key), nil)
    }

    return c.v.UnmarshalKey(key, out)
}

// Set stores a configuration value by dot-notation key and persists to disk.
func (c *Config) Set(key string, v any) error {
    c.mu.Lock()
    defer c.mu.Unlock()

    c.v.Set(key, v)

    // Persist to disk
    if err := Save(c.medium, c.path, c.v.AllSettings()); err != nil {
        return coreerr.E("config.Set", "failed to save config", err)
    }

    return nil
}

// All returns a deep copy of all configuration values.
func (c *Config) All() map[string]any {
    c.mu.RLock()
    defer c.mu.RUnlock()

    return c.v.AllSettings()
}

// Path returns the path to the configuration file.
func (c *Config) Path() string {
    return c.path
}

// Load reads a YAML configuration file from the given medium and path.
// Returns the parsed data as a map, or an error if the file cannot be read or parsed.
// Deprecated: Use Config.LoadFile instead.
func Load(m coreio.Medium, path string) (map[string]any, error) {
    content, err := m.Read(path)
    if err != nil {
        return nil, coreerr.E("config.Load", "failed to read config file: "+path, err)
    }

    v := viper.New()
    v.SetConfigType("yaml")
    if err := v.ReadConfig(strings.NewReader(content)); err != nil {
        return nil, coreerr.E("config.Load", "failed to parse config file: "+path, err)
    }

    return v.AllSettings(), nil
}

// Save writes configuration data to a YAML file at the given path.
// It ensures the parent directory exists before writing.
func Save(m coreio.Medium, path string, data map[string]any) error {
    out, err := yaml.Marshal(data)
    if err != nil {
        return coreerr.E("config.Save", "failed to marshal config", err)
    }

    dir := filepath.Dir(path)
    if err := m.EnsureDir(dir); err != nil {
        return coreerr.E("config.Save", "failed to create config directory: "+dir, err)
    }

    if err := m.Write(path, string(out)); err != nil {
        return coreerr.E("config.Save", "failed to write config file: "+path, err)
    }

    return nil
}

// Ensure Config implements core.Config at compile time.
var _ core.Config = (*Config)(nil)
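The package doc comment promises defaults -> file -> env -> flags precedence, which viper enforces internally. Stripped of viper, the rule is just "later layers win", which can be sketched with plain maps (the `resolve` helper is illustrative only, not part of the deleted package):

```go
package main

import "fmt"

// resolve applies layers in priority order: later layers override earlier
// ones, mirroring the defaults -> file -> env -> flags precedence.
func resolve(layers ...map[string]string) map[string]string {
    out := map[string]string{}
    for _, layer := range layers {
        for k, v := range layer {
            out[k] = v
        }
    }
    return out
}

func main() {
    defaults := map[string]string{"dev.editor": "nano", "app.name": "core"}
    file := map[string]string{"dev.editor": "vim"}
    env := map[string]string{"dev.editor": "emacs"}

    cfg := resolve(defaults, file, env)
    // env layer wins for dev.editor; app.name falls through from defaults.
    fmt.Println(cfg["dev.editor"], cfg["app.name"])
}
```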
@@ -1,277 +0,0 @@
package config

import (
    "os"
    "testing"

    "forge.lthn.ai/core/go-io"
    "github.com/stretchr/testify/assert"
)

func TestConfig_Get_Good(t *testing.T) {
    m := io.NewMockMedium()

    cfg, err := New(WithMedium(m), WithPath("/tmp/test/config.yaml"))
    assert.NoError(t, err)

    err = cfg.Set("app.name", "core")
    assert.NoError(t, err)

    var name string
    err = cfg.Get("app.name", &name)
    assert.NoError(t, err)
    assert.Equal(t, "core", name)
}

func TestConfig_Get_Bad(t *testing.T) {
    m := io.NewMockMedium()

    cfg, err := New(WithMedium(m), WithPath("/tmp/test/config.yaml"))
    assert.NoError(t, err)

    var value string
    err = cfg.Get("nonexistent.key", &value)
    assert.Error(t, err)
    assert.Contains(t, err.Error(), "key not found")
}

func TestConfig_Set_Good(t *testing.T) {
    m := io.NewMockMedium()

    cfg, err := New(WithMedium(m), WithPath("/tmp/test/config.yaml"))
    assert.NoError(t, err)

    err = cfg.Set("dev.editor", "vim")
    assert.NoError(t, err)

    // Verify the value was saved to the medium
    content, readErr := m.Read("/tmp/test/config.yaml")
    assert.NoError(t, readErr)
    assert.Contains(t, content, "editor: vim")

    // Verify we can read it back
    var editor string
    err = cfg.Get("dev.editor", &editor)
    assert.NoError(t, err)
    assert.Equal(t, "vim", editor)
}

func TestConfig_Set_Nested_Good(t *testing.T) {
    m := io.NewMockMedium()

    cfg, err := New(WithMedium(m), WithPath("/tmp/test/config.yaml"))
    assert.NoError(t, err)

    err = cfg.Set("a.b.c", "deep")
    assert.NoError(t, err)

    var val string
    err = cfg.Get("a.b.c", &val)
    assert.NoError(t, err)
    assert.Equal(t, "deep", val)
}

func TestConfig_All_Good(t *testing.T) {
    m := io.NewMockMedium()

    cfg, err := New(WithMedium(m), WithPath("/tmp/test/config.yaml"))
    assert.NoError(t, err)

    _ = cfg.Set("key1", "val1")
    _ = cfg.Set("key2", "val2")

    all := cfg.All()
    assert.Equal(t, "val1", all["key1"])
    assert.Equal(t, "val2", all["key2"])
}

func TestConfig_Path_Good(t *testing.T) {
    m := io.NewMockMedium()

    cfg, err := New(WithMedium(m), WithPath("/custom/path/config.yaml"))
    assert.NoError(t, err)

    assert.Equal(t, "/custom/path/config.yaml", cfg.Path())
}

func TestConfig_Load_Existing_Good(t *testing.T) {
    m := io.NewMockMedium()
    m.Files["/tmp/test/config.yaml"] = "app:\n  name: existing\n"

    cfg, err := New(WithMedium(m), WithPath("/tmp/test/config.yaml"))
    assert.NoError(t, err)

    var name string
    err = cfg.Get("app.name", &name)
    assert.NoError(t, err)
    assert.Equal(t, "existing", name)
}

func TestConfig_Env_Good(t *testing.T) {
    // Set environment variable
    t.Setenv("CORE_CONFIG_DEV_EDITOR", "nano")

    m := io.NewMockMedium()
    cfg, err := New(WithMedium(m), WithPath("/tmp/test/config.yaml"))
    assert.NoError(t, err)

    var editor string
    err = cfg.Get("dev.editor", &editor)
    assert.NoError(t, err)
    assert.Equal(t, "nano", editor)
}

func TestConfig_Env_Overrides_File_Good(t *testing.T) {
    // Set file config
    m := io.NewMockMedium()
    m.Files["/tmp/test/config.yaml"] = "dev:\n  editor: vim\n"

    // Set environment override
    t.Setenv("CORE_CONFIG_DEV_EDITOR", "nano")

    cfg, err := New(WithMedium(m), WithPath("/tmp/test/config.yaml"))
    assert.NoError(t, err)

    var editor string
    err = cfg.Get("dev.editor", &editor)
    assert.NoError(t, err)
    assert.Equal(t, "nano", editor)
}

func TestConfig_Assign_Types_Good(t *testing.T) {
    m := io.NewMockMedium()
    m.Files["/tmp/test/config.yaml"] = "count: 42\nenabled: true\nratio: 3.14\n"

    cfg, err := New(WithMedium(m), WithPath("/tmp/test/config.yaml"))
    assert.NoError(t, err)

    var count int
    err = cfg.Get("count", &count)
    assert.NoError(t, err)
    assert.Equal(t, 42, count)

    var enabled bool
    err = cfg.Get("enabled", &enabled)
    assert.NoError(t, err)
    assert.True(t, enabled)

    var ratio float64
    err = cfg.Get("ratio", &ratio)
    assert.NoError(t, err)
    assert.InDelta(t, 3.14, ratio, 0.001)
}

func TestConfig_Assign_Any_Good(t *testing.T) {
    m := io.NewMockMedium()

    cfg, err := New(WithMedium(m), WithPath("/tmp/test/config.yaml"))
    assert.NoError(t, err)

    _ = cfg.Set("key", "value")

    var val any
    err = cfg.Get("key", &val)
    assert.NoError(t, err)
    assert.Equal(t, "value", val)
}

func TestConfig_DefaultPath_Good(t *testing.T) {
    m := io.NewMockMedium()

    cfg, err := New(WithMedium(m))
    assert.NoError(t, err)

    home, _ := os.UserHomeDir()
    assert.Equal(t, home+"/.core/config.yaml", cfg.Path())
}

func TestLoadEnv_Good(t *testing.T) {
    t.Setenv("CORE_CONFIG_FOO_BAR", "baz")
    t.Setenv("CORE_CONFIG_SIMPLE", "value")

    result := LoadEnv("CORE_CONFIG_")
    assert.Equal(t, "baz", result["foo.bar"])
    assert.Equal(t, "value", result["simple"])
}

func TestLoad_Bad(t *testing.T) {
    m := io.NewMockMedium()

    _, err := Load(m, "/nonexistent/file.yaml")
    assert.Error(t, err)
    assert.Contains(t, err.Error(), "failed to read config file")
}

func TestLoad_InvalidYAML_Bad(t *testing.T) {
    m := io.NewMockMedium()
    m.Files["/tmp/test/config.yaml"] = "invalid: yaml: content: [[[["

    _, err := Load(m, "/tmp/test/config.yaml")
    assert.Error(t, err)
    assert.Contains(t, err.Error(), "failed to parse config file")
}

func TestSave_Good(t *testing.T) {
    m := io.NewMockMedium()

    data := map[string]any{
        "key": "value",
    }

    err := Save(m, "/tmp/test/config.yaml", data)
    assert.NoError(t, err)

    content, readErr := m.Read("/tmp/test/config.yaml")
    assert.NoError(t, readErr)
    assert.Contains(t, content, "key: value")
}

func TestConfig_LoadFile_Env(t *testing.T) {
    m := io.NewMockMedium()
    m.Files["/.env"] = "FOO=bar\nBAZ=qux"

    cfg, err := New(WithMedium(m), WithPath("/config.yaml"))
    assert.NoError(t, err)

    err = cfg.LoadFile(m, "/.env")
    assert.NoError(t, err)

    var foo string
    err = cfg.Get("foo", &foo)
    assert.NoError(t, err)
    assert.Equal(t, "bar", foo)
}

func TestConfig_WithEnvPrefix(t *testing.T) {
    t.Setenv("MYAPP_SETTING", "secret")

    m := io.NewMockMedium()
    cfg, err := New(WithMedium(m), WithEnvPrefix("MYAPP"))
    assert.NoError(t, err)

    var setting string
    err = cfg.Get("setting", &setting)
    assert.NoError(t, err)
    assert.Equal(t, "secret", setting)
}

func TestConfig_Get_EmptyKey(t *testing.T) {
    m := io.NewMockMedium()
    m.Files["/config.yaml"] = "app:\n  name: test\nversion: 1"

    cfg, err := New(WithMedium(m), WithPath("/config.yaml"))
    assert.NoError(t, err)

    type AppConfig struct {
        App struct {
            Name string `mapstructure:"name"`
        } `mapstructure:"app"`
        Version int `mapstructure:"version"`
    }

    var full AppConfig
    err = cfg.Get("", &full)
    assert.NoError(t, err)
    assert.Equal(t, "test", full.App.Name)
    assert.Equal(t, 1, full.Version)
}
@@ -1,40 +0,0 @@
package config

import (
    "os"
    "strings"
)

// LoadEnv parses environment variables with the given prefix and returns
// them as a flat map with dot-notation keys.
//
// For example, with prefix "CORE_CONFIG_":
//
//	CORE_CONFIG_FOO_BAR=baz  ->  {"foo.bar": "baz"}
//	CORE_CONFIG_EDITOR=vim   ->  {"editor": "vim"}
func LoadEnv(prefix string) map[string]any {
    result := make(map[string]any)

    for _, env := range os.Environ() {
        if !strings.HasPrefix(env, prefix) {
            continue
        }

        parts := strings.SplitN(env, "=", 2)
        if len(parts) != 2 {
            continue
        }

        name := parts[0]
        value := parts[1]

        // Strip prefix and convert to dot notation
        key := strings.TrimPrefix(name, prefix)
        key = strings.ToLower(key)
        key = strings.ReplaceAll(key, "_", ".")

        result[key] = value
    }

    return result
}
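The key transform above is the whole trick: strip the prefix, lowercase, then map underscores to dots. Isolated into a stdlib-only helper for illustration (`envKey` is a hypothetical name, not part of the deleted package):

```go
package main

import (
    "fmt"
    "strings"
)

// envKey reproduces LoadEnv's key transform: strip the prefix,
// lowercase the remainder, and turn underscores into dots.
func envKey(prefix, name string) string {
    key := strings.TrimPrefix(name, prefix)
    key = strings.ToLower(key)
    return strings.ReplaceAll(key, "_", ".")
}

func main() {
    fmt.Println(envKey("CORE_CONFIG_", "CORE_CONFIG_FOO_BAR")) // foo.bar
    fmt.Println(envKey("CORE_CONFIG_", "CORE_CONFIG_EDITOR"))  // editor
}
```

Note the transform is lossy: a key that legitimately contains an underscore (e.g. `api_key`) becomes `api.key`, which is the usual trade-off with this convention.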
@@ -1,83 +0,0 @@
package config

import (
    "context"

    coreerr "forge.lthn.ai/core/go-log"
    "forge.lthn.ai/core/go-io"
    core "forge.lthn.ai/core/go/pkg/framework/core"
)

// Service wraps Config as a framework service with lifecycle support.
type Service struct {
    *core.ServiceRuntime[ServiceOptions]
    config *Config
}

// ServiceOptions holds configuration for the config service.
type ServiceOptions struct {
    // Path overrides the default config file path.
    Path string
    // Medium overrides the default storage medium.
    Medium io.Medium
}

// NewConfigService creates a new config service factory for the Core framework.
// Register it with core.WithService(config.NewConfigService).
func NewConfigService(c *core.Core) (any, error) {
    svc := &Service{
        ServiceRuntime: core.NewServiceRuntime(c, ServiceOptions{}),
    }
    return svc, nil
}

// OnStartup loads the configuration file during application startup.
func (s *Service) OnStartup(_ context.Context) error {
    opts := s.Opts()

    var configOpts []Option
    if opts.Path != "" {
        configOpts = append(configOpts, WithPath(opts.Path))
    }
    if opts.Medium != nil {
        configOpts = append(configOpts, WithMedium(opts.Medium))
    }

    cfg, err := New(configOpts...)
    if err != nil {
        return err
    }

    s.config = cfg
    return nil
}

// Get retrieves a configuration value by key.
func (s *Service) Get(key string, out any) error {
    if s.config == nil {
        return coreerr.E("config.Service.Get", "config not loaded", nil)
    }
    return s.config.Get(key, out)
}

// Set stores a configuration value by key.
func (s *Service) Set(key string, v any) error {
    if s.config == nil {
        return coreerr.E("config.Service.Set", "config not loaded", nil)
    }
    return s.config.Set(key, v)
}

// LoadFile merges a configuration file into the central configuration.
func (s *Service) LoadFile(m io.Medium, path string) error {
    if s.config == nil {
        return coreerr.E("config.Service.LoadFile", "config not loaded", nil)
    }
    return s.config.LoadFile(m, path)
}

// Ensure Service implements core.Config and Startable at compile time.
var (
    _ core.Config    = (*Service)(nil)
    _ core.Startable = (*Service)(nil)
)
@@ -1,82 +0,0 @@
package collector

import (
    "context"
    "log/slog"
    "sync"
    "time"
)

type Collector interface {
    Name() string
    Collect(ctx context.Context) error
}

type Registry struct {
    mu      sync.Mutex
    entries []entry
    logger  *slog.Logger
}

type entry struct {
    c        Collector
    interval time.Duration
    cancel   context.CancelFunc
}

func NewRegistry(logger *slog.Logger) *Registry {
    return &Registry{logger: logger}
}

func (r *Registry) Register(c Collector, interval time.Duration) {
    r.mu.Lock()
    defer r.mu.Unlock()
    r.entries = append(r.entries, entry{c: c, interval: interval})
}

func (r *Registry) Start(ctx context.Context) {
    r.mu.Lock()
    defer r.mu.Unlock()

    for i := range r.entries {
        e := &r.entries[i]
        cctx, cancel := context.WithCancel(ctx)
        e.cancel = cancel
        go r.run(cctx, e.c, e.interval)
    }
}

func (r *Registry) run(ctx context.Context, c Collector, interval time.Duration) {
    r.logger.Info("collector started", "name", c.Name(), "interval", interval)

    // Run immediately on start.
    if err := c.Collect(ctx); err != nil {
        r.logger.Warn("collector error", "name", c.Name(), "err", err)
    }

    ticker := time.NewTicker(interval)
    defer ticker.Stop()

    for {
        select {
        case <-ctx.Done():
            r.logger.Info("collector stopped", "name", c.Name())
            return
        case <-ticker.C:
            if err := c.Collect(ctx); err != nil {
                r.logger.Warn("collector error", "name", c.Name(), "err", err)
            }
        }
    }
}

func (r *Registry) Stop() {
    r.mu.Lock()
    defer r.mu.Unlock()

    for _, e := range r.entries {
        if e.cancel != nil {
            e.cancel()
        }
    }
}
@@ -1,94 +0,0 @@
package collector

import (
    "context"
    "encoding/json"
    "fmt"
    "net"
    "net/http"
    "time"

    "forge.lthn.ai/core/go/pkg/lab"
)

type Docker struct {
    store *lab.Store
}

func NewDocker(s *lab.Store) *Docker {
    return &Docker{store: s}
}

func (d *Docker) Name() string { return "docker" }

func (d *Docker) Collect(ctx context.Context) error {
    client := &http.Client{
        Timeout: 5 * time.Second,
        Transport: &http.Transport{
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return net.Dial("unix", "/var/run/docker.sock")
            },
        },
    }

    req, err := http.NewRequestWithContext(ctx, "GET", "http://docker/containers/json?all=true", nil)
    if err != nil {
        return err
    }

    resp, err := client.Do(req)
    if err != nil {
        d.store.SetError("docker", err)
        return err
    }
    defer resp.Body.Close()

    var containers []struct {
        Names   []string `json:"Names"`
        Image   string   `json:"Image"`
        State   string   `json:"State"`
        Status  string   `json:"Status"`
        Created int64    `json:"Created"`
    }

    if err := json.NewDecoder(resp.Body).Decode(&containers); err != nil {
        d.store.SetError("docker", err)
        return err
    }

    var result []lab.Container
    for _, c := range containers {
        name := ""
        if len(c.Names) > 0 {
            name = c.Names[0]
            if len(name) > 0 && name[0] == '/' {
                name = name[1:]
            }
        }

        created := time.Unix(c.Created, 0)
        uptime := ""
        if c.State == "running" {
            d := time.Since(created)
            days := int(d.Hours()) / 24
            hours := int(d.Hours()) % 24
            if days > 0 {
                uptime = fmt.Sprintf("%dd %dh", days, hours)
            } else {
                uptime = fmt.Sprintf("%dh %dm", hours, int(d.Minutes())%60)
            }
        }

        result = append(result, lab.Container{
            Name:    name,
            Status:  c.State,
            Image:   c.Image,
            Uptime:  uptime,
            Created: created,
        })
    }

    d.store.SetContainers(result)
    d.store.SetError("docker", nil)
    return nil
}
|
@ -1,130 +0,0 @@
|
|||
package collector

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"time"

	"forge.lthn.ai/core/go/pkg/lab"
)

type Forgejo struct {
	url   string
	token string
	store *lab.Store
}

func NewForgejo(forgeURL, token string, s *lab.Store) *Forgejo {
	return &Forgejo{url: forgeURL, token: token, store: s}
}

func (f *Forgejo) Name() string { return "forgejo" }

func (f *Forgejo) Collect(ctx context.Context) error {
	if f.token == "" {
		return nil
	}

	commits, err := f.recentActivity(ctx)
	if err != nil {
		f.store.SetError("forgejo", err)
		return err
	}

	f.store.SetCommits(commits)
	f.store.SetError("forgejo", nil)
	return nil
}

type forgeRepo struct {
	FullName  string    `json:"full_name"`
	UpdatedAt time.Time `json:"updated_at"`
}

type forgeCommit struct {
	SHA    string `json:"sha"`
	Commit struct {
		Message string `json:"message"`
		Author  struct {
			Name string    `json:"name"`
			Date time.Time `json:"date"`
		} `json:"author"`
	} `json:"commit"`
}

func (f *Forgejo) recentActivity(ctx context.Context) ([]lab.Commit, error) {
	// Get recently updated repos
	repos, err := f.apiGet(ctx, "/api/v1/repos/search?sort=updated&order=desc&limit=5")
	if err != nil {
		return nil, err
	}

	var repoList []forgeRepo
	if err := json.Unmarshal(repos, &repoList); err != nil {
		// The search API wraps in {"data": [...], "ok": true}
		var wrapped struct {
			Data []forgeRepo `json:"data"`
		}
		if err2 := json.Unmarshal(repos, &wrapped); err2 != nil {
			return nil, err
		}
		repoList = wrapped.Data
	}

	var commits []lab.Commit
	for _, repo := range repoList {
		if len(commits) >= 10 {
			break
		}
		data, err := f.apiGet(ctx, fmt.Sprintf("/api/v1/repos/%s/commits?limit=2", repo.FullName))
		if err != nil {
			continue
		}
		var fc []forgeCommit
		if err := json.Unmarshal(data, &fc); err != nil {
			continue
		}
		for _, c := range fc {
			msg := c.Commit.Message
			if len(msg) > 80 {
				msg = msg[:77] + "..."
			}
			commits = append(commits, lab.Commit{
				SHA:       c.SHA[:8],
				Message:   msg,
				Author:    c.Commit.Author.Name,
				Repo:      repo.FullName,
				Timestamp: c.Commit.Author.Date,
			})
		}
	}

	return commits, nil
}

func (f *Forgejo) apiGet(ctx context.Context, path string) (json.RawMessage, error) {
	req, err := http.NewRequestWithContext(ctx, "GET", f.url+path, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "token "+f.token)

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != 200 {
		return nil, fmt.Errorf("forgejo %s returned %d", path, resp.StatusCode)
	}

	var raw json.RawMessage
	if err := json.NewDecoder(resp.Body).Decode(&raw); err != nil {
		return nil, err
	}
	return raw, nil
}
@@ -1,55 +0,0 @@
package collector

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"time"

	"forge.lthn.ai/core/go/pkg/lab"
)

type HuggingFace struct {
	author string
	store  *lab.Store
}

func NewHuggingFace(author string, s *lab.Store) *HuggingFace {
	return &HuggingFace{author: author, store: s}
}

func (h *HuggingFace) Name() string { return "huggingface" }

func (h *HuggingFace) Collect(ctx context.Context) error {
	u := fmt.Sprintf("https://huggingface.co/api/models?author=%s&sort=downloads&direction=-1&limit=20", h.author)

	req, err := http.NewRequestWithContext(ctx, "GET", u, nil)
	if err != nil {
		return err
	}

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		h.store.SetError("huggingface", err)
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != 200 {
		err := fmt.Errorf("HuggingFace API returned %d", resp.StatusCode)
		h.store.SetError("huggingface", err)
		return err
	}

	var models []lab.HFModel
	if err := json.NewDecoder(resp.Body).Decode(&models); err != nil {
		h.store.SetError("huggingface", err)
		return err
	}

	h.store.SetModels(models)
	h.store.SetError("huggingface", nil)
	return nil
}
@@ -1,358 +0,0 @@
package collector

import (
	"cmp"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"slices"
	"strings"
	"time"

	"forge.lthn.ai/core/go/pkg/lab"
)

type InfluxDB struct {
	cfg   *lab.Config
	store *lab.Store
}

func NewInfluxDB(cfg *lab.Config, s *lab.Store) *InfluxDB {
	return &InfluxDB{cfg: cfg, store: s}
}

func (i *InfluxDB) Name() string { return "influxdb" }

func (i *InfluxDB) Collect(ctx context.Context) error {
	if i.cfg.InfluxURL == "" || i.cfg.InfluxToken == "" {
		return nil
	}

	data := lab.BenchmarkData{
		Loss:            make(map[string][]lab.LossPoint),
		Content:         make(map[string][]lab.ContentPoint),
		Capability:      make(map[string][]lab.CapabilityPoint),
		CapabilityJudge: make(map[string][]lab.CapabilityJudgePoint),
		UpdatedAt:       time.Now(),
	}

	// Collect all run identifiers from each measurement.
	runSet := map[string]lab.BenchmarkRun{}

	// Training loss data.
	if rows, err := i.query(ctx, "SELECT run_id, model, iteration, loss, loss_type, learning_rate, iterations_per_sec, tokens_per_sec FROM training_loss ORDER BY run_id, iteration"); err == nil {
		for _, row := range rows {
			rid := jsonStr(row["run_id"])
			mdl := jsonStr(row["model"])
			if rid == "" {
				continue
			}
			runSet[rid] = lab.BenchmarkRun{RunID: rid, Model: mdl, Type: "training"}
			data.Loss[rid] = append(data.Loss[rid], lab.LossPoint{
				Iteration:    jsonInt(row["iteration"]),
				Loss:         jsonFloat(row["loss"]),
				LossType:     jsonStr(row["loss_type"]),
				LearningRate: jsonFloat(row["learning_rate"]),
				TokensPerSec: jsonFloat(row["tokens_per_sec"]),
			})
		}
	}

	// Content scores.
	if rows, err := i.query(ctx, "SELECT run_id, model, label, dimension, score, iteration, has_kernel FROM content_score ORDER BY run_id, iteration, dimension"); err == nil {
		for _, row := range rows {
			rid := jsonStr(row["run_id"])
			mdl := jsonStr(row["model"])
			if rid == "" {
				continue
			}
			if _, ok := runSet[rid]; !ok {
				runSet[rid] = lab.BenchmarkRun{RunID: rid, Model: mdl, Type: "content"}
			}
			hk := jsonStr(row["has_kernel"])
			data.Content[rid] = append(data.Content[rid], lab.ContentPoint{
				Label:     jsonStr(row["label"]),
				Dimension: jsonStr(row["dimension"]),
				Score:     jsonFloat(row["score"]),
				Iteration: jsonInt(row["iteration"]),
				HasKernel: hk == "true" || hk == "True",
			})
		}
	}

	// Capability scores.
	if rows, err := i.query(ctx, "SELECT run_id, model, label, category, accuracy, correct, total, iteration FROM capability_score ORDER BY run_id, iteration, category"); err == nil {
		for _, row := range rows {
			rid := jsonStr(row["run_id"])
			mdl := jsonStr(row["model"])
			if rid == "" {
				continue
			}
			if _, ok := runSet[rid]; !ok {
				runSet[rid] = lab.BenchmarkRun{RunID: rid, Model: mdl, Type: "capability"}
			}
			data.Capability[rid] = append(data.Capability[rid], lab.CapabilityPoint{
				Label:     jsonStr(row["label"]),
				Category:  jsonStr(row["category"]),
				Accuracy:  jsonFloat(row["accuracy"]),
				Correct:   jsonInt(row["correct"]),
				Total:     jsonInt(row["total"]),
				Iteration: jsonInt(row["iteration"]),
			})
		}
	}

	// Capability judge scores (0-10 per probe).
	if rows, err := i.query(ctx, "SELECT run_id, model, label, probe_id, category, reasoning, correctness, clarity, avg, iteration FROM capability_judge ORDER BY run_id, iteration, probe_id"); err == nil {
		for _, row := range rows {
			rid := jsonStr(row["run_id"])
			if rid == "" {
				continue
			}
			data.CapabilityJudge[rid] = append(data.CapabilityJudge[rid], lab.CapabilityJudgePoint{
				Label:       jsonStr(row["label"]),
				ProbeID:     jsonStr(row["probe_id"]),
				Category:    jsonStr(row["category"]),
				Reasoning:   jsonFloat(row["reasoning"]),
				Correctness: jsonFloat(row["correctness"]),
				Clarity:     jsonFloat(row["clarity"]),
				Avg:         jsonFloat(row["avg"]),
				Iteration:   jsonInt(row["iteration"]),
			})
		}
	}

	// Build sorted runs list.
	for _, r := range runSet {
		data.Runs = append(data.Runs, r)
	}
	slices.SortFunc(data.Runs, func(a, b lab.BenchmarkRun) int {
		if c := cmp.Compare(a.Model, b.Model); c != 0 {
			return c
		}
		return cmp.Compare(a.RunID, b.RunID)
	})

	i.store.SetBenchmarks(data)

	// Live training run statuses.
	var runStatuses []lab.TrainingRunStatus
	if rows, err := i.query(ctx, "SELECT model, run_id, status, iteration, total_iters, pct FROM training_status ORDER BY time DESC LIMIT 50"); err == nil {
		// Deduplicate: keep only the latest status per run_id.
		seen := map[string]bool{}
		for _, row := range rows {
			rid := jsonStr(row["run_id"])
			if rid == "" || seen[rid] {
				continue
			}
			seen[rid] = true
			rs := lab.TrainingRunStatus{
				Model:      jsonStr(row["model"]),
				RunID:      rid,
				Status:     jsonStr(row["status"]),
				Iteration:  jsonInt(row["iteration"]),
				TotalIters: jsonInt(row["total_iters"]),
				Pct:        jsonFloat(row["pct"]),
			}
			// Find latest loss for this run from already-collected data.
			if lossPoints, ok := data.Loss[rid]; ok {
				for j := len(lossPoints) - 1; j >= 0; j-- {
					if lossPoints[j].LossType == "train" && rs.LastLoss == 0 {
						rs.LastLoss = lossPoints[j].Loss
						rs.TokensSec = lossPoints[j].TokensPerSec
					}
					if lossPoints[j].LossType == "val" && rs.ValLoss == 0 {
						rs.ValLoss = lossPoints[j].Loss
					}
					if rs.LastLoss > 0 && rs.ValLoss > 0 {
						break
					}
				}
			}
			runStatuses = append(runStatuses, rs)
		}
	}
	i.store.SetTrainingRuns(runStatuses)

	// Golden set data explorer — query gold_gen (real-time per-generation records).
	gs := lab.GoldenSetSummary{TargetTotal: 15000, UpdatedAt: time.Now()}

	// Try real-time gold_gen first (populated by lem_generate.py directly).
	if rows, err := i.query(ctx, "SELECT count(DISTINCT i) AS total, count(DISTINCT d) AS domains, count(DISTINCT v) AS voices, avg(gen_time) AS avg_t, avg(chars) AS avg_c FROM gold_gen"); err == nil && len(rows) > 0 {
		r := rows[0]
		total := jsonInt(r["total"])
		if total > 0 {
			gs.Available = true
			gs.TotalExamples = total
			gs.Domains = jsonInt(r["domains"])
			gs.Voices = jsonInt(r["voices"])
			gs.AvgGenTime = jsonFloat(r["avg_t"])
			gs.AvgResponseChars = jsonFloat(r["avg_c"])
			gs.CompletionPct = float64(total) / float64(gs.TargetTotal) * 100
		}
	}

	// Fallback to pipeline.py metrics if gold_gen isn't populated.
	if !gs.Available {
		if rows, err := i.query(ctx, "SELECT total_examples, domains, voices, avg_gen_time, avg_response_chars, completion_pct FROM golden_set_stats ORDER BY time DESC LIMIT 1"); err == nil && len(rows) > 0 {
			r := rows[0]
			gs.Available = true
			gs.TotalExamples = jsonInt(r["total_examples"])
			gs.Domains = jsonInt(r["domains"])
			gs.Voices = jsonInt(r["voices"])
			gs.AvgGenTime = jsonFloat(r["avg_gen_time"])
			gs.AvgResponseChars = jsonFloat(r["avg_response_chars"])
			gs.CompletionPct = jsonFloat(r["completion_pct"])
		}
	}

	if gs.Available {
		// Per-domain from gold_gen.
		if rows, err := i.query(ctx, "SELECT d, count(DISTINCT i) AS n, avg(gen_time) AS avg_t FROM gold_gen GROUP BY d ORDER BY n DESC"); err == nil && len(rows) > 0 {
			for _, r := range rows {
				gs.DomainStats = append(gs.DomainStats, lab.DomainStat{
					Domain:     jsonStr(r["d"]),
					Count:      jsonInt(r["n"]),
					AvgGenTime: jsonFloat(r["avg_t"]),
				})
			}
		}
		// Fallback to pipeline stats.
		if len(gs.DomainStats) == 0 {
			if rows, err := i.query(ctx, "SELECT DISTINCT domain, count, avg_gen_time FROM golden_set_domain ORDER BY count DESC"); err == nil {
				for _, r := range rows {
					gs.DomainStats = append(gs.DomainStats, lab.DomainStat{
						Domain:     jsonStr(r["domain"]),
						Count:      jsonInt(r["count"]),
						AvgGenTime: jsonFloat(r["avg_gen_time"]),
					})
				}
			}
		}

		// Per-voice from gold_gen.
		if rows, err := i.query(ctx, "SELECT v, count(DISTINCT i) AS n, avg(chars) AS avg_c, avg(gen_time) AS avg_t FROM gold_gen GROUP BY v ORDER BY n DESC"); err == nil && len(rows) > 0 {
			for _, r := range rows {
				gs.VoiceStats = append(gs.VoiceStats, lab.VoiceStat{
					Voice:      jsonStr(r["v"]),
					Count:      jsonInt(r["n"]),
					AvgChars:   jsonFloat(r["avg_c"]),
					AvgGenTime: jsonFloat(r["avg_t"]),
				})
			}
		}
		// Fallback.
		if len(gs.VoiceStats) == 0 {
			if rows, err := i.query(ctx, "SELECT DISTINCT voice, count, avg_chars, avg_gen_time FROM golden_set_voice ORDER BY count DESC"); err == nil {
				for _, r := range rows {
					gs.VoiceStats = append(gs.VoiceStats, lab.VoiceStat{
						Voice:      jsonStr(r["voice"]),
						Count:      jsonInt(r["count"]),
						AvgChars:   jsonFloat(r["avg_chars"]),
						AvgGenTime: jsonFloat(r["avg_gen_time"]),
					})
				}
			}
		}
	}
	// Worker activity.
	if rows, err := i.query(ctx, "SELECT w, count(DISTINCT i) AS n, max(time) AS last_seen FROM gold_gen GROUP BY w ORDER BY n DESC"); err == nil {
		for _, r := range rows {
			gs.Workers = append(gs.Workers, lab.WorkerStat{
				Worker: jsonStr(r["w"]),
				Count:  jsonInt(r["n"]),
			})
		}
	}

	i.store.SetGoldenSet(gs)

	// Dataset stats (from DuckDB, pushed as dataset_stats measurement).
	ds := lab.DatasetSummary{UpdatedAt: time.Now()}
	if rows, err := i.query(ctx, "SELECT table, rows FROM dataset_stats ORDER BY rows DESC"); err == nil && len(rows) > 0 {
		ds.Available = true
		for _, r := range rows {
			ds.Tables = append(ds.Tables, lab.DatasetTable{
				Name: jsonStr(r["table"]),
				Rows: jsonInt(r["rows"]),
			})
		}
	}
	i.store.SetDataset(ds)

	i.store.SetError("influxdb", nil)
	return nil
}

func (i *InfluxDB) query(ctx context.Context, sql string) ([]map[string]any, error) {
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()

	body := fmt.Sprintf(`{"db":%q,"q":%q}`, i.cfg.InfluxDB, sql)
	req, err := http.NewRequestWithContext(ctx, "POST", i.cfg.InfluxURL+"/api/v3/query_sql", strings.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+i.cfg.InfluxToken)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		i.store.SetError("influxdb", err)
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != 200 {
		err := fmt.Errorf("influxdb query returned %d", resp.StatusCode)
		i.store.SetError("influxdb", err)
		return nil, err
	}

	var rows []map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&rows); err != nil {
		return nil, err
	}
	return rows, nil
}

// JSON value helpers — InfluxDB 3 returns typed JSON values.

func jsonStr(v any) string {
	if v == nil {
		return ""
	}
	if s, ok := v.(string); ok {
		return s
	}
	return fmt.Sprintf("%v", v)
}

func jsonFloat(v any) float64 {
	if v == nil {
		return 0
	}
	switch n := v.(type) {
	case float64:
		return n
	case json.Number:
		f, _ := n.Float64()
		return f
	}
	return 0
}

func jsonInt(v any) int {
	if v == nil {
		return 0
	}
	switch n := v.(type) {
	case float64:
		return int(n)
	case json.Number:
		i, _ := n.Int64()
		return int(i)
	}
	return 0
}
@@ -1,104 +0,0 @@
package collector

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strconv"
	"time"

	"forge.lthn.ai/core/go/pkg/lab"
)

type Prometheus struct {
	url   string
	store *lab.Store
}

func NewPrometheus(promURL string, s *lab.Store) *Prometheus {
	return &Prometheus{url: promURL, store: s}
}

func (p *Prometheus) Name() string { return "prometheus" }

func (p *Prometheus) Collect(ctx context.Context) error {
	// Machine stats are handled by the system collector (direct /proc + SSH).
	// This collector only queries agent metrics from Prometheus.
	agents := lab.AgentSummary{}
	if v, err := p.query(ctx, "agents_registered_total"); err == nil && v != nil {
		agents.RegisteredTotal = int(*v)
		agents.Available = true
	}
	if v, err := p.query(ctx, "agents_queue_pending"); err == nil && v != nil {
		agents.QueuePending = int(*v)
	}
	if v, err := p.query(ctx, "agents_tasks_completed_total"); err == nil && v != nil {
		agents.TasksCompleted = int(*v)
	}
	if v, err := p.query(ctx, "agents_tasks_failed_total"); err == nil && v != nil {
		agents.TasksFailed = int(*v)
	}
	if v, err := p.query(ctx, "agents_capabilities_count"); err == nil && v != nil {
		agents.Capabilities = int(*v)
	}
	if v, err := p.query(ctx, "agents_heartbeat_age_seconds"); err == nil && v != nil {
		agents.HeartbeatAge = *v
	}
	if v, err := p.query(ctx, "agents_exporter_up"); err == nil && v != nil {
		agents.ExporterUp = *v > 0
	}

	p.store.SetAgents(agents)
	p.store.SetError("prometheus", nil)
	return nil
}

type promResponse struct {
	Status string `json:"status"`
	Data   struct {
		ResultType string `json:"resultType"`
		Result     []struct {
			Value [2]json.RawMessage `json:"value"`
		} `json:"result"`
	} `json:"data"`
}

func (p *Prometheus) query(ctx context.Context, promql string) (*float64, error) {
	u := fmt.Sprintf("%s/api/v1/query?query=%s", p.url, url.QueryEscape(promql))

	req, err := http.NewRequestWithContext(ctx, "GET", u, nil)
	if err != nil {
		return nil, err
	}

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		p.store.SetError("prometheus", err)
		return nil, err
	}
	defer resp.Body.Close()

	var pr promResponse
	if err := json.NewDecoder(resp.Body).Decode(&pr); err != nil {
		return nil, err
	}

	if pr.Status != "success" || len(pr.Data.Result) == 0 {
		return nil, nil
	}

	var valStr string
	if err := json.Unmarshal(pr.Data.Result[0].Value[1], &valStr); err != nil {
		return nil, err
	}

	val, err := strconv.ParseFloat(valStr, 64)
	if err != nil {
		return nil, err
	}

	return &val, nil
}
@@ -1,107 +0,0 @@
package collector

import (
	"context"
	"net/http"
	"time"

	"forge.lthn.ai/core/go/pkg/lab"
)

type Services struct {
	store    *lab.Store
	services []lab.Service
}

func NewServices(s *lab.Store) *Services {
	return &Services{
		store: s,
		services: []lab.Service{
			// Source Control
			{Name: "Forgejo (primary)", URL: "https://forge.lthn.io", Category: "Source Control", Machine: "m3-ultra", Icon: "git"},
			{Name: "Forgejo (dev)", URL: "https://dev.lthn.io", Category: "Source Control", Machine: "snider-linux", Icon: "git"},
			{Name: "Forgejo (QA)", URL: "https://qa.lthn.io", Category: "Source Control", Machine: "gateway", Icon: "git"},
			{Name: "Forgejo (devops)", URL: "https://devops.lthn.io", Category: "Source Control", Machine: "gateway", Icon: "git"},
			{Name: "Forgejo Pages", URL: "https://host-uk.pages.lthn.io", Category: "Source Control", Machine: "snider-linux", Icon: "web"},

			// CI/CD
			{Name: "Woodpecker CI", URL: "https://ci.lthn.io", Category: "CI/CD", Machine: "snider-linux", Icon: "ci"},

			// Monitoring
			{Name: "Grafana", URL: "https://grafana.lthn.io", Category: "Monitoring", Machine: "snider-linux", Icon: "chart"},
			{Name: "Traefik Dashboard", URL: "https://traefik.lthn.io", Category: "Monitoring", Machine: "snider-linux", Icon: "route"},
			{Name: "Portainer", URL: "https://portainer.lthn.io", Category: "Monitoring", Machine: "snider-linux", Icon: "container"},
			{Name: "MantisBT", URL: "https://bugs.lthn.io", Category: "Monitoring", Machine: "snider-linux", Icon: "bug"},

			// AI & Models
			{Name: "Ollama API", URL: "https://ollama.lthn.io", Category: "AI", Machine: "snider-linux", Icon: "ai"},
			{Name: "AnythingLLM", URL: "https://anythingllm.lthn.io", Category: "AI", Machine: "snider-linux", Icon: "ai"},
			{Name: "Argilla", URL: "https://argilla.lthn.io", Category: "AI", Machine: "snider-linux", Icon: "data"},
			{Name: "Lab Helper API", URL: "http://10.69.69.108:9800", Category: "AI", Machine: "m3-ultra", Icon: "api"},
			{Name: "Lab Dashboard", URL: "https://lab.lthn.io", Category: "AI", Machine: "snider-linux", Icon: "web"},

			// Media & Content
			{Name: "Jellyfin", URL: "https://media.lthn.io", Category: "Media", Machine: "m3-ultra", Icon: "media"},
			{Name: "Immich Photos", URL: "https://photos.lthn.io", Category: "Media", Machine: "m3-ultra", Icon: "photo"},

			// Social
			{Name: "Mastodon", URL: "https://fedi.lthn.io", Category: "Social", Machine: "snider-linux", Icon: "social"},
			{Name: "Mixpost", URL: "https://social.lthn.io", Category: "Social", Machine: "snider-linux", Icon: "social"},

			// i18n
			{Name: "Weblate", URL: "https://i18n.lthn.io", Category: "Translation", Machine: "snider-linux", Icon: "i18n"},

			// Infra
			{Name: "dAppCo.re CDN", URL: "https://dappco.re", Category: "Infrastructure", Machine: "snider-linux", Icon: "cdn"},
			{Name: "lthn.ai Landing", URL: "https://lthn.ai", Category: "Infrastructure", Machine: "snider-linux", Icon: "web"},
		},
	}
}

func (s *Services) Name() string { return "services" }

func (s *Services) Collect(ctx context.Context) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			return http.ErrUseLastResponse // don't follow redirects
		},
	}

	for i := range s.services {
		s.services[i].Status = checkHealth(ctx, client, s.services[i].URL)
	}

	result := make([]lab.Service, len(s.services))
	copy(result, s.services)
	s.store.SetServices(result)
	s.store.SetError("services", nil)
	return nil
}

func checkHealth(ctx context.Context, client *http.Client, url string) string {
	// Try HEAD first, fall back to GET if HEAD fails.
	req, err := http.NewRequestWithContext(ctx, "HEAD", url, nil)
	if err != nil {
		return "unavailable"
	}

	resp, err := client.Do(req)
	if err != nil {
		// Retry with GET (some servers reject HEAD).
		req2, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
		if req2 == nil {
			return "unavailable"
		}
		resp, err = client.Do(req2)
		if err != nil {
			return "unavailable"
		}
	}
	resp.Body.Close()

	if resp.StatusCode < 500 {
		return "ok"
	}
	return "unavailable"
}
@@ -1,374 +0,0 @@
package collector

import (
	"bufio"
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
	"strconv"
	"strings"
	"time"

	"forge.lthn.ai/core/go/pkg/lab"
)

type System struct {
	store *lab.Store
	cfg   *lab.Config
}

func NewSystem(cfg *lab.Config, s *lab.Store) *System {
	return &System{store: s, cfg: cfg}
}

func (s *System) Name() string { return "system" }

func (s *System) Collect(ctx context.Context) error {
	var machines []lab.Machine

	// Collect local machine stats.
	local := s.collectLocal()
	machines = append(machines, local)

	// Collect M3 Ultra stats via SSH.
	if s.cfg.M3Host != "" {
		m3 := s.collectM3(ctx)
		machines = append(machines, m3)
	}

	s.store.SetMachines(machines)
	s.store.SetError("system", nil)
	return nil
}

// ---------------------------------------------------------------------------
// Local (snider-linux)
// ---------------------------------------------------------------------------

// procPath returns the path to a proc file, preferring /host/proc (Docker mount) over /proc.
func procPath(name string) string {
	hp := "/host/proc/" + name
	if _, err := os.Stat(hp); err == nil {
		return hp
	}
	return "/proc/" + name
}

func (s *System) collectLocal() lab.Machine {
	m := lab.Machine{
		Name:     "snider-linux",
		Host:     "localhost",
		Status:   lab.StatusOK,
		CPUCores: runtime.NumCPU(),
	}

	// Load average
	if data, err := os.ReadFile(procPath("loadavg")); err == nil {
		fields := strings.Fields(string(data))
		if len(fields) > 0 {
			m.Load1, _ = strconv.ParseFloat(fields[0], 64)
		}
	}

	// Memory from host /proc/meminfo
	if f, err := os.Open(procPath("meminfo")); err == nil {
		defer f.Close()
		var memTotal, memAvail float64
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			line := scanner.Text()
			if strings.HasPrefix(line, "MemTotal:") {
				memTotal = parseMemInfoKB(line)
			} else if strings.HasPrefix(line, "MemAvailable:") {
				memAvail = parseMemInfoKB(line)
			}
		}
		if memTotal > 0 {
			m.MemTotalGB = memTotal / 1024 / 1024
			m.MemUsedGB = (memTotal - memAvail) / 1024 / 1024
			m.MemUsedPct = (1.0 - memAvail/memTotal) * 100
		}
	}

	// Disk — use host root mount if available
	diskTarget := "/"
	if _, err := os.Stat("/host/root"); err == nil {
		diskTarget = "/host/root"
	}
	if out, err := exec.Command("df", "-BG", diskTarget).Output(); err == nil {
		lines := strings.Split(strings.TrimSpace(string(out)), "\n")
		if len(lines) >= 2 {
			fields := strings.Fields(lines[1])
			if len(fields) >= 5 {
				m.DiskTotalGB = parseGB(fields[1])
				m.DiskUsedGB = parseGB(fields[2])
				pct := strings.TrimSuffix(fields[4], "%")
				m.DiskUsedPct, _ = strconv.ParseFloat(pct, 64)
			}
		}
	}

	// GPU via sysfs (works inside Docker with /host/drm mount)
	s.collectGPUSysfs(&m)

	// Uptime
	if data, err := os.ReadFile(procPath("uptime")); err == nil {
		fields := strings.Fields(string(data))
		if len(fields) > 0 {
			if secs, err := strconv.ParseFloat(fields[0], 64); err == nil {
				m.Uptime = formatDuration(time.Duration(secs * float64(time.Second)))
			}
		}
	}

	return m
}

func (s *System) collectGPUSysfs(m *lab.Machine) {
	// Try sysfs paths: /host/sys (Docker mount of /sys) or /sys (native)
	drmBase := "/host/sys/class/drm"
	if _, err := os.Stat(drmBase); err != nil {
		drmBase = "/sys/class/drm"
	}

	// Find the discrete GPU (largest VRAM) — card0 may be integrated
	gpuDev := ""
	var bestTotal float64
	for _, card := range []string{"card0", "card1", "card2"} {
		p := fmt.Sprintf("%s/%s/device/mem_info_vram_total", drmBase, card)
		if data, err := os.ReadFile(p); err == nil {
			val, _ := strconv.ParseFloat(strings.TrimSpace(string(data)), 64)
			if val > bestTotal {
				bestTotal = val
				gpuDev = fmt.Sprintf("%s/%s/device", drmBase, card)
			}
		}
	}
	if gpuDev == "" {
		return
	}

	m.GPUName = "AMD Radeon RX 7800 XT"
	m.GPUVRAMTotal = bestTotal / 1024 / 1024 / 1024

	if data, err := os.ReadFile(gpuDev + "/mem_info_vram_used"); err == nil {
		val, _ := strconv.ParseFloat(strings.TrimSpace(string(data)), 64)
		m.GPUVRAMUsed = val / 1024 / 1024 / 1024
	}
	if m.GPUVRAMTotal > 0 {
		m.GPUVRAMPct = m.GPUVRAMUsed / m.GPUVRAMTotal * 100
	}

	// Temperature — find hwmon under the device
	matches, _ := filepath.Glob(gpuDev + "/hwmon/hwmon*/temp1_input")
	if len(matches) > 0 {
		if data, err := os.ReadFile(matches[0]); err == nil {
			val, _ := strconv.ParseFloat(strings.TrimSpace(string(data)), 64)
			m.GPUTemp = int(val / 1000) // millidegrees to degrees
		}
	}
}

// ---------------------------------------------------------------------------
// M3 Ultra (via SSH)
// ---------------------------------------------------------------------------

func (s *System) collectM3(ctx context.Context) lab.Machine {
	m := lab.Machine{
		Name:    "m3-ultra",
		Host:    s.cfg.M3Host,
		Status:  lab.StatusUnavailable,
		GPUName: "Apple M3 Ultra (80 cores)",
	}

	cmd := exec.CommandContext(ctx, "ssh",
		"-o", "ConnectTimeout=5",
		"-o", "BatchMode=yes",
		"-i", s.cfg.M3SSHKey,
		fmt.Sprintf("%s@%s", s.cfg.M3User, s.cfg.M3Host),
		"printf '===CPU===\\n'; sysctl -n hw.ncpu; sysctl -n vm.loadavg; printf '===MEM===\\n'; sysctl -n hw.memsize; vm_stat; printf '===DISK===\\n'; df -k /; printf '===UPTIME===\\n'; uptime",
	)

	out, err := cmd.Output()
	if err != nil {
		return m
	}

	m.Status = lab.StatusOK
	s.parseM3Output(&m, string(out))
	return m
}

func (s *System) parseM3Output(m *lab.Machine, output string) {
	sections := splitSections(output)

	// CPU
	if cpu, ok := sections["CPU"]; ok {
		lines := strings.Split(strings.TrimSpace(cpu), "\n")
		if len(lines) >= 1 {
			m.CPUCores, _ = strconv.Atoi(strings.TrimSpace(lines[0]))
		}
		if len(lines) >= 2 {
			// "{ 8.22 4.56 4.00 }"
			loadStr := strings.Trim(strings.TrimSpace(lines[1]), "{ }")
			fields := strings.Fields(loadStr)
			if len(fields) >= 1 {
				m.Load1, _ = strconv.ParseFloat(fields[0], 64)
			}
		}
	}

	// Memory
	if mem, ok := sections["MEM"]; ok {
		lines := strings.Split(strings.TrimSpace(mem), "\n")
		if len(lines) >= 1 {
			bytes, _ := strconv.ParseFloat(strings.TrimSpace(lines[0]), 64)
			m.MemTotalGB = bytes / 1024 / 1024 / 1024
		}
		// Parse vm_stat: page size 16384, look for free/active/inactive/wired/speculative/compressor
		var pageSize float64 = 16384
		var free, active, inactive, speculative, wired, compressor float64
		for _, line := range lines[1:] {
			if strings.Contains(line, "page size of") {
				// "Mach Virtual Memory Statistics: (page size of 16384 bytes)"
				for _, word := range strings.Fields(line) {
					if v, err := strconv.ParseFloat(word, 64); err == nil && v > 1000 {
						pageSize = v
						break
					}
				}
			}
			val := parseVMStatLine(line)
			switch {
			case strings.HasPrefix(line, "Pages free:"):
				free = val
			case strings.HasPrefix(line, "Pages active:"):
				active = val
			case strings.HasPrefix(line, "Pages inactive:"):
				inactive = val
			case strings.HasPrefix(line, "Pages speculative:"):
				speculative = val
			case strings.HasPrefix(line, "Pages wired"):
				wired = val
			case strings.HasPrefix(line, "Pages occupied by compressor:"):
				compressor = val
			}
		}
		usedPages := active + wired + compressor
		totalPages := free + active + inactive + speculative + wired + compressor
		if totalPages > 0 && m.MemTotalGB > 0 {
			m.MemUsedGB = usedPages * pageSize / 1024 / 1024 / 1024
			m.MemUsedPct = m.MemUsedGB / m.MemTotalGB * 100
		}
	}

	// Disk
	if disk, ok := sections["DISK"]; ok {
		lines := strings.Split(strings.TrimSpace(disk), "\n")
		if len(lines) >= 2 {
			fields := strings.Fields(lines[1])
			if len(fields) >= 5 {
				totalKB, _ := strconv.ParseFloat(fields[1], 64)
				usedKB, _ := strconv.ParseFloat(fields[2], 64)
				m.DiskTotalGB = totalKB / 1024 / 1024
				m.DiskUsedGB = usedKB / 1024 / 1024
				if m.DiskTotalGB > 0 {
					m.DiskUsedPct = m.DiskUsedGB / m.DiskTotalGB * 100
				}
			}
		}
	}

	// Uptime — "13:20 up 3 days, 1:09, 3 users, load averages: ..."
	if up, ok := sections["UPTIME"]; ok {
		line := strings.TrimSpace(up)
		if idx := strings.Index(line, "up "); idx >= 0 {
			rest := line[idx+3:]
			// Split on ", " and take parts until we hit one containing "user"
			parts := strings.Split(rest, ", ")
|
||||
var uptimeParts []string
|
||||
for _, p := range parts {
|
||||
if strings.Contains(p, "user") || strings.Contains(p, "load") {
|
||||
break
|
||||
}
|
||||
uptimeParts = append(uptimeParts, p)
|
||||
}
|
||||
m.Uptime = strings.TrimSpace(strings.Join(uptimeParts, ", "))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Helpers
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
func splitSections(output string) map[string]string {
|
||||
sections := make(map[string]string)
|
||||
var current string
|
||||
var buf strings.Builder
|
||||
for _, line := range strings.Split(output, "\n") {
|
||||
if strings.HasPrefix(line, "===") && strings.HasSuffix(line, "===") {
|
||||
if current != "" {
|
||||
sections[current] = buf.String()
|
||||
buf.Reset()
|
||||
}
|
||||
current = strings.Trim(line, "=")
|
||||
} else if current != "" {
|
||||
buf.WriteString(line)
|
||||
buf.WriteByte('\n')
|
||||
}
|
||||
}
|
||||
if current != "" {
|
||||
sections[current] = buf.String()
|
||||
}
|
||||
return sections
|
||||
}
|
||||
|
||||
func parseVMStatLine(line string) float64 {
|
||||
// "Pages free: 2266867."
|
||||
parts := strings.SplitN(line, ":", 2)
|
||||
if len(parts) < 2 {
|
||||
return 0
|
||||
}
|
||||
val := strings.TrimSpace(strings.TrimSuffix(strings.TrimSpace(parts[1]), "."))
|
||||
f, _ := strconv.ParseFloat(val, 64)
|
||||
return f
|
||||
}
|
||||
|
||||
func parseMemInfoKB(line string) float64 {
|
||||
fields := strings.Fields(line)
|
||||
if len(fields) < 2 {
|
||||
return 0
|
||||
}
|
||||
v, _ := strconv.ParseFloat(fields[1], 64)
|
||||
return v
|
||||
}
|
||||
|
||||
func parseGB(s string) float64 {
|
||||
s = strings.TrimSuffix(s, "G")
|
||||
v, _ := strconv.ParseFloat(s, 64)
|
||||
return v
|
||||
}
|
||||
|
||||
func parseBytesGB(line string) float64 {
|
||||
// "GPU[0] : VRAM Total Memory (B): 17163091968"
|
||||
parts := strings.Split(line, ":")
|
||||
if len(parts) < 3 {
|
||||
return 0
|
||||
}
|
||||
val := strings.TrimSpace(parts[len(parts)-1])
|
||||
bytes, _ := strconv.ParseFloat(val, 64)
|
||||
return bytes / 1024 / 1024 / 1024
|
||||
}
|
||||
|
||||
func formatDuration(d time.Duration) string {
|
||||
days := int(d.Hours()) / 24
|
||||
hours := int(d.Hours()) % 24
|
||||
if days > 0 {
|
||||
return fmt.Sprintf("%dd %dh", days, hours)
|
||||
}
|
||||
return fmt.Sprintf("%dh %dm", hours, int(d.Minutes())%60)
|
||||
}
|
||||
|
|
@ -1,123 +0,0 @@

package collector

import (
	"bufio"
	"context"
	"encoding/json"
	"net/http"
	"os"
	"path/filepath"
	"time"

	"forge.lthn.ai/core/go/pkg/lab"
)

type Training struct {
	cfg   *lab.Config
	store *lab.Store
}

func NewTraining(cfg *lab.Config, s *lab.Store) *Training {
	return &Training{cfg: cfg, store: s}
}

func (t *Training) Name() string { return "training" }

func (t *Training) Collect(ctx context.Context) error {
	summary := lab.TrainingSummary{
		GoldTarget: 15000,
	}

	// Fetch from M3 lab-helper API
	if t.cfg.M3APIURL != "" {
		t.fetchM3API(ctx, &summary)
	}

	// Parse local intercept JSONL files
	interceptDir := t.cfg.TrainingDataDir
	if interceptDir != "" {
		count, lastTime := countJSONLFiles(filepath.Join(interceptDir, "command-intercepts"))
		summary.InterceptCount = count
		summary.LastIntercept = lastTime
	}

	// Count QA sessions
	sessDir := filepath.Join(t.cfg.TrainingDataDir, "qa-epic-verification", "sessions")
	if entries, err := os.ReadDir(sessDir); err == nil {
		summary.SessionCount = len(entries)
	}

	t.store.SetTraining(summary)
	t.store.SetError("training", nil)
	return nil
}

type m3TrainingResponse struct {
	GoldGenerated int      `json:"gold_generated"`
	GoldTarget    int      `json:"gold_target"`
	GoldPercent   float64  `json:"gold_percent"`
	SeedsComplete int      `json:"seeds_complete"`
	GGUFCount     int      `json:"gguf_count"`
	GGUFFiles     []string `json:"gguf_files"`
	AdapterCount  int      `json:"adapter_count"`
}

func (t *Training) fetchM3API(ctx context.Context, summary *lab.TrainingSummary) {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, "GET", t.cfg.M3APIURL+"/api/training", nil)
	if err != nil {
		return
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		t.store.SetError("m3-api", err)
		return
	}
	defer resp.Body.Close()

	var data m3TrainingResponse
	if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
		return
	}

	summary.GoldGenerated = data.GoldGenerated
	summary.GoldAvailable = true
	summary.GoldPercent = data.GoldPercent
	summary.GGUFCount = data.GGUFCount
	summary.GGUFFiles = data.GGUFFiles
	summary.AdapterCount = data.AdapterCount
	t.store.SetError("m3-api", nil)
}

func countJSONLFiles(dir string) (int, time.Time) {
	var total int
	var lastTime time.Time

	files, err := filepath.Glob(filepath.Join(dir, "*.jsonl"))
	if err != nil {
		return 0, lastTime
	}

	for _, f := range files {
		file, err := os.Open(f)
		if err != nil {
			continue
		}
		scanner := bufio.NewScanner(file)
		for scanner.Scan() {
			total++
			var ev struct {
				Timestamp time.Time `json:"timestamp"`
			}
			if json.Unmarshal(scanner.Bytes(), &ev) == nil && ev.Timestamp.After(lastTime) {
				lastTime = ev.Timestamp
			}
		}
		file.Close()
	}

	return total, lastTime
}
@ -1,84 +0,0 @@

package lab

import (
	"os"
	"strconv"
)

type Config struct {
	Addr string

	PrometheusURL      string
	PrometheusInterval int

	ForgeURL      string
	ForgeToken    string
	ForgeInterval int

	HFAuthor   string
	HFInterval int

	M3Host     string
	M3User     string
	M3SSHKey   string
	M3APIURL   string
	M3Interval int

	TrainingDataDir  string
	TrainingInterval int

	DockerInterval int

	InfluxURL      string
	InfluxToken    string
	InfluxDB       string
	InfluxInterval int
}

func LoadConfig() *Config {
	return &Config{
		Addr: env("ADDR", ":8080"),

		PrometheusURL:      env("PROMETHEUS_URL", "http://prometheus:9090"),
		PrometheusInterval: envInt("PROMETHEUS_INTERVAL", 15),

		ForgeURL:      env("FORGE_URL", "https://forge.lthn.io"),
		ForgeToken:    env("FORGE_TOKEN", ""),
		ForgeInterval: envInt("FORGE_INTERVAL", 60),

		HFAuthor:   env("HF_AUTHOR", "lthn"),
		HFInterval: envInt("HF_INTERVAL", 300),

		M3Host:     env("M3_HOST", "10.69.69.108"),
		M3User:     env("M3_USER", "claude"),
		M3SSHKey:   env("M3_SSH_KEY", "/root/.ssh/id_ed25519"),
		M3APIURL:   env("M3_API_URL", "http://10.69.69.108:9800"),
		M3Interval: envInt("M3_INTERVAL", 30),

		TrainingDataDir:  env("TRAINING_DATA_DIR", "/data/training"),
		TrainingInterval: envInt("TRAINING_INTERVAL", 60),

		DockerInterval: envInt("DOCKER_INTERVAL", 30),

		InfluxURL:      env("INFLUX_URL", "http://localhost:8181"),
		InfluxToken:    env("INFLUX_TOKEN", ""),
		InfluxDB:       env("INFLUX_DB", "training"),
		InfluxInterval: envInt("INFLUX_INTERVAL", 60),
	}
}

func env(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

func envInt(key string, fallback int) int {
	if v := os.Getenv(key); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return fallback
}
@ -1,129 +0,0 @@

package lab

import (
	"os"
	"testing"
)

// ── LoadConfig defaults ────────────────────────────────────────────

func TestLoadConfig_Good_Defaults(t *testing.T) {
	cfg := LoadConfig()

	if cfg.Addr != ":8080" {
		t.Fatalf("expected :8080, got %s", cfg.Addr)
	}
	if cfg.PrometheusURL != "http://prometheus:9090" {
		t.Fatalf("unexpected PrometheusURL: %s", cfg.PrometheusURL)
	}
	if cfg.PrometheusInterval != 15 {
		t.Fatalf("expected 15, got %d", cfg.PrometheusInterval)
	}
	if cfg.ForgeURL != "https://forge.lthn.io" {
		t.Fatalf("unexpected ForgeURL: %s", cfg.ForgeURL)
	}
	if cfg.ForgeInterval != 60 {
		t.Fatalf("expected 60, got %d", cfg.ForgeInterval)
	}
	if cfg.HFAuthor != "lthn" {
		t.Fatalf("expected lthn, got %s", cfg.HFAuthor)
	}
	if cfg.HFInterval != 300 {
		t.Fatalf("expected 300, got %d", cfg.HFInterval)
	}
	if cfg.TrainingDataDir != "/data/training" {
		t.Fatalf("unexpected TrainingDataDir: %s", cfg.TrainingDataDir)
	}
	if cfg.InfluxDB != "training" {
		t.Fatalf("expected training, got %s", cfg.InfluxDB)
	}
}

// ── env override ───────────────────────────────────────────────────

func TestLoadConfig_Good_EnvOverride(t *testing.T) {
	os.Setenv("ADDR", ":9090")
	os.Setenv("FORGE_URL", "https://forge.lthn.ai")
	os.Setenv("HF_AUTHOR", "snider")
	defer func() {
		os.Unsetenv("ADDR")
		os.Unsetenv("FORGE_URL")
		os.Unsetenv("HF_AUTHOR")
	}()

	cfg := LoadConfig()
	if cfg.Addr != ":9090" {
		t.Fatalf("expected :9090, got %s", cfg.Addr)
	}
	if cfg.ForgeURL != "https://forge.lthn.ai" {
		t.Fatalf("expected forge.lthn.ai, got %s", cfg.ForgeURL)
	}
	if cfg.HFAuthor != "snider" {
		t.Fatalf("expected snider, got %s", cfg.HFAuthor)
	}
}

// ── envInt ─────────────────────────────────────────────────────────

func TestLoadConfig_Good_IntEnvOverride(t *testing.T) {
	os.Setenv("PROMETHEUS_INTERVAL", "30")
	defer os.Unsetenv("PROMETHEUS_INTERVAL")

	cfg := LoadConfig()
	if cfg.PrometheusInterval != 30 {
		t.Fatalf("expected 30, got %d", cfg.PrometheusInterval)
	}
}

func TestLoadConfig_Bad_InvalidIntFallsBack(t *testing.T) {
	os.Setenv("PROMETHEUS_INTERVAL", "not-a-number")
	defer os.Unsetenv("PROMETHEUS_INTERVAL")

	cfg := LoadConfig()
	if cfg.PrometheusInterval != 15 {
		t.Fatalf("expected fallback 15, got %d", cfg.PrometheusInterval)
	}
}

// ── env / envInt helpers directly ──────────────────────────────────

func TestEnv_Good(t *testing.T) {
	os.Setenv("TEST_LAB_KEY", "hello")
	defer os.Unsetenv("TEST_LAB_KEY")

	if got := env("TEST_LAB_KEY", "default"); got != "hello" {
		t.Fatalf("expected hello, got %s", got)
	}
}

func TestEnv_Good_Fallback(t *testing.T) {
	os.Unsetenv("TEST_LAB_MISSING")
	if got := env("TEST_LAB_MISSING", "fallback"); got != "fallback" {
		t.Fatalf("expected fallback, got %s", got)
	}
}

func TestEnvInt_Good(t *testing.T) {
	os.Setenv("TEST_LAB_INT", "42")
	defer os.Unsetenv("TEST_LAB_INT")

	if got := envInt("TEST_LAB_INT", 0); got != 42 {
		t.Fatalf("expected 42, got %d", got)
	}
}

func TestEnvInt_Bad_Fallback(t *testing.T) {
	os.Unsetenv("TEST_LAB_INT_MISSING")
	if got := envInt("TEST_LAB_INT_MISSING", 99); got != 99 {
		t.Fatalf("expected 99, got %d", got)
	}
}

func TestEnvInt_Bad_InvalidString(t *testing.T) {
	os.Setenv("TEST_LAB_INT_BAD", "xyz")
	defer os.Unsetenv("TEST_LAB_INT_BAD")

	if got := envInt("TEST_LAB_INT_BAD", 7); got != 7 {
		t.Fatalf("expected fallback 7, got %d", got)
	}
}
@ -1,65 +0,0 @@

package handler

import (
	"encoding/json"
	"net/http"
	"time"

	"forge.lthn.ai/core/go/pkg/lab"
)

type APIHandler struct {
	store *lab.Store
}

func NewAPIHandler(s *lab.Store) *APIHandler {
	return &APIHandler{store: s}
}

type apiResponse struct {
	Status    string    `json:"status"`
	UpdatedAt time.Time `json:"updated_at"`
	Data      any       `json:"data"`
}

func (h *APIHandler) writeJSON(w http.ResponseWriter, data any) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(apiResponse{
		Status:    "ok",
		UpdatedAt: time.Now(),
		Data:      data,
	})
}

func (h *APIHandler) Status(w http.ResponseWriter, r *http.Request) {
	h.writeJSON(w, h.store.Overview())
}

func (h *APIHandler) Models(w http.ResponseWriter, r *http.Request) {
	h.writeJSON(w, h.store.GetModels())
}

func (h *APIHandler) Training(w http.ResponseWriter, r *http.Request) {
	h.writeJSON(w, h.store.GetTraining())
}

func (h *APIHandler) Agents(w http.ResponseWriter, r *http.Request) {
	h.writeJSON(w, h.store.GetAgents())
}

func (h *APIHandler) Services(w http.ResponseWriter, r *http.Request) {
	h.writeJSON(w, h.store.GetServices())
}

func (h *APIHandler) GoldenSet(w http.ResponseWriter, r *http.Request) {
	h.writeJSON(w, h.store.GetGoldenSet())
}

func (h *APIHandler) Runs(w http.ResponseWriter, r *http.Request) {
	h.writeJSON(w, h.store.GetBenchmarks())
}

func (h *APIHandler) Health(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
}
@ -1,595 +0,0 @@

package handler

import (
	"cmp"
	"fmt"
	"html/template"
	"math"
	"slices"
	"sort"
	"strings"

	"forge.lthn.ai/core/go/pkg/lab"
)

const (
	chartW      = 760
	chartH      = 280
	marginTop   = 25
	marginRight = 20
	marginBot   = 35
	marginLeft  = 55
	plotW       = chartW - marginLeft - marginRight
	plotH       = chartH - marginTop - marginBot
)

var dimensionColors = map[string]string{
	"ccp_compliance":        "#f87171",
	"truth_telling":         "#4ade80",
	"engagement":            "#fbbf24",
	"axiom_integration":     "#60a5fa",
	"sovereignty_reasoning": "#c084fc",
	"emotional_register":    "#fb923c",
}

func getDimColor(dim string) string {
	if c, ok := dimensionColors[dim]; ok {
		return c
	}
	return "#8888a0"
}

// LossChart generates an SVG line chart for training loss data.
func LossChart(points []lab.LossPoint) template.HTML {
	if len(points) == 0 {
		return template.HTML(`<div class="empty">No training loss data</div>`)
	}

	// Separate val and train loss.
	var valPts, trainPts []lab.LossPoint
	for _, p := range points {
		switch p.LossType {
		case "val":
			valPts = append(valPts, p)
		case "train":
			trainPts = append(trainPts, p)
		}
	}

	// Find data bounds.
	allPts := append(valPts, trainPts...)
	xMin, xMax := float64(allPts[0].Iteration), float64(allPts[0].Iteration)
	yMin, yMax := allPts[0].Loss, allPts[0].Loss
	for _, p := range allPts {
		x := float64(p.Iteration)
		xMin = min(xMin, x)
		xMax = max(xMax, x)
		yMin = min(yMin, p.Loss)
		yMax = max(yMax, p.Loss)
	}

	// Add padding to Y range.
	yRange := yMax - yMin
	yRange = max(yRange, 0.1)
	yMin = yMin - yRange*0.1
	yMax = yMax + yRange*0.1
	if xMax == xMin {
		xMax = xMin + 1
	}

	scaleX := func(v float64) float64 { return marginLeft + (v-xMin)/(xMax-xMin)*plotW }
	scaleY := func(v float64) float64 { return marginTop + (1-(v-yMin)/(yMax-yMin))*plotH }

	var sb strings.Builder
	sb.WriteString(fmt.Sprintf(`<svg viewBox="0 0 %d %d" xmlns="http://www.w3.org/2000/svg" style="width:100%%;max-width:%dpx">`, chartW, chartH, chartW))
	sb.WriteString(fmt.Sprintf(`<rect width="%d" height="%d" fill="#12121a" rx="8"/>`, chartW, chartH))

	// Grid lines.
	nGridY := 5
	for i := 0; i <= nGridY; i++ {
		y := marginTop + float64(i)*plotH/float64(nGridY)
		val := yMax - float64(i)*(yMax-yMin)/float64(nGridY)
		sb.WriteString(fmt.Sprintf(`<line x1="%d" y1="%.0f" x2="%d" y2="%.0f" stroke="#1e1e2e" stroke-width="1"/>`, marginLeft, y, chartW-marginRight, y))
		sb.WriteString(fmt.Sprintf(`<text x="%d" y="%.0f" fill="#8888a0" font-size="10" text-anchor="end" dominant-baseline="middle">%.2f</text>`, marginLeft-6, y, val))
	}

	// X axis labels.
	nGridX := max(min(6, int(xMax-xMin)), 1)
	for i := 0; i <= nGridX; i++ {
		xVal := xMin + float64(i)*(xMax-xMin)/float64(nGridX)
		x := scaleX(xVal)
		sb.WriteString(fmt.Sprintf(`<line x1="%.0f" y1="%d" x2="%.0f" y2="%d" stroke="#1e1e2e" stroke-width="1"/>`, x, marginTop, x, marginTop+plotH))
		sb.WriteString(fmt.Sprintf(`<text x="%.0f" y="%d" fill="#8888a0" font-size="10" text-anchor="middle">%d</text>`, x, chartH-8, int(xVal)))
	}

	// Draw train loss line (dimmed).
	if len(trainPts) > 1 {
		slices.SortFunc(trainPts, func(a, b lab.LossPoint) int { return cmp.Compare(a.Iteration, b.Iteration) })
		sb.WriteString(`<polyline points="`)
		for i, p := range trainPts {
			if i > 0 {
				sb.WriteString(" ")
			}
			sb.WriteString(fmt.Sprintf("%.1f,%.1f", scaleX(float64(p.Iteration)), scaleY(p.Loss)))
		}
		sb.WriteString(`" fill="none" stroke="#5a4fd0" stroke-width="1.5" opacity="0.5"/>`)
		for _, p := range trainPts {
			sb.WriteString(fmt.Sprintf(`<circle cx="%.1f" cy="%.1f" r="2.5" fill="#5a4fd0" opacity="0.5"/>`, scaleX(float64(p.Iteration)), scaleY(p.Loss)))
		}
	}

	// Draw val loss line (accent).
	if len(valPts) > 1 {
		slices.SortFunc(valPts, func(a, b lab.LossPoint) int { return cmp.Compare(a.Iteration, b.Iteration) })
		sb.WriteString(`<polyline points="`)
		for i, p := range valPts {
			if i > 0 {
				sb.WriteString(" ")
			}
			sb.WriteString(fmt.Sprintf("%.1f,%.1f", scaleX(float64(p.Iteration)), scaleY(p.Loss)))
		}
		sb.WriteString(`" fill="none" stroke="#7c6ff0" stroke-width="2.5"/>`)
		for _, p := range valPts {
			sb.WriteString(fmt.Sprintf(`<circle cx="%.1f" cy="%.1f" r="3.5" fill="#7c6ff0"/>`, scaleX(float64(p.Iteration)), scaleY(p.Loss)))
			sb.WriteString(fmt.Sprintf(`<text x="%.1f" y="%.1f" fill="#e0e0e8" font-size="9" text-anchor="middle">%.2f</text>`, scaleX(float64(p.Iteration)), scaleY(p.Loss)-8, p.Loss))
		}
	}

	// Legend.
	sb.WriteString(fmt.Sprintf(`<circle cx="%d" cy="12" r="4" fill="#7c6ff0"/>`, marginLeft+10))
	sb.WriteString(fmt.Sprintf(`<text x="%d" y="12" fill="#8888a0" font-size="10" dominant-baseline="middle">Val Loss</text>`, marginLeft+18))
	sb.WriteString(fmt.Sprintf(`<circle cx="%d" cy="12" r="4" fill="#5a4fd0" opacity="0.5"/>`, marginLeft+85))
	sb.WriteString(fmt.Sprintf(`<text x="%d" y="12" fill="#8888a0" font-size="10" dominant-baseline="middle">Train Loss</text>`, marginLeft+93))

	sb.WriteString("</svg>")
	return template.HTML(sb.String())
}

// ContentChart generates an SVG multi-line chart for content scores by dimension.
func ContentChart(points []lab.ContentPoint) template.HTML {
	if len(points) == 0 {
		return template.HTML(`<div class="empty">No content score data</div>`)
	}

	// Group by dimension, sorted by iteration. Only use kernel points for cleaner view.
	dims := map[string][]lab.ContentPoint{}
	for _, p := range points {
		if !p.HasKernel && !strings.Contains(p.Label, "naked") {
			continue
		}
		dims[p.Dimension] = append(dims[p.Dimension], p)
	}
	// If no kernel points, use all.
	if len(dims) == 0 {
		for _, p := range points {
			dims[p.Dimension] = append(dims[p.Dimension], p)
		}
	}

	// Find unique iterations for X axis.
	iterSet := map[int]bool{}
	for _, pts := range dims {
		for _, p := range pts {
			iterSet[p.Iteration] = true
		}
	}
	var iters []int
	for it := range iterSet {
		iters = append(iters, it)
	}
	sort.Ints(iters)

	if len(iters) == 0 {
		return template.HTML(`<div class="empty">No iteration data</div>`)
	}

	xMin, xMax := float64(iters[0]), float64(iters[len(iters)-1])
	if xMax == xMin {
		xMax = xMin + 1
	}
	yMin, yMax := 0.0, 10.0 // Content scores are 0-10.

	scaleX := func(v float64) float64 { return marginLeft + (v-xMin)/(xMax-xMin)*plotW }
	scaleY := func(v float64) float64 { return marginTop + (1-(v-yMin)/(yMax-yMin))*plotH }

	var sb strings.Builder
	sb.WriteString(fmt.Sprintf(`<svg viewBox="0 0 %d %d" xmlns="http://www.w3.org/2000/svg" style="width:100%%;max-width:%dpx">`, chartW, chartH, chartW))
	sb.WriteString(fmt.Sprintf(`<rect width="%d" height="%d" fill="#12121a" rx="8"/>`, chartW, chartH))

	// Grid.
	for i := 0; i <= 5; i++ {
		y := marginTop + float64(i)*plotH/5
		val := yMax - float64(i)*(yMax-yMin)/5
		sb.WriteString(fmt.Sprintf(`<line x1="%d" y1="%.0f" x2="%d" y2="%.0f" stroke="#1e1e2e"/>`, marginLeft, y, chartW-marginRight, y))
		sb.WriteString(fmt.Sprintf(`<text x="%d" y="%.0f" fill="#8888a0" font-size="10" text-anchor="end" dominant-baseline="middle">%.0f</text>`, marginLeft-6, y, val))
	}

	// X axis.
	for _, it := range iters {
		x := scaleX(float64(it))
		sb.WriteString(fmt.Sprintf(`<line x1="%.0f" y1="%d" x2="%.0f" y2="%d" stroke="#1e1e2e"/>`, x, marginTop, x, marginTop+plotH))
		sb.WriteString(fmt.Sprintf(`<text x="%.0f" y="%d" fill="#8888a0" font-size="9" text-anchor="middle">@%d</text>`, x, chartH-8, it))
	}

	// Draw a line per dimension.
	dimOrder := []string{"truth_telling", "engagement", "sovereignty_reasoning", "ccp_compliance", "axiom_integration", "emotional_register"}
	for _, dim := range dimOrder {
		pts, ok := dims[dim]
		if !ok || len(pts) < 2 {
			continue
		}
		slices.SortFunc(pts, func(a, b lab.ContentPoint) int { return cmp.Compare(a.Iteration, b.Iteration) })

		// Average duplicate iterations.
		averaged := averageByIteration(pts)
		color := getDimColor(dim)

		sb.WriteString(`<polyline points="`)
		for i, p := range averaged {
			if i > 0 {
				sb.WriteString(" ")
			}
			sb.WriteString(fmt.Sprintf("%.1f,%.1f", scaleX(float64(p.Iteration)), scaleY(p.Score)))
		}
		sb.WriteString(fmt.Sprintf(`" fill="none" stroke="%s" stroke-width="2" opacity="0.8"/>`, color))

		for _, p := range averaged {
			cx := scaleX(float64(p.Iteration))
			cy := scaleY(p.Score)
			sb.WriteString(fmt.Sprintf(`<circle cx="%.1f" cy="%.1f" r="3" fill="%s"/>`, cx, cy, color))
			sb.WriteString(fmt.Sprintf(`<text x="%.1f" y="%.1f" fill="%s" font-size="8" text-anchor="middle" font-weight="600">%.1f</text>`, cx, cy-6, color, p.Score))
		}
	}

	// Legend at top.
	lx := marginLeft + 5
	for _, dim := range dimOrder {
		if _, ok := dims[dim]; !ok {
			continue
		}
		color := getDimColor(dim)
		label := strings.ReplaceAll(dim, "_", " ")
		sb.WriteString(fmt.Sprintf(`<circle cx="%d" cy="12" r="4" fill="%s"/>`, lx, color))
		sb.WriteString(fmt.Sprintf(`<text x="%d" y="12" fill="#8888a0" font-size="9" dominant-baseline="middle">%s</text>`, lx+7, label))
		lx += len(label)*6 + 20
	}

	sb.WriteString("</svg>")
	return template.HTML(sb.String())
}

// CapabilityChart generates an SVG horizontal bar chart for capability scores.
func CapabilityChart(points []lab.CapabilityPoint) template.HTML {
	if len(points) == 0 {
		return template.HTML(`<div class="empty">No capability score data</div>`)
	}

	// Get overall scores only, sorted by iteration.
	var overall []lab.CapabilityPoint
	for _, p := range points {
		if p.Category == "overall" {
			overall = append(overall, p)
		}
	}
	slices.SortFunc(overall, func(a, b lab.CapabilityPoint) int { return cmp.Compare(a.Iteration, b.Iteration) })

	if len(overall) == 0 {
		return template.HTML(`<div class="empty">No overall capability data</div>`)
	}

	barH := 32
	gap := 8
	labelW := 120
	svgH := len(overall)*(barH+gap) + 40
	barMaxW := chartW - labelW - 80

	var sb strings.Builder
	sb.WriteString(fmt.Sprintf(`<svg viewBox="0 0 %d %d" xmlns="http://www.w3.org/2000/svg" style="width:100%%;max-width:%dpx">`, chartW, svgH, chartW))
	sb.WriteString(fmt.Sprintf(`<rect width="%d" height="%d" fill="#12121a" rx="8"/>`, chartW, svgH))

	for i, p := range overall {
		y := 20 + i*(barH+gap)
		barW := p.Accuracy / 100.0 * float64(barMaxW)

		// Color based on accuracy.
		color := "#f87171" // red
		if p.Accuracy >= 80 {
			color = "#4ade80" // green
		} else if p.Accuracy >= 65 {
			color = "#fbbf24" // yellow
		}

		// Label.
		label := shortLabel(p.Label)
		sb.WriteString(fmt.Sprintf(`<text x="10" y="%d" fill="#e0e0e8" font-size="11" dominant-baseline="middle">%s</text>`, y+barH/2, label))

		// Bar background.
		sb.WriteString(fmt.Sprintf(`<rect x="%d" y="%d" width="%d" height="%d" fill="#1e1e2e" rx="4"/>`, labelW, y, barMaxW, barH))

		// Bar fill.
		sb.WriteString(fmt.Sprintf(`<rect x="%d" y="%d" width="%.0f" height="%d" fill="%s" rx="4" opacity="0.85"/>`, labelW, y, barW, barH, color))

		// Score label.
		sb.WriteString(fmt.Sprintf(`<text x="%.0f" y="%d" fill="#e0e0e8" font-size="12" font-weight="600" dominant-baseline="middle">%.1f%%</text>`, float64(labelW)+barW+8, y+barH/2, p.Accuracy))

		// Correct/total.
		sb.WriteString(fmt.Sprintf(`<text x="%d" y="%d" fill="#8888a0" font-size="9" text-anchor="end" dominant-baseline="middle">%d/%d</text>`, chartW-10, y+barH/2, p.Correct, p.Total))
	}

	sb.WriteString("</svg>")
	return template.HTML(sb.String())
}

// CategoryBreakdownWithJudge generates an HTML table showing per-category capability scores.
// When judge data is available, shows 0-10 float averages. Falls back to binary correct/total.
func CategoryBreakdownWithJudge(points []lab.CapabilityPoint, judgePoints []lab.CapabilityJudgePoint) template.HTML {
	if len(points) == 0 {
		return ""
	}

	type key struct{ cat, label string }

	// Binary data (always available).
	type binaryCell struct {
		correct, total int
		accuracy       float64
	}
	binaryCells := map[key]binaryCell{}
	catSet := map[string]bool{}
	var labels []string
	labelSeen := map[string]bool{}

	for _, p := range points {
		if p.Category == "overall" {
			continue
		}
		k := key{p.Category, p.Label}
		c := binaryCells[k]
		c.correct += p.Correct
		c.total += p.Total
		binaryCells[k] = c
		catSet[p.Category] = true
		if !labelSeen[p.Label] {
			labelSeen[p.Label] = true
			labels = append(labels, p.Label)
		}
	}
	for k, c := range binaryCells {
		if c.total > 0 {
			c.accuracy = float64(c.correct) / float64(c.total) * 100
		}
		binaryCells[k] = c
	}

	// Judge data (may be empty -- falls back to binary).
	type judgeCell struct {
		sum   float64
		count int
	}
	judgeCells := map[key]judgeCell{}
	hasJudge := len(judgePoints) > 0

	for _, jp := range judgePoints {
		k := key{jp.Category, jp.Label}
		c := judgeCells[k]
		c.sum += jp.Avg
		c.count++
		judgeCells[k] = c
	}

	var cats []string
	for c := range catSet {
		cats = append(cats, c)
	}
	sort.Strings(cats)

	if len(cats) == 0 || len(labels) == 0 {
		return ""
	}

	var sb strings.Builder
	sb.WriteString(`<table><thead><tr><th>Run</th>`)
	for _, cat := range cats {
		icon := catIcon(cat)
		sb.WriteString(fmt.Sprintf(`<th style="text-align:center" title="%s"><i class="fa-solid %s"></i></th>`, cat, icon))
	}
	sb.WriteString(`</tr></thead><tbody>`)

	for _, l := range labels {
		short := shortLabel(l)
		sb.WriteString(fmt.Sprintf(`<tr><td><code>%s</code></td>`, short))
		for _, cat := range cats {
			jc, jok := judgeCells[key{cat, l}]
			bc, bok := binaryCells[key{cat, l}]

			if hasJudge && jok && jc.count > 0 {
				// Show judge score (0-10 average).
				avg := jc.sum / float64(jc.count)
				color := "var(--red)"
				if avg >= 7.0 {
					color = "var(--green)"
				} else if avg >= 4.0 {
					color = "var(--yellow)"
				}
				passInfo := ""
				if bok {
					passInfo = fmt.Sprintf(" (%d/%d pass)", bc.correct, bc.total)
				}
				sb.WriteString(fmt.Sprintf(`<td style="color:%s;text-align:center;font-weight:700" title="%s: %.2f/10%s">%.1f</td>`,
					color, cat, avg, passInfo, avg))
			} else if bok {
				// Fall back to binary.
				icon := "fa-circle-xmark"
				color := "var(--red)"
				if bc.accuracy >= 80 {
					icon = "fa-circle-check"
					color = "var(--green)"
				} else if bc.accuracy >= 50 {
					icon = "fa-triangle-exclamation"
					color = "var(--yellow)"
				}
				sb.WriteString(fmt.Sprintf(`<td style="color:%s;text-align:center" title="%s: %d/%d (%.0f%%)"><i class="fa-solid %s"></i> %d/%d</td>`,
					color, cat, bc.correct, bc.total, bc.accuracy, icon, bc.correct, bc.total))
			} else {
				sb.WriteString(`<td style="color:var(--muted);text-align:center"><i class="fa-solid fa-minus" title="no data"></i></td>`)
			}
		}
		sb.WriteString(`</tr>`)
	}
	sb.WriteString(`</tbody></table>`)
	return template.HTML(sb.String())
}

// catIcon maps capability category names to Font Awesome icons.
func catIcon(cat string) string {
	icons := map[string]string{
		"algebra":     "fa-square-root-variable",
		"analogy":     "fa-right-left",
		"arithmetic":  "fa-calculator",
		"causal":      "fa-diagram-project",
		"code":        "fa-code",
		"deduction":   "fa-magnifying-glass",
		"geometry":    "fa-shapes",
		"pattern":     "fa-grip",
		"percentages": "fa-percent",
|
||||
"probability": "fa-dice",
|
||||
"puzzles": "fa-puzzle-piece",
|
||||
"sequences": "fa-list-ol",
|
||||
"sets": "fa-circle-nodes",
|
||||
"spatial": "fa-cube",
|
||||
"temporal": "fa-clock",
|
||||
"word": "fa-font",
|
||||
}
|
||||
if ic, ok := icons[cat]; ok {
|
||||
return ic
|
||||
}
|
||||
return "fa-question"
|
||||
}
|
||||
|
||||
// shortLabel compresses run labels for table display.
|
||||
// "base-gemma-3-27b" -> "base-27b", "G12 @0000100" -> "G12 @100"
|
||||
func shortLabel(s string) string {
|
||||
// Strip "gemma-3-" prefix pattern from compound labels
|
||||
s = strings.ReplaceAll(s, "gemma-3-", "")
|
||||
// Collapse leading zeros in iteration numbers: @0000100 -> @100
|
||||
if idx := strings.Index(s, "@"); idx >= 0 {
|
||||
prefix := s[:idx+1]
|
||||
num := strings.TrimLeft(s[idx+1:], "0")
|
||||
if num == "" {
|
||||
num = "0"
|
||||
}
|
||||
s = prefix + num
|
||||
}
|
||||
if len(s) > 18 {
|
||||
s = s[:18]
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
func averageByIteration(pts []lab.ContentPoint) []lab.ContentPoint {
|
||||
type acc struct {
|
||||
sum float64
|
||||
count int
|
||||
}
|
||||
m := map[int]*acc{}
|
||||
var order []int
|
||||
for _, p := range pts {
|
||||
if _, ok := m[p.Iteration]; !ok {
|
||||
m[p.Iteration] = &acc{}
|
||||
order = append(order, p.Iteration)
|
||||
}
|
||||
m[p.Iteration].sum += p.Score
|
||||
m[p.Iteration].count++
|
||||
}
|
||||
sort.Ints(order)
|
||||
var result []lab.ContentPoint
|
||||
for _, it := range order {
|
||||
a := m[it]
|
||||
result = append(result, lab.ContentPoint{
|
||||
Iteration: it,
|
||||
Score: math.Round(a.sum/float64(a.count)*10) / 10,
|
||||
})
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
// DomainChart renders a horizontal bar chart of domain counts (top 25).
|
||||
func DomainChart(stats []lab.DomainStat) template.HTML {
|
||||
if len(stats) == 0 {
|
||||
return ""
|
||||
}
|
||||
limit := min(25, len(stats))
|
||||
items := stats[:limit]
|
||||
|
||||
maxCount := 0
|
||||
for _, d := range items {
|
||||
maxCount = max(maxCount, d.Count)
|
||||
}
|
||||
maxCount = max(maxCount, 1)
|
||||
|
||||
barH := 18
|
||||
gap := 4
|
||||
labelW := 180
|
||||
barAreaW := 540
|
||||
h := len(items)*(barH+gap) + 10
|
||||
w := labelW + barAreaW + 60
|
||||
|
||||
var b strings.Builder
|
||||
fmt.Fprintf(&b, `<svg width="%d" height="%d" xmlns="http://www.w3.org/2000/svg" style="font-family:-apple-system,sans-serif">`, w, h)
|
||||
fmt.Fprintf(&b, `<rect width="%d" height="%d" fill="var(--surface)" rx="4"/>`, w, h)
|
||||
|
||||
for i, d := range items {
|
||||
y := i*(barH+gap) + 5
|
||||
barW := max(int(float64(d.Count)/float64(maxCount)*float64(barAreaW)), 2)
|
||||
fmt.Fprintf(&b, `<text x="%d" y="%d" fill="var(--muted)" font-size="11" text-anchor="end" dominant-baseline="middle">%s</text>`,
|
||||
labelW-8, y+barH/2, template.HTMLEscapeString(d.Domain))
|
||||
fmt.Fprintf(&b, `<rect x="%d" y="%d" width="%d" height="%d" fill="var(--accent)" rx="2" opacity="0.8"/>`,
|
||||
labelW, y, barW, barH)
|
||||
fmt.Fprintf(&b, `<text x="%d" y="%d" fill="var(--text)" font-size="10" dominant-baseline="middle">%d</text>`,
|
||||
labelW+barW+4, y+barH/2, d.Count)
|
||||
}
|
||||
|
||||
b.WriteString(`</svg>`)
|
||||
return template.HTML(b.String())
|
||||
}
|
||||
|
||||
// VoiceChart renders a vertical bar chart of voice distribution.
|
||||
func VoiceChart(stats []lab.VoiceStat) template.HTML {
|
||||
if len(stats) == 0 {
|
||||
return ""
|
||||
}
|
||||
|
||||
maxCount := 0
|
||||
for _, v := range stats {
|
||||
maxCount = max(maxCount, v.Count)
|
||||
}
|
||||
maxCount = max(maxCount, 1)
|
||||
|
||||
barW := 50
|
||||
gap := 8
|
||||
chartHeight := 200
|
||||
labelH := 60
|
||||
topPad := 20
|
||||
w := len(stats)*(barW+gap) + gap + 10
|
||||
h := chartHeight + labelH + topPad
|
||||
|
||||
var b strings.Builder
|
||||
fmt.Fprintf(&b, `<svg width="%d" height="%d" xmlns="http://www.w3.org/2000/svg" style="font-family:-apple-system,sans-serif">`, w, h)
|
||||
fmt.Fprintf(&b, `<rect width="%d" height="%d" fill="var(--surface)" rx="4"/>`, w, h)
|
||||
|
||||
for i, v := range stats {
|
||||
x := i*(barW+gap) + gap + 5
|
||||
barH := max(int(float64(v.Count)/float64(maxCount)*float64(chartHeight)), 2)
|
||||
y := topPad + chartHeight - barH
|
||||
|
||||
fmt.Fprintf(&b, `<rect x="%d" y="%d" width="%d" height="%d" fill="var(--green)" rx="2" opacity="0.7"/>`,
|
||||
x, y, barW, barH)
|
||||
fmt.Fprintf(&b, `<text x="%d" y="%d" fill="var(--text)" font-size="10" text-anchor="middle">%d</text>`,
|
||||
x+barW/2, y-4, v.Count)
|
||||
fmt.Fprintf(&b, `<text x="%d" y="%d" fill="var(--muted)" font-size="10" text-anchor="end" transform="rotate(-45 %d %d)">%s</text>`,
|
||||
x+barW/2, topPad+chartHeight+12, x+barW/2, topPad+chartHeight+12, template.HTMLEscapeString(v.Voice))
|
||||
}
|
||||
|
||||
b.WriteString(`</svg>`)
|
||||
return template.HTML(b.String())
|
||||
}
|
||||
|
|
@ -1,56 +0,0 @@
{{template "head" "Agents"}}
{{template "nav" "agents"}}

<h2 class="section-title">Agent Metrics</h2>

{{if .Agents.Available}}
<div class="grid">
  <div class="card">
    <h3>Registered Agents</h3>
    <div class="value">{{.Agents.RegisteredTotal}}</div>
    <div class="sub">
      {{if .Agents.ExporterUp}}<span class="badge badge-ok">exporter up</span>
      {{else}}<span class="badge badge-err">exporter down</span>{{end}}
    </div>
  </div>

  <div class="card">
    <h3>Queue Pending</h3>
    <div class="value">{{.Agents.QueuePending}}</div>
    <div class="sub">Tasks waiting for agents</div>
  </div>

  <div class="card">
    <h3>Tasks Completed</h3>
    <div class="value" style="color:var(--green)">{{.Agents.TasksCompleted}}</div>
    <div class="sub">Total successful</div>
  </div>

  <div class="card">
    <h3>Tasks Failed</h3>
    <div class="value" style="color:var(--red)">{{.Agents.TasksFailed}}</div>
    <div class="sub">Total failures</div>
  </div>
</div>

<div class="grid">
  <div class="card">
    <h3>Capabilities</h3>
    <div class="value">{{.Agents.Capabilities}}</div>
    <div class="sub">Registered capabilities</div>
  </div>

  <div class="card">
    <h3>Heartbeat Age</h3>
    <div class="value">{{pct .Agents.HeartbeatAge}}s</div>
    <div class="sub">Time since last heartbeat</div>
  </div>
</div>
{{else}}
<div class="card empty">
  <p>Agent metrics not available. The Prometheus agent exporter may be offline.</p>
  <p style="margin-top:.5rem;font-size:.8125rem;color:var(--muted)">Expected at: <code>localhost:9402/metrics</code></p>
</div>
{{end}}

{{template "footer"}}
@ -1,115 +0,0 @@
{{template "head" "Dashboard"}}
{{template "nav" "dashboard"}}

<style>
.stat-row{display:flex;align-items:center;gap:.5rem;margin-top:.5rem}
.stat-label{font-size:.6875rem;color:var(--muted);text-transform:uppercase;letter-spacing:.05em;width:2.5rem;flex-shrink:0}
.stat-row .progress-bar{flex:1;margin:0;height:6px}
.stat-val{font-size:.75rem;color:var(--text);white-space:nowrap;min-width:4.5rem;text-align:right}
.stat-row .fill-warn{background:var(--yellow)}
.stat-row .fill-crit{background:var(--red)}
.machine-card{min-width:280px}
.machine-card .sub{margin-top:.5rem}
</style>

<div class="grid">
  {{range .Machines}}
  <div class="card machine-card">
    <h3>{{.Name}}</h3>
    <div class="value {{statusClass (lower (printf "%s" .Status))}}">
      <span class="status-dot"></span>
      <span class="label">{{.Status}}</span>
    </div>
    {{if eq (printf "%s" .Status) "ok"}}
    <div class="stat-row">
      <span class="stat-label">CPU</span>
      <div class="progress-bar"><div class="fill" style="width:{{cpuPct .Load1 .CPUCores}}%"></div></div>
      <span class="stat-val">{{pct .Load1}}/{{.CPUCores}}</span>
    </div>
    <div class="stat-row">
      <span class="stat-label">RAM</span>
      <div class="progress-bar"><div class="fill{{if gt .MemUsedPct 90.0}} fill-warn{{end}}" style="width:{{pct .MemUsedPct}}%"></div></div>
      <span class="stat-val">{{printf "%.0f" .MemUsedGB}}/{{fmtGB .MemTotalGB}}</span>
    </div>
    <div class="stat-row">
      <span class="stat-label">Disk</span>
      <div class="progress-bar"><div class="fill{{if gt .DiskUsedPct 85.0}} fill-warn{{end}}{{if gt .DiskUsedPct 95.0}} fill-crit{{end}}" style="width:{{pct .DiskUsedPct}}%"></div></div>
      <span class="stat-val">{{fmtGB .DiskUsedGB}}/{{fmtGB .DiskTotalGB}}</span>
    </div>
    {{if .GPUName}}
    <div class="stat-row">
      <span class="stat-label">GPU</span>
      {{if gt .GPUVRAMTotal 0.0}}
      <div class="progress-bar"><div class="fill{{if gt .GPUVRAMPct 90.0}} fill-warn{{end}}" style="width:{{pct .GPUVRAMPct}}%"></div></div>
      <span class="stat-val">{{printf "%.1f" .GPUVRAMUsed}}/{{printf "%.0f" .GPUVRAMTotal}}G</span>
      {{else}}
      <span class="stat-val" style="color:var(--muted);font-size:.6875rem">{{.GPUName}}</span>
      {{end}}
    </div>
    {{end}}
    <div class="sub">{{.Uptime}}{{if gt .GPUTemp 0}} · GPU {{.GPUTemp}}°C{{end}}</div>
    {{end}}
  </div>
  {{else}}
  <div class="card">
    <h3>Machines</h3>
    <div class="empty">Waiting for data...</div>
  </div>
  {{end}}

  <div class="card">
    <h3>LEK Models</h3>
    <div class="value">{{len .Models}}</div>
    <div class="sub"><a href="/models">View on HuggingFace</a></div>
  </div>

  <div class="card">
    <h3>Benchmark Runs</h3>
    {{$b := .Benchmarks}}
    <div class="value">{{benchmarkCount $b}}</div>
    <div class="sub">{{dataPoints $b}} data points · <a href="/runs">View runs</a></div>
  </div>

  <div class="card">
    <h3>Gold Generation</h3>
    {{if .Training.GoldAvailable}}
    <div class="value">{{pct .Training.GoldPercent}}%</div>
    <div class="progress-bar"><div class="fill" style="width:{{pct .Training.GoldPercent}}%"></div></div>
    <div class="sub">{{.Training.GoldGenerated}} / {{.Training.GoldTarget}}</div>
    {{else}}
    <div class="value status-err"><span class="status-dot"></span>Unavailable</div>
    <div class="sub">M3 Ultra unreachable</div>
    {{end}}
  </div>
</div>

{{if .Commits}}
<h2 class="section-title">Recent Activity</h2>
<div class="card">
  <table>
    <thead><tr><th>Repo</th><th>Message</th><th>Author</th><th>Time</th></tr></thead>
    <tbody>
      {{range .Commits}}
      <tr>
        <td><code>{{.Repo}}</code></td>
        <td>{{shortMsg .Message}}</td>
        <td>{{.Author}}</td>
        <td>{{timeAgo .Timestamp}}</td>
      </tr>
      {{end}}
    </tbody>
  </table>
</div>
{{end}}

{{if .Errors}}
<div style="margin-top:1rem">
  {{range $k, $v := .Errors}}
  <div style="display:inline-block;margin-right:.5rem;font-size:.75rem;color:var(--muted)">
    <span class="badge badge-err">{{$k}}</span> {{$v}}
  </div>
  {{end}}
</div>
{{end}}

{{template "footer"}}
@ -1,392 +0,0 @@
{{template "head" "Dataset"}}
{{template "nav" "dataset"}}

<style>
.ds-layout{display:flex;gap:1.5rem;min-height:calc(100vh - 120px)}
.ds-sidebar{width:200px;flex-shrink:0}
.ds-sidebar .sidebar-title{font-size:.6875rem;font-weight:600;color:var(--muted);text-transform:uppercase;letter-spacing:.05em;margin-bottom:.75rem;padding:0 .75rem}
.ds-sidebar a{display:flex;align-items:center;gap:.5rem;padding:.5rem .75rem;border-radius:6px;color:var(--muted);font-size:.8125rem;transition:all .2s;text-decoration:none;margin-bottom:2px}
.ds-sidebar a:hover{color:var(--text);background:var(--bg)}
.ds-sidebar a.active{color:var(--text);background:var(--bg);border-left:3px solid var(--accent)}
.ds-sidebar .count{font-size:.6875rem;color:var(--muted);font-family:"SF Mono",Consolas,monospace;margin-left:auto}
.ds-main{flex:1;min-width:0}
.stat-grid{display:grid;grid-template-columns:repeat(auto-fit,minmax(180px,1fr));gap:1rem;margin-bottom:1.5rem}
.stat-card{padding:1rem;border:1px solid var(--border);border-radius:8px;background:var(--surface)}
.stat-card h3{font-size:.6875rem;font-weight:600;color:var(--muted);text-transform:uppercase;letter-spacing:.05em;margin-bottom:.375rem}
.stat-card .value{font-size:1.5rem;font-weight:700;line-height:1.2}
.stat-card .sub{font-size:.75rem;color:var(--muted);margin-top:.25rem}
.ds-table-section{margin-bottom:2rem}
.ds-table-section h3{font-size:.875rem;font-weight:600;color:var(--muted);text-transform:uppercase;letter-spacing:.05em;margin-bottom:.625rem}
@media(max-width:768px){.ds-layout{flex-direction:column}.ds-sidebar{width:100%;display:flex;gap:.5rem;flex-wrap:wrap}.ds-sidebar .sidebar-title{width:100%}.ds-sidebar a{flex:0 0 auto}}
</style>

<div class="ds-layout">

  {{/* -- Sidebar -- */}}
  <div class="ds-sidebar">
    <div class="sidebar-title">Dataset</div>
    <a href="/dataset"{{if not .SelectedView}} class="active"{{end}}>Overview</a>
    <a href="/dataset?view=golden"{{if eq .SelectedView "golden"}} class="active"{{end}}>
      Golden Set
      {{if .GoldenSet.Available}}<span class="count">{{fmtInt .GoldenSet.TotalExamples}}</span>{{end}}
    </a>
    <a href="/dataset?view=seeds"{{if eq .SelectedView "seeds"}} class="active"{{end}}>
      Seeds
      {{if .Dataset.Available}}<span class="count">{{fmtInt (tableRows .Dataset.Tables "seeds")}}</span>{{end}}
    </a>
    <a href="/dataset?view=domains"{{if eq .SelectedView "domains"}} class="active"{{end}}>Domains</a>
    <a href="/dataset?view=voices"{{if eq .SelectedView "voices"}} class="active"{{end}}>Voices</a>
    <a href="/dataset?view=expansion"{{if eq .SelectedView "expansion"}} class="active"{{end}}>
      Expansion
      {{if .Dataset.Available}}<span class="count">{{fmtInt (tableRows .Dataset.Tables "expansion_prompts")}}</span>{{end}}
    </a>
    <a href="/dataset?view=export"{{if eq .SelectedView "export"}} class="active"{{end}}>Export</a>
  </div>

  {{/* -- Main content -- */}}
  <div class="ds-main">

    {{if not .SelectedView}}
    {{/* -- Overview -- */}}
    <h2 class="section-title">LEM Dataset</h2>

    <div class="stat-grid">
      {{if .GoldenSet.Available}}
      <a href="/dataset?view=golden" style="text-decoration:none;color:inherit">
        <div class="stat-card">
          <h3>Golden Set</h3>
          <div class="value">{{fmtInt .GoldenSet.TotalExamples}}</div>
          <div class="progress-bar"><div class="fill" style="width:{{pct .GoldenSet.CompletionPct}}%;background:var(--green)"></div></div>
          <div class="sub">{{pct .GoldenSet.CompletionPct}}% of {{fmtInt .GoldenSet.TargetTotal}} target</div>
        </div>
      </a>
      {{end}}

      {{if .Dataset.Available}}
      <a href="/dataset?view=seeds" style="text-decoration:none;color:inherit">
        <div class="stat-card">
          <h3>Seeds</h3>
          <div class="value">{{fmtInt (tableRows .Dataset.Tables "seeds")}}</div>
          <div class="sub">Source prompts for generation</div>
        </div>
      </a>

      <a href="/dataset?view=expansion" style="text-decoration:none;color:inherit">
        <div class="stat-card">
          <h3>Expansion Prompts</h3>
          <div class="value">{{fmtInt (tableRows .Dataset.Tables "expansion_prompts")}}</div>
          <div class="sub">Ready for model expansion</div>
        </div>
      </a>

      <div class="stat-card">
        <h3>Training Examples</h3>
        <div class="value">{{fmtInt (tableRows .Dataset.Tables "training_examples")}}</div>
        <div class="sub">Chat-format JSONL splits</div>
      </div>
      {{end}}

      {{if .GoldenSet.Available}}
      <a href="/dataset?view=domains" style="text-decoration:none;color:inherit">
        <div class="stat-card">
          <h3>Domains</h3>
          <div class="value">{{.GoldenSet.Domains}}</div>
          <div class="sub">Topic categories</div>
        </div>
      </a>

      <a href="/dataset?view=voices" style="text-decoration:none;color:inherit">
        <div class="stat-card">
          <h3>Voices</h3>
          <div class="value">{{.GoldenSet.Voices}}</div>
          <div class="sub">Persona types</div>
        </div>
      </a>

      <div class="stat-card">
        <h3>Avg Generation</h3>
        <div class="value">{{pct .GoldenSet.AvgGenTime}}s</div>
        <div class="sub">{{pct .GoldenSet.AvgResponseChars}} avg chars</div>
      </div>
      {{end}}
    </div>

    {{if .Dataset.Available}}
    <h2 class="section-title">DuckDB Tables</h2>
    <div class="card">
      <table>
        <thead><tr><th>Table</th><th style="text-align:right">Rows</th><th style="width:50%">Size</th></tr></thead>
        <tbody>
          {{$total := totalRows .Dataset.Tables}}
          {{range .Dataset.Tables}}
          <tr>
            <td><code>{{.Name}}</code></td>
            <td style="text-align:right">{{fmtInt .Rows}}</td>
            <td>
              <div class="progress-bar" style="height:6px"><div class="fill" style="width:{{pct (pctOf .Rows $total)}}%"></div></div>
            </td>
          </tr>
          {{end}}
        </tbody>
      </table>
    </div>
    {{end}}

    {{else if eq .SelectedView "golden"}}
    {{/* -- Golden Set detail -- */}}
    <h2 class="section-title">Golden Set</h2>

    {{if not .GoldenSet.Available}}
    <div class="card empty"><p>No golden set data available.</p></div>
    {{else}}
    <div class="stat-grid">
      <div class="stat-card">
        <h3>Total Examples</h3>
        <div class="value">{{fmtInt .GoldenSet.TotalExamples}}</div>
        <div class="progress-bar"><div class="fill" style="width:{{pct .GoldenSet.CompletionPct}}%;background:var(--green)"></div></div>
        <div class="sub">{{pct .GoldenSet.CompletionPct}}% of {{fmtInt .GoldenSet.TargetTotal}}</div>
      </div>
      <div class="stat-card">
        <h3>Domains</h3>
        <div class="value">{{.GoldenSet.Domains}}</div>
        <div class="sub">Unique topic domains</div>
      </div>
      <div class="stat-card">
        <h3>Voices</h3>
        <div class="value">{{.GoldenSet.Voices}}</div>
        <div class="sub">Persona voice types</div>
      </div>
      <div class="stat-card">
        <h3>Avg Generation</h3>
        <div class="value">{{pct .GoldenSet.AvgGenTime}}s</div>
        <div class="sub">{{pct .GoldenSet.AvgResponseChars}} avg chars</div>
      </div>
    </div>

    {{if .GoldenSet.Workers}}
    <div class="ds-table-section">
      <h3>Workers</h3>
      <div class="card">
        <table>
          <thead><tr><th>Worker</th><th style="text-align:right">Generations</th></tr></thead>
          <tbody>
            {{range .GoldenSet.Workers}}
            <tr>
              <td><code>{{.Worker}}</code></td>
              <td style="text-align:right">{{fmtInt .Count}}</td>
            </tr>
            {{end}}
          </tbody>
        </table>
      </div>
    </div>
    {{end}}
    {{end}}

    {{else if eq .SelectedView "seeds"}}
    {{/* -- Seeds -- */}}
    <h2 class="section-title">Seeds</h2>
    <div class="stat-grid">
      {{if .Dataset.Available}}
      <div class="stat-card">
        <h3>Total Seeds</h3>
        <div class="value">{{fmtInt (tableRows .Dataset.Tables "seeds")}}</div>
        <div class="sub">Source prompts in DuckDB</div>
      </div>
      <div class="stat-card">
        <h3>Prompts Generated</h3>
        <div class="value">{{fmtInt (tableRows .Dataset.Tables "prompts")}}</div>
        <div class="sub">Processed from seeds</div>
      </div>
      {{else}}
      <div class="stat-card">
        <h3>Seeds</h3>
        <div class="value">87,338</div>
        <div class="sub">Push stats via <code>dataset_stats</code></div>
      </div>
      {{end}}
    </div>
    <div class="card">
      <p style="color:var(--muted);padding:1rem">Seed browser coming soon. Use <code>lem export --seeds</code> to explore locally.</p>
    </div>

    {{else if eq .SelectedView "domains"}}
    {{/* -- Domains -- */}}
    <h2 class="section-title">Domains</h2>

    {{if and .GoldenSet.Available .GoldenSet.DomainStats}}
    <div class="stat-grid">
      <div class="stat-card">
        <h3>Total Domains</h3>
        <div class="value">{{.GoldenSet.Domains}}</div>
        <div class="sub">Unique topic categories</div>
      </div>
      <div class="stat-card">
        <h3>Total Examples</h3>
        <div class="value">{{fmtInt .GoldenSet.TotalExamples}}</div>
        <div class="sub">Across all domains</div>
      </div>
    </div>

    <div class="ds-table-section">
      <h3>Distribution (top 25)</h3>
      <div class="card" style="overflow-x:auto;padding:1rem">
        {{domainChart .GoldenSet.DomainStats}}
      </div>
    </div>

    <div class="ds-table-section">
      <h3>All Domains</h3>
      <div class="card">
        <table>
          <thead><tr><th>Domain</th><th style="text-align:right">Count</th><th style="text-align:right">Avg Gen Time</th><th style="width:40%">Coverage</th></tr></thead>
          <tbody>
            {{range .GoldenSet.DomainStats}}
            <tr>
              <td><code>{{.Domain}}</code></td>
              <td style="text-align:right">{{.Count}}</td>
              <td style="text-align:right">{{pct .AvgGenTime}}s</td>
              <td>
                <div class="progress-bar" style="height:6px"><div class="fill" style="width:{{pct (pctOf .Count $.GoldenSet.TotalExamples)}}%"></div></div>
              </td>
            </tr>
            {{end}}
          </tbody>
        </table>
      </div>
    </div>
    {{else}}
    <div class="card empty"><p>No domain data available.</p></div>
    {{end}}

    {{else if eq .SelectedView "voices"}}
    {{/* -- Voices -- */}}
    <h2 class="section-title">Voices</h2>

    {{if and .GoldenSet.Available .GoldenSet.VoiceStats}}
    <div class="stat-grid">
      <div class="stat-card">
        <h3>Total Voices</h3>
        <div class="value">{{.GoldenSet.Voices}}</div>
        <div class="sub">Persona types</div>
      </div>
      <div class="stat-card">
        <h3>Total Examples</h3>
        <div class="value">{{fmtInt .GoldenSet.TotalExamples}}</div>
        <div class="sub">Across all voices</div>
      </div>
    </div>

    <div class="ds-table-section">
      <h3>Distribution</h3>
      <div class="card" style="overflow-x:auto;padding:1rem">
        {{voiceChart .GoldenSet.VoiceStats}}
      </div>
    </div>

    <div class="ds-table-section">
      <h3>Voice Details</h3>
      <div class="card">
        <table>
          <thead><tr><th>Voice</th><th style="text-align:right">Count</th><th style="text-align:right">Avg Chars</th><th style="text-align:right">Avg Gen Time</th></tr></thead>
          <tbody>
            {{range .GoldenSet.VoiceStats}}
            <tr>
              <td><code>{{.Voice}}</code></td>
              <td style="text-align:right">{{.Count}}</td>
              <td style="text-align:right">{{pct .AvgChars}}</td>
              <td style="text-align:right">{{pct .AvgGenTime}}s</td>
            </tr>
            {{end}}
          </tbody>
        </table>
      </div>
    </div>
    {{else}}
    <div class="card empty"><p>No voice data available.</p></div>
    {{end}}

    {{else if eq .SelectedView "expansion"}}
    {{/* -- Expansion -- */}}
    <h2 class="section-title">Expansion</h2>
    <div class="stat-grid">
      {{if .Dataset.Available}}
      <div class="stat-card">
        <h3>Expansion Prompts</h3>
        <div class="value">{{fmtInt (tableRows .Dataset.Tables "expansion_prompts")}}</div>
        <div class="sub">Deduped, ready for generation</div>
      </div>
      <div class="stat-card">
        <h3>Gemini Responses</h3>
        <div class="value">{{fmtInt (tableRows .Dataset.Tables "gemini_responses")}}</div>
        <div class="sub">Reference responses for scoring</div>
      </div>
      <div class="stat-card">
        <h3>Benchmark Questions</h3>
        <div class="value">{{fmtInt (tableRows .Dataset.Tables "benchmark_questions")}}</div>
        <div class="sub">Capability test set</div>
      </div>
      <div class="stat-card">
        <h3>Benchmark Results</h3>
        <div class="value">{{fmtInt (tableRows .Dataset.Tables "benchmark_results")}}</div>
        <div class="sub">Scored responses</div>
      </div>
      {{else}}
      <div class="stat-card">
        <h3>Expansion Prompts</h3>
        <div class="value">46,331</div>
        <div class="sub">Push stats via <code>dataset_stats</code></div>
      </div>
      {{end}}
    </div>
    <div class="card">
      <p style="color:var(--muted);padding:1rem">Expansion pipeline: use <code>lem expand</code> to generate responses from trained models, then <code>lem score</code> to filter by quality.</p>
    </div>

    {{else if eq .SelectedView "export"}}
    {{/* -- Export -- */}}
    <h2 class="section-title">Export</h2>
    <div class="stat-grid">
      {{if .Dataset.Available}}
      <div class="stat-card">
        <h3>Training Examples</h3>
        <div class="value">{{fmtInt (tableRows .Dataset.Tables "training_examples")}}</div>
        <div class="sub">Chat-format JSONL</div>
      </div>
      <div class="stat-card">
        <h3>Validations</h3>
        <div class="value">{{fmtInt (tableRows .Dataset.Tables "validations")}}</div>
        <div class="sub">Quality checks</div>
      </div>
      {{end}}
    </div>
    <div class="card">
      <p style="color:var(--muted);padding:1rem">Export formats:</p>
      <table>
        <thead><tr><th>Format</th><th>Command</th><th>Use</th></tr></thead>
        <tbody>
          <tr>
            <td><code>JSONL (MLX)</code></td>
            <td><code>lem export --format jsonl</code></td>
            <td>MLX LoRA training (train/valid/test splits)</td>
          </tr>
          <tr>
            <td><code>Parquet</code></td>
            <td><code>lem export --format parquet</code></td>
            <td>HuggingFace dataset upload</td>
          </tr>
          <tr>
            <td><code>CSV</code></td>
            <td><code>lem export --format csv</code></td>
            <td>Spreadsheet analysis</td>
          </tr>
        </tbody>
      </table>
    </div>

    {{end}}

  </div>
</div>

{{template "footer"}}
@ -1,108 +0,0 @@
{{template "head" "Golden Set"}}
{{template "nav" "golden-set"}}

<h2 class="section-title">LEM Golden Set Explorer</h2>

{{if not .GoldenSet.Available}}
<div class="card"><div class="empty">No golden set data available. Run <code>pipeline.py metrics</code> to push stats to InfluxDB.</div></div>
{{else}}

<div class="grid">
  <div class="card">
    <h3>Progress</h3>
    <div class="value">{{fmtInt .GoldenSet.TotalExamples}} / {{fmtInt .GoldenSet.TargetTotal}}</div>
    <div class="progress-bar"><div class="fill" style="width:{{pct .GoldenSet.CompletionPct}}%"></div></div>
    <div class="sub">{{pct .GoldenSet.CompletionPct}}% complete</div>
  </div>

  <div class="card">
    <h3>Domains</h3>
    <div class="value">{{.GoldenSet.Domains}}</div>
    <div class="sub">Unique topic domains</div>
  </div>

  <div class="card">
    <h3>Voices</h3>
    <div class="value">{{.GoldenSet.Voices}}</div>
    <div class="sub">Persona voice types</div>
  </div>

  <div class="card">
    <h3>Avg Generation</h3>
    <div class="value">{{pct .GoldenSet.AvgGenTime}}s</div>
    <div class="sub">{{pct .GoldenSet.AvgResponseChars}} avg chars per response</div>
  </div>
</div>

{{if .GoldenSet.Workers}}
<h2 class="section-title">Workers</h2>
<div class="card">
  <table>
    <thead><tr><th>Worker</th><th style="text-align:right">Generations</th></tr></thead>
    <tbody>
      {{range .GoldenSet.Workers}}
      <tr>
        <td><code>{{.Worker}}</code></td>
        <td style="text-align:right">{{.Count}}</td>
      </tr>
      {{end}}
    </tbody>
  </table>
</div>
{{end}}

{{if .GoldenSet.VoiceStats}}
<h2 class="section-title">Voice Distribution</h2>
<div class="card" style="overflow-x:auto;padding:1rem">
  {{voiceChart .GoldenSet.VoiceStats}}
</div>
{{end}}

{{if .GoldenSet.DomainStats}}
<h2 class="section-title">Domain Breakdown (top 25)</h2>
<div class="card" style="overflow-x:auto;padding:1rem">
  {{domainChart .GoldenSet.DomainStats}}
</div>

<h2 class="section-title">All Domains</h2>
<div class="card">
  <table>
    <thead><tr><th>Domain</th><th style="text-align:right">Count</th><th style="text-align:right">Avg Gen Time</th><th style="width:40%">Coverage</th></tr></thead>
    <tbody>
      {{range .GoldenSet.DomainStats}}
      <tr>
        <td><code>{{.Domain}}</code></td>
        <td style="text-align:right">{{.Count}}</td>
        <td style="text-align:right">{{pct .AvgGenTime}}s</td>
        <td>
          <div class="progress-bar" style="height:6px"><div class="fill" style="width:{{pct (pctOf .Count $.GoldenSet.TotalExamples)}}%"></div></div>
        </td>
      </tr>
      {{end}}
    </tbody>
  </table>
</div>
{{end}}

{{if .GoldenSet.VoiceStats}}
<h2 class="section-title">Voice Details</h2>
<div class="card">
  <table>
    <thead><tr><th>Voice</th><th style="text-align:right">Count</th><th style="text-align:right">Avg Chars</th><th style="text-align:right">Avg Gen Time</th></tr></thead>
    <tbody>
      {{range .GoldenSet.VoiceStats}}
      <tr>
        <td><code>{{.Voice}}</code></td>
        <td style="text-align:right">{{.Count}}</td>
        <td style="text-align:right">{{pct .AvgChars}}</td>
        <td style="text-align:right">{{pct .AvgGenTime}}s</td>
      </tr>
      {{end}}
    </tbody>
  </table>
</div>
{{end}}

{{end}}

{{template "footer"}}
@ -1,103 +0,0 @@
|
|||
{{define "head"}}<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>{{.}} - LEM.Lab</title>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.7.2/css/all.min.css" integrity="sha512-Evv84Mr4kqVGRNSgIGL/F/aIDqQb7xQ2vcrdIwxfjThSH8CSR7PBEakCr51Ck+w+/U6swU2Im1vVX0SVk9ABhg==" crossorigin="anonymous" referrerpolicy="no-referrer"/>
<style>
*{margin:0;padding:0;box-sizing:border-box}
:root{--bg:#0a0a0f;--surface:#12121a;--border:#1e1e2e;--text:#e0e0e8;--muted:#8888a0;--accent:#7c6ff0;--accent-dim:#5a4fd0;--green:#4ade80;--red:#f87171;--yellow:#fbbf24}
body{font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,sans-serif;background:var(--bg);color:var(--text);min-height:100vh;line-height:1.6;font-size:.9375rem}
a{color:var(--accent);text-decoration:none;transition:color .2s}
a:hover{color:var(--green)}
nav{display:flex;align-items:center;gap:1.5rem;padding:.75rem 1.5rem;border-bottom:1px solid var(--border);background:var(--surface)}
nav .logo{font-size:1.25rem;font-weight:700;letter-spacing:-.02em}
nav .logo span{color:var(--accent)}
nav .links{display:flex;gap:.25rem}
nav .links a{padding:.375rem .75rem;border-radius:6px;font-size:.8125rem;color:var(--muted);transition:all .2s}
nav .links a:hover,nav .links a.active{color:var(--text);background:var(--bg)}
.container{max-width:1600px;margin:0 auto;padding:1.5rem}
.grid{display:grid;grid-template-columns:repeat(auto-fit,minmax(260px,1fr));gap:1rem;margin-bottom:1.5rem}
.card{padding:1.25rem;border:1px solid var(--border);border-radius:8px;background:var(--surface)}
.card h3{font-size:.8125rem;font-weight:600;color:var(--muted);text-transform:uppercase;letter-spacing:.05em;margin-bottom:.5rem}
.card .value{font-size:1.75rem;font-weight:700;line-height:1.2}
.card .sub{font-size:.8125rem;color:var(--muted);margin-top:.25rem}
.status-dot{display:inline-block;width:8px;height:8px;border-radius:50%;margin-right:.375rem}
.status-ok .status-dot{background:var(--green)}
.status-warn .status-dot{background:var(--yellow)}
.status-err .status-dot{background:var(--red)}
.status-ok .label{color:var(--green)}
.status-warn .label{color:var(--yellow)}
.status-err .label{color:var(--red)}
.progress-bar{width:100%;height:8px;background:var(--border);border-radius:4px;overflow:hidden;margin:.5rem 0}
.progress-bar .fill{height:100%;background:var(--accent);border-radius:4px;transition:width .5s}
table{width:100%;border-collapse:collapse;font-size:.8125rem}
th{text-align:left;color:var(--muted);font-weight:600;padding:.5rem .75rem;border-bottom:1px solid var(--border);text-transform:uppercase;letter-spacing:.05em;font-size:.75rem}
td{padding:.5rem .75rem;border-bottom:1px solid var(--border)}
tr:last-child td{border-bottom:none}
code{font-family:"SF Mono",Consolas,monospace;font-size:.75rem;background:var(--bg);padding:.125rem .375rem;border-radius:4px;border:1px solid var(--border)}
.badge{display:inline-block;padding:.125rem .5rem;border-radius:4px;font-size:.6875rem;font-weight:600;text-transform:uppercase;letter-spacing:.05em}
.badge-ok{background:rgba(74,222,128,.15);color:var(--green)}
.badge-err{background:rgba(248,113,113,.15);color:var(--red)}
.badge-info{background:rgba(124,111,240,.15);color:var(--accent)}
.empty{text-align:center;padding:2rem;color:var(--muted)}
.section-title{font-size:1rem;font-weight:600;margin-bottom:1rem;color:var(--text)}
footer{text-align:center;padding:1rem;color:var(--muted);font-size:.75rem;border-top:1px solid var(--border);margin-top:2rem}
@media(max-width:640px){.grid{grid-template-columns:1fr}nav{flex-wrap:wrap;gap:.75rem}}
</style>
</head>
<body>{{end}}

{{define "nav"}}
<nav>
<div class="logo">LEM<span>.Lab</span></div>
<div class="links">
<a href="/"{{if eq . "dashboard"}} class="active"{{end}}>Dashboard</a>
<a href="/models"{{if eq . "models"}} class="active"{{end}}>Models</a>
<a href="/training"{{if eq . "training"}} class="active"{{end}}>Training</a>
<a href="/dataset"{{if eq . "dataset"}} class="active"{{end}}>Dataset</a>
<a href="/agents"{{if eq . "agents"}} class="active"{{end}}>Agents</a>
<a href="/services"{{if eq . "services"}} class="active"{{end}}>Services</a>
</div>
</nav>
<div class="container">{{end}}

{{define "footer"}}
</div>
<footer>LEM.Lab · live · <a href="https://forge.lthn.io/agentic">forge.lthn.io</a></footer>
<script>
// SSE live update: fetches same-origin page on data change, swaps container content.
// Safe: only fetches from same origin (our own server), no user input involved.
(function(){
  var es, timer;
  function connect(){
    es = new EventSource('/events');
    es.onmessage = function(){
      clearTimeout(timer);
      timer = setTimeout(refresh, 500);
    };
    es.onerror = function(){
      es.close();
      setTimeout(connect, 5000);
    };
  }
  function refresh(){
    fetch(location.href).then(function(r){ return r.text(); }).then(function(html){
      var doc = new DOMParser().parseFromString(html, 'text/html');
      var fresh = doc.querySelector('.container');
      var current = document.querySelector('.container');
      if(fresh && current){
        // Save active tab before replacing DOM.
        var activeTab = document.querySelector('.chart-panel.active');
        var tabName = activeTab ? activeTab.getAttribute('data-tab') : null;
        current.replaceWith(fresh);
        // Restore active tab after DOM swap.
        if(tabName && typeof showTab === 'function') showTab(tabName);
      }
    });
  }
  connect();
})();
</script>
</body></html>{{end}}

@@ -1,29 +0,0 @@
{{template "head" "Models"}}
{{template "nav" "models"}}

<h2 class="section-title">LEK Models on HuggingFace</h2>

{{if .Models}}
<div class="card">
<table>
<thead><tr><th>Model</th><th>Downloads</th><th>Likes</th><th>Pipeline</th><th>Updated</th></tr></thead>
<tbody>
{{range .Models}}
<tr>
<td><a href="https://huggingface.co/{{.ModelID}}" target="_blank">{{.ModelID}}</a></td>
<td>{{.Downloads}}</td>
<td>{{.Likes}}</td>
<td>{{if .PipelineTag}}<span class="badge badge-info">{{.PipelineTag}}</span>{{else}}-{{end}}</td>
<td>{{timeAgo .LastModified}}</td>
</tr>
{{end}}
</tbody>
</table>
</div>
{{else}}
<div class="card empty">
<p>No models loaded yet. HuggingFace data refreshes every 5 minutes.</p>
</div>
{{end}}

{{template "footer"}}

@@ -1,113 +0,0 @@
{{template "head" "Runs"}}
{{template "nav" "runs"}}

<style>
.run-section{margin-bottom:2.5rem}
.run-header{display:flex;align-items:center;gap:.75rem;margin-bottom:1rem;padding-bottom:.5rem;border-bottom:1px solid var(--border)}
.run-header h2{font-size:1.125rem;font-weight:700;color:var(--text);margin:0}
.run-header .model-badge{font-size:.6875rem;font-weight:600;text-transform:uppercase;letter-spacing:.05em;padding:.2rem .6rem;border-radius:4px;background:rgba(124,111,240,.15);color:var(--accent)}
.run-header .run-id{font-size:.75rem;color:var(--muted);font-family:"SF Mono",Consolas,monospace}
.chart-container{margin-bottom:1.25rem}
.chart-container h3{font-size:.8125rem;font-weight:600;color:var(--muted);text-transform:uppercase;letter-spacing:.05em;margin-bottom:.625rem}
.chart-card{border:1px solid var(--border);border-radius:8px;padding:1rem;background:var(--surface);overflow-x:auto}
.run-summary{display:grid;grid-template-columns:repeat(auto-fit,minmax(140px,1fr));gap:.75rem;margin-bottom:1.25rem}
.run-stat{padding:.75rem 1rem;border:1px solid var(--border);border-radius:8px;background:var(--surface)}
.run-stat .label{font-size:.6875rem;font-weight:600;color:var(--muted);text-transform:uppercase;letter-spacing:.05em;margin-bottom:.25rem}
.run-stat .value{font-size:1.5rem;font-weight:700;line-height:1.2}
.run-stat .sub{font-size:.75rem;color:var(--muted);margin-top:.125rem}
</style>

<h2 class="section-title">Training Runs</h2>

{{$b := .Benchmarks}}

{{if not $b.Runs}}
<div class="card empty">
<p>No benchmark data available. InfluxDB data refreshes every 60 seconds.</p>
</div>
{{else}}

{{range $b.Runs}}
{{$rid := .RunID}}
{{$mdl := .Model}}

<div class="run-section" id="{{$rid}}">
<div class="run-header">
<h2>{{$mdl}}</h2>
<span class="model-badge">{{.Type}}</span>
<span class="run-id">{{$rid}}</span>
</div>

{{/* Summary stats */}}
<div class="run-summary">
{{if hasKey $b.Loss $rid}}
{{$loss := getLoss $b.Loss $rid}}
<div class="run-stat">
<div class="label">Loss Points</div>
<div class="value">{{len $loss}}</div>
<div class="sub">val + train</div>
</div>
{{end}}

{{if hasContentKey $b.Content $rid}}
{{$content := getContent $b.Content $rid}}
<div class="run-stat">
<div class="label">Content Scores</div>
<div class="value">{{len $content}}</div>
<div class="sub">dimension scores</div>
</div>
{{end}}

{{if hasCapKey $b.Capability $rid}}
{{$cap := getCap $b.Capability $rid}}
<div class="run-stat">
<div class="label">Capability Tests</div>
<div class="value">{{len $cap}}</div>
<div class="sub">benchmark points</div>
</div>
{{end}}
</div>

{{/* Training Loss Chart */}}
{{if hasKey $b.Loss $rid}}
<div class="chart-container">
<h3>Training Loss Curve</h3>
<div class="chart-card">
{{lossChart (getLoss $b.Loss $rid)}}
</div>
</div>
{{end}}

{{/* Content Score Chart */}}
{{if hasContentKey $b.Content $rid}}
<div class="chart-container">
<h3>Content Scores by Dimension</h3>
<div class="chart-card">
{{contentChart (getContent $b.Content $rid)}}
</div>
</div>
{{end}}

{{/* Capability Chart */}}
{{if hasCapKey $b.Capability $rid}}
<div class="chart-container">
<h3>Capability Benchmark</h3>
<div class="chart-card">
{{capabilityChart (getCap $b.Capability $rid)}}
</div>
</div>

<div class="chart-container">
<h3>Category Breakdown</h3>
<div class="chart-card">
{{categoryBreakdown (getCap $b.Capability $rid) (getCapJudge $b.CapabilityJudge $rid)}}
</div>
</div>
{{end}}

</div>
{{end}}

{{end}}

{{template "footer"}}

@@ -1,65 +0,0 @@
{{template "head" "Services"}}
{{template "nav" "services"}}

<h2 class="section-title">Internal Services</h2>

<style>
.svc-grid{display:grid;grid-template-columns:repeat(auto-fill,minmax(280px,1fr));gap:1rem;margin-bottom:2rem}
.svc-card{padding:1rem 1.25rem;border:1px solid var(--border);border-radius:8px;background:var(--surface);display:flex;align-items:center;gap:1rem;transition:border-color .2s}
.svc-card:hover{border-color:var(--accent-dim)}
.svc-dot{width:10px;height:10px;border-radius:50%;flex-shrink:0}
.svc-dot.ok{background:var(--green)}
.svc-dot.degraded{background:var(--yellow)}
.svc-dot.unavailable{background:var(--red)}
.svc-dot.unchecked{background:var(--muted)}
.svc-info{flex:1;min-width:0}
.svc-name{font-weight:600;font-size:.875rem}
.svc-name a{color:var(--text)}
.svc-name a:hover{color:var(--accent)}
.svc-meta{font-size:.75rem;color:var(--muted)}
.svc-cat-title{font-size:.875rem;font-weight:600;color:var(--accent);text-transform:uppercase;letter-spacing:.05em;margin-bottom:.75rem;padding-bottom:.375rem;border-bottom:1px solid var(--border)}
.svc-section{margin-bottom:1.5rem}
.svc-summary{display:flex;gap:1.5rem;margin-bottom:1.5rem;flex-wrap:wrap}
.svc-stat{font-size:.8125rem;color:var(--muted)}
.svc-stat strong{font-size:1.25rem;color:var(--text);display:block}
</style>

{{$services := .Services}}

<div class="svc-summary">
<div class="svc-stat">
<strong>{{len $services}}</strong>
Total Services
</div>
<div class="svc-stat">
<strong style="color:var(--green)">{{countStatus $services "ok"}}</strong>
Online
</div>
<div class="svc-stat">
<strong style="color:var(--yellow)">{{countStatus $services "degraded"}}</strong>
Degraded
</div>
<div class="svc-stat">
<strong style="color:var(--red)">{{countStatus $services "unavailable"}}</strong>
Offline
</div>
</div>

{{range categories $services}}
<div class="svc-section">
<div class="svc-cat-title">{{.}}</div>
<div class="svc-grid">
{{range filterCat $services .}}
<div class="svc-card">
<div class="svc-dot {{.Status}}"></div>
<div class="svc-info">
<div class="svc-name"><a href="{{.URL}}" target="_blank">{{.Name}}</a></div>
<div class="svc-meta">{{.Machine}} · {{.URL}}</div>
</div>
</div>
{{end}}
</div>
</div>
{{end}}

{{template "footer"}}

@@ -1,278 +0,0 @@
{{template "head" "Training"}}
{{template "nav" "training"}}

<style>
.training-layout{display:flex;gap:1.5rem;min-height:calc(100vh - 120px)}
.training-sidebar{width:220px;flex-shrink:0}
.training-sidebar .sidebar-title{font-size:.6875rem;font-weight:600;color:var(--muted);text-transform:uppercase;letter-spacing:.05em;margin-bottom:.75rem;padding:0 .75rem}
.training-sidebar a{display:flex;align-items:center;gap:.5rem;padding:.625rem .75rem;border-radius:6px;color:var(--muted);font-size:.8125rem;transition:all .2s;text-decoration:none;margin-bottom:2px}
.training-sidebar a:hover{color:var(--text);background:var(--bg)}
.training-sidebar a.active{color:var(--text);background:var(--bg);border-left:3px solid var(--accent)}
.training-sidebar .model-name{font-weight:600;flex:1;white-space:nowrap;overflow:hidden;text-overflow:ellipsis}
.training-sidebar .badge{font-size:.5625rem;padding:.0625rem .375rem}
.training-main{flex:1;min-width:0}
.overview-grid{display:grid;grid-template-columns:repeat(auto-fit,minmax(280px,1fr));gap:1rem;margin-bottom:1.5rem}
.model-card{padding:1.25rem;border:1px solid var(--border);border-radius:8px;background:var(--surface);cursor:pointer;transition:border-color .2s}
.model-card:hover{border-color:var(--accent-dim)}
.model-card h3{font-size:1rem;font-weight:700;margin-bottom:.5rem;display:flex;align-items:center;gap:.5rem}
.model-card .run-id{font-size:.6875rem;color:var(--muted);font-family:"SF Mono",Consolas,monospace}
.model-card .stats{display:grid;grid-template-columns:1fr 1fr;gap:.5rem;margin-top:.75rem}
.model-card .stat-label{font-size:.6875rem;font-weight:600;color:var(--muted);text-transform:uppercase;letter-spacing:.05em}
.model-card .stat-value{font-size:1.125rem;font-weight:700}
.detail-header{display:flex;align-items:center;gap:.75rem;margin-bottom:1.5rem;padding-bottom:.75rem;border-bottom:1px solid var(--border)}
.detail-header h2{font-size:1.25rem;font-weight:700;margin:0}
.detail-stats{display:grid;grid-template-columns:repeat(auto-fit,minmax(140px,1fr));gap:.75rem;margin-bottom:1.5rem}
.detail-stat{padding:.75rem 1rem;border:1px solid var(--border);border-radius:8px;background:var(--surface)}
.detail-stat .label{font-size:.6875rem;font-weight:600;color:var(--muted);text-transform:uppercase;letter-spacing:.05em;margin-bottom:.25rem}
.detail-stat .value{font-size:1.5rem;font-weight:700;line-height:1.2}
.detail-stat .sub{font-size:.75rem;color:var(--muted);margin-top:.125rem}
.run-section{margin-bottom:2rem;padding-bottom:1.5rem;border-bottom:1px solid var(--border)}
.run-section:last-child{border-bottom:none}
.run-header{display:flex;align-items:center;gap:.5rem;margin-bottom:1rem}
.run-header h3{font-size:.9375rem;font-weight:700;margin:0}
.run-header .run-id{font-size:.6875rem;color:var(--muted);font-family:"SF Mono",Consolas,monospace}
.chart-section{margin-bottom:1.5rem}
.chart-section h4{font-size:.8125rem;font-weight:600;color:var(--muted);text-transform:uppercase;letter-spacing:.05em;margin-bottom:.5rem}
.chart-card{border:1px solid var(--border);border-radius:8px;padding:1rem;background:var(--surface);overflow-x:auto}
.chart-tabs{display:flex;gap:2px;margin-bottom:1rem;border-bottom:1px solid var(--border);padding-bottom:0}
.chart-tabs button{background:none;border:none;padding:.5rem 1rem;font-size:.8125rem;font-weight:600;color:var(--muted);cursor:pointer;border-bottom:2px solid transparent;transition:all .2s;font-family:inherit}
.chart-tabs button:hover{color:var(--text)}
.chart-tabs button.active{color:var(--accent);border-bottom-color:var(--accent)}
.chart-panel{display:none}
.chart-panel.active{display:block}
@media(max-width:768px){.training-layout{flex-direction:column}.training-sidebar{width:100%;display:flex;gap:.5rem;flex-wrap:wrap}.training-sidebar .sidebar-title{width:100%}.training-sidebar a{flex:0 0 auto}}
</style>

<div class="training-layout">

{{/* -- Sidebar -- */}}
<div class="training-sidebar">
<div class="sidebar-title">Models</div>
<a href="/training"{{if not .SelectedModel}} class="active"{{end}}>
<span class="model-name">Overview</span>
</a>
{{range .ModelGroups}}
<a href="/training?model={{.Model}}"{{if eq $.SelectedModel .Model}} class="active"{{end}}>
<span class="model-name">{{.Model}}</span>
<span class="badge {{statusBadge .BestStatus}}">{{.BestStatus}}</span>
</a>
{{end}}
</div>

{{/* -- Main content -- */}}
<div class="training-main">

{{if not .SelectedModel}}
{{/* -- Overview: all models -- */}}
<h2 class="section-title">LEM Training</h2>

{{/* -- Scoring progress summary -- */}}
{{if .ModelGroups}}
<div class="detail-stats" style="margin-bottom:1.5rem">
<div class="detail-stat">
<div class="label">Models</div>
<div class="value">{{.ScoredModels}} / {{len .ModelGroups}}</div>
<div class="sub">scored</div>
</div>
<div class="detail-stat">
<div class="label">Scoring Runs</div>
<div class="value">{{.TotalScoringRuns}}</div>
<div class="sub">content + capability</div>
</div>
<div class="detail-stat">
<div class="label">Data Points</div>
<div class="value">{{fmtInt .TotalDataPoints}}</div>
<div class="sub">across all benchmarks</div>
</div>
{{if gt .UnscoredModels 0}}
<div class="detail-stat" style="border-color:var(--accent-dim)">
<div class="label">Awaiting Scoring</div>
<div class="value" style="color:var(--accent)">{{.UnscoredModels}}</div>
<div class="sub">{{.UnscoredNames}}</div>
</div>
{{else}}
<div class="detail-stat" style="border-color:var(--green)">
<div class="label">Status</div>
<div class="value" style="color:var(--green)">Done</div>
<div class="sub">all models scored</div>
</div>
{{end}}
</div>
{{end}}

{{if .ModelGroups}}
<div class="overview-grid">
{{range .ModelGroups}}
<a href="/training?model={{.Model}}" style="text-decoration:none;color:inherit">
<div class="model-card">
<h3>
{{.Model}}
<span class="badge {{statusBadge .BestStatus}}">{{.BestStatus}}</span>
</h3>
{{if .HasTraining}}
{{range .TrainingRuns}}
<div class="sub" style="margin-bottom:.375rem"><i class="fa-solid fa-database" style="color:var(--accent)"></i> {{runLabel .RunID}}</div>
<div class="progress-bar"><div class="fill" style="width:{{pct .Pct}}%;{{if eq .Status "complete"}}background:var(--green){{end}}"></div></div>
<div class="sub">{{.Iteration}} / {{.TotalIters}} iters ({{pct .Pct}}%)</div>
<div class="stats">
{{if gt .LastLoss 0.0}}
<div>
<div class="stat-label">Train Loss</div>
<div class="stat-value">{{fmtFloat .LastLoss 3}}</div>
</div>
{{end}}
{{if gt .ValLoss 0.0}}
<div>
<div class="stat-label">Val Loss</div>
<div class="stat-value">{{fmtFloat .ValLoss 3}}</div>
</div>
{{end}}
</div>
{{break}}
{{end}}
{{else}}
<div class="sub" style="margin-top:.5rem">{{len .BenchmarkRuns}} benchmark run{{if gt (len .BenchmarkRuns) 1}}s{{end}}</div>
{{if .HasCapability}}<div class="sub"><i class="fa-solid fa-flask"></i> Capability probes scored</div>{{end}}
{{if .HasContent}}<div class="sub"><i class="fa-solid fa-chart-bar"></i> Content scores available</div>{{end}}
{{end}}
</div>
</a>
{{end}}
</div>
{{else}}
<div class="card empty">
<p>No training or benchmark data. InfluxDB refreshes every 60 seconds.</p>
</div>
{{end}}

{{else}}
{{/* -- Detail view: single model -- */}}
{{$sel := .SelectedModel}}
{{$b := .Benchmarks}}
{{$found := false}}

{{range .ModelGroups}}
{{if eq .Model $sel}}

<div class="detail-header">
<h2>{{.Model}}</h2>
<span class="badge {{statusBadge .BestStatus}}">{{.BestStatus}}</span>
</div>

{{/* Training run status cards */}}
{{if .TrainingRuns}}
<div class="detail-stats">
{{range .TrainingRuns}}
<div class="detail-stat">
<div class="label">{{.RunID}}</div>
<div class="value">{{pct .Pct}}%</div>
<div class="sub">{{.Iteration}} / {{.TotalIters}} · {{.Status}}</div>
</div>
{{end}}

{{/* Show latest loss stats from most recent run */}}
{{with index .TrainingRuns 0}}
{{if gt .LastLoss 0.0}}
<div class="detail-stat">
<div class="label">Train Loss</div>
<div class="value">{{fmtFloat .LastLoss 3}}</div>
<div class="sub">latest</div>
</div>
{{end}}
{{if gt .ValLoss 0.0}}
<div class="detail-stat">
<div class="label">Val Loss</div>
<div class="value">{{fmtFloat .ValLoss 3}}</div>
<div class="sub">latest</div>
</div>
{{end}}
{{if gt .TokensSec 0.0}}
<div class="detail-stat">
<div class="label">Tokens/sec</div>
<div class="value">{{fmtFloat .TokensSec 0}}</div>
<div class="sub">throughput</div>
</div>
{{end}}
{{end}}
</div>

{{/* Progress bars for in-progress training runs only */}}
{{range .TrainingRuns}}
{{if ne .Status "complete"}}
<div style="margin-bottom:1rem">
<div class="sub" style="margin-bottom:.25rem"><strong>{{.RunID}}</strong></div>
<div class="progress-bar"><div class="fill" style="width:{{pct .Pct}}%"></div></div>
</div>
{{end}}
{{end}}
{{end}}

{{/* All benchmark runs for this model -- collect data for tabs */}}
{{$runs := runsForModel $b $sel}}

{{/* Tabbed charts */}}
<div class="chart-tabs" id="chartTabs">
{{if anyContent $runs $b.Content}}<button class="active" onclick="showTab('content')"><i class="fa-solid fa-chart-line"></i> Content</button>{{end}}
{{if anyCap $runs $b.Capability}}<button onclick="showTab('capability')"><i class="fa-solid fa-flask"></i> Capability</button>{{end}}
{{if anyCap $runs $b.Capability}}<button onclick="showTab('categories')"><i class="fa-solid fa-table-cells"></i> Categories</button>{{end}}
{{if anyLoss $runs $b.Loss}}<button onclick="showTab('loss')"><i class="fa-solid fa-chart-area"></i> Loss</button>{{end}}
</div>

{{range $runs}}
{{$rid := .RunID}}
{{if hasContentKey $b.Content $rid}}
<div class="chart-panel active" data-tab="content">
<div class="chart-card">
{{contentChart (getContent $b.Content $rid)}}
</div>
</div>
{{end}}
{{if hasCapKey $b.Capability $rid}}
<div class="chart-panel" data-tab="capability">
<div class="chart-card">
{{capabilityChart (getCap $b.Capability $rid)}}
</div>
</div>
<div class="chart-panel" data-tab="categories">
<div class="chart-card">
{{categoryBreakdown (getCap $b.Capability $rid) (getCapJudge $b.CapabilityJudge $rid)}}
</div>
</div>
{{end}}
{{if hasKey $b.Loss $rid}}
<div class="chart-panel" data-tab="loss">
<div class="chart-card">
{{lossChart (getLoss $b.Loss $rid)}}
</div>
</div>
{{end}}
{{end}}

<script>
function showTab(name){
  document.querySelectorAll('.chart-panel').forEach(function(p){p.classList.remove('active')});
  document.querySelectorAll('.chart-tabs button').forEach(function(b){b.classList.remove('active')});
  document.querySelectorAll('[data-tab="'+name+'"]').forEach(function(p){p.classList.add('active')});
  // Concatenate name into the selector so it matches onclick="showTab('<name>')".
  document.querySelectorAll('.chart-tabs button[onclick*="\''+name+'\'"]').forEach(function(b){b.classList.add('active')});
}
(function(){
  var tabs=document.getElementById('chartTabs');
  if(!tabs)return;
  var first=tabs.querySelector('button');
  if(first&&!tabs.querySelector('button.active')){first.classList.add('active');first.click()}
})();
</script>

{{if and (not .TrainingRuns) (not $runs)}}
<div class="card empty"><p>No data for this model yet.</p></div>
{{end}}

{{end}}
{{end}}

{{end}}

</div>
</div>

{{template "footer"}}

@@ -1,502 +0,0 @@
package handler

import (
	"cmp"
	"embed"
	"fmt"
	"html/template"
	"net/http"
	"slices"
	"strings"
	"time"

	"forge.lthn.ai/core/go/pkg/lab"
)

//go:embed templates/*
var templateFS embed.FS

//go:embed static/*
var StaticFS embed.FS

type WebHandler struct {
	store *lab.Store
	tmpl  *template.Template
}

func NewWebHandler(s *lab.Store) *WebHandler {
	funcMap := template.FuncMap{
		"timeAgo": func(t time.Time) string {
			if t.IsZero() {
				return "never"
			}
			d := time.Since(t)
			switch {
			case d < time.Minute:
				return "just now"
			case d < time.Hour:
				return fmt.Sprintf("%dm ago", int(d.Minutes()))
			case d < 24*time.Hour:
				return fmt.Sprintf("%dh ago", int(d.Hours()))
			default:
				days := int(d.Hours()) / 24
				if days == 1 {
					return "1 day ago"
				}
				return fmt.Sprintf("%d days ago", days)
			}
		},
		"pct": func(v float64) string {
			return fmt.Sprintf("%.1f", v)
		},
		"statusClass": func(s string) string {
			switch s {
			case "ok", "running":
				return "status-ok"
			case "degraded":
				return "status-warn"
			default:
				return "status-err"
			}
		},
		"shortMsg": func(s string) string {
			if i := strings.IndexByte(s, '\n'); i > 0 {
				s = s[:i]
			}
			if len(s) > 72 {
				return s[:69] + "..."
			}
			return s
		},
		"lower": strings.ToLower,
		"cpuPct": func(load float64, cores int) string {
			if cores <= 0 {
				return "0"
			}
			pct := min(load/float64(cores)*100, 100)
			return fmt.Sprintf("%.0f", pct)
		},
		"fmtGB": func(v float64) string {
			if v >= 1000 {
				return fmt.Sprintf("%.1fT", v/1024)
			}
			return fmt.Sprintf("%.0fG", v)
		},
		"countStatus": func(services []lab.Service, status string) int {
			n := 0
			for _, s := range services {
				if s.Status == status {
					n++
				}
			}
			return n
		},
		"categories": func(services []lab.Service) []string {
			seen := map[string]bool{}
			var cats []string
			for _, s := range services {
				if !seen[s.Category] {
					seen[s.Category] = true
					cats = append(cats, s.Category)
				}
			}
			return cats
		},
		"filterCat": func(services []lab.Service, cat string) []lab.Service {
			var out []lab.Service
			for _, s := range services {
				if s.Category == cat {
					out = append(out, s)
				}
			}
			return out
		},
		"lossChart":         LossChart,
		"contentChart":      ContentChart,
		"capabilityChart":   CapabilityChart,
		"categoryBreakdown": CategoryBreakdownWithJudge,
		"hasKey": func(m map[string][]lab.LossPoint, key string) bool {
			_, ok := m[key]
			return ok
		},
		"hasContentKey": func(m map[string][]lab.ContentPoint, key string) bool {
			_, ok := m[key]
			return ok
		},
		"hasCapKey": func(m map[string][]lab.CapabilityPoint, key string) bool {
			_, ok := m[key]
			return ok
		},
		"anyContent": func(runs []lab.BenchmarkRun, m map[string][]lab.ContentPoint) bool {
			for _, r := range runs {
				if _, ok := m[r.RunID]; ok {
					return true
				}
			}
			return false
		},
		"anyCap": func(runs []lab.BenchmarkRun, m map[string][]lab.CapabilityPoint) bool {
			for _, r := range runs {
				if _, ok := m[r.RunID]; ok {
					return true
				}
			}
			return false
		},
		"anyLoss": func(runs []lab.BenchmarkRun, m map[string][]lab.LossPoint) bool {
			for _, r := range runs {
				if _, ok := m[r.RunID]; ok {
					return true
				}
			}
			return false
		},
		"getLoss": func(m map[string][]lab.LossPoint, key string) []lab.LossPoint {
			return m[key]
		},
		"getContent": func(m map[string][]lab.ContentPoint, key string) []lab.ContentPoint {
			return m[key]
		},
		"getCap": func(m map[string][]lab.CapabilityPoint, key string) []lab.CapabilityPoint {
			return m[key]
		},
		"getCapJudge": func(m map[string][]lab.CapabilityJudgePoint, key string) []lab.CapabilityJudgePoint {
			return m[key]
		},
		"runTypeIcon": func(t string) string {
			switch t {
			case "training":
				return "loss"
			case "content":
				return "content"
			case "capability":
				return "cap"
			default:
				return "data"
			}
		},
		"domainChart": DomainChart,
		"voiceChart":  VoiceChart,
		"pctOf": func(part, total int) float64 {
			if total == 0 {
				return 0
			}
			return float64(part) / float64(total) * 100
		},
		"fmtInt": func(n int) string {
			if n < 1000 {
				return fmt.Sprintf("%d", n)
			}
			return fmt.Sprintf("%d,%03d", n/1000, n%1000)
		},
		"tableRows": func(tables []lab.DatasetTable, name string) int {
			for _, t := range tables {
				if t.Name == name {
					return t.Rows
				}
			}
			return 0
		},
		"totalRows": func(tables []lab.DatasetTable) int {
			total := 0
			for _, t := range tables {
				total += t.Rows
			}
			return total
		},
		"fmtFloat": func(v float64, prec int) string {
			return fmt.Sprintf("%.*f", prec, v)
		},
		"statusColor": func(s string) string {
			switch s {
			case "complete":
				return "var(--green)"
			case "training", "fusing":
				return "var(--accent)"
			case "failed", "fuse_failed":
				return "var(--red)"
			default:
				return "var(--muted)"
			}
		},
		"statusBadge": func(s string) string {
			switch s {
			case "complete":
				return "badge-ok"
			case "training", "fusing":
				return "badge-info"
			default:
				return "badge-err"
			}
		},
		"runLabel": func(s string) string {
|
||||
// Make run IDs like "15k-1b@0001000" more readable.
|
||||
s = strings.ReplaceAll(s, "gemma-3-", "")
|
||||
s = strings.ReplaceAll(s, "gemma3-", "")
|
||||
// Strip leading zeros after @.
|
||||
if idx := strings.Index(s, "@"); idx >= 0 {
|
||||
prefix := s[:idx+1]
|
||||
num := strings.TrimLeft(s[idx+1:], "0")
|
||||
if num == "" {
|
||||
num = "0"
|
||||
}
|
||||
s = prefix + num
|
||||
}
|
||||
return s
|
||||
},
|
||||
"normModel": func(s string) string {
|
||||
return strings.ReplaceAll(s, "gemma3-", "gemma-3-")
|
||||
},
|
||||
"runsForModel": func(b lab.BenchmarkData, modelName string) []lab.BenchmarkRun {
|
||||
normRun := func(s string) string {
|
||||
s = strings.ReplaceAll(s, "gemma3-", "gemma-3-")
|
||||
s = strings.TrimPrefix(s, "baseline-")
|
||||
return s
|
||||
}
|
||||
target := normRun(modelName)
|
||||
var out []lab.BenchmarkRun
|
||||
for _, r := range b.Runs {
|
||||
if normRun(r.Model) == target {
|
||||
out = append(out, r)
|
||||
}
|
||||
}
|
||||
return out
|
||||
},
|
||||
"benchmarkCount": func(b lab.BenchmarkData) int {
|
||||
return len(b.Runs)
|
||||
},
|
||||
"dataPoints": func(b lab.BenchmarkData) int {
|
||||
n := 0
|
||||
for _, v := range b.Loss {
|
||||
n += len(v)
|
||||
}
|
||||
for _, v := range b.Content {
|
||||
n += len(v)
|
||||
}
|
||||
for _, v := range b.Capability {
|
||||
n += len(v)
|
||||
}
|
||||
return n
|
||||
},
|
||||
}
|
||||
|
||||
tmpl := template.Must(
|
||||
template.New("").Funcs(funcMap).ParseFS(templateFS, "templates/*.html"),
|
||||
)
|
||||
|
||||
return &WebHandler{store: s, tmpl: tmpl}
|
||||
}
|
||||
|
||||
func (h *WebHandler) Dashboard(w http.ResponseWriter, r *http.Request) {
|
||||
if r.URL.Path != "/" {
|
||||
http.NotFound(w, r)
|
||||
return
|
||||
}
|
||||
ov := h.store.Overview()
|
||||
b := h.store.GetBenchmarks()
|
||||
h.render(w, "dashboard.html", map[string]any{
|
||||
"Machines": ov.Machines,
|
||||
"Agents": ov.Agents,
|
||||
"Training": ov.Training,
|
||||
"Models": ov.Models,
|
||||
"Commits": ov.Commits,
|
||||
"Errors": ov.Errors,
|
||||
"Benchmarks": b,
|
||||
})
|
||||
}
|
||||
|
||||
func (h *WebHandler) Models(w http.ResponseWriter, r *http.Request) {
|
||||
h.render(w, "models.html", map[string]any{
|
||||
"Models": h.store.GetModels(),
|
||||
})
|
||||
}
|
||||
|
||||
// ModelGroup gathers all runs and data for a single model name.
|
||||
type ModelGroup struct {
|
||||
Model string
|
||||
TrainingRuns []lab.TrainingRunStatus
|
||||
BenchmarkRuns []lab.BenchmarkRun
|
||||
HasTraining bool
|
||||
HasContent bool
|
||||
HasCapability bool
|
||||
BestStatus string // best training status: complete > training > pending
|
||||
}
|
||||
|
||||
func buildModelGroups(runs []lab.TrainingRunStatus, benchmarks lab.BenchmarkData) []ModelGroup {
|
||||
groups := map[string]*ModelGroup{}
|
||||
|
||||
// Normalise model names: gemma3-12b -> gemma-3-12b, baseline-gemma-3-12b -> gemma-3-12b.
|
||||
norm := func(s string) string {
|
||||
s = strings.ReplaceAll(s, "gemma3-", "gemma-3-")
|
||||
s = strings.TrimPrefix(s, "baseline-")
|
||||
return s
|
||||
}
|
||||
|
||||
// Training runs.
|
||||
for _, r := range runs {
|
||||
key := norm(r.Model)
|
||||
g, ok := groups[key]
|
||||
if !ok {
|
||||
g = &ModelGroup{Model: key}
|
||||
groups[key] = g
|
||||
}
|
||||
g.TrainingRuns = append(g.TrainingRuns, r)
|
||||
g.HasTraining = true
|
||||
if r.Status == "complete" || (g.BestStatus != "complete" && r.Status == "training") {
|
||||
g.BestStatus = r.Status
|
||||
}
|
||||
}
|
||||
|
||||
// Benchmark runs.
|
||||
for _, r := range benchmarks.Runs {
|
||||
key := norm(r.Model)
|
||||
g, ok := groups[key]
|
||||
if !ok {
|
||||
g = &ModelGroup{Model: key}
|
||||
groups[key] = g
|
||||
}
|
||||
g.BenchmarkRuns = append(g.BenchmarkRuns, r)
|
||||
switch r.Type {
|
||||
case "content":
|
||||
g.HasContent = true
|
||||
case "capability":
|
||||
g.HasCapability = true
|
||||
case "training":
|
||||
g.HasTraining = true
|
||||
}
|
||||
}
|
||||
|
||||
// Sort: models with training first, then alphabetical.
|
||||
var result []ModelGroup
|
||||
for _, g := range groups {
|
||||
if g.BestStatus == "" {
|
||||
g.BestStatus = "scored"
|
||||
}
|
||||
result = append(result, *g)
|
||||
}
|
||||
slices.SortFunc(result, func(a, b ModelGroup) int {
|
||||
if a.HasTraining != b.HasTraining {
|
||||
if a.HasTraining {
|
||||
return -1
|
||||
}
|
||||
return 1
|
||||
}
|
||||
return cmp.Compare(a.Model, b.Model)
|
||||
})
|
||||
return result
|
||||
}
|
||||
|
||||
func (h *WebHandler) Training(w http.ResponseWriter, r *http.Request) {
|
||||
selectedModel := r.URL.Query().Get("model")
|
||||
benchmarks := h.store.GetBenchmarks()
|
||||
trainingRuns := h.store.GetTrainingRuns()
|
||||
groups := buildModelGroups(trainingRuns, benchmarks)
|
||||
|
||||
// Compute scoring progress from model groups.
|
||||
var scoredModels, totalScoringRuns, totalDataPoints int
|
||||
var unscoredNames []string
|
||||
for _, g := range groups {
|
||||
if g.HasContent || g.HasCapability {
|
||||
scoredModels++
|
||||
} else {
|
||||
unscoredNames = append(unscoredNames, g.Model)
|
||||
}
|
||||
totalScoringRuns += len(g.BenchmarkRuns)
|
||||
}
|
||||
for _, v := range benchmarks.Loss {
|
||||
totalDataPoints += len(v)
|
||||
}
|
||||
for _, v := range benchmarks.Content {
|
||||
totalDataPoints += len(v)
|
||||
}
|
||||
for _, v := range benchmarks.Capability {
|
||||
totalDataPoints += len(v)
|
||||
}
|
||||
|
||||
h.render(w, "training.html", map[string]any{
|
||||
"Training": h.store.GetTraining(),
|
||||
"TrainingRuns": trainingRuns,
|
||||
"Benchmarks": benchmarks,
|
||||
"ModelGroups": groups,
|
||||
"Containers": h.store.GetContainers(),
|
||||
"SelectedModel": selectedModel,
|
||||
"ScoredModels": scoredModels,
|
||||
"TotalScoringRuns": totalScoringRuns,
|
||||
"TotalDataPoints": totalDataPoints,
|
||||
"UnscoredModels": len(unscoredNames),
|
||||
"UnscoredNames": strings.Join(unscoredNames, ", "),
|
||||
})
|
||||
}
|
||||
|
||||
func (h *WebHandler) Agents(w http.ResponseWriter, r *http.Request) {
|
||||
h.render(w, "agents.html", map[string]any{
|
||||
"Agents": h.store.GetAgents(),
|
||||
})
|
||||
}
|
||||
|
||||
func (h *WebHandler) Services(w http.ResponseWriter, r *http.Request) {
|
||||
h.render(w, "services.html", map[string]any{
|
||||
"Services": h.store.GetServices(),
|
||||
})
|
||||
}
|
||||
|
||||
func (h *WebHandler) Dataset(w http.ResponseWriter, r *http.Request) {
|
||||
view := r.URL.Query().Get("view")
|
||||
h.render(w, "dataset.html", map[string]any{
|
||||
"GoldenSet": h.store.GetGoldenSet(),
|
||||
"Dataset": h.store.GetDataset(),
|
||||
"SelectedView": view,
|
||||
})
|
||||
}
|
||||
|
||||
func (h *WebHandler) GoldenSet(w http.ResponseWriter, r *http.Request) {
|
||||
h.render(w, "dataset.html", map[string]any{
|
||||
"GoldenSet": h.store.GetGoldenSet(),
|
||||
"Dataset": h.store.GetDataset(),
|
||||
"SelectedView": "",
|
||||
})
|
||||
}
|
||||
|
||||
func (h *WebHandler) Runs(w http.ResponseWriter, r *http.Request) {
|
||||
b := h.store.GetBenchmarks()
|
||||
h.render(w, "runs.html", map[string]any{
|
||||
"Benchmarks": b,
|
||||
})
|
||||
}
|
||||
|
||||
// Events is an SSE endpoint that pushes "update" events when store data changes.
|
||||
func (h *WebHandler) Events(w http.ResponseWriter, r *http.Request) {
|
||||
flusher, ok := w.(http.Flusher)
|
||||
if !ok {
|
||||
http.Error(w, "streaming not supported", http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
w.Header().Set("Content-Type", "text/event-stream")
|
||||
w.Header().Set("Cache-Control", "no-cache")
|
||||
w.Header().Set("Connection", "keep-alive")
|
||||
|
||||
ch := h.store.Subscribe()
|
||||
defer h.store.Unsubscribe(ch)
|
||||
|
||||
// Send initial keepalive.
|
||||
fmt.Fprintf(w, ": connected\n\n")
|
||||
flusher.Flush()
|
||||
|
||||
for {
|
||||
select {
|
||||
case <-ch:
|
||||
fmt.Fprintf(w, "data: update\n\n")
|
||||
flusher.Flush()
|
||||
case <-r.Context().Done():
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (h *WebHandler) render(w http.ResponseWriter, name string, data any) {
|
||||
w.Header().Set("Content-Type", "text/html; charset=utf-8")
|
||||
if err := h.tmpl.ExecuteTemplate(w, name, data); err != nil {
|
||||
http.Error(w, "template error: "+err.Error(), 500)
|
||||
}
|
||||
}
|
||||
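The `runLabel` template helper above compacts run IDs for display by dropping the model-family prefix and trimming leading zeros after the `@` checkpoint marker. A minimal standalone sketch of that normalisation (an illustrative re-implementation, not the deleted package's exported API):

```go
package main

import (
	"fmt"
	"strings"
)

// runLabel shortens run IDs like "gemma-3-15k-1b@0001000" for display:
// it removes the model-family prefix, then strips leading zeros after "@",
// keeping a lone "0" when the checkpoint number is all zeros.
func runLabel(s string) string {
	s = strings.ReplaceAll(s, "gemma-3-", "")
	s = strings.ReplaceAll(s, "gemma3-", "")
	if idx := strings.Index(s, "@"); idx >= 0 {
		prefix := s[:idx+1]
		num := strings.TrimLeft(s[idx+1:], "0")
		if num == "" {
			num = "0"
		}
		s = prefix + num
	}
	return s
}

func main() {
	fmt.Println(runLabel("gemma-3-15k-1b@0001000")) // 15k-1b@1000
	fmt.Println(runLabel("gemma3-4b@0000000"))      // 4b@0
}
```

The `num == ""` guard matters: without it, a checkpoint-zero run like `4b@0000000` would render as the ambiguous `4b@`.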
pkg/lab/model.go
@@ -1,219 +0,0 @@
package lab

import "time"

type Status string

const (
	StatusOK          Status = "ok"
	StatusDegraded    Status = "degraded"
	StatusUnavailable Status = "unavailable"
)

type Overview struct {
	UpdatedAt time.Time
	Machines  []Machine
	Agents    AgentSummary
	Training  TrainingSummary
	Models    []HFModel
	Commits   []Commit
	Errors    map[string]string
}

type Machine struct {
	Name       string
	Host       string
	Status     Status
	Load1      float64
	MemUsedPct float64
	Containers []Container
	// Extended stats
	CPUCores     int
	MemTotalGB   float64
	MemUsedGB    float64
	DiskTotalGB  float64
	DiskUsedGB   float64
	DiskUsedPct  float64
	GPUName      string
	GPUVRAMTotal float64 // GB, 0 if not applicable
	GPUVRAMUsed  float64
	GPUVRAMPct   float64
	GPUTemp      int // Celsius, 0 if unavailable
	Uptime       string
}

type Container struct {
	Name    string
	Status  string
	Image   string
	Uptime  string
	Created time.Time
}

type AgentSummary struct {
	Available       bool
	RegisteredTotal int
	QueuePending    int
	TasksCompleted  int
	TasksFailed     int
	Capabilities    int
	HeartbeatAge    float64
	ExporterUp      bool
}

type TrainingSummary struct {
	GoldGenerated  int
	GoldTarget     int
	GoldPercent    float64
	GoldAvailable  bool
	InterceptCount int
	SessionCount   int
	LastIntercept  time.Time
	GGUFCount      int
	GGUFFiles      []string
	AdapterCount   int
}

type HFModel struct {
	ModelID      string    `json:"modelId"`
	Author       string    `json:"author"`
	Downloads    int       `json:"downloads"`
	Likes        int       `json:"likes"`
	Tags         []string  `json:"tags"`
	PipelineTag  string    `json:"pipeline_tag"`
	CreatedAt    time.Time `json:"createdAt"`
	LastModified time.Time `json:"lastModified"`
}

type Commit struct {
	SHA       string
	Message   string
	Author    string
	Repo      string
	Timestamp time.Time
}

type Service struct {
	Name     string
	URL      string
	Category string
	Machine  string
	Icon     string
	Status   string // ok, degraded, unavailable, unchecked
}

// Dataset stats from DuckDB (pushed to InfluxDB as dataset_stats).

type DatasetTable struct {
	Name string
	Rows int
}

type DatasetSummary struct {
	Available bool
	Tables    []DatasetTable
	UpdatedAt time.Time
}

// Golden set data explorer types.

type GoldenSetSummary struct {
	Available        bool
	TotalExamples    int
	TargetTotal      int
	CompletionPct    float64
	Domains          int
	Voices           int
	AvgGenTime       float64
	AvgResponseChars float64
	DomainStats      []DomainStat
	VoiceStats       []VoiceStat
	Workers          []WorkerStat
	UpdatedAt        time.Time
}

type WorkerStat struct {
	Worker   string
	Count    int
	LastSeen time.Time
}

type DomainStat struct {
	Domain     string
	Count      int
	AvgGenTime float64
}

type VoiceStat struct {
	Voice      string
	Count      int
	AvgChars   float64
	AvgGenTime float64
}

// Live training run status (from InfluxDB training_status measurement).

type TrainingRunStatus struct {
	Model      string
	RunID      string
	Status     string // training, fusing, complete, failed
	Iteration  int
	TotalIters int
	Pct        float64
	LastLoss   float64 // most recent train loss
	ValLoss    float64 // most recent val loss
	TokensSec  float64 // most recent tokens/sec
}

// Benchmark data types for training run viewer.

type BenchmarkRun struct {
	RunID string
	Model string
	Type  string // "content", "capability", "training"
}

type LossPoint struct {
	Iteration    int
	Loss         float64
	LossType     string // "val" or "train"
	LearningRate float64
	TokensPerSec float64
}

type ContentPoint struct {
	Label     string
	Dimension string
	Score     float64
	Iteration int
	HasKernel bool
}

type CapabilityPoint struct {
	Label     string
	Category  string
	Accuracy  float64
	Correct   int
	Total     int
	Iteration int
}

type CapabilityJudgePoint struct {
	Label       string
	ProbeID     string
	Category    string
	Reasoning   float64
	Correctness float64
	Clarity     float64
	Avg         float64
	Iteration   int
}

type BenchmarkData struct {
	Runs            []BenchmarkRun
	Loss            map[string][]LossPoint
	Content         map[string][]ContentPoint
	Capability      map[string][]CapabilityPoint
	CapabilityJudge map[string][]CapabilityJudgePoint
	UpdatedAt       time.Time
}
pkg/lab/store.go
@@ -1,275 +0,0 @@
package lab

import (
	"sync"
	"time"
)

type Store struct {
	mu sync.RWMutex

	// SSE subscriber channels -- notified on any data change.
	subMu sync.Mutex
	subs  map[chan struct{}]struct{}

	machines   []Machine
	machinesAt time.Time

	agents   AgentSummary
	agentsAt time.Time

	training   TrainingSummary
	trainingAt time.Time

	models   []HFModel
	modelsAt time.Time

	commits   []Commit
	commitsAt time.Time

	containers   []Container
	containersAt time.Time

	services   []Service
	servicesAt time.Time

	benchmarks   BenchmarkData
	benchmarksAt time.Time

	goldenSet   GoldenSetSummary
	goldenSetAt time.Time

	trainingRuns   []TrainingRunStatus
	trainingRunsAt time.Time

	dataset   DatasetSummary
	datasetAt time.Time

	errors map[string]string
}

func NewStore() *Store {
	return &Store{
		subs:   make(map[chan struct{}]struct{}),
		errors: make(map[string]string),
	}
}

// Subscribe returns a channel that receives a signal on every data update.
// Call Unsubscribe when done to avoid leaks.
func (s *Store) Subscribe() chan struct{} {
	ch := make(chan struct{}, 1)
	s.subMu.Lock()
	s.subs[ch] = struct{}{}
	s.subMu.Unlock()
	return ch
}

// Unsubscribe removes a subscriber channel.
func (s *Store) Unsubscribe(ch chan struct{}) {
	s.subMu.Lock()
	delete(s.subs, ch)
	s.subMu.Unlock()
}

// notify sends a non-blocking signal to all subscribers.
func (s *Store) notify() {
	s.subMu.Lock()
	defer s.subMu.Unlock()
	for ch := range s.subs {
		select {
		case ch <- struct{}{}:
		default:
		}
	}
}

func (s *Store) SetMachines(m []Machine) {
	s.mu.Lock()
	s.machines = m
	s.machinesAt = time.Now()
	s.mu.Unlock()
	s.notify()
}

func (s *Store) SetAgents(a AgentSummary) {
	s.mu.Lock()
	s.agents = a
	s.agentsAt = time.Now()
	s.mu.Unlock()
	s.notify()
}

func (s *Store) SetTraining(t TrainingSummary) {
	s.mu.Lock()
	s.training = t
	s.trainingAt = time.Now()
	s.mu.Unlock()
	s.notify()
}

func (s *Store) SetModels(m []HFModel) {
	s.mu.Lock()
	s.models = m
	s.modelsAt = time.Now()
	s.mu.Unlock()
	s.notify()
}

func (s *Store) SetCommits(c []Commit) {
	s.mu.Lock()
	s.commits = c
	s.commitsAt = time.Now()
	s.mu.Unlock()
	s.notify()
}

func (s *Store) SetContainers(c []Container) {
	s.mu.Lock()
	s.containers = c
	s.containersAt = time.Now()
	s.mu.Unlock()
	s.notify()
}

func (s *Store) SetError(collector string, err error) {
	s.mu.Lock()
	if err != nil {
		s.errors[collector] = err.Error()
	} else {
		delete(s.errors, collector)
	}
	s.mu.Unlock()
	s.notify()
}

func (s *Store) Overview() Overview {
	s.mu.RLock()
	defer s.mu.RUnlock()

	errCopy := make(map[string]string, len(s.errors))
	for k, v := range s.errors {
		errCopy[k] = v
	}

	// Merge containers into the first machine (snider-linux / local Docker host).
	machines := make([]Machine, len(s.machines))
	copy(machines, s.machines)
	if len(machines) > 0 {
		machines[0].Containers = s.containers
	}

	return Overview{
		UpdatedAt: time.Now(),
		Machines:  machines,
		Agents:    s.agents,
		Training:  s.training,
		Models:    s.models,
		Commits:   s.commits,
		Errors:    errCopy,
	}
}

func (s *Store) GetModels() []HFModel {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.models
}

func (s *Store) GetTraining() TrainingSummary {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.training
}

func (s *Store) GetAgents() AgentSummary {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.agents
}

func (s *Store) GetContainers() []Container {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.containers
}

func (s *Store) SetServices(svc []Service) {
	s.mu.Lock()
	s.services = svc
	s.servicesAt = time.Now()
	s.mu.Unlock()
	s.notify()
}

func (s *Store) GetServices() []Service {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.services
}

func (s *Store) SetBenchmarks(b BenchmarkData) {
	s.mu.Lock()
	s.benchmarks = b
	s.benchmarksAt = time.Now()
	s.mu.Unlock()
	s.notify()
}

func (s *Store) GetBenchmarks() BenchmarkData {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.benchmarks
}

func (s *Store) SetGoldenSet(g GoldenSetSummary) {
	s.mu.Lock()
	s.goldenSet = g
	s.goldenSetAt = time.Now()
	s.mu.Unlock()
	s.notify()
}

func (s *Store) GetGoldenSet() GoldenSetSummary {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.goldenSet
}

func (s *Store) SetTrainingRuns(runs []TrainingRunStatus) {
	s.mu.Lock()
	s.trainingRuns = runs
	s.trainingRunsAt = time.Now()
	s.mu.Unlock()
	s.notify()
}

func (s *Store) GetTrainingRuns() []TrainingRunStatus {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.trainingRuns
}

func (s *Store) SetDataset(d DatasetSummary) {
	s.mu.Lock()
	s.dataset = d
	s.datasetAt = time.Now()
	s.mu.Unlock()
	s.notify()
}

func (s *Store) GetDataset() DatasetSummary {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.dataset
}

func (s *Store) GetErrors() map[string]string {
	s.mu.RLock()
	defer s.mu.RUnlock()
	errCopy := make(map[string]string, len(s.errors))
	for k, v := range s.errors {
		errCopy[k] = v
	}
	return errCopy
}
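The Store's Subscribe/notify mechanism, a 1-buffered channel per subscriber with a select/default send, is what lets the SSE handler never block a collector: a slow or stalled reader simply has its updates coalesced into the one pending signal. A minimal sketch of that fan-out pattern in isolation (simplified names, not the deleted package itself):

```go
package main

import (
	"fmt"
	"sync"
)

// hub mirrors the Store's subscriber bookkeeping: each subscriber gets a
// channel with buffer 1, and notify uses select/default so a full buffer
// drops the signal instead of blocking the writer.
type hub struct {
	mu   sync.Mutex
	subs map[chan struct{}]struct{}
}

func newHub() *hub { return &hub{subs: make(map[chan struct{}]struct{})} }

func (h *hub) subscribe() chan struct{} {
	ch := make(chan struct{}, 1)
	h.mu.Lock()
	h.subs[ch] = struct{}{}
	h.mu.Unlock()
	return ch
}

func (h *hub) notify() {
	h.mu.Lock()
	defer h.mu.Unlock()
	for ch := range h.subs {
		select {
		case ch <- struct{}{}: // delivered
		default: // buffer full: subscriber already has a pending signal
		}
	}
}

func main() {
	h := newHub()
	ch := h.subscribe()
	h.notify()
	h.notify() // coalesced: the buffer already holds one signal
	<-ch       // drain the single pending signal
	select {
	case <-ch:
		fmt.Println("unexpected second signal")
	default:
		fmt.Println("updates coalesced into one signal")
	}
}
```

Dropping rather than blocking is the right trade here because the signal carries no payload; subscribers re-read the full state on wake-up, so one signal is as good as ten.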
@@ -1,391 +0,0 @@
package lab

import (
	"errors"
	"testing"
	"time"
)

// ── NewStore ────────────────────────────────────────────────────────

func TestNewStore_Good(t *testing.T) {
	s := NewStore()
	if s == nil {
		t.Fatal("NewStore returned nil")
	}
	if s.subs == nil {
		t.Fatal("subs map not initialised")
	}
	if s.errors == nil {
		t.Fatal("errors map not initialised")
	}
}

// ── Subscribe / Unsubscribe ────────────────────────────────────────

func TestSubscribe_Good(t *testing.T) {
	s := NewStore()
	ch := s.Subscribe()
	if ch == nil {
		t.Fatal("Subscribe returned nil channel")
	}

	s.subMu.Lock()
	_, ok := s.subs[ch]
	s.subMu.Unlock()
	if !ok {
		t.Fatal("subscriber not registered")
	}
}

func TestUnsubscribe_Good(t *testing.T) {
	s := NewStore()
	ch := s.Subscribe()
	s.Unsubscribe(ch)

	s.subMu.Lock()
	_, ok := s.subs[ch]
	s.subMu.Unlock()
	if ok {
		t.Fatal("subscriber not removed after Unsubscribe")
	}
}

func TestUnsubscribe_Bad_NeverSubscribed(t *testing.T) {
	s := NewStore()
	ch := make(chan struct{}, 1)
	// Should not panic.
	s.Unsubscribe(ch)
}

// ── Notify ─────────────────────────────────────────────────────────

func TestNotify_Good_SubscriberReceivesSignal(t *testing.T) {
	s := NewStore()
	ch := s.Subscribe()
	defer s.Unsubscribe(ch)

	s.SetMachines([]Machine{{Name: "test"}})

	select {
	case <-ch:
		// good
	case <-time.After(100 * time.Millisecond):
		t.Fatal("subscriber did not receive notification")
	}
}

func TestNotify_Good_NonBlockingWhenFull(t *testing.T) {
	s := NewStore()
	ch := s.Subscribe()
	defer s.Unsubscribe(ch)

	// Fill the buffer.
	ch <- struct{}{}

	// Should not block.
	s.SetMachines([]Machine{{Name: "a"}})
	s.SetMachines([]Machine{{Name: "b"}})
}

func TestNotify_Good_MultipleSubscribers(t *testing.T) {
	s := NewStore()
	ch1 := s.Subscribe()
	ch2 := s.Subscribe()
	defer s.Unsubscribe(ch1)
	defer s.Unsubscribe(ch2)

	s.SetAgents(AgentSummary{Available: true})

	for _, ch := range []chan struct{}{ch1, ch2} {
		select {
		case <-ch:
		case <-time.After(100 * time.Millisecond):
			t.Fatal("subscriber missed notification")
		}
	}
}

// ── SetMachines / Overview ─────────────────────────────────────────

func TestSetMachines_Good(t *testing.T) {
	s := NewStore()
	machines := []Machine{{Name: "noc", Host: "77.42.42.205"}, {Name: "de1", Host: "116.202.82.115"}}
	s.SetMachines(machines)

	ov := s.Overview()
	if len(ov.Machines) != 2 {
		t.Fatalf("expected 2 machines, got %d", len(ov.Machines))
	}
	if ov.Machines[0].Name != "noc" {
		t.Fatalf("expected noc, got %s", ov.Machines[0].Name)
	}
}

func TestOverview_Good_ContainersMergedIntoFirstMachine(t *testing.T) {
	s := NewStore()
	s.SetMachines([]Machine{{Name: "primary"}, {Name: "secondary"}})
	s.SetContainers([]Container{{Name: "forgejo", Status: "running"}})

	ov := s.Overview()
	if len(ov.Machines[0].Containers) != 1 {
		t.Fatal("containers not merged into first machine")
	}
	if ov.Machines[0].Containers[0].Name != "forgejo" {
		t.Fatalf("unexpected container name: %s", ov.Machines[0].Containers[0].Name)
	}
	if len(ov.Machines[1].Containers) != 0 {
		t.Fatal("containers leaked into second machine")
	}
}

func TestOverview_Good_EmptyMachinesNoContainerPanic(t *testing.T) {
	s := NewStore()
	s.SetContainers([]Container{{Name: "c1"}})

	// No machines set — should not panic.
	ov := s.Overview()
	if len(ov.Machines) != 0 {
		t.Fatal("expected zero machines")
	}
}

func TestOverview_Good_ErrorsCopied(t *testing.T) {
	s := NewStore()
	s.SetError("prometheus", errors.New("connection refused"))

	ov := s.Overview()
	if ov.Errors["prometheus"] != "connection refused" {
		t.Fatal("error not in overview")
	}

	// Mutating the copy should not affect the store.
	ov.Errors["prometheus"] = "hacked"
	ov2 := s.Overview()
	if ov2.Errors["prometheus"] != "connection refused" {
		t.Fatal("overview errors map is not a copy")
	}
}

// ── SetAgents / GetAgents ──────────────────────────────────────────

func TestAgents_Good(t *testing.T) {
	s := NewStore()
	s.SetAgents(AgentSummary{Available: true, RegisteredTotal: 3, QueuePending: 1})

	got := s.GetAgents()
	if !got.Available {
		t.Fatal("expected Available=true")
	}
	if got.RegisteredTotal != 3 {
		t.Fatalf("expected 3, got %d", got.RegisteredTotal)
	}
}

// ── SetTraining / GetTraining ──────────────────────────────────────

func TestTraining_Good(t *testing.T) {
	s := NewStore()
	s.SetTraining(TrainingSummary{GoldGenerated: 404, GoldTarget: 15000, GoldPercent: 2.69})

	got := s.GetTraining()
	if got.GoldGenerated != 404 {
		t.Fatalf("expected 404, got %d", got.GoldGenerated)
	}
}

// ── SetModels / GetModels ──────────────────────────────────────────

func TestModels_Good(t *testing.T) {
	s := NewStore()
	s.SetModels([]HFModel{{ModelID: "lthn/lem-gemma3-1b", Downloads: 42}})

	got := s.GetModels()
	if len(got) != 1 {
		t.Fatal("expected 1 model")
	}
	if got[0].Downloads != 42 {
		t.Fatalf("expected 42 downloads, got %d", got[0].Downloads)
	}
}

// ── SetCommits ─────────────────────────────────────────────────────

func TestCommits_Good(t *testing.T) {
	s := NewStore()
	s.SetCommits([]Commit{{SHA: "abc123", Message: "feat: test coverage", Author: "virgil"}})

	ov := s.Overview()
	if len(ov.Commits) != 1 {
		t.Fatal("expected 1 commit")
	}
	if ov.Commits[0].Author != "virgil" {
		t.Fatalf("expected virgil, got %s", ov.Commits[0].Author)
	}
}

// ── SetContainers / GetContainers ──────────────────────────────────

func TestContainers_Good(t *testing.T) {
	s := NewStore()
	s.SetContainers([]Container{{Name: "traefik", Status: "running"}, {Name: "forgejo", Status: "running"}})

	got := s.GetContainers()
	if len(got) != 2 {
		t.Fatal("expected 2 containers")
	}
}

// ── SetError / GetErrors ───────────────────────────────────────────

func TestSetError_Good_SetAndClear(t *testing.T) {
	s := NewStore()
	s.SetError("hf", errors.New("rate limited"))

	errs := s.GetErrors()
	if errs["hf"] != "rate limited" {
		t.Fatal("error not stored")
	}

	// Clear by passing nil.
	s.SetError("hf", nil)
	errs = s.GetErrors()
	if _, ok := errs["hf"]; ok {
		t.Fatal("error not cleared")
	}
}

func TestGetErrors_Good_ReturnsCopy(t *testing.T) {
	s := NewStore()
	s.SetError("forge", errors.New("timeout"))

	errs := s.GetErrors()
	errs["forge"] = "tampered"

	fresh := s.GetErrors()
	if fresh["forge"] != "timeout" {
		t.Fatal("GetErrors did not return a copy")
	}
}

// ── SetServices / GetServices ──────────────────────────────────────

func TestServices_Good(t *testing.T) {
	s := NewStore()
	s.SetServices([]Service{{Name: "Forgejo", URL: "https://forge.lthn.ai", Status: "ok"}})

	got := s.GetServices()
	if len(got) != 1 {
		t.Fatal("expected 1 service")
	}
	if got[0].Name != "Forgejo" {
		t.Fatalf("expected Forgejo, got %s", got[0].Name)
	}
}

// ── SetBenchmarks / GetBenchmarks ──────────────────────────────────

func TestBenchmarks_Good(t *testing.T) {
	s := NewStore()
	s.SetBenchmarks(BenchmarkData{
		Runs: []BenchmarkRun{{RunID: "run-1", Model: "gemma3-4b", Type: "training"}},
	})

	got := s.GetBenchmarks()
	if len(got.Runs) != 1 {
		t.Fatal("expected 1 benchmark run")
	}
}

// ── SetGoldenSet / GetGoldenSet ────────────────────────────────────

func TestGoldenSet_Good(t *testing.T) {
	s := NewStore()
	s.SetGoldenSet(GoldenSetSummary{Available: true, TotalExamples: 15000, TargetTotal: 15000, CompletionPct: 100})

	got := s.GetGoldenSet()
	if !got.Available {
		t.Fatal("expected Available=true")
	}
	if got.TotalExamples != 15000 {
		t.Fatalf("expected 15000, got %d", got.TotalExamples)
	}
}

// ── SetTrainingRuns / GetTrainingRuns ───────────────────────────────

func TestTrainingRuns_Good(t *testing.T) {
	s := NewStore()
	s.SetTrainingRuns([]TrainingRunStatus{
		{Model: "gemma3-4b", RunID: "r1", Status: "training", Iteration: 100, TotalIters: 300},
	})

	got := s.GetTrainingRuns()
	if len(got) != 1 {
		t.Fatal("expected 1 training run")
	}
	if got[0].Iteration != 100 {
		t.Fatalf("expected iter 100, got %d", got[0].Iteration)
	}
}

// ── SetDataset / GetDataset ────────────────────────────────────────

func TestDataset_Good(t *testing.T) {
	s := NewStore()
	s.SetDataset(DatasetSummary{
		Available: true,
		Tables:    []DatasetTable{{Name: "golden_set", Rows: 15000}},
	})

	got := s.GetDataset()
	if !got.Available {
		t.Fatal("expected Available=true")
	}
	if len(got.Tables) != 1 {
		t.Fatal("expected 1 table")
	}
}

// ── Concurrent access (race detector) ──────────────────────────────

func TestConcurrentAccess_Good(t *testing.T) {
	s := NewStore()
	done := make(chan struct{})

	// Writer goroutine.
	go func() {
		for i := range 100 {
			s.SetMachines([]Machine{{Name: "noc"}})
			s.SetAgents(AgentSummary{Available: true})
			s.SetTraining(TrainingSummary{GoldGenerated: i})
			s.SetModels([]HFModel{{ModelID: "m1"}})
			s.SetCommits([]Commit{{SHA: "abc"}})
			s.SetContainers([]Container{{Name: "c1"}})
			s.SetError("test", errors.New("e"))
			s.SetServices([]Service{{Name: "s1"}})
			s.SetBenchmarks(BenchmarkData{})
			s.SetGoldenSet(GoldenSetSummary{})
			s.SetTrainingRuns([]TrainingRunStatus{})
			s.SetDataset(DatasetSummary{})
		}
		close(done)
	}()

	// Reader goroutine.
	for range 100 {
		_ = s.Overview()
		_ = s.GetModels()
		_ = s.GetTraining()
		_ = s.GetAgents()
		_ = s.GetContainers()
		_ = s.GetServices()
		_ = s.GetBenchmarks()
		_ = s.GetGoldenSet()
		_ = s.GetTrainingRuns()
		_ = s.GetDataset()
		_ = s.GetErrors()
	}

	<-done
|
||||
}
|
||||
|
|
@@ -1,43 +0,0 @@
package manifest

import (
	"crypto/ed25519"
	"fmt"
	"path/filepath"

	"forge.lthn.ai/core/go-io"
	"gopkg.in/yaml.v3"
)

const manifestPath = ".core/view.yml"

// MarshalYAML serializes a manifest to YAML bytes.
func MarshalYAML(m *Manifest) ([]byte, error) {
	return yaml.Marshal(m)
}

// Load reads and parses a .core/view.yml from the given root directory.
func Load(medium io.Medium, root string) (*Manifest, error) {
	path := filepath.Join(root, manifestPath)
	data, err := medium.Read(path)
	if err != nil {
		return nil, fmt.Errorf("manifest.Load: %w", err)
	}
	return Parse([]byte(data))
}

// LoadVerified reads, parses, and verifies the ed25519 signature.
func LoadVerified(medium io.Medium, root string, pub ed25519.PublicKey) (*Manifest, error) {
	m, err := Load(medium, root)
	if err != nil {
		return nil, err
	}
	ok, err := Verify(m, pub)
	if err != nil {
		return nil, fmt.Errorf("manifest.LoadVerified: %w", err)
	}
	if !ok {
		return nil, fmt.Errorf("manifest.LoadVerified: signature verification failed for %q", m.Code)
	}
	return m, nil
}
@@ -1,63 +0,0 @@
package manifest

import (
	"crypto/ed25519"
	"testing"

	"forge.lthn.ai/core/go-io"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestLoad_Good(t *testing.T) {
	fs := io.NewMockMedium()
	fs.Files[".core/view.yml"] = `
code: test-app
name: Test App
version: 1.0.0
layout: HLCRF
slots:
  C: main-content
`
	m, err := Load(fs, ".")
	require.NoError(t, err)
	assert.Equal(t, "test-app", m.Code)
	assert.Equal(t, "main-content", m.Slots["C"])
}

func TestLoad_Bad_NoManifest(t *testing.T) {
	fs := io.NewMockMedium()
	_, err := Load(fs, ".")
	assert.Error(t, err)
}

func TestLoadVerified_Good(t *testing.T) {
	pub, priv, _ := ed25519.GenerateKey(nil)
	m := &Manifest{
		Code: "signed-app", Name: "Signed", Version: "1.0.0",
		Layout: "HLCRF", Slots: map[string]string{"C": "main"},
	}
	_ = Sign(m, priv)

	raw, _ := MarshalYAML(m)
	fs := io.NewMockMedium()
	fs.Files[".core/view.yml"] = string(raw)

	loaded, err := LoadVerified(fs, ".", pub)
	require.NoError(t, err)
	assert.Equal(t, "signed-app", loaded.Code)
}

func TestLoadVerified_Bad_Tampered(t *testing.T) {
	pub, priv, _ := ed25519.GenerateKey(nil)
	m := &Manifest{Code: "app", Version: "1.0.0"}
	_ = Sign(m, priv)

	raw, _ := MarshalYAML(m)
	tampered := "code: evil\n" + string(raw)[6:]
	fs := io.NewMockMedium()
	fs.Files[".core/view.yml"] = tampered

	_, err := LoadVerified(fs, ".", pub)
	assert.Error(t, err)
}
@@ -1,50 +0,0 @@
package manifest

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// Manifest represents a .core/view.yml application manifest.
type Manifest struct {
	Code    string            `yaml:"code"`
	Name    string            `yaml:"name"`
	Version string            `yaml:"version"`
	Sign    string            `yaml:"sign"`
	Layout  string            `yaml:"layout"`
	Slots   map[string]string `yaml:"slots"`

	Permissions Permissions `yaml:"permissions"`
	Modules     []string    `yaml:"modules"`
}

// Permissions declares the I/O capabilities a module requires.
type Permissions struct {
	Read  []string `yaml:"read"`
	Write []string `yaml:"write"`
	Net   []string `yaml:"net"`
	Run   []string `yaml:"run"`
}

// Parse decodes YAML bytes into a Manifest.
func Parse(data []byte) (*Manifest, error) {
	var m Manifest
	if err := yaml.Unmarshal(data, &m); err != nil {
		return nil, fmt.Errorf("manifest.Parse: %w", err)
	}
	return &m, nil
}

// SlotNames returns a deduplicated list of component names from slots.
func (m *Manifest) SlotNames() []string {
	seen := make(map[string]bool)
	var names []string
	for _, name := range m.Slots {
		if !seen[name] {
			seen[name] = true
			names = append(names, name)
		}
	}
	return names
}
@@ -1,65 +0,0 @@
package manifest

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestParse_Good(t *testing.T) {
	raw := `
code: photo-browser
name: Photo Browser
version: 0.1.0
sign: dGVzdHNpZw==

layout: HLCRF
slots:
  H: nav-breadcrumb
  L: folder-tree
  C: photo-grid
  R: metadata-panel
  F: status-bar

permissions:
  read: ["./photos/"]
  write: []
  net: []
  run: []

modules:
  - core/media
  - core/fs
`
	m, err := Parse([]byte(raw))
	require.NoError(t, err)
	assert.Equal(t, "photo-browser", m.Code)
	assert.Equal(t, "Photo Browser", m.Name)
	assert.Equal(t, "0.1.0", m.Version)
	assert.Equal(t, "dGVzdHNpZw==", m.Sign)
	assert.Equal(t, "HLCRF", m.Layout)
	assert.Equal(t, "nav-breadcrumb", m.Slots["H"])
	assert.Equal(t, "photo-grid", m.Slots["C"])
	assert.Len(t, m.Permissions.Read, 1)
	assert.Equal(t, "./photos/", m.Permissions.Read[0])
	assert.Len(t, m.Modules, 2)
}

func TestParse_Bad(t *testing.T) {
	_, err := Parse([]byte("not: valid: yaml: ["))
	assert.Error(t, err)
}

func TestManifest_SlotNames_Good(t *testing.T) {
	m := Manifest{
		Slots: map[string]string{
			"H": "nav-bar",
			"C": "main-content",
		},
	}
	names := m.SlotNames()
	assert.Contains(t, names, "nav-bar")
	assert.Contains(t, names, "main-content")
	assert.Len(t, names, 2)
}
@@ -1,44 +0,0 @@
package manifest

import (
	"crypto/ed25519"
	"encoding/base64"
	"errors"
	"fmt"

	"gopkg.in/yaml.v3"
)

// signable returns the canonical bytes to sign (manifest without sign field).
func signable(m *Manifest) ([]byte, error) {
	tmp := *m
	tmp.Sign = ""
	return yaml.Marshal(&tmp)
}

// Sign computes the ed25519 signature and stores it in m.Sign (base64).
func Sign(m *Manifest, priv ed25519.PrivateKey) error {
	msg, err := signable(m)
	if err != nil {
		return fmt.Errorf("manifest.Sign: marshal: %w", err)
	}
	sig := ed25519.Sign(priv, msg)
	m.Sign = base64.StdEncoding.EncodeToString(sig)
	return nil
}

// Verify checks the ed25519 signature in m.Sign against the public key.
func Verify(m *Manifest, pub ed25519.PublicKey) (bool, error) {
	if m.Sign == "" {
		return false, errors.New("manifest.Verify: no signature present")
	}
	sig, err := base64.StdEncoding.DecodeString(m.Sign)
	if err != nil {
		return false, fmt.Errorf("manifest.Verify: decode: %w", err)
	}
	msg, err := signable(m)
	if err != nil {
		return false, fmt.Errorf("manifest.Verify: marshal: %w", err)
	}
	return ed25519.Verify(pub, msg, sig), nil
}
@@ -1,51 +0,0 @@
package manifest

import (
	"crypto/ed25519"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestSignAndVerify_Good(t *testing.T) {
	pub, priv, err := ed25519.GenerateKey(nil)
	require.NoError(t, err)

	m := &Manifest{
		Code:    "test-app",
		Name:    "Test App",
		Version: "1.0.0",
		Layout:  "HLCRF",
		Slots:   map[string]string{"C": "main"},
	}

	err = Sign(m, priv)
	require.NoError(t, err)
	assert.NotEmpty(t, m.Sign)

	ok, err := Verify(m, pub)
	require.NoError(t, err)
	assert.True(t, ok)
}

func TestVerify_Bad_Tampered(t *testing.T) {
	pub, priv, _ := ed25519.GenerateKey(nil)
	m := &Manifest{Code: "test-app", Version: "1.0.0"}
	_ = Sign(m, priv)

	m.Code = "evil-app" // tamper

	ok, err := Verify(m, pub)
	require.NoError(t, err)
	assert.False(t, ok)
}

func TestVerify_Bad_Unsigned(t *testing.T) {
	pub, _, _ := ed25519.GenerateKey(nil)
	m := &Manifest{Code: "test-app"}

	ok, err := Verify(m, pub)
	assert.Error(t, err)
	assert.False(t, ok)
}
@@ -1,196 +0,0 @@
package marketplace

import (
	"context"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"time"

	"forge.lthn.ai/core/go-io"
	"forge.lthn.ai/core/go/pkg/manifest"
	"forge.lthn.ai/core/go/pkg/store"
)

const storeGroup = "_modules"

// Installer handles module installation from Git repos.
type Installer struct {
	modulesDir string
	store      *store.Store
}

// NewInstaller creates a new module installer.
func NewInstaller(modulesDir string, st *store.Store) *Installer {
	return &Installer{
		modulesDir: modulesDir,
		store:      st,
	}
}

// InstalledModule holds stored metadata about an installed module.
type InstalledModule struct {
	Code        string               `json:"code"`
	Name        string               `json:"name"`
	Version     string               `json:"version"`
	Repo        string               `json:"repo"`
	EntryPoint  string               `json:"entry_point"`
	Permissions manifest.Permissions `json:"permissions"`
	SignKey     string               `json:"sign_key,omitempty"`
	InstalledAt string               `json:"installed_at"`
}

// Install clones a module repo, verifies its manifest signature, and registers it.
func (i *Installer) Install(ctx context.Context, mod Module) error {
	// Check if already installed.
	if _, err := i.store.Get(storeGroup, mod.Code); err == nil {
		return fmt.Errorf("marketplace: module %q already installed", mod.Code)
	}

	dest := filepath.Join(i.modulesDir, mod.Code)
	if err := os.MkdirAll(i.modulesDir, 0755); err != nil {
		return fmt.Errorf("marketplace: mkdir: %w", err)
	}
	if err := gitClone(ctx, mod.Repo, dest); err != nil {
		return fmt.Errorf("marketplace: clone %s: %w", mod.Repo, err)
	}

	// On any error after clone, clean up the directory.
	cleanup := true
	defer func() {
		if cleanup {
			os.RemoveAll(dest)
		}
	}()

	medium, err := io.NewSandboxed(dest)
	if err != nil {
		return fmt.Errorf("marketplace: medium: %w", err)
	}

	m, err := loadManifest(medium, mod.SignKey)
	if err != nil {
		return err
	}

	entryPoint := filepath.Join(dest, "main.ts")
	installed := InstalledModule{
		Code:        mod.Code,
		Name:        m.Name,
		Version:     m.Version,
		Repo:        mod.Repo,
		EntryPoint:  entryPoint,
		Permissions: m.Permissions,
		SignKey:     mod.SignKey,
		InstalledAt: time.Now().UTC().Format(time.RFC3339),
	}

	data, err := json.Marshal(installed)
	if err != nil {
		return fmt.Errorf("marketplace: marshal: %w", err)
	}

	if err := i.store.Set(storeGroup, mod.Code, string(data)); err != nil {
		return fmt.Errorf("marketplace: store: %w", err)
	}

	cleanup = false
	return nil
}

// Remove uninstalls a module by deleting its files and store entry.
func (i *Installer) Remove(code string) error {
	if _, err := i.store.Get(storeGroup, code); err != nil {
		return fmt.Errorf("marketplace: module %q not installed", code)
	}

	dest := filepath.Join(i.modulesDir, code)
	os.RemoveAll(dest)

	return i.store.Delete(storeGroup, code)
}

// Update pulls latest changes and re-verifies the manifest.
func (i *Installer) Update(ctx context.Context, code string) error {
	raw, err := i.store.Get(storeGroup, code)
	if err != nil {
		return fmt.Errorf("marketplace: module %q not installed", code)
	}

	var installed InstalledModule
	if err := json.Unmarshal([]byte(raw), &installed); err != nil {
		return fmt.Errorf("marketplace: unmarshal: %w", err)
	}

	dest := filepath.Join(i.modulesDir, code)

	cmd := exec.CommandContext(ctx, "git", "-C", dest, "pull", "--ff-only")
	if output, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("marketplace: pull: %s: %w", strings.TrimSpace(string(output)), err)
	}

	// Reload and re-verify the manifest with the same key used at install time.
	medium, mErr := io.NewSandboxed(dest)
	if mErr != nil {
		return fmt.Errorf("marketplace: medium: %w", mErr)
	}
	m, mErr := loadManifest(medium, installed.SignKey)
	if mErr != nil {
		return fmt.Errorf("marketplace: reload manifest: %w", mErr)
	}

	// Update stored metadata.
	installed.Name = m.Name
	installed.Version = m.Version
	installed.Permissions = m.Permissions

	data, err := json.Marshal(installed)
	if err != nil {
		return fmt.Errorf("marketplace: marshal: %w", err)
	}

	return i.store.Set(storeGroup, code, string(data))
}

// Installed returns all installed module metadata.
func (i *Installer) Installed() ([]InstalledModule, error) {
	all, err := i.store.GetAll(storeGroup)
	if err != nil {
		return nil, fmt.Errorf("marketplace: list: %w", err)
	}

	var modules []InstalledModule
	for _, raw := range all {
		var m InstalledModule
		if err := json.Unmarshal([]byte(raw), &m); err != nil {
			continue
		}
		modules = append(modules, m)
	}
	return modules, nil
}

// loadManifest loads and optionally verifies a module manifest.
func loadManifest(medium io.Medium, signKey string) (*manifest.Manifest, error) {
	if signKey != "" {
		pubBytes, err := hex.DecodeString(signKey)
		if err != nil {
			return nil, fmt.Errorf("marketplace: decode sign key: %w", err)
		}
		return manifest.LoadVerified(medium, ".", pubBytes)
	}
	return manifest.Load(medium, ".")
}

// gitClone clones a repository with --depth=1.
func gitClone(ctx context.Context, repo, dest string) error {
	cmd := exec.CommandContext(ctx, "git", "clone", "--depth=1", repo, dest)
	if output, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("%s: %w", strings.TrimSpace(string(output)), err)
	}
	return nil
}
@@ -1,263 +0,0 @@
package marketplace

import (
	"context"
	"crypto/ed25519"
	"encoding/hex"
	"os"
	"os/exec"
	"path/filepath"
	"testing"

	"forge.lthn.ai/core/go/pkg/manifest"
	"forge.lthn.ai/core/go/pkg/store"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// createTestRepo creates a bare-bones git repo with a manifest and main.ts.
// Returns the repo path (usable as Module.Repo for local clone).
func createTestRepo(t *testing.T, code, version string) string {
	t.Helper()
	dir := filepath.Join(t.TempDir(), code)
	require.NoError(t, os.MkdirAll(filepath.Join(dir, ".core"), 0755))

	manifestYAML := "code: " + code + "\nname: Test " + code + "\nversion: \"" + version + "\"\n"
	require.NoError(t, os.WriteFile(
		filepath.Join(dir, ".core", "view.yml"),
		[]byte(manifestYAML), 0644,
	))
	require.NoError(t, os.WriteFile(
		filepath.Join(dir, "main.ts"),
		[]byte("export async function init(core: any) {}\n"), 0644,
	))

	runGit(t, dir, "init")
	runGit(t, dir, "add", ".")
	runGit(t, dir, "commit", "-m", "init")
	return dir
}

// createSignedTestRepo creates a git repo with a signed manifest.
// Returns (repo path, hex-encoded public key).
func createSignedTestRepo(t *testing.T, code, version string) (string, string) {
	t.Helper()
	pub, priv, err := ed25519.GenerateKey(nil)
	require.NoError(t, err)

	dir := filepath.Join(t.TempDir(), code)
	require.NoError(t, os.MkdirAll(filepath.Join(dir, ".core"), 0755))

	m := &manifest.Manifest{
		Code:    code,
		Name:    "Test " + code,
		Version: version,
	}
	require.NoError(t, manifest.Sign(m, priv))

	data, err := manifest.MarshalYAML(m)
	require.NoError(t, err)
	require.NoError(t, os.WriteFile(filepath.Join(dir, ".core", "view.yml"), data, 0644))
	require.NoError(t, os.WriteFile(filepath.Join(dir, "main.ts"), []byte("export async function init(core: any) {}\n"), 0644))

	runGit(t, dir, "init")
	runGit(t, dir, "add", ".")
	runGit(t, dir, "commit", "-m", "init")

	return dir, hex.EncodeToString(pub)
}

func runGit(t *testing.T, dir string, args ...string) {
	t.Helper()
	cmd := exec.Command("git", append([]string{"-C", dir, "-c", "user.email=test@test.com", "-c", "user.name=test"}, args...)...)
	out, err := cmd.CombinedOutput()
	require.NoError(t, err, "git %v: %s", args, string(out))
}

func TestInstall_Good(t *testing.T) {
	repo := createTestRepo(t, "hello-mod", "1.0")
	modulesDir := filepath.Join(t.TempDir(), "modules")

	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(modulesDir, st)
	err = inst.Install(context.Background(), Module{
		Code: "hello-mod",
		Repo: repo,
	})
	require.NoError(t, err)

	// Verify directory exists.
	_, err = os.Stat(filepath.Join(modulesDir, "hello-mod", "main.ts"))
	assert.NoError(t, err, "main.ts should exist in installed module")

	// Verify store entry.
	raw, err := st.Get("_modules", "hello-mod")
	require.NoError(t, err)
	assert.Contains(t, raw, `"code":"hello-mod"`)
	assert.Contains(t, raw, `"version":"1.0"`)
}

func TestInstall_Good_Signed(t *testing.T) {
	repo, signKey := createSignedTestRepo(t, "signed-mod", "2.0")
	modulesDir := filepath.Join(t.TempDir(), "modules")

	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(modulesDir, st)
	err = inst.Install(context.Background(), Module{
		Code:    "signed-mod",
		Repo:    repo,
		SignKey: signKey,
	})
	require.NoError(t, err)

	raw, err := st.Get("_modules", "signed-mod")
	require.NoError(t, err)
	assert.Contains(t, raw, `"version":"2.0"`)
}

func TestInstall_Bad_AlreadyInstalled(t *testing.T) {
	repo := createTestRepo(t, "dup-mod", "1.0")
	modulesDir := filepath.Join(t.TempDir(), "modules")

	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(modulesDir, st)
	mod := Module{Code: "dup-mod", Repo: repo}

	require.NoError(t, inst.Install(context.Background(), mod))
	err = inst.Install(context.Background(), mod)
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "already installed")
}

func TestInstall_Bad_InvalidSignature(t *testing.T) {
	// Sign with key A, verify with key B.
	repo, _ := createSignedTestRepo(t, "bad-sig", "1.0")
	_, wrongKey := createSignedTestRepo(t, "dummy", "1.0") // different key

	modulesDir := filepath.Join(t.TempDir(), "modules")

	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(modulesDir, st)
	err = inst.Install(context.Background(), Module{
		Code:    "bad-sig",
		Repo:    repo,
		SignKey: wrongKey,
	})
	assert.Error(t, err)

	// Verify directory was cleaned up.
	_, statErr := os.Stat(filepath.Join(modulesDir, "bad-sig"))
	assert.True(t, os.IsNotExist(statErr), "directory should be cleaned up on failure")
}

func TestRemove_Good(t *testing.T) {
	repo := createTestRepo(t, "rm-mod", "1.0")
	modulesDir := filepath.Join(t.TempDir(), "modules")

	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(modulesDir, st)
	require.NoError(t, inst.Install(context.Background(), Module{Code: "rm-mod", Repo: repo}))

	err = inst.Remove("rm-mod")
	require.NoError(t, err)

	// Directory gone.
	_, statErr := os.Stat(filepath.Join(modulesDir, "rm-mod"))
	assert.True(t, os.IsNotExist(statErr))

	// Store entry gone.
	_, err = st.Get("_modules", "rm-mod")
	assert.Error(t, err)
}

func TestRemove_Bad_NotInstalled(t *testing.T) {
	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(t.TempDir(), st)
	err = inst.Remove("nonexistent")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "not installed")
}

func TestInstalled_Good(t *testing.T) {
	modulesDir := filepath.Join(t.TempDir(), "modules")

	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(modulesDir, st)

	repo1 := createTestRepo(t, "mod-a", "1.0")
	repo2 := createTestRepo(t, "mod-b", "2.0")

	require.NoError(t, inst.Install(context.Background(), Module{Code: "mod-a", Repo: repo1}))
	require.NoError(t, inst.Install(context.Background(), Module{Code: "mod-b", Repo: repo2}))

	installed, err := inst.Installed()
	require.NoError(t, err)
	assert.Len(t, installed, 2)

	codes := map[string]bool{}
	for _, m := range installed {
		codes[m.Code] = true
	}
	assert.True(t, codes["mod-a"])
	assert.True(t, codes["mod-b"])
}

func TestInstalled_Good_Empty(t *testing.T) {
	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(t.TempDir(), st)
	installed, err := inst.Installed()
	require.NoError(t, err)
	assert.Empty(t, installed)
}

func TestUpdate_Good(t *testing.T) {
	repo := createTestRepo(t, "upd-mod", "1.0")
	modulesDir := filepath.Join(t.TempDir(), "modules")

	st, err := store.New(":memory:")
	require.NoError(t, err)
	defer st.Close()

	inst := NewInstaller(modulesDir, st)
	require.NoError(t, inst.Install(context.Background(), Module{Code: "upd-mod", Repo: repo}))

	// Update the origin repo.
	newManifest := "code: upd-mod\nname: Updated Module\nversion: \"2.0\"\n"
	require.NoError(t, os.WriteFile(filepath.Join(repo, ".core", "view.yml"), []byte(newManifest), 0644))
	runGit(t, repo, "add", ".")
	runGit(t, repo, "commit", "-m", "bump version")

	err = inst.Update(context.Background(), "upd-mod")
	require.NoError(t, err)

	// Verify updated metadata.
	installed, err := inst.Installed()
	require.NoError(t, err)
	require.Len(t, installed, 1)
	assert.Equal(t, "2.0", installed[0].Version)
	assert.Equal(t, "Updated Module", installed[0].Name)
}
@@ -1,67 +0,0 @@
package marketplace

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Module is a marketplace entry pointing to a module's Git repo.
type Module struct {
	Code     string `json:"code"`
	Name     string `json:"name"`
	Repo     string `json:"repo"`
	SignKey  string `json:"sign_key"`
	Category string `json:"category"`
}

// Index is the root marketplace catalog.
type Index struct {
	Version    int      `json:"version"`
	Modules    []Module `json:"modules"`
	Categories []string `json:"categories"`
}

// ParseIndex decodes a marketplace index.json.
func ParseIndex(data []byte) (*Index, error) {
	var idx Index
	if err := json.Unmarshal(data, &idx); err != nil {
		return nil, fmt.Errorf("marketplace.ParseIndex: %w", err)
	}
	return &idx, nil
}

// Search returns modules matching the query in code, name, or category.
func (idx *Index) Search(query string) []Module {
	q := strings.ToLower(query)
	var results []Module
	for _, m := range idx.Modules {
		if strings.Contains(strings.ToLower(m.Code), q) ||
			strings.Contains(strings.ToLower(m.Name), q) ||
			strings.Contains(strings.ToLower(m.Category), q) {
			results = append(results, m)
		}
	}
	return results
}

// ByCategory returns all modules in the given category.
func (idx *Index) ByCategory(category string) []Module {
	var results []Module
	for _, m := range idx.Modules {
		if m.Category == category {
			results = append(results, m)
		}
	}
	return results
}

// Find returns the module with the given code, or false if not found.
func (idx *Index) Find(code string) (Module, bool) {
	for _, m := range idx.Modules {
		if m.Code == code {
			return m, true
		}
	}
	return Module{}, false
}
@@ -1,65 +0,0 @@
package marketplace

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestParseIndex_Good(t *testing.T) {
	raw := `{
		"version": 1,
		"modules": [
			{"code": "mining-xmrig", "name": "XMRig Miner", "repo": "https://forge.lthn.io/host-uk/mod-xmrig.git", "sign_key": "abc123", "category": "miner"},
			{"code": "utils-cyberchef", "name": "CyberChef", "repo": "https://forge.lthn.io/host-uk/mod-cyberchef.git", "sign_key": "def456", "category": "utils"}
		],
		"categories": ["miner", "utils"]
	}`
	idx, err := ParseIndex([]byte(raw))
	require.NoError(t, err)
	assert.Equal(t, 1, idx.Version)
	assert.Len(t, idx.Modules, 2)
	assert.Equal(t, "mining-xmrig", idx.Modules[0].Code)
}

func TestSearch_Good(t *testing.T) {
	idx := &Index{
		Modules: []Module{
			{Code: "mining-xmrig", Name: "XMRig Miner", Category: "miner"},
			{Code: "utils-cyberchef", Name: "CyberChef", Category: "utils"},
		},
	}
	results := idx.Search("miner")
	assert.Len(t, results, 1)
	assert.Equal(t, "mining-xmrig", results[0].Code)
}

func TestByCategory_Good(t *testing.T) {
	idx := &Index{
		Modules: []Module{
			{Code: "a", Category: "miner"},
			{Code: "b", Category: "utils"},
			{Code: "c", Category: "miner"},
		},
	}
	miners := idx.ByCategory("miner")
	assert.Len(t, miners, 2)
}

func TestFind_Good(t *testing.T) {
	idx := &Index{
		Modules: []Module{
			{Code: "mining-xmrig", Name: "XMRig"},
		},
	}
	m, ok := idx.Find("mining-xmrig")
	assert.True(t, ok)
	assert.Equal(t, "XMRig", m.Name)
}

func TestFind_Bad_NotFound(t *testing.T) {
	idx := &Index{}
	_, ok := idx.Find("nope")
	assert.False(t, ok)
}
@@ -1,10 +0,0 @@
package plugin

// PluginConfig holds configuration for a single installed plugin.
type PluginConfig struct {
	Name        string `json:"name" yaml:"name"`
	Version     string `json:"version" yaml:"version"`
	Source      string `json:"source" yaml:"source"` // e.g., "github:org/repo"
	Enabled     bool   `json:"enabled" yaml:"enabled"`
	InstalledAt string `json:"installed_at" yaml:"installed_at"` // RFC 3339 timestamp
}
@@ -1,195 +0,0 @@
package plugin

import (
	"context"
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
	"time"

	coreerr "forge.lthn.ai/core/go-log"
	"forge.lthn.ai/core/go-io"
)

// Installer handles plugin installation from GitHub.
type Installer struct {
	medium   io.Medium
	registry *Registry
}

// NewInstaller creates a new plugin installer.
func NewInstaller(m io.Medium, registry *Registry) *Installer {
	return &Installer{
		medium:   m,
		registry: registry,
	}
}

// Install downloads and installs a plugin from GitHub.
// The source format is "org/repo" or "org/repo@version".
func (i *Installer) Install(ctx context.Context, source string) error {
	org, repo, version, err := ParseSource(source)
	if err != nil {
		return coreerr.E("plugin.Installer.Install", "invalid source", err)
	}

	// Check if already installed
	if _, exists := i.registry.Get(repo); exists {
		return coreerr.E("plugin.Installer.Install", "plugin already installed: "+repo, nil)
	}

	// Clone the repository
	pluginDir := filepath.Join(i.registry.basePath, repo)
	if err := i.medium.EnsureDir(pluginDir); err != nil {
		return coreerr.E("plugin.Installer.Install", "failed to create plugin directory", err)
	}

	if err := i.cloneRepo(ctx, org, repo, version, pluginDir); err != nil {
		return coreerr.E("plugin.Installer.Install", "failed to clone repository", err)
	}

	// Load and validate manifest
	manifestPath := filepath.Join(pluginDir, "plugin.json")
	manifest, err := LoadManifest(i.medium, manifestPath)
	if err != nil {
		// Clean up on failure
		_ = i.medium.DeleteAll(pluginDir)
		return coreerr.E("plugin.Installer.Install", "failed to load manifest", err)
	}

	if err := manifest.Validate(); err != nil {
		_ = i.medium.DeleteAll(pluginDir)
		return coreerr.E("plugin.Installer.Install", "invalid manifest", err)
	}

	// Resolve version
	if version == "" {
		version = manifest.Version
	}

	// Register in the registry
	cfg := &PluginConfig{
		Name:        manifest.Name,
		Version:     version,
		Source:      fmt.Sprintf("github:%s/%s", org, repo),
		Enabled:     true,
		InstalledAt: time.Now().UTC().Format(time.RFC3339),
	}

	if err := i.registry.Add(cfg); err != nil {
		return coreerr.E("plugin.Installer.Install", "failed to register plugin", err)
	}

	if err := i.registry.Save(); err != nil {
		return coreerr.E("plugin.Installer.Install", "failed to save registry", err)
	}

	return nil
}

// Update updates a plugin to the latest version.
func (i *Installer) Update(ctx context.Context, name string) error {
	cfg, ok := i.registry.Get(name)
	if !ok {
		return coreerr.E("plugin.Installer.Update", "plugin not found: "+name, nil)
	}

	// Parse the source to get org/repo
	source := strings.TrimPrefix(cfg.Source, "github:")
	pluginDir := filepath.Join(i.registry.basePath, name)

	// Pull latest changes
	cmd := exec.CommandContext(ctx, "git", "-C", pluginDir, "pull", "--ff-only")
	if output, err := cmd.CombinedOutput(); err != nil {
		return coreerr.E("plugin.Installer.Update", "failed to pull updates: "+strings.TrimSpace(string(output)), err)
	}

	// Reload manifest to get updated version
	manifestPath := filepath.Join(pluginDir, "plugin.json")
	manifest, err := LoadManifest(i.medium, manifestPath)
	if err != nil {
		return coreerr.E("plugin.Installer.Update", "failed to read updated manifest", err)
	}

	// Update registry
	cfg.Version = manifest.Version
	if err := i.registry.Save(); err != nil {
		return coreerr.E("plugin.Installer.Update", "failed to save registry", err)
	}

	_ = source // used for context
	return nil
}

// Remove uninstalls a plugin by removing its files and registry entry.
func (i *Installer) Remove(name string) error {
	if _, ok := i.registry.Get(name); !ok {
		return coreerr.E("plugin.Installer.Remove", "plugin not found: "+name, nil)
	}

	// Delete plugin directory
	pluginDir := filepath.Join(i.registry.basePath, name)
	if i.medium.Exists(pluginDir) {
		if err := i.medium.DeleteAll(pluginDir); err != nil {
			return coreerr.E("plugin.Installer.Remove", "failed to delete plugin files", err)
		}
	}

	// Remove from registry
	if err := i.registry.Remove(name); err != nil {
		return coreerr.E("plugin.Installer.Remove", "failed to unregister plugin", err)
	}

	if err := i.registry.Save(); err != nil {
		return coreerr.E("plugin.Installer.Remove", "failed to save registry", err)
	}

	return nil
}

// cloneRepo clones a GitHub repository using the gh CLI.
func (i *Installer) cloneRepo(ctx context.Context, org, repo, version, dest string) error {
	repoURL := fmt.Sprintf("%s/%s", org, repo)

	args := []string{"repo", "clone", repoURL, dest}
	if version != "" {
		args = append(args, "--", "--branch", version)
	}

	cmd := exec.CommandContext(ctx, "gh", args...)
	if output, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("%w: %s", err, strings.TrimSpace(string(output)))
	}

	return nil
}

// ParseSource parses a plugin source string into org, repo, and version.
// Accepted formats:
//   - "org/repo"      -> org="org", repo="repo", version=""
//   - "org/repo@v1.0" -> org="org", repo="repo", version="v1.0"
func ParseSource(source string) (org, repo, version string, err error) {
	if source == "" {
		return "", "", "", coreerr.E("plugin.ParseSource", "source is empty", nil)
	}

	// Split off version if present
	atIdx := strings.LastIndex(source, "@")
	path := source
	if atIdx != -1 {
		path = source[:atIdx]
		version = source[atIdx+1:]
		if version == "" {
			return "", "", "", coreerr.E("plugin.ParseSource", "version is empty after @", nil)
		}
	}

	// Split org/repo
	parts := strings.Split(path, "/")
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", "", coreerr.E("plugin.ParseSource", "source must be in format org/repo[@version]", nil)
	}

	return parts[0], parts[1], version, nil
}
@@ -1,166 +0,0 @@
package plugin

import (
	"context"
	"testing"

	"forge.lthn.ai/core/go-io"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// ── NewInstaller ───────────────────────────────────────────────────

func TestNewInstaller_Good(t *testing.T) {
	m := io.NewMockMedium()
	reg := NewRegistry(m, "/plugins")
	inst := NewInstaller(m, reg)

	assert.NotNil(t, inst)
	assert.Equal(t, m, inst.medium)
	assert.Equal(t, reg, inst.registry)
}

// ── Install error paths ────────────────────────────────────────────

func TestInstall_Bad_InvalidSource(t *testing.T) {
	m := io.NewMockMedium()
	reg := NewRegistry(m, "/plugins")
	inst := NewInstaller(m, reg)

	err := inst.Install(context.Background(), "bad-source")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "invalid source")
}

func TestInstall_Bad_AlreadyInstalled(t *testing.T) {
	m := io.NewMockMedium()
	reg := NewRegistry(m, "/plugins")
	_ = reg.Add(&PluginConfig{Name: "my-plugin", Version: "1.0.0"})

	inst := NewInstaller(m, reg)
	err := inst.Install(context.Background(), "org/my-plugin")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "already installed")
}

// ── Remove ─────────────────────────────────────────────────────────

func TestRemove_Good(t *testing.T) {
	m := io.NewMockMedium()
	reg := NewRegistry(m, "/plugins")
	_ = reg.Add(&PluginConfig{Name: "removable", Version: "1.0.0"})

	// Create plugin directory.
	_ = m.EnsureDir("/plugins/removable")
	_ = m.Write("/plugins/removable/plugin.json", `{"name":"removable"}`)

	inst := NewInstaller(m, reg)
	err := inst.Remove("removable")
	require.NoError(t, err)

	// Plugin removed from registry.
	_, ok := reg.Get("removable")
	assert.False(t, ok)

	// Directory cleaned up.
	assert.False(t, m.Exists("/plugins/removable"))
}

func TestRemove_Good_DirAlreadyGone(t *testing.T) {
	m := io.NewMockMedium()
	reg := NewRegistry(m, "/plugins")
	_ = reg.Add(&PluginConfig{Name: "ghost", Version: "1.0.0"})
	// No directory exists — should still succeed.

	inst := NewInstaller(m, reg)
	err := inst.Remove("ghost")
	require.NoError(t, err)

	_, ok := reg.Get("ghost")
	assert.False(t, ok)
}

func TestRemove_Bad_NotFound(t *testing.T) {
	m := io.NewMockMedium()
	reg := NewRegistry(m, "/plugins")
	inst := NewInstaller(m, reg)

	err := inst.Remove("nonexistent")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "plugin not found")
}

// ── Update error paths ─────────────────────────────────────────────

func TestUpdate_Bad_NotFound(t *testing.T) {
	m := io.NewMockMedium()
	reg := NewRegistry(m, "/plugins")
	inst := NewInstaller(m, reg)

	err := inst.Update(context.Background(), "missing")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "plugin not found")
}

// ── ParseSource ────────────────────────────────────────────────────

func TestParseSource_Good_OrgRepo(t *testing.T) {
	org, repo, version, err := ParseSource("host-uk/core-plugin")
	assert.NoError(t, err)
	assert.Equal(t, "host-uk", org)
	assert.Equal(t, "core-plugin", repo)
	assert.Equal(t, "", version)
}

func TestParseSource_Good_OrgRepoVersion(t *testing.T) {
	org, repo, version, err := ParseSource("host-uk/core-plugin@v1.0.0")
	assert.NoError(t, err)
	assert.Equal(t, "host-uk", org)
	assert.Equal(t, "core-plugin", repo)
	assert.Equal(t, "v1.0.0", version)
}

func TestParseSource_Good_VersionWithoutPrefix(t *testing.T) {
	org, repo, version, err := ParseSource("org/repo@1.2.3")
	assert.NoError(t, err)
	assert.Equal(t, "org", org)
	assert.Equal(t, "repo", repo)
	assert.Equal(t, "1.2.3", version)
}

func TestParseSource_Bad_Empty(t *testing.T) {
	_, _, _, err := ParseSource("")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "source is empty")
}

func TestParseSource_Bad_NoSlash(t *testing.T) {
	_, _, _, err := ParseSource("just-a-name")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "org/repo")
}

func TestParseSource_Bad_TooManySlashes(t *testing.T) {
	_, _, _, err := ParseSource("a/b/c")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "org/repo")
}

func TestParseSource_Bad_EmptyOrg(t *testing.T) {
	_, _, _, err := ParseSource("/repo")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "org/repo")
}

func TestParseSource_Bad_EmptyRepo(t *testing.T) {
	_, _, _, err := ParseSource("org/")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "org/repo")
}

func TestParseSource_Bad_EmptyVersion(t *testing.T) {
	_, _, _, err := ParseSource("org/repo@")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "version is empty")
}
@@ -1,63 +0,0 @@
package plugin

import (
	"path/filepath"

	coreerr "forge.lthn.ai/core/go-log"
	"forge.lthn.ai/core/go-io"
)

// Loader loads plugins from the filesystem.
type Loader struct {
	medium  io.Medium
	baseDir string
}

// NewLoader creates a new plugin loader.
func NewLoader(m io.Medium, baseDir string) *Loader {
	return &Loader{
		medium:  m,
		baseDir: baseDir,
	}
}

// Discover finds all plugin directories under baseDir and returns their manifests.
// Directories without a valid plugin.json are silently skipped.
func (l *Loader) Discover() ([]*Manifest, error) {
	entries, err := l.medium.List(l.baseDir)
	if err != nil {
		return nil, coreerr.E("plugin.Loader.Discover", "failed to list plugin directory", err)
	}

	var manifests []*Manifest
	for _, entry := range entries {
		if !entry.IsDir() {
			continue
		}

		manifest, err := l.LoadPlugin(entry.Name())
		if err != nil {
			// Skip directories without valid manifests
			continue
		}

		manifests = append(manifests, manifest)
	}

	return manifests, nil
}

// LoadPlugin loads a single plugin's manifest by name.
func (l *Loader) LoadPlugin(name string) (*Manifest, error) {
	manifestPath := filepath.Join(l.baseDir, name, "plugin.json")
	manifest, err := LoadManifest(l.medium, manifestPath)
	if err != nil {
		return nil, coreerr.E("plugin.Loader.LoadPlugin", "failed to load plugin: "+name, err)
	}

	if err := manifest.Validate(); err != nil {
		return nil, coreerr.E("plugin.Loader.LoadPlugin", "invalid plugin manifest: "+name, err)
	}

	return manifest, nil
}
@@ -1,146 +0,0 @@
package plugin

import (
	"testing"

	"forge.lthn.ai/core/go-io"
	"github.com/stretchr/testify/assert"
)

func TestLoader_Discover_Good(t *testing.T) {
	m := io.NewMockMedium()
	baseDir := "/home/user/.core/plugins"

	// Set up mock filesystem with two plugins
	m.Dirs[baseDir] = true
	m.Dirs[baseDir+"/plugin-a"] = true
	m.Dirs[baseDir+"/plugin-b"] = true

	m.Files[baseDir+"/plugin-a/plugin.json"] = `{
		"name": "plugin-a",
		"version": "1.0.0",
		"description": "Plugin A",
		"entrypoint": "main.go"
	}`

	m.Files[baseDir+"/plugin-b/plugin.json"] = `{
		"name": "plugin-b",
		"version": "2.0.0",
		"description": "Plugin B",
		"entrypoint": "run.sh"
	}`

	loader := NewLoader(m, baseDir)
	manifests, err := loader.Discover()
	assert.NoError(t, err)
	assert.Len(t, manifests, 2)

	names := make(map[string]bool)
	for _, manifest := range manifests {
		names[manifest.Name] = true
	}
	assert.True(t, names["plugin-a"])
	assert.True(t, names["plugin-b"])
}

func TestLoader_Discover_Good_SkipsInvalidPlugins(t *testing.T) {
	m := io.NewMockMedium()
	baseDir := "/home/user/.core/plugins"

	m.Dirs[baseDir] = true
	m.Dirs[baseDir+"/good-plugin"] = true
	m.Dirs[baseDir+"/bad-plugin"] = true

	// Valid plugin
	m.Files[baseDir+"/good-plugin/plugin.json"] = `{
		"name": "good-plugin",
		"version": "1.0.0",
		"entrypoint": "main.go"
	}`

	// Invalid plugin (bad JSON)
	m.Files[baseDir+"/bad-plugin/plugin.json"] = `{invalid}`

	loader := NewLoader(m, baseDir)
	manifests, err := loader.Discover()
	assert.NoError(t, err)
	assert.Len(t, manifests, 1)
	assert.Equal(t, "good-plugin", manifests[0].Name)
}

func TestLoader_Discover_Good_SkipsFiles(t *testing.T) {
	m := io.NewMockMedium()
	baseDir := "/home/user/.core/plugins"

	m.Dirs[baseDir] = true
	m.Dirs[baseDir+"/real-plugin"] = true
	m.Files[baseDir+"/registry.json"] = `{}` // A file, not a directory

	m.Files[baseDir+"/real-plugin/plugin.json"] = `{
		"name": "real-plugin",
		"version": "1.0.0",
		"entrypoint": "main.go"
	}`

	loader := NewLoader(m, baseDir)
	manifests, err := loader.Discover()
	assert.NoError(t, err)
	assert.Len(t, manifests, 1)
	assert.Equal(t, "real-plugin", manifests[0].Name)
}

func TestLoader_Discover_Good_EmptyDirectory(t *testing.T) {
	m := io.NewMockMedium()
	baseDir := "/home/user/.core/plugins"
	m.Dirs[baseDir] = true

	loader := NewLoader(m, baseDir)
	manifests, err := loader.Discover()
	assert.NoError(t, err)
	assert.Empty(t, manifests)
}

func TestLoader_LoadPlugin_Good(t *testing.T) {
	m := io.NewMockMedium()
	baseDir := "/home/user/.core/plugins"

	m.Dirs[baseDir+"/my-plugin"] = true
	m.Files[baseDir+"/my-plugin/plugin.json"] = `{
		"name": "my-plugin",
		"version": "1.0.0",
		"description": "My plugin",
		"author": "Test",
		"entrypoint": "main.go"
	}`

	loader := NewLoader(m, baseDir)
	manifest, err := loader.LoadPlugin("my-plugin")
	assert.NoError(t, err)
	assert.Equal(t, "my-plugin", manifest.Name)
	assert.Equal(t, "1.0.0", manifest.Version)
}

func TestLoader_LoadPlugin_Bad_NotFound(t *testing.T) {
	m := io.NewMockMedium()
	loader := NewLoader(m, "/home/user/.core/plugins")

	_, err := loader.LoadPlugin("nonexistent")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "failed to load plugin")
}

func TestLoader_LoadPlugin_Bad_InvalidManifest(t *testing.T) {
	m := io.NewMockMedium()
	baseDir := "/home/user/.core/plugins"

	m.Dirs[baseDir+"/bad-plugin"] = true
	m.Files[baseDir+"/bad-plugin/plugin.json"] = `{
		"name": "bad-plugin",
		"version": "1.0.0"
	}` // Missing entrypoint

	loader := NewLoader(m, baseDir)
	_, err := loader.LoadPlugin("bad-plugin")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "invalid plugin manifest")
}
@@ -1,50 +0,0 @@
package plugin

import (
	"encoding/json"

	coreerr "forge.lthn.ai/core/go-log"
	"forge.lthn.ai/core/go-io"
)

// Manifest represents a plugin.json manifest file.
// Each plugin repository must contain a plugin.json at its root.
type Manifest struct {
	Name         string   `json:"name"`
	Version      string   `json:"version"`
	Description  string   `json:"description"`
	Author       string   `json:"author"`
	Entrypoint   string   `json:"entrypoint"`
	Dependencies []string `json:"dependencies,omitempty"`
	MinVersion   string   `json:"min_version,omitempty"`
}

// LoadManifest reads and parses a plugin.json file from the given path.
func LoadManifest(m io.Medium, path string) (*Manifest, error) {
	content, err := m.Read(path)
	if err != nil {
		return nil, coreerr.E("plugin.LoadManifest", "failed to read manifest", err)
	}

	var manifest Manifest
	if err := json.Unmarshal([]byte(content), &manifest); err != nil {
		return nil, coreerr.E("plugin.LoadManifest", "failed to parse manifest JSON", err)
	}

	return &manifest, nil
}

// Validate checks the manifest for required fields.
// Returns an error if name, version, or entrypoint are missing.
func (m *Manifest) Validate() error {
	if m.Name == "" {
		return coreerr.E("plugin.Manifest.Validate", "name is required", nil)
	}
	if m.Version == "" {
		return coreerr.E("plugin.Manifest.Validate", "version is required", nil)
	}
	if m.Entrypoint == "" {
		return coreerr.E("plugin.Manifest.Validate", "entrypoint is required", nil)
	}
	return nil
}
@@ -1,109 +0,0 @@
package plugin

import (
	"testing"

	"forge.lthn.ai/core/go-io"
	"github.com/stretchr/testify/assert"
)

func TestLoadManifest_Good(t *testing.T) {
	m := io.NewMockMedium()
	m.Files["plugins/test/plugin.json"] = `{
		"name": "test-plugin",
		"version": "1.0.0",
		"description": "A test plugin",
		"author": "Test Author",
		"entrypoint": "main.go",
		"dependencies": ["dep-a", "dep-b"],
		"min_version": "0.5.0"
	}`

	manifest, err := LoadManifest(m, "plugins/test/plugin.json")
	assert.NoError(t, err)
	assert.Equal(t, "test-plugin", manifest.Name)
	assert.Equal(t, "1.0.0", manifest.Version)
	assert.Equal(t, "A test plugin", manifest.Description)
	assert.Equal(t, "Test Author", manifest.Author)
	assert.Equal(t, "main.go", manifest.Entrypoint)
	assert.Equal(t, []string{"dep-a", "dep-b"}, manifest.Dependencies)
	assert.Equal(t, "0.5.0", manifest.MinVersion)
}

func TestLoadManifest_Good_MinimalFields(t *testing.T) {
	m := io.NewMockMedium()
	m.Files["plugin.json"] = `{
		"name": "minimal",
		"version": "0.1.0",
		"entrypoint": "run.sh"
	}`

	manifest, err := LoadManifest(m, "plugin.json")
	assert.NoError(t, err)
	assert.Equal(t, "minimal", manifest.Name)
	assert.Equal(t, "0.1.0", manifest.Version)
	assert.Equal(t, "run.sh", manifest.Entrypoint)
	assert.Empty(t, manifest.Dependencies)
	assert.Empty(t, manifest.MinVersion)
}

func TestLoadManifest_Bad_FileNotFound(t *testing.T) {
	m := io.NewMockMedium()

	_, err := LoadManifest(m, "nonexistent/plugin.json")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "failed to read manifest")
}

func TestLoadManifest_Bad_InvalidJSON(t *testing.T) {
	m := io.NewMockMedium()
	m.Files["plugin.json"] = `{invalid json}`

	_, err := LoadManifest(m, "plugin.json")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "failed to parse manifest JSON")
}

func TestManifest_Validate_Good(t *testing.T) {
	manifest := &Manifest{
		Name:       "test-plugin",
		Version:    "1.0.0",
		Entrypoint: "main.go",
	}

	err := manifest.Validate()
	assert.NoError(t, err)
}

func TestManifest_Validate_Bad_MissingName(t *testing.T) {
	manifest := &Manifest{
		Version:    "1.0.0",
		Entrypoint: "main.go",
	}

	err := manifest.Validate()
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "name is required")
}

func TestManifest_Validate_Bad_MissingVersion(t *testing.T) {
	manifest := &Manifest{
		Name:       "test-plugin",
		Entrypoint: "main.go",
	}

	err := manifest.Validate()
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "version is required")
}

func TestManifest_Validate_Bad_MissingEntrypoint(t *testing.T) {
	manifest := &Manifest{
		Name:    "test-plugin",
		Version: "1.0.0",
	}

	err := manifest.Validate()
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "entrypoint is required")
}
@@ -1,54 +0,0 @@
// Package plugin provides a plugin system for the core CLI.
//
// Plugins extend the CLI with additional commands and functionality.
// They are distributed as GitHub repositories and managed via a local registry.
//
// Plugin lifecycle:
//   - Install: Download from GitHub, validate manifest, register
//   - Init: Parse manifest and prepare plugin
//   - Start: Activate plugin functionality
//   - Stop: Deactivate and clean up
//   - Remove: Unregister and delete files
package plugin

import "context"

// Plugin is the interface that all plugins must implement.
type Plugin interface {
	// Name returns the plugin's unique identifier.
	Name() string

	// Version returns the plugin's semantic version.
	Version() string

	// Init prepares the plugin for use.
	Init(ctx context.Context) error

	// Start activates the plugin.
	Start(ctx context.Context) error

	// Stop deactivates the plugin and releases resources.
	Stop(ctx context.Context) error
}

// BasePlugin provides a default implementation of Plugin.
// Embed this in concrete plugin types to inherit default behaviour.
type BasePlugin struct {
	PluginName    string
	PluginVersion string
}

// Name returns the plugin name.
func (p *BasePlugin) Name() string { return p.PluginName }

// Version returns the plugin version.
func (p *BasePlugin) Version() string { return p.PluginVersion }

// Init is a no-op default implementation.
func (p *BasePlugin) Init(_ context.Context) error { return nil }

// Start is a no-op default implementation.
func (p *BasePlugin) Start(_ context.Context) error { return nil }

// Stop is a no-op default implementation.
func (p *BasePlugin) Stop(_ context.Context) error { return nil }
@@ -1,39 +0,0 @@
package plugin

import (
	"context"
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestBasePlugin_Good(t *testing.T) {
	p := &BasePlugin{
		PluginName:    "test-plugin",
		PluginVersion: "1.0.0",
	}

	assert.Equal(t, "test-plugin", p.Name())
	assert.Equal(t, "1.0.0", p.Version())

	ctx := context.Background()
	assert.NoError(t, p.Init(ctx))
	assert.NoError(t, p.Start(ctx))
	assert.NoError(t, p.Stop(ctx))
}

func TestBasePlugin_Good_EmptyFields(t *testing.T) {
	p := &BasePlugin{}

	assert.Equal(t, "", p.Name())
	assert.Equal(t, "", p.Version())

	ctx := context.Background()
	assert.NoError(t, p.Init(ctx))
	assert.NoError(t, p.Start(ctx))
	assert.NoError(t, p.Stop(ctx))
}

func TestBasePlugin_Good_ImplementsPlugin(t *testing.T) {
	var _ Plugin = &BasePlugin{}
}
@@ -1,118 +0,0 @@
package plugin

import (
	"cmp"
	"encoding/json"
	"path/filepath"
	"slices"

	coreerr "forge.lthn.ai/core/go-log"
	"forge.lthn.ai/core/go-io"
)

const registryFilename = "registry.json"

// Registry manages installed plugins.
// Plugin metadata is stored in a registry.json file under the base path.
type Registry struct {
	medium   io.Medium
	basePath string // e.g., ~/.core/plugins/
	plugins  map[string]*PluginConfig
}

// NewRegistry creates a new plugin registry.
func NewRegistry(m io.Medium, basePath string) *Registry {
	return &Registry{
		medium:   m,
		basePath: basePath,
		plugins:  make(map[string]*PluginConfig),
	}
}

// List returns all installed plugins sorted by name.
func (r *Registry) List() []*PluginConfig {
	result := make([]*PluginConfig, 0, len(r.plugins))
	for _, cfg := range r.plugins {
		result = append(result, cfg)
	}
	slices.SortFunc(result, func(a, b *PluginConfig) int {
		return cmp.Compare(a.Name, b.Name)
	})
	return result
}

// Get returns a plugin by name.
// The second return value indicates whether the plugin was found.
func (r *Registry) Get(name string) (*PluginConfig, bool) {
	cfg, ok := r.plugins[name]
	return cfg, ok
}

// Add registers a plugin in the registry.
func (r *Registry) Add(cfg *PluginConfig) error {
	if cfg.Name == "" {
		return coreerr.E("plugin.Registry.Add", "plugin name is required", nil)
	}
	r.plugins[cfg.Name] = cfg
	return nil
}

// Remove unregisters a plugin from the registry.
func (r *Registry) Remove(name string) error {
	if _, ok := r.plugins[name]; !ok {
		return coreerr.E("plugin.Registry.Remove", "plugin not found: "+name, nil)
	}
	delete(r.plugins, name)
	return nil
}

// registryPath returns the full path to the registry file.
func (r *Registry) registryPath() string {
	return filepath.Join(r.basePath, registryFilename)
}

// Load reads the plugin registry from disk.
// If the registry file does not exist, the registry starts empty.
func (r *Registry) Load() error {
	path := r.registryPath()

	if !r.medium.IsFile(path) {
		// No registry file yet; start with empty registry
		r.plugins = make(map[string]*PluginConfig)
		return nil
	}

	content, err := r.medium.Read(path)
	if err != nil {
		return coreerr.E("plugin.Registry.Load", "failed to read registry", err)
	}

	var plugins map[string]*PluginConfig
	if err := json.Unmarshal([]byte(content), &plugins); err != nil {
		return coreerr.E("plugin.Registry.Load", "failed to parse registry", err)
	}

	if plugins == nil {
		plugins = make(map[string]*PluginConfig)
	}
	r.plugins = plugins
	return nil
}

// Save writes the plugin registry to disk.
func (r *Registry) Save() error {
	if err := r.medium.EnsureDir(r.basePath); err != nil {
		return coreerr.E("plugin.Registry.Save", "failed to create plugin directory", err)
	}

	data, err := json.MarshalIndent(r.plugins, "", "  ")
	if err != nil {
		return coreerr.E("plugin.Registry.Save", "failed to marshal registry", err)
	}

	if err := r.medium.Write(r.registryPath(), string(data)); err != nil {
		return coreerr.E("plugin.Registry.Save", "failed to write registry", err)
	}

	return nil
}
@@ -1,192 +0,0 @@
package plugin

import (
	"testing"

	"forge.lthn.ai/core/go-io"
	"github.com/stretchr/testify/assert"
)

func TestRegistry_Add_Good(t *testing.T) {
	m := io.NewMockMedium()
	reg := NewRegistry(m, "/home/user/.core/plugins")

	err := reg.Add(&PluginConfig{
		Name:    "my-plugin",
		Version: "1.0.0",
		Source:  "github:org/my-plugin",
		Enabled: true,
	})
	assert.NoError(t, err)

	list := reg.List()
	assert.Len(t, list, 1)
	assert.Equal(t, "my-plugin", list[0].Name)
	assert.Equal(t, "1.0.0", list[0].Version)
}

func TestRegistry_Add_Bad_EmptyName(t *testing.T) {
	m := io.NewMockMedium()
	reg := NewRegistry(m, "/home/user/.core/plugins")

	err := reg.Add(&PluginConfig{
		Version: "1.0.0",
	})
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "plugin name is required")
}

func TestRegistry_Remove_Good(t *testing.T) {
	m := io.NewMockMedium()
	reg := NewRegistry(m, "/home/user/.core/plugins")

	_ = reg.Add(&PluginConfig{
		Name:    "my-plugin",
		Version: "1.0.0",
	})

	err := reg.Remove("my-plugin")
	assert.NoError(t, err)
	assert.Empty(t, reg.List())
}

func TestRegistry_Get_Good(t *testing.T) {
	m := io.NewMockMedium()
	reg := NewRegistry(m, "/home/user/.core/plugins")

	_ = reg.Add(&PluginConfig{
		Name:    "test-plugin",
		Version: "2.0.0",
		Source:  "github:org/test-plugin",
	})

	cfg, ok := reg.Get("test-plugin")
	assert.True(t, ok)
	assert.Equal(t, "test-plugin", cfg.Name)
	assert.Equal(t, "2.0.0", cfg.Version)
}

func TestRegistry_Get_Bad_NotFound(t *testing.T) {
	m := io.NewMockMedium()
	reg := NewRegistry(m, "/home/user/.core/plugins")

	cfg, ok := reg.Get("nonexistent")
	assert.False(t, ok)
	assert.Nil(t, cfg)
}

func TestRegistry_Remove_Bad_NotFound(t *testing.T) {
	m := io.NewMockMedium()
	reg := NewRegistry(m, "/home/user/.core/plugins")

	err := reg.Remove("nonexistent")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "plugin not found")
}

func TestRegistry_SaveLoad_Good(t *testing.T) {
	m := io.NewMockMedium()
	basePath := "/home/user/.core/plugins"
	reg := NewRegistry(m, basePath)

	_ = reg.Add(&PluginConfig{
		Name:        "plugin-a",
		Version:     "1.0.0",
		Source:      "github:org/plugin-a",
		Enabled:     true,
		InstalledAt: "2025-01-01T00:00:00Z",
	})
	_ = reg.Add(&PluginConfig{
		Name:        "plugin-b",
		Version:     "2.0.0",
		Source:      "github:org/plugin-b",
		Enabled:     false,
		InstalledAt: "2025-01-02T00:00:00Z",
	})

	err := reg.Save()
	assert.NoError(t, err)

	// Load into a fresh registry
	reg2 := NewRegistry(m, basePath)
	err = reg2.Load()
	assert.NoError(t, err)

	list := reg2.List()
	assert.Len(t, list, 2)

	a, ok := reg2.Get("plugin-a")
	assert.True(t, ok)
	assert.Equal(t, "1.0.0", a.Version)
	assert.True(t, a.Enabled)

	b, ok := reg2.Get("plugin-b")
	assert.True(t, ok)
	assert.Equal(t, "2.0.0", b.Version)
	assert.False(t, b.Enabled)
}

func TestRegistry_Load_Good_EmptyWhenNoFile(t *testing.T) {
	m := io.NewMockMedium()
	reg := NewRegistry(m, "/home/user/.core/plugins")

	err := reg.Load()
	assert.NoError(t, err)
	assert.Empty(t, reg.List())
}

func TestRegistry_Load_Bad_InvalidJSON(t *testing.T) {
	m := io.NewMockMedium()
	basePath := "/home/user/.core/plugins"
	_ = m.Write(basePath+"/registry.json", "not valid json {{{")

	reg := NewRegistry(m, basePath)
	err := reg.Load()
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "failed to parse registry")
}

func TestRegistry_Load_Good_NullJSON(t *testing.T) {
	m := io.NewMockMedium()
	basePath := "/home/user/.core/plugins"
	_ = m.Write(basePath+"/registry.json", "null")

	reg := NewRegistry(m, basePath)
	err := reg.Load()
	assert.NoError(t, err)
	assert.Empty(t, reg.List())
}

func TestRegistry_Save_Good_CreatesDir(t *testing.T) {
	m := io.NewMockMedium()
	basePath := "/home/user/.core/plugins"
	reg := NewRegistry(m, basePath)

	_ = reg.Add(&PluginConfig{Name: "test", Version: "1.0.0"})
	err := reg.Save()
	assert.NoError(t, err)

	// Verify file was written.
	assert.True(t, m.IsFile(basePath+"/registry.json"))
}

func TestRegistry_List_Good_Sorted(t *testing.T) {
	m := io.NewMockMedium()
	reg := NewRegistry(m, "/plugins")

	_ = reg.Add(&PluginConfig{Name: "zebra", Version: "1.0.0"})
	_ = reg.Add(&PluginConfig{Name: "alpha", Version: "1.0.0"})
	_ = reg.Add(&PluginConfig{Name: "middle", Version: "1.0.0"})

	list := reg.List()
	assert.Len(t, list, 3)
	assert.Equal(t, "alpha", list[0].Name)
	assert.Equal(t, "middle", list[1].Name)
	assert.Equal(t, "zebra", list[2].Name)
}

func TestRegistry_RegistryPath_Good(t *testing.T) {
	m := io.NewMockMedium()
	reg := NewRegistry(m, "/base/path")
	assert.Equal(t, "/base/path/registry.json", reg.registryPath())
}
@@ -1,37 +0,0 @@
package process

import "time"

// --- ACTION messages (broadcast via Core.ACTION) ---

// ActionProcessStarted is broadcast when a process begins execution.
type ActionProcessStarted struct {
	ID      string
	Command string
	Args    []string
	Dir     string
	PID     int
}

// ActionProcessOutput is broadcast for each line of output.
// Subscribe to this for real-time streaming.
type ActionProcessOutput struct {
	ID     string
	Line   string
	Stream Stream
}

// ActionProcessExited is broadcast when a process completes.
// Check ExitCode for success (0) or failure.
type ActionProcessExited struct {
	ID       string
	ExitCode int
	Duration time.Duration
	Error    error // Non-nil if failed to start or was killed
}

// ActionProcessKilled is broadcast when a process is terminated.
type ActionProcessKilled struct {
	ID     string
	Signal string
}
@@ -1,108 +0,0 @@
package process

import "sync"

// RingBuffer is a fixed-size circular buffer that overwrites old data.
// Thread-safe for concurrent reads and writes.
type RingBuffer struct {
	data  []byte
	size  int
	start int
	end   int
	full  bool
	mu    sync.RWMutex
}

// NewRingBuffer creates a ring buffer with the given capacity.
func NewRingBuffer(size int) *RingBuffer {
	return &RingBuffer{
		data: make([]byte, size),
		size: size,
	}
}

// Write appends data to the buffer, overwriting oldest data if full.
func (rb *RingBuffer) Write(p []byte) (n int, err error) {
	rb.mu.Lock()
	defer rb.mu.Unlock()

	for _, b := range p {
		rb.data[rb.end] = b
		rb.end = (rb.end + 1) % rb.size
		if rb.full {
			rb.start = (rb.start + 1) % rb.size
		}
		if rb.end == rb.start {
			rb.full = true
		}
	}
	return len(p), nil
}

// String returns the buffer contents as a string.
func (rb *RingBuffer) String() string {
	rb.mu.RLock()
	defer rb.mu.RUnlock()

	if !rb.full && rb.start == rb.end {
		return ""
	}

	if rb.full {
		result := make([]byte, rb.size)
		copy(result, rb.data[rb.start:])
		copy(result[rb.size-rb.start:], rb.data[:rb.end])
		return string(result)
	}

	return string(rb.data[rb.start:rb.end])
}

// Bytes returns a copy of the buffer contents.
func (rb *RingBuffer) Bytes() []byte {
	rb.mu.RLock()
	defer rb.mu.RUnlock()

	if !rb.full && rb.start == rb.end {
		return nil
	}

	if rb.full {
		result := make([]byte, rb.size)
		copy(result, rb.data[rb.start:])
		copy(result[rb.size-rb.start:], rb.data[:rb.end])
		return result
	}

	result := make([]byte, rb.end-rb.start)
	copy(result, rb.data[rb.start:rb.end])
	return result
}

// Len returns the current length of data in the buffer.
func (rb *RingBuffer) Len() int {
	rb.mu.RLock()
	defer rb.mu.RUnlock()

	if rb.full {
		return rb.size
	}
	if rb.end >= rb.start {
		return rb.end - rb.start
	}
	return rb.size - rb.start + rb.end
}

// Cap returns the buffer capacity.
func (rb *RingBuffer) Cap() int {
	return rb.size
}

// Reset clears the buffer.
func (rb *RingBuffer) Reset() {
	rb.mu.Lock()
	defer rb.mu.Unlock()
	rb.start = 0
	rb.end = 0
	rb.full = false
}
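The wrap-around arithmetic in `Write` above (advance `end` modulo size; once full, drag `start` along so the oldest byte is overwritten first) can be condensed into a minimal standalone sketch. This re-implements the logic for illustration only; `miniRing` is not part of the deleted package:

```go
package main

import "fmt"

// miniRing is a stripped-down restatement of the RingBuffer index logic:
// end advances modulo the capacity, and once the buffer is full, start
// advances in lockstep so the oldest byte is dropped.
type miniRing struct {
	data       []byte
	start, end int
	full       bool
}

func (r *miniRing) write(p []byte) {
	for _, b := range p {
		r.data[r.end] = b
		r.end = (r.end + 1) % len(r.data)
		if r.full {
			r.start = (r.start + 1) % len(r.data)
		}
		if r.end == r.start {
			r.full = true
		}
	}
}

func (r *miniRing) contents() string {
	if !r.full && r.start == r.end {
		return "" // empty buffer
	}
	if r.full {
		// Contents wrap: oldest bytes from start to the end of the slice,
		// then the newer bytes from the front.
		return string(r.data[r.start:]) + string(r.data[:r.end])
	}
	return string(r.data[r.start:r.end])
}

func main() {
	r := &miniRing{data: make([]byte, 5)}
	r.write([]byte("hello"))
	fmt.Println(r.contents()) // exactly full: "hello"
	r.write([]byte("world"))
	fmt.Println(r.contents()) // "hello" fully overwritten: "world"
}
```

The same trace is what the "overflow wraps around" test in the next file asserts against the real RingBuffer.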
@@ -1,72 +0,0 @@
package process

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestRingBuffer(t *testing.T) {
	t.Run("write and read", func(t *testing.T) {
		rb := NewRingBuffer(10)

		n, err := rb.Write([]byte("hello"))
		assert.NoError(t, err)
		assert.Equal(t, 5, n)
		assert.Equal(t, "hello", rb.String())
		assert.Equal(t, 5, rb.Len())
	})

	t.Run("overflow wraps around", func(t *testing.T) {
		rb := NewRingBuffer(5)

		_, _ = rb.Write([]byte("hello"))
		assert.Equal(t, "hello", rb.String())

		_, _ = rb.Write([]byte("world"))
		// Should contain "world" (overwrote "hello")
		assert.Equal(t, 5, rb.Len())
		assert.Equal(t, "world", rb.String())
	})

	t.Run("partial overflow", func(t *testing.T) {
		rb := NewRingBuffer(10)

		_, _ = rb.Write([]byte("hello"))
		_, _ = rb.Write([]byte("worldx"))
		// Should contain "elloworldx" (last 10 of the 11 chars written)
		assert.Equal(t, 10, rb.Len())
	})

	t.Run("empty buffer", func(t *testing.T) {
		rb := NewRingBuffer(10)
		assert.Equal(t, "", rb.String())
		assert.Equal(t, 0, rb.Len())
		assert.Nil(t, rb.Bytes())
	})

	t.Run("reset", func(t *testing.T) {
		rb := NewRingBuffer(10)
		_, _ = rb.Write([]byte("hello"))
		rb.Reset()
		assert.Equal(t, "", rb.String())
		assert.Equal(t, 0, rb.Len())
	})

	t.Run("cap", func(t *testing.T) {
		rb := NewRingBuffer(42)
		assert.Equal(t, 42, rb.Cap())
	})

	t.Run("bytes returns copy", func(t *testing.T) {
		rb := NewRingBuffer(10)
		_, _ = rb.Write([]byte("hello"))

		bytes := rb.Bytes()
		assert.Equal(t, []byte("hello"), bytes)

		// Modifying returned bytes shouldn't affect buffer
		bytes[0] = 'x'
		assert.Equal(t, "hello", rb.String())
	})
}
@@ -1,176 +0,0 @@
package exec

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"os"
	"os/exec"
	"strings"
)

// Options configuration for command execution
type Options struct {
	Dir    string
	Env    []string
	Stdin  io.Reader
	Stdout io.Writer
	Stderr io.Writer
	// If true, command will run in background (not implemented in this wrapper yet)
	// Background bool
}

// Command wraps os/exec.Command with logging and context
func Command(ctx context.Context, name string, args ...string) *Cmd {
	return &Cmd{
		name: name,
		args: args,
		ctx:  ctx,
	}
}

// Cmd represents a wrapped command
type Cmd struct {
	name   string
	args   []string
	ctx    context.Context
	opts   Options
	cmd    *exec.Cmd
	logger Logger
}

// WithDir sets the working directory
func (c *Cmd) WithDir(dir string) *Cmd {
	c.opts.Dir = dir
	return c
}

// WithEnv sets the environment variables
func (c *Cmd) WithEnv(env []string) *Cmd {
	c.opts.Env = env
	return c
}

// WithStdin sets stdin
func (c *Cmd) WithStdin(r io.Reader) *Cmd {
	c.opts.Stdin = r
	return c
}

// WithStdout sets stdout
func (c *Cmd) WithStdout(w io.Writer) *Cmd {
	c.opts.Stdout = w
	return c
}

// WithStderr sets stderr
func (c *Cmd) WithStderr(w io.Writer) *Cmd {
	c.opts.Stderr = w
	return c
}

// WithLogger sets a custom logger for this command.
// If not set, the package default logger is used.
func (c *Cmd) WithLogger(l Logger) *Cmd {
	c.logger = l
	return c
}

// Run executes the command and waits for it to finish.
// It automatically logs the command execution at debug level.
func (c *Cmd) Run() error {
	c.prepare()
	c.logDebug("executing command")

	if err := c.cmd.Run(); err != nil {
		wrapped := wrapError(err, c.name, c.args)
		c.logError("command failed", wrapped)
		return wrapped
	}
	return nil
}

// Output runs the command and returns its standard output.
func (c *Cmd) Output() ([]byte, error) {
	c.prepare()
	c.logDebug("executing command")

	out, err := c.cmd.Output()
	if err != nil {
		wrapped := wrapError(err, c.name, c.args)
		c.logError("command failed", wrapped)
		return nil, wrapped
	}
	return out, nil
}

// CombinedOutput runs the command and returns its combined standard output and standard error.
func (c *Cmd) CombinedOutput() ([]byte, error) {
	c.prepare()
	c.logDebug("executing command")

	out, err := c.cmd.CombinedOutput()
	if err != nil {
		wrapped := wrapError(err, c.name, c.args)
		c.logError("command failed", wrapped)
		return out, wrapped
	}
	return out, nil
}

func (c *Cmd) prepare() {
	if c.ctx != nil {
		c.cmd = exec.CommandContext(c.ctx, c.name, c.args...)
	} else {
		// Should we enforce context? The issue says "Enforce context usage".
		// For now, let's allow nil but log a warning if we had a logger?
		// Or strictly panic/error?
		// Let's fallback to Background for now but maybe strict later.
		c.cmd = exec.Command(c.name, c.args...)
	}

	c.cmd.Dir = c.opts.Dir
	if len(c.opts.Env) > 0 {
		c.cmd.Env = append(os.Environ(), c.opts.Env...)
	}

	c.cmd.Stdin = c.opts.Stdin
	c.cmd.Stdout = c.opts.Stdout
	c.cmd.Stderr = c.opts.Stderr
}

// RunQuiet executes the command suppressing stdout unless there is an error.
// Useful for internal commands.
func RunQuiet(ctx context.Context, name string, args ...string) error {
	var stderr bytes.Buffer
	cmd := Command(ctx, name, args...).WithStderr(&stderr)
	if err := cmd.Run(); err != nil {
		// Include stderr in error message
		return fmt.Errorf("%w: %s", err, strings.TrimSpace(stderr.String()))
	}
	return nil
}

func wrapError(err error, name string, args []string) error {
	cmdStr := name + " " + strings.Join(args, " ")
	if exitErr, ok := err.(*exec.ExitError); ok {
		return fmt.Errorf("command %q failed with exit code %d: %w", cmdStr, exitErr.ExitCode(), err)
	}
	return fmt.Errorf("failed to execute %q: %w", cmdStr, err)
}

func (c *Cmd) getLogger() Logger {
	if c.logger != nil {
		return c.logger
	}
	return defaultLogger
}

func (c *Cmd) logDebug(msg string) {
	c.getLogger().Debug(msg, "cmd", c.name, "args", strings.Join(c.args, " "))
}

func (c *Cmd) logError(msg string, err error) {
	c.getLogger().Error(msg, "cmd", c.name, "args", strings.Join(c.args, " "), "err", err)
}
@@ -1,148 +0,0 @@
package exec_test

import (
	"context"
	"strings"
	"testing"

	"forge.lthn.ai/core/go/pkg/process/exec"
)

// mockLogger captures log calls for testing
type mockLogger struct {
	debugCalls []logCall
	errorCalls []logCall
}

type logCall struct {
	msg     string
	keyvals []any
}

func (m *mockLogger) Debug(msg string, keyvals ...any) {
	m.debugCalls = append(m.debugCalls, logCall{msg, keyvals})
}

func (m *mockLogger) Error(msg string, keyvals ...any) {
	m.errorCalls = append(m.errorCalls, logCall{msg, keyvals})
}

func TestCommand_Run_Good_LogsDebug(t *testing.T) {
	logger := &mockLogger{}
	ctx := context.Background()

	err := exec.Command(ctx, "echo", "hello").
		WithLogger(logger).
		Run()
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}

	if len(logger.debugCalls) != 1 {
		t.Fatalf("expected 1 debug call, got %d", len(logger.debugCalls))
	}
	if logger.debugCalls[0].msg != "executing command" {
		t.Errorf("expected msg 'executing command', got %q", logger.debugCalls[0].msg)
	}
	if len(logger.errorCalls) != 0 {
		t.Errorf("expected no error calls, got %d", len(logger.errorCalls))
	}
}

func TestCommand_Run_Bad_LogsError(t *testing.T) {
	logger := &mockLogger{}
	ctx := context.Background()

	err := exec.Command(ctx, "false").
		WithLogger(logger).
		Run()
	if err == nil {
		t.Fatal("expected error")
	}

	if len(logger.debugCalls) != 1 {
		t.Fatalf("expected 1 debug call, got %d", len(logger.debugCalls))
	}
	if len(logger.errorCalls) != 1 {
		t.Fatalf("expected 1 error call, got %d", len(logger.errorCalls))
	}
	if logger.errorCalls[0].msg != "command failed" {
		t.Errorf("expected msg 'command failed', got %q", logger.errorCalls[0].msg)
	}
}

func TestCommand_Output_Good(t *testing.T) {
	logger := &mockLogger{}
	ctx := context.Background()

	out, err := exec.Command(ctx, "echo", "test").
		WithLogger(logger).
		Output()
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if strings.TrimSpace(string(out)) != "test" {
		t.Errorf("expected 'test', got %q", string(out))
	}
	if len(logger.debugCalls) != 1 {
		t.Errorf("expected 1 debug call, got %d", len(logger.debugCalls))
	}
}

func TestCommand_CombinedOutput_Good(t *testing.T) {
	logger := &mockLogger{}
	ctx := context.Background()

	out, err := exec.Command(ctx, "echo", "combined").
		WithLogger(logger).
		CombinedOutput()
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if strings.TrimSpace(string(out)) != "combined" {
		t.Errorf("expected 'combined', got %q", string(out))
	}
	if len(logger.debugCalls) != 1 {
		t.Errorf("expected 1 debug call, got %d", len(logger.debugCalls))
	}
}

func TestNopLogger(t *testing.T) {
	// Verify NopLogger doesn't panic
	var nop exec.NopLogger
	nop.Debug("msg", "key", "val")
	nop.Error("msg", "key", "val")
}

func TestSetDefaultLogger(t *testing.T) {
	original := exec.DefaultLogger()
	defer exec.SetDefaultLogger(original)

	logger := &mockLogger{}
	exec.SetDefaultLogger(logger)

	if exec.DefaultLogger() != logger {
		t.Error("default logger not set correctly")
	}

	// Test nil resets to NopLogger
	exec.SetDefaultLogger(nil)
	if _, ok := exec.DefaultLogger().(exec.NopLogger); !ok {
		t.Error("expected NopLogger when setting nil")
	}
}

func TestCommand_UsesDefaultLogger(t *testing.T) {
	original := exec.DefaultLogger()
	defer exec.SetDefaultLogger(original)

	logger := &mockLogger{}
	exec.SetDefaultLogger(logger)

	ctx := context.Background()
	_ = exec.Command(ctx, "echo", "test").Run()

	if len(logger.debugCalls) != 1 {
		t.Errorf("expected default logger to receive 1 debug call, got %d", len(logger.debugCalls))
	}
}
@@ -1,35 +0,0 @@
package exec

// Logger interface for command execution logging.
// Compatible with pkg/log.Logger and other structured loggers.
type Logger interface {
	// Debug logs a debug-level message with optional key-value pairs.
	Debug(msg string, keyvals ...any)
	// Error logs an error-level message with optional key-value pairs.
	Error(msg string, keyvals ...any)
}

// NopLogger is a no-op logger that discards all messages.
type NopLogger struct{}

// Debug discards the message (no-op implementation).
func (NopLogger) Debug(string, ...any) {}

// Error discards the message (no-op implementation).
func (NopLogger) Error(string, ...any) {}

var defaultLogger Logger = NopLogger{}

// SetDefaultLogger sets the package-level default logger.
// Commands without an explicit logger will use this.
func SetDefaultLogger(l Logger) {
	if l == nil {
		l = NopLogger{}
	}
	defaultLogger = l
}

// DefaultLogger returns the current default logger.
func DefaultLogger() Logger {
	return defaultLogger
}
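The nil-safe default-logger pattern above means callers never need a nil check before logging. A standalone restatement of the pattern (the types here mirror the file above for illustration; `printLogger` is a hypothetical stand-in for a real structured logger, not the pkg/log implementation):

```go
package main

import "fmt"

// Logger mirrors the minimal structured-logging interface above.
type Logger interface {
	Debug(msg string, keyvals ...any)
	Error(msg string, keyvals ...any)
}

// NopLogger discards everything, so the default is always safe to call.
type NopLogger struct{}

func (NopLogger) Debug(string, ...any) {}
func (NopLogger) Error(string, ...any) {}

var defaultLogger Logger = NopLogger{}

// SetDefaultLogger swaps the package default; nil resets to the no-op,
// preserving the "never nil" invariant.
func SetDefaultLogger(l Logger) {
	if l == nil {
		l = NopLogger{}
	}
	defaultLogger = l
}

// printLogger is a hypothetical logger that writes key-value pairs to stdout.
type printLogger struct{}

func (printLogger) Debug(msg string, kv ...any) { fmt.Println("DEBUG", msg, kv) }
func (printLogger) Error(msg string, kv ...any) { fmt.Println("ERROR", msg, kv) }

func main() {
	defaultLogger.Debug("discarded") // NopLogger: no output, no nil check needed
	SetDefaultLogger(printLogger{})
	defaultLogger.Debug("executing command", "cmd", "echo")
	SetDefaultLogger(nil) // reset back to the safe no-op
	defaultLogger.Error("also discarded")
}
```

Resetting to `NopLogger` on nil is what the `TestSetDefaultLogger` case above verifies.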
@@ -1,298 +0,0 @@
package process

import (
	"context"
	"sync"
	"testing"

	"forge.lthn.ai/core/go/pkg/framework"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestGlobal_DefaultNotInitialized(t *testing.T) {
	// Reset global state for this test
	old := defaultService.Swap(nil)
	defer func() {
		if old != nil {
			defaultService.Store(old)
		}
	}()

	assert.Nil(t, Default())

	_, err := Start(context.Background(), "echo", "test")
	assert.ErrorIs(t, err, ErrServiceNotInitialized)

	_, err = Run(context.Background(), "echo", "test")
	assert.ErrorIs(t, err, ErrServiceNotInitialized)

	_, err = Get("proc-1")
	assert.ErrorIs(t, err, ErrServiceNotInitialized)

	assert.Nil(t, List())
	assert.Nil(t, Running())

	err = Kill("proc-1")
	assert.ErrorIs(t, err, ErrServiceNotInitialized)

	_, err = StartWithOptions(context.Background(), RunOptions{Command: "echo"})
	assert.ErrorIs(t, err, ErrServiceNotInitialized)

	_, err = RunWithOptions(context.Background(), RunOptions{Command: "echo"})
	assert.ErrorIs(t, err, ErrServiceNotInitialized)
}

func TestGlobal_SetDefault(t *testing.T) {
	t.Run("sets and retrieves service", func(t *testing.T) {
		// Reset global state
		old := defaultService.Swap(nil)
		defer func() {
			if old != nil {
				defaultService.Store(old)
			}
		}()

		core, err := framework.New(
			framework.WithName("process", NewService(Options{})),
		)
		require.NoError(t, err)

		svc, err := framework.ServiceFor[*Service](core, "process")
		require.NoError(t, err)

		SetDefault(svc)
		assert.Equal(t, svc, Default())
	})

	t.Run("panics on nil", func(t *testing.T) {
		assert.Panics(t, func() {
			SetDefault(nil)
		})
	})
}

func TestGlobal_ConcurrentDefault(t *testing.T) {
	// Reset global state
	old := defaultService.Swap(nil)
	defer func() {
		if old != nil {
			defaultService.Store(old)
		}
	}()

	core, err := framework.New(
		framework.WithName("process", NewService(Options{})),
	)
	require.NoError(t, err)

	svc, err := framework.ServiceFor[*Service](core, "process")
	require.NoError(t, err)

	SetDefault(svc)

	// Concurrent reads of Default()
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			s := Default()
			assert.NotNil(t, s)
			assert.Equal(t, svc, s)
		}()
	}
	wg.Wait()
}

func TestGlobal_ConcurrentSetDefault(t *testing.T) {
	// Reset global state
	old := defaultService.Swap(nil)
	defer func() {
		if old != nil {
			defaultService.Store(old)
		}
	}()

	// Create multiple services
	var services []*Service
	for i := 0; i < 10; i++ {
		core, err := framework.New(
			framework.WithName("process", NewService(Options{})),
		)
		require.NoError(t, err)

		svc, err := framework.ServiceFor[*Service](core, "process")
		require.NoError(t, err)
		services = append(services, svc)
	}

	// Concurrent SetDefault calls - should not panic or race
	var wg sync.WaitGroup
	for _, svc := range services {
		wg.Add(1)
		go func(s *Service) {
			defer wg.Done()
			SetDefault(s)
		}(svc)
	}
	wg.Wait()

	// Final state should be one of the services
	final := Default()
	assert.NotNil(t, final)

	found := false
	for _, svc := range services {
		if svc == final {
			found = true
			break
		}
	}
	assert.True(t, found, "Default should be one of the set services")
}

func TestGlobal_ConcurrentOperations(t *testing.T) {
	// Reset global state
	old := defaultService.Swap(nil)
	defer func() {
		if old != nil {
			defaultService.Store(old)
		}
	}()

	core, err := framework.New(
		framework.WithName("process", NewService(Options{})),
	)
	require.NoError(t, err)

	svc, err := framework.ServiceFor[*Service](core, "process")
	require.NoError(t, err)

	SetDefault(svc)

	// Concurrent Start, List, Get operations
	var wg sync.WaitGroup
	var processes []*Process
	var procMu sync.Mutex

	// Start 20 processes concurrently
	for i := 0; i < 20; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			proc, err := Start(context.Background(), "echo", "concurrent")
			if err == nil {
				procMu.Lock()
				processes = append(processes, proc)
				procMu.Unlock()
			}
		}()
	}

	// Concurrent List calls while starting
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = List()
			_ = Running()
		}()
	}

	wg.Wait()

	// Wait for all processes to complete
	procMu.Lock()
	for _, p := range processes {
		<-p.Done()
	}
	procMu.Unlock()

	// All should have succeeded
	assert.Len(t, processes, 20)

	// Concurrent Get calls
	var wg2 sync.WaitGroup
	for _, p := range processes {
		wg2.Add(1)
		go func(id string) {
			defer wg2.Done()
			got, err := Get(id)
			assert.NoError(t, err)
			assert.NotNil(t, got)
		}(p.ID)
	}
	wg2.Wait()
}

func TestGlobal_StartWithOptions(t *testing.T) {
	svc, _ := newTestService(t)

	// Set as default
	old := defaultService.Swap(svc)
	defer func() {
		if old != nil {
			defaultService.Store(old)
		}
	}()

	proc, err := StartWithOptions(context.Background(), RunOptions{
		Command: "echo",
		Args:    []string{"with", "options"},
	})
	require.NoError(t, err)

	<-proc.Done()

	assert.Equal(t, 0, proc.ExitCode)
	assert.Contains(t, proc.Output(), "with options")
}

func TestGlobal_RunWithOptions(t *testing.T) {
	svc, _ := newTestService(t)

	// Set as default
	old := defaultService.Swap(svc)
	defer func() {
		if old != nil {
			defaultService.Store(old)
		}
	}()

	output, err := RunWithOptions(context.Background(), RunOptions{
		Command: "echo",
		Args:    []string{"run", "options"},
	})
	require.NoError(t, err)
	assert.Contains(t, output, "run options")
}

func TestGlobal_Running(t *testing.T) {
	svc, _ := newTestService(t)

	// Set as default
	old := defaultService.Swap(svc)
	defer func() {
		if old != nil {
			defaultService.Store(old)
		}
	}()

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Start a long-running process
	proc, err := Start(ctx, "sleep", "60")
	require.NoError(t, err)

	running := Running()
	assert.Len(t, running, 1)
	assert.Equal(t, proc.ID, running[0].ID)

	cancel()
	<-proc.Done()

	running = Running()
	assert.Len(t, running, 0)
}
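The Default/SetDefault behaviour these tests exercise rests on `sync/atomic.Pointer`, which makes the swap-and-read of the global service lock-free and race-safe. A minimal standalone sketch of that pattern (the `Service` type here is a placeholder, not the process service):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Service stands in for the service held as a package-level default.
type Service struct{ name string }

var defaultService atomic.Pointer[Service]

// Default returns the current global, or nil before initialisation --
// a plain atomic load, safe against concurrent SetDefault calls.
func Default() *Service { return defaultService.Load() }

// SetDefault installs a new global; nil is rejected so Default can only
// return nil in the "never initialised" state.
func SetDefault(s *Service) {
	if s == nil {
		panic("SetDefault called with nil service")
	}
	defaultService.Store(s)
}

func main() {
	fmt.Println(Default() == nil) // true: not yet initialised

	// Concurrent stores neither panic nor race; one of them wins.
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			SetDefault(&Service{name: fmt.Sprintf("svc-%d", n)})
		}(i)
	}
	wg.Wait()

	fmt.Println(Default() != nil) // true: some stored service won
}
```

Under `go test -race` this pattern is clean, which is exactly what `TestGlobal_ConcurrentSetDefault` above checks at the package level.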
@@ -1,167 +0,0 @@
package process

import (
	"context"
	"io"
	"os/exec"
	"sync"
	"time"
)

// Process represents a managed external process.
type Process struct {
	ID        string
	Command   string
	Args      []string
	Dir       string
	Env       []string
	StartedAt time.Time
	Status    Status
	ExitCode  int
	Duration  time.Duration

	cmd    *exec.Cmd
	ctx    context.Context
	cancel context.CancelFunc
	output *RingBuffer
	stdin  io.WriteCloser
	done   chan struct{}
	mu     sync.RWMutex
}

// Info returns a snapshot of process state.
func (p *Process) Info() Info {
	p.mu.RLock()
	defer p.mu.RUnlock()

	pid := 0
	if p.cmd != nil && p.cmd.Process != nil {
		pid = p.cmd.Process.Pid
	}

	return Info{
		ID:        p.ID,
		Command:   p.Command,
		Args:      p.Args,
		Dir:       p.Dir,
		StartedAt: p.StartedAt,
		Status:    p.Status,
		ExitCode:  p.ExitCode,
		Duration:  p.Duration,
		PID:       pid,
	}
}

// Output returns the captured output as a string.
func (p *Process) Output() string {
	p.mu.RLock()
	defer p.mu.RUnlock()
	if p.output == nil {
		return ""
	}
	return p.output.String()
}

// OutputBytes returns the captured output as bytes.
func (p *Process) OutputBytes() []byte {
	p.mu.RLock()
	defer p.mu.RUnlock()
	if p.output == nil {
		return nil
	}
	return p.output.Bytes()
}

// IsRunning returns true if the process is still executing.
func (p *Process) IsRunning() bool {
	p.mu.RLock()
	defer p.mu.RUnlock()
	return p.Status == StatusRunning
}

// Wait blocks until the process exits.
func (p *Process) Wait() error {
	<-p.done
	p.mu.RLock()
	defer p.mu.RUnlock()
	if p.Status == StatusFailed || p.Status == StatusKilled {
		return &exec.ExitError{}
	}
	if p.ExitCode != 0 {
		return &exec.ExitError{}
	}
	return nil
}

// Done returns a channel that closes when the process exits.
func (p *Process) Done() <-chan struct{} {
	return p.done
}

// Kill forcefully terminates the process.
func (p *Process) Kill() error {
	p.mu.Lock()
	defer p.mu.Unlock()

	if p.Status != StatusRunning {
		return nil
	}

	if p.cmd == nil || p.cmd.Process == nil {
		return nil
	}

	return p.cmd.Process.Kill()
}

// Signal sends a signal to the process.
func (p *Process) Signal(sig interface{ Signal() }) error {
	p.mu.Lock()
	defer p.mu.Unlock()

	if p.Status != StatusRunning {
		return nil
	}

	if p.cmd == nil || p.cmd.Process == nil {
		return nil
	}

	// Type assert to os.Signal for Process.Signal
	if osSig, ok := sig.(interface{ String() string }); ok {
		_ = osSig // Satisfy linter
	}

	return p.cmd.Process.Kill() // Simplified - would use Signal in full impl
}

// SendInput writes to the process stdin.
func (p *Process) SendInput(input string) error {
	p.mu.RLock()
	defer p.mu.RUnlock()

	if p.Status != StatusRunning {
		return ErrProcessNotRunning
	}

	if p.stdin == nil {
		return ErrStdinNotAvailable
	}

	_, err := p.stdin.Write([]byte(input))
	return err
}

// CloseStdin closes the process stdin pipe.
func (p *Process) CloseStdin() error {
	p.mu.Lock()
	defer p.mu.Unlock()

	if p.stdin == nil {
		return nil
	}

	err := p.stdin.Close()
	p.stdin = nil
	return err
}
@@ -1,133 +0,0 @@
package process

import (
	"context"
	"sync"
	"sync/atomic"

	"forge.lthn.ai/core/go/pkg/framework"
)

// Global default service (follows i18n pattern).
var (
	defaultService atomic.Pointer[Service]
	defaultOnce    sync.Once
	defaultErr     error
)

// Default returns the global process service.
// Returns nil if not initialized.
func Default() *Service {
	return defaultService.Load()
}

// SetDefault sets the global process service.
// Thread-safe: can be called concurrently with Default().
func SetDefault(s *Service) {
	if s == nil {
		panic("process: SetDefault called with nil service")
	}
	defaultService.Store(s)
}

// Init initializes the default global service with a Core instance.
// This is typically called during application startup.
func Init(c *framework.Core) error {
	defaultOnce.Do(func() {
		factory := NewService(Options{})
		svc, err := factory(c)
		if err != nil {
			defaultErr = err
			return
		}
		defaultService.Store(svc.(*Service))
	})
	return defaultErr
}

// --- Global convenience functions ---

// Start spawns a new process using the default service.
func Start(ctx context.Context, command string, args ...string) (*Process, error) {
	svc := Default()
	if svc == nil {
		return nil, ErrServiceNotInitialized
	}
	return svc.Start(ctx, command, args...)
}

// Run executes a command and waits for completion using the default service.
func Run(ctx context.Context, command string, args ...string) (string, error) {
	svc := Default()
	if svc == nil {
		return "", ErrServiceNotInitialized
	}
	return svc.Run(ctx, command, args...)
}

// Get returns a process by ID from the default service.
func Get(id string) (*Process, error) {
	svc := Default()
	if svc == nil {
		return nil, ErrServiceNotInitialized
	}
	return svc.Get(id)
}

// List returns all processes from the default service.
func List() []*Process {
	svc := Default()
	if svc == nil {
		return nil
	}
	return svc.List()
}

// Kill terminates a process by ID using the default service.
func Kill(id string) error {
	svc := Default()
	if svc == nil {
		return ErrServiceNotInitialized
	}
	return svc.Kill(id)
}

// StartWithOptions spawns a process with full configuration using the default service.
func StartWithOptions(ctx context.Context, opts RunOptions) (*Process, error) {
	svc := Default()
	if svc == nil {
		return nil, ErrServiceNotInitialized
	}
	return svc.StartWithOptions(ctx, opts)
}

// RunWithOptions executes a command with options and waits using the default service.
func RunWithOptions(ctx context.Context, opts RunOptions) (string, error) {
	svc := Default()
	if svc == nil {
		return "", ErrServiceNotInitialized
	}
	return svc.RunWithOptions(ctx, opts)
}

// Running returns all currently running processes from the default service.
func Running() []*Process {
	svc := Default()
	if svc == nil {
		return nil
	}
	return svc.Running()
}

// ErrServiceNotInitialized is returned when the service is not initialized.
var ErrServiceNotInitialized = &ServiceError{msg: "process: service not initialized"}

// ServiceError represents a service-level error.
type ServiceError struct {
	msg string
}

// Error returns the service error message.
func (e *ServiceError) Error() string {
	return e.msg
}
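The global-default idiom this file used combines `atomic.Pointer` (lock-free reads on every call) with `sync.Once` (one-shot initialisation, with the error cached for repeat callers). A self-contained sketch of just that pattern, using a toy service type in place of the real one:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// svc stands in for the process Service; the pattern, not the
// type, is the point: lock-free Default() reads via atomic.Pointer,
// one-shot Init() via sync.Once.
type svc struct{ name string }

var (
	defaultSvc atomic.Pointer[svc]
	initOnce   sync.Once
	initErr    error
)

// Init runs its body at most once; later calls return the cached error.
func Init() error {
	initOnce.Do(func() {
		defaultSvc.Store(&svc{name: "default"})
	})
	return initErr
}

// Default returns the global service, or nil before Init.
func Default() *svc { return defaultSvc.Load() }

func main() {
	fmt.Println(Default() == nil) // true before Init
	_ = Init()
	_ = Init() // idempotent: Once runs the body a single time
	fmt.Println(Default().name)
}
```

Note the trade-off the original also makes: because `sync.Once` never re-runs, a failed `Init` stays failed for the process lifetime unless `SetDefault` is used to swap in a service directly.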
@@ -1,227 +0,0 @@
package process

import (
	"context"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestProcess_Info(t *testing.T) {
	svc, _ := newTestService(t)

	proc, err := svc.Start(context.Background(), "echo", "hello")
	require.NoError(t, err)

	<-proc.Done()

	info := proc.Info()
	assert.Equal(t, proc.ID, info.ID)
	assert.Equal(t, "echo", info.Command)
	assert.Equal(t, []string{"hello"}, info.Args)
	assert.Equal(t, StatusExited, info.Status)
	assert.Equal(t, 0, info.ExitCode)
	assert.Greater(t, info.Duration, time.Duration(0))
}

func TestProcess_Output(t *testing.T) {
	t.Run("captures stdout", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc, err := svc.Start(context.Background(), "echo", "hello world")
		require.NoError(t, err)

		<-proc.Done()

		output := proc.Output()
		assert.Contains(t, output, "hello world")
	})

	t.Run("OutputBytes returns copy", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc, err := svc.Start(context.Background(), "echo", "test")
		require.NoError(t, err)

		<-proc.Done()

		bytes := proc.OutputBytes()
		assert.NotNil(t, bytes)
		assert.Contains(t, string(bytes), "test")
	})
}

func TestProcess_IsRunning(t *testing.T) {
	t.Run("true while running", func(t *testing.T) {
		svc, _ := newTestService(t)

		ctx, cancel := context.WithCancel(context.Background())
		defer cancel()

		proc, err := svc.Start(ctx, "sleep", "10")
		require.NoError(t, err)

		assert.True(t, proc.IsRunning())

		cancel()
		<-proc.Done()

		assert.False(t, proc.IsRunning())
	})

	t.Run("false after completion", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc, err := svc.Start(context.Background(), "echo", "done")
		require.NoError(t, err)

		<-proc.Done()

		assert.False(t, proc.IsRunning())
	})
}

func TestProcess_Wait(t *testing.T) {
	t.Run("returns nil on success", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc, err := svc.Start(context.Background(), "echo", "ok")
		require.NoError(t, err)

		err = proc.Wait()
		assert.NoError(t, err)
	})

	t.Run("returns error on failure", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc, err := svc.Start(context.Background(), "sh", "-c", "exit 1")
		require.NoError(t, err)

		err = proc.Wait()
		assert.Error(t, err)
	})
}

func TestProcess_Done(t *testing.T) {
	t.Run("channel closes on completion", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc, err := svc.Start(context.Background(), "echo", "test")
		require.NoError(t, err)

		select {
		case <-proc.Done():
			// Success - channel closed
		case <-time.After(5 * time.Second):
			t.Fatal("Done channel should have closed")
		}
	})
}

func TestProcess_Kill(t *testing.T) {
	t.Run("terminates running process", func(t *testing.T) {
		svc, _ := newTestService(t)

		ctx, cancel := context.WithCancel(context.Background())
		defer cancel()

		proc, err := svc.Start(ctx, "sleep", "60")
		require.NoError(t, err)

		assert.True(t, proc.IsRunning())

		err = proc.Kill()
		assert.NoError(t, err)

		select {
		case <-proc.Done():
			// Good - process terminated
		case <-time.After(2 * time.Second):
			t.Fatal("process should have been killed")
		}
	})

	t.Run("noop on completed process", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc, err := svc.Start(context.Background(), "echo", "done")
		require.NoError(t, err)

		<-proc.Done()

		err = proc.Kill()
		assert.NoError(t, err)
	})
}

func TestProcess_SendInput(t *testing.T) {
	t.Run("writes to stdin", func(t *testing.T) {
		svc, _ := newTestService(t)

		// Use cat to echo back stdin
		proc, err := svc.Start(context.Background(), "cat")
		require.NoError(t, err)

		err = proc.SendInput("hello\n")
		assert.NoError(t, err)

		err = proc.CloseStdin()
		assert.NoError(t, err)

		<-proc.Done()

		assert.Contains(t, proc.Output(), "hello")
	})

	t.Run("error on completed process", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc, err := svc.Start(context.Background(), "echo", "done")
		require.NoError(t, err)

		<-proc.Done()

		err = proc.SendInput("test")
		assert.ErrorIs(t, err, ErrProcessNotRunning)
	})
}

func TestProcess_CloseStdin(t *testing.T) {
	t.Run("closes stdin pipe", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc, err := svc.Start(context.Background(), "cat")
		require.NoError(t, err)

		err = proc.CloseStdin()
		assert.NoError(t, err)

		// Process should exit now that stdin is closed
		select {
		case <-proc.Done():
			// Good
		case <-time.After(2 * time.Second):
			t.Fatal("cat should exit when stdin is closed")
		}
	})

	t.Run("double close is safe", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc, err := svc.Start(context.Background(), "cat")
		require.NoError(t, err)

		// First close
		err = proc.CloseStdin()
		assert.NoError(t, err)

		<-proc.Done()

		// Second close should be safe (stdin already nil)
		err = proc.CloseStdin()
		assert.NoError(t, err)
	})
}
@@ -1,293 +0,0 @@
package process

import (
	"context"
	"errors"
	"sync"
	"time"
)

// Runner orchestrates multiple processes with dependencies.
type Runner struct {
	service *Service
}

// NewRunner creates a runner for the given service.
func NewRunner(svc *Service) *Runner {
	return &Runner{service: svc}
}

// RunSpec defines a process to run with optional dependencies.
type RunSpec struct {
	// Name is a friendly identifier (e.g., "lint", "test").
	Name string
	// Command is the executable to run.
	Command string
	// Args are the command arguments.
	Args []string
	// Dir is the working directory.
	Dir string
	// Env are additional environment variables.
	Env []string
	// After lists spec names that must complete successfully first.
	After []string
	// AllowFailure, if true, continues the pipeline even if this spec fails.
	AllowFailure bool
}

// RunResult captures the outcome of a single process.
type RunResult struct {
	Name     string
	Spec     RunSpec
	ExitCode int
	Duration time.Duration
	Output   string
	Error    error
	Skipped  bool
}

// Passed returns true if the process succeeded.
func (r RunResult) Passed() bool {
	return !r.Skipped && r.Error == nil && r.ExitCode == 0
}

// RunAllResult is the aggregate result of running multiple specs.
type RunAllResult struct {
	Results  []RunResult
	Duration time.Duration
	Passed   int
	Failed   int
	Skipped  int
}

// Success returns true if all non-skipped specs passed.
func (r RunAllResult) Success() bool {
	return r.Failed == 0
}

// RunAll executes specs respecting dependencies, parallelising where possible.
func (r *Runner) RunAll(ctx context.Context, specs []RunSpec) (*RunAllResult, error) {
	start := time.Now()

	// Build dependency graph
	specMap := make(map[string]RunSpec)
	for _, spec := range specs {
		specMap[spec.Name] = spec
	}

	// Track completion
	completed := make(map[string]*RunResult)
	var completedMu sync.Mutex

	results := make([]RunResult, 0, len(specs))
	var resultsMu sync.Mutex

	// Process specs in waves
	remaining := make(map[string]RunSpec)
	for _, spec := range specs {
		remaining[spec.Name] = spec
	}

	for len(remaining) > 0 {
		// Find specs ready to run (all dependencies satisfied)
		ready := make([]RunSpec, 0)
		for _, spec := range remaining {
			if r.canRun(spec, completed) {
				ready = append(ready, spec)
			}
		}

		if len(ready) == 0 && len(remaining) > 0 {
			// Deadlock - circular dependency or missing specs
			for name := range remaining {
				results = append(results, RunResult{
					Name:    name,
					Spec:    remaining[name],
					Skipped: true,
					Error:   errors.New("circular dependency or missing dependency"),
				})
			}
			break
		}

		// Run ready specs in parallel
		var wg sync.WaitGroup
		for _, spec := range ready {
			wg.Add(1)
			go func(spec RunSpec) {
				defer wg.Done()

				// Check if dependencies failed
				completedMu.Lock()
				shouldSkip := false
				for _, dep := range spec.After {
					if result, ok := completed[dep]; ok {
						if !result.Passed() && !specMap[dep].AllowFailure {
							shouldSkip = true
							break
						}
					}
				}
				completedMu.Unlock()

				var result RunResult
				if shouldSkip {
					result = RunResult{
						Name:    spec.Name,
						Spec:    spec,
						Skipped: true,
						Error:   errors.New("skipped due to dependency failure"),
					}
				} else {
					result = r.runSpec(ctx, spec)
				}

				completedMu.Lock()
				completed[spec.Name] = &result
				completedMu.Unlock()

				resultsMu.Lock()
				results = append(results, result)
				resultsMu.Unlock()
			}(spec)
		}
		wg.Wait()

		// Remove completed from remaining
		for _, spec := range ready {
			delete(remaining, spec.Name)
		}
	}

	// Build aggregate result
	aggResult := &RunAllResult{
		Results:  results,
		Duration: time.Since(start),
	}

	for _, res := range results {
		if res.Skipped {
			aggResult.Skipped++
		} else if res.Passed() {
			aggResult.Passed++
		} else {
			aggResult.Failed++
		}
	}

	return aggResult, nil
}

// canRun checks if all dependencies are completed.
func (r *Runner) canRun(spec RunSpec, completed map[string]*RunResult) bool {
	for _, dep := range spec.After {
		if _, ok := completed[dep]; !ok {
			return false
		}
	}
	return true
}

// runSpec executes a single spec.
func (r *Runner) runSpec(ctx context.Context, spec RunSpec) RunResult {
	start := time.Now()

	proc, err := r.service.StartWithOptions(ctx, RunOptions{
		Command: spec.Command,
		Args:    spec.Args,
		Dir:     spec.Dir,
		Env:     spec.Env,
	})
	if err != nil {
		return RunResult{
			Name:     spec.Name,
			Spec:     spec,
			Duration: time.Since(start),
			Error:    err,
		}
	}

	<-proc.Done()

	return RunResult{
		Name:     spec.Name,
		Spec:     spec,
		ExitCode: proc.ExitCode,
		Duration: proc.Duration,
		Output:   proc.Output(),
		Error:    nil,
	}
}

// RunSequential executes specs one after another, stopping on first failure.
func (r *Runner) RunSequential(ctx context.Context, specs []RunSpec) (*RunAllResult, error) {
	start := time.Now()
	results := make([]RunResult, 0, len(specs))

	for _, spec := range specs {
		result := r.runSpec(ctx, spec)
		results = append(results, result)

		if !result.Passed() && !spec.AllowFailure {
			// Mark remaining as skipped
			for i := len(results); i < len(specs); i++ {
				results = append(results, RunResult{
					Name:    specs[i].Name,
					Spec:    specs[i],
					Skipped: true,
				})
			}
			break
		}
	}

	aggResult := &RunAllResult{
		Results:  results,
		Duration: time.Since(start),
	}

	for _, res := range results {
		if res.Skipped {
			aggResult.Skipped++
		} else if res.Passed() {
			aggResult.Passed++
		} else {
			aggResult.Failed++
		}
	}

	return aggResult, nil
}

// RunParallel executes all specs concurrently, regardless of dependencies.
func (r *Runner) RunParallel(ctx context.Context, specs []RunSpec) (*RunAllResult, error) {
	start := time.Now()
	results := make([]RunResult, len(specs))

	var wg sync.WaitGroup
	for i, spec := range specs {
		wg.Add(1)
		go func(i int, spec RunSpec) {
			defer wg.Done()
			results[i] = r.runSpec(ctx, spec)
		}(i, spec)
	}
	wg.Wait()

	aggResult := &RunAllResult{
		Results:  results,
		Duration: time.Since(start),
	}

	for _, res := range results {
		if res.Skipped {
			aggResult.Skipped++
		} else if res.Passed() {
			aggResult.Passed++
		} else {
			aggResult.Failed++
		}
	}

	return aggResult, nil
}
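RunAll's scheduling loop is a wave-based topological sort: on each pass it collects every spec whose `After` list is fully satisfied, runs that wave in parallel, and treats an empty wave with work remaining as a cycle or missing dependency. The same ready/remaining loop, isolated on plain data so it is easy to test (illustrative names, not the package API):

```go
package main

import "fmt"

// waves groups task names into dependency waves: each wave holds
// tasks whose dependencies are all satisfied by earlier waves.
// Returns nil on a circular or missing dependency, mirroring the
// deadlock branch in RunAll.
func waves(after map[string][]string) [][]string {
	done := map[string]bool{}
	remaining := map[string]bool{}
	for name := range after {
		remaining[name] = true
	}
	var out [][]string
	for len(remaining) > 0 {
		var ready []string
		for name := range remaining {
			ok := true
			for _, dep := range after[name] {
				if !done[dep] {
					ok = false
					break
				}
			}
			if ok {
				ready = append(ready, name)
			}
		}
		if len(ready) == 0 {
			return nil // circular or missing dependency: nothing can run
		}
		for _, name := range ready {
			done[name] = true
			delete(remaining, name)
		}
		out = append(out, ready)
	}
	return out
}

func main() {
	w := waves(map[string][]string{
		"a": nil, "b": nil, "c": nil,
		"final": {"a", "b", "c"},
	})
	// Two waves: {a, b, c} run concurrently, then {final}.
	fmt.Println(len(w), len(w[0]), w[1])
}
```

This matches the "parallel independent specs" test below: a, b, c form one wave and final runs only after all three complete.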
@@ -1,176 +0,0 @@
package process

import (
	"context"
	"testing"

	"forge.lthn.ai/core/go/pkg/framework"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func newTestRunner(t *testing.T) *Runner {
	t.Helper()

	core, err := framework.New(
		framework.WithName("process", NewService(Options{})),
	)
	require.NoError(t, err)

	svc, err := framework.ServiceFor[*Service](core, "process")
	require.NoError(t, err)

	return NewRunner(svc)
}

func TestRunner_RunSequential(t *testing.T) {
	t.Run("all pass", func(t *testing.T) {
		runner := newTestRunner(t)

		result, err := runner.RunSequential(context.Background(), []RunSpec{
			{Name: "first", Command: "echo", Args: []string{"1"}},
			{Name: "second", Command: "echo", Args: []string{"2"}},
			{Name: "third", Command: "echo", Args: []string{"3"}},
		})
		require.NoError(t, err)

		assert.True(t, result.Success())
		assert.Equal(t, 3, result.Passed)
		assert.Equal(t, 0, result.Failed)
		assert.Equal(t, 0, result.Skipped)
	})

	t.Run("stops on failure", func(t *testing.T) {
		runner := newTestRunner(t)

		result, err := runner.RunSequential(context.Background(), []RunSpec{
			{Name: "first", Command: "echo", Args: []string{"1"}},
			{Name: "fails", Command: "sh", Args: []string{"-c", "exit 1"}},
			{Name: "third", Command: "echo", Args: []string{"3"}},
		})
		require.NoError(t, err)

		assert.False(t, result.Success())
		assert.Equal(t, 1, result.Passed)
		assert.Equal(t, 1, result.Failed)
		assert.Equal(t, 1, result.Skipped)
	})

	t.Run("allow failure continues", func(t *testing.T) {
		runner := newTestRunner(t)

		result, err := runner.RunSequential(context.Background(), []RunSpec{
			{Name: "first", Command: "echo", Args: []string{"1"}},
			{Name: "fails", Command: "sh", Args: []string{"-c", "exit 1"}, AllowFailure: true},
			{Name: "third", Command: "echo", Args: []string{"3"}},
		})
		require.NoError(t, err)

		// Still counts as failed but pipeline continues
		assert.Equal(t, 2, result.Passed)
		assert.Equal(t, 1, result.Failed)
		assert.Equal(t, 0, result.Skipped)
	})
}

func TestRunner_RunParallel(t *testing.T) {
	t.Run("all run concurrently", func(t *testing.T) {
		runner := newTestRunner(t)

		result, err := runner.RunParallel(context.Background(), []RunSpec{
			{Name: "first", Command: "echo", Args: []string{"1"}},
			{Name: "second", Command: "echo", Args: []string{"2"}},
			{Name: "third", Command: "echo", Args: []string{"3"}},
		})
		require.NoError(t, err)

		assert.True(t, result.Success())
		assert.Equal(t, 3, result.Passed)
		assert.Len(t, result.Results, 3)
	})

	t.Run("failure doesn't stop others", func(t *testing.T) {
		runner := newTestRunner(t)

		result, err := runner.RunParallel(context.Background(), []RunSpec{
			{Name: "first", Command: "echo", Args: []string{"1"}},
			{Name: "fails", Command: "sh", Args: []string{"-c", "exit 1"}},
			{Name: "third", Command: "echo", Args: []string{"3"}},
		})
		require.NoError(t, err)

		assert.False(t, result.Success())
		assert.Equal(t, 2, result.Passed)
		assert.Equal(t, 1, result.Failed)
	})
}

func TestRunner_RunAll(t *testing.T) {
	t.Run("respects dependencies", func(t *testing.T) {
		runner := newTestRunner(t)

		result, err := runner.RunAll(context.Background(), []RunSpec{
			{Name: "third", Command: "echo", Args: []string{"3"}, After: []string{"second"}},
			{Name: "first", Command: "echo", Args: []string{"1"}},
			{Name: "second", Command: "echo", Args: []string{"2"}, After: []string{"first"}},
		})
		require.NoError(t, err)

		assert.True(t, result.Success())
		assert.Equal(t, 3, result.Passed)
	})

	t.Run("skips dependents on failure", func(t *testing.T) {
		runner := newTestRunner(t)

		result, err := runner.RunAll(context.Background(), []RunSpec{
			{Name: "first", Command: "sh", Args: []string{"-c", "exit 1"}},
			{Name: "second", Command: "echo", Args: []string{"2"}, After: []string{"first"}},
			{Name: "third", Command: "echo", Args: []string{"3"}, After: []string{"second"}},
		})
		require.NoError(t, err)

		assert.False(t, result.Success())
		assert.Equal(t, 0, result.Passed)
		assert.Equal(t, 1, result.Failed)
		assert.Equal(t, 2, result.Skipped)
	})

	t.Run("parallel independent specs", func(t *testing.T) {
		runner := newTestRunner(t)

		// These should run in parallel since they have no dependencies
		result, err := runner.RunAll(context.Background(), []RunSpec{
			{Name: "a", Command: "echo", Args: []string{"a"}},
			{Name: "b", Command: "echo", Args: []string{"b"}},
			{Name: "c", Command: "echo", Args: []string{"c"}},
			{Name: "final", Command: "echo", Args: []string{"done"}, After: []string{"a", "b", "c"}},
		})
		require.NoError(t, err)

		assert.True(t, result.Success())
		assert.Equal(t, 4, result.Passed)
	})
}

func TestRunResult_Passed(t *testing.T) {
	t.Run("success", func(t *testing.T) {
		r := RunResult{ExitCode: 0}
		assert.True(t, r.Passed())
	})

	t.Run("non-zero exit", func(t *testing.T) {
		r := RunResult{ExitCode: 1}
		assert.False(t, r.Passed())
	})

	t.Run("skipped", func(t *testing.T) {
		r := RunResult{ExitCode: 0, Skipped: true}
		assert.False(t, r.Passed())
	})

	t.Run("error", func(t *testing.T) {
		r := RunResult{ExitCode: 0, Error: assert.AnError}
		assert.False(t, r.Passed())
	})
}
@@ -1,378 +0,0 @@
package process

import (
	"bufio"
	"context"
	"errors"
	"fmt"
	"io"
	"os/exec"
	"sync"
	"sync/atomic"
	"time"

	"forge.lthn.ai/core/go/pkg/framework"
)

// Default buffer size for process output (1MB).
const DefaultBufferSize = 1024 * 1024

// Errors
var (
	ErrProcessNotFound   = errors.New("process not found")
	ErrProcessNotRunning = errors.New("process is not running")
	ErrStdinNotAvailable = errors.New("stdin not available")
)

// Service manages process execution with Core IPC integration.
type Service struct {
	*framework.ServiceRuntime[Options]

	processes map[string]*Process
	mu        sync.RWMutex
	bufSize   int
	idCounter atomic.Uint64
}

// Options configures the process service.
type Options struct {
	// BufferSize is the ring buffer size for output capture.
	// Default: 1MB (1024 * 1024 bytes).
	BufferSize int
}

// NewService creates a process service factory for Core registration.
//
//	core, _ := framework.New(
//		framework.WithName("process", process.NewService(process.Options{})),
//	)
func NewService(opts Options) func(*framework.Core) (any, error) {
	return func(c *framework.Core) (any, error) {
		if opts.BufferSize == 0 {
			opts.BufferSize = DefaultBufferSize
		}
		svc := &Service{
			ServiceRuntime: framework.NewServiceRuntime(c, opts),
			processes:      make(map[string]*Process),
			bufSize:        opts.BufferSize,
		}
		return svc, nil
	}
}

// OnStartup implements framework.Startable.
func (s *Service) OnStartup(ctx context.Context) error {
	return nil
}

// OnShutdown implements framework.Stoppable.
// Kills all running processes on shutdown.
func (s *Service) OnShutdown(ctx context.Context) error {
	s.mu.RLock()
	procs := make([]*Process, 0, len(s.processes))
	for _, p := range s.processes {
		if p.IsRunning() {
			procs = append(procs, p)
		}
	}
	s.mu.RUnlock()

	for _, p := range procs {
		_ = p.Kill()
	}

	return nil
}

// Start spawns a new process with the given command and args.
func (s *Service) Start(ctx context.Context, command string, args ...string) (*Process, error) {
	return s.StartWithOptions(ctx, RunOptions{
		Command: command,
		Args:    args,
	})
}

// StartWithOptions spawns a process with full configuration.
func (s *Service) StartWithOptions(ctx context.Context, opts RunOptions) (*Process, error) {
	id := fmt.Sprintf("proc-%d", s.idCounter.Add(1))

	procCtx, cancel := context.WithCancel(ctx)
	cmd := exec.CommandContext(procCtx, opts.Command, opts.Args...)

	if opts.Dir != "" {
		cmd.Dir = opts.Dir
	}
	if len(opts.Env) > 0 {
		cmd.Env = append(cmd.Environ(), opts.Env...)
	}

	// Set up pipes
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		cancel()
		return nil, fmt.Errorf("failed to create stdout pipe: %w", err)
	}

	stderr, err := cmd.StderrPipe()
	if err != nil {
		cancel()
		return nil, fmt.Errorf("failed to create stderr pipe: %w", err)
	}

	stdin, err := cmd.StdinPipe()
	if err != nil {
		cancel()
		return nil, fmt.Errorf("failed to create stdin pipe: %w", err)
	}

	// Create output buffer (enabled by default)
	var output *RingBuffer
	if !opts.DisableCapture {
		output = NewRingBuffer(s.bufSize)
	}

	proc := &Process{
		ID:        id,
		Command:   opts.Command,
		Args:      opts.Args,
		Dir:       opts.Dir,
		Env:       opts.Env,
		StartedAt: time.Now(),
		Status:    StatusRunning,
		cmd:       cmd,
		ctx:       procCtx,
		cancel:    cancel,
		output:    output,
		stdin:     stdin,
		done:      make(chan struct{}),
	}

	// Start the process
	if err := cmd.Start(); err != nil {
		cancel()
		return nil, fmt.Errorf("failed to start process: %w", err)
	}

	// Store process
	s.mu.Lock()
	s.processes[id] = proc
	s.mu.Unlock()

	// Broadcast start
	_ = s.Core().ACTION(ActionProcessStarted{
		ID:      id,
		Command: opts.Command,
		Args:    opts.Args,
		Dir:     opts.Dir,
		PID:     cmd.Process.Pid,
	})

	// Stream output in goroutines
	var wg sync.WaitGroup
	wg.Add(2)
	go func() {
		defer wg.Done()
		s.streamOutput(proc, stdout, StreamStdout)
	}()
	go func() {
		defer wg.Done()
		s.streamOutput(proc, stderr, StreamStderr)
	}()

	// Wait for process completion
	go func() {
		// Wait for output streaming to complete
		wg.Wait()

		// Wait for process exit
		err := cmd.Wait()

		duration := time.Since(proc.StartedAt)

		proc.mu.Lock()
		proc.Duration = duration
		if err != nil {
			var exitErr *exec.ExitError
			if errors.As(err, &exitErr) {
				proc.ExitCode = exitErr.ExitCode()
				proc.Status = StatusExited
			} else {
				proc.Status = StatusFailed
			}
		} else {
			proc.ExitCode = 0
			proc.Status = StatusExited
		}
		status := proc.Status
		exitCode := proc.ExitCode
		proc.mu.Unlock()

		close(proc.done)

		// Broadcast exit
		var exitErr error
		if status == StatusFailed {
			exitErr = err
		}
		_ = s.Core().ACTION(ActionProcessExited{
			ID:       id,
			ExitCode: exitCode,
			Duration: duration,
			Error:    exitErr,
		})
	}()

	return proc, nil
}

// streamOutput reads from a pipe and broadcasts lines via ACTION.
func (s *Service) streamOutput(proc *Process, r io.Reader, stream Stream) {
	scanner := bufio.NewScanner(r)
	// Increase buffer for long lines
	scanner.Buffer(make([]byte, 64*1024), 1024*1024)

	for scanner.Scan() {
		line := scanner.Text()

		// Write to ring buffer
		if proc.output != nil {
			_, _ = proc.output.Write([]byte(line + "\n"))
		}

		// Broadcast output
		_ = s.Core().ACTION(ActionProcessOutput{
			ID:     proc.ID,
			Line:   line,
			Stream: stream,
		})
	}
}

// Get returns a process by ID.
func (s *Service) Get(id string) (*Process, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()

	proc, ok := s.processes[id]
	if !ok {
		return nil, ErrProcessNotFound
	}
	return proc, nil
}

// List returns all processes.
func (s *Service) List() []*Process {
	s.mu.RLock()
	defer s.mu.RUnlock()

	result := make([]*Process, 0, len(s.processes))
	for _, p := range s.processes {
		result = append(result, p)
	}
	return result
}

// Running returns all currently running processes.
func (s *Service) Running() []*Process {
	s.mu.RLock()
	defer s.mu.RUnlock()

	var result []*Process
	for _, p := range s.processes {
		if p.IsRunning() {
			result = append(result, p)
		}
	}
	return result
}

// Kill terminates a process by ID.
func (s *Service) Kill(id string) error {
	proc, err := s.Get(id)
	if err != nil {
		return err
	}

	if err := proc.Kill(); err != nil {
		return err
	}

	_ = s.Core().ACTION(ActionProcessKilled{
		ID:     id,
		Signal: "SIGKILL",
	})

	return nil
}

// Remove removes a completed process from the list.
func (s *Service) Remove(id string) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	proc, ok := s.processes[id]
	if !ok {
		return ErrProcessNotFound
	}

	if proc.IsRunning() {
		return errors.New("cannot remove running process")
	}

	delete(s.processes, id)
	return nil
}

// Clear removes all completed processes.
|
||||
func (s *Service) Clear() {
|
||||
s.mu.Lock()
|
||||
defer s.mu.Unlock()
|
||||
|
||||
for id, p := range s.processes {
|
||||
if !p.IsRunning() {
|
||||
delete(s.processes, id)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Output returns the captured output of a process.
|
||||
func (s *Service) Output(id string) (string, error) {
|
||||
proc, err := s.Get(id)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
return proc.Output(), nil
|
||||
}
|
||||
|
||||
// Run executes a command and waits for completion.
|
||||
// Returns the combined output and any error.
|
||||
func (s *Service) Run(ctx context.Context, command string, args ...string) (string, error) {
|
||||
proc, err := s.Start(ctx, command, args...)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
<-proc.Done()
|
||||
|
||||
output := proc.Output()
|
||||
if proc.ExitCode != 0 {
|
||||
return output, fmt.Errorf("process exited with code %d", proc.ExitCode)
|
||||
}
|
||||
return output, nil
|
||||
}
|
||||
|
||||
// RunWithOptions executes a command with options and waits for completion.
|
||||
func (s *Service) RunWithOptions(ctx context.Context, opts RunOptions) (string, error) {
|
||||
proc, err := s.StartWithOptions(ctx, opts)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
<-proc.Done()
|
||||
|
||||
output := proc.Output()
|
||||
if proc.ExitCode != 0 {
|
||||
return output, fmt.Errorf("process exited with code %d", proc.ExitCode)
|
||||
}
|
||||
return output, nil
|
||||
}
|
||||
|
|
@@ -1,257 +0,0 @@
package process

import (
	"context"
	"strings"
	"sync"
	"testing"
	"time"

	"forge.lthn.ai/core/go/pkg/framework"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func newTestService(t *testing.T) (*Service, *framework.Core) {
	t.Helper()

	core, err := framework.New(
		framework.WithName("process", NewService(Options{BufferSize: 1024})),
	)
	require.NoError(t, err)

	svc, err := framework.ServiceFor[*Service](core, "process")
	require.NoError(t, err)

	return svc, core
}

func TestService_Start(t *testing.T) {
	t.Run("echo command", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc, err := svc.Start(context.Background(), "echo", "hello")
		require.NoError(t, err)
		require.NotNil(t, proc)

		assert.NotEmpty(t, proc.ID)
		assert.Equal(t, "echo", proc.Command)
		assert.Equal(t, []string{"hello"}, proc.Args)

		// Wait for completion
		<-proc.Done()

		assert.Equal(t, StatusExited, proc.Status)
		assert.Equal(t, 0, proc.ExitCode)
		assert.Contains(t, proc.Output(), "hello")
	})

	t.Run("failing command", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc, err := svc.Start(context.Background(), "sh", "-c", "exit 42")
		require.NoError(t, err)

		<-proc.Done()

		assert.Equal(t, StatusExited, proc.Status)
		assert.Equal(t, 42, proc.ExitCode)
	})

	t.Run("non-existent command", func(t *testing.T) {
		svc, _ := newTestService(t)

		_, err := svc.Start(context.Background(), "nonexistent_command_xyz")
		assert.Error(t, err)
	})

	t.Run("with working directory", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc, err := svc.StartWithOptions(context.Background(), RunOptions{
			Command: "pwd",
			Dir:     "/tmp",
		})
		require.NoError(t, err)

		<-proc.Done()

		// On macOS /tmp is a symlink to /private/tmp
		output := strings.TrimSpace(proc.Output())
		assert.True(t, output == "/tmp" || output == "/private/tmp", "got: %s", output)
	})

	t.Run("context cancellation", func(t *testing.T) {
		svc, _ := newTestService(t)

		ctx, cancel := context.WithCancel(context.Background())
		proc, err := svc.Start(ctx, "sleep", "10")
		require.NoError(t, err)

		// Cancel immediately
		cancel()

		select {
		case <-proc.Done():
			// Good - process was killed
		case <-time.After(2 * time.Second):
			t.Fatal("process should have been killed")
		}
	})
}

func TestService_Run(t *testing.T) {
	t.Run("returns output", func(t *testing.T) {
		svc, _ := newTestService(t)

		output, err := svc.Run(context.Background(), "echo", "hello world")
		require.NoError(t, err)
		assert.Contains(t, output, "hello world")
	})

	t.Run("returns error on failure", func(t *testing.T) {
		svc, _ := newTestService(t)

		_, err := svc.Run(context.Background(), "sh", "-c", "exit 1")
		assert.Error(t, err)
		assert.Contains(t, err.Error(), "exited with code 1")
	})
}

func TestService_Actions(t *testing.T) {
	t.Run("broadcasts events", func(t *testing.T) {
		core, err := framework.New(
			framework.WithName("process", NewService(Options{})),
		)
		require.NoError(t, err)

		var started []ActionProcessStarted
		var outputs []ActionProcessOutput
		var exited []ActionProcessExited
		var mu sync.Mutex

		core.RegisterAction(func(c *framework.Core, msg framework.Message) error {
			mu.Lock()
			defer mu.Unlock()
			switch m := msg.(type) {
			case ActionProcessStarted:
				started = append(started, m)
			case ActionProcessOutput:
				outputs = append(outputs, m)
			case ActionProcessExited:
				exited = append(exited, m)
			}
			return nil
		})

		svc, _ := framework.ServiceFor[*Service](core, "process")
		proc, err := svc.Start(context.Background(), "echo", "test")
		require.NoError(t, err)

		<-proc.Done()

		// Give time for events to propagate
		time.Sleep(10 * time.Millisecond)

		mu.Lock()
		defer mu.Unlock()

		assert.Len(t, started, 1)
		assert.Equal(t, "echo", started[0].Command)
		assert.Equal(t, []string{"test"}, started[0].Args)

		assert.NotEmpty(t, outputs)
		foundTest := false
		for _, o := range outputs {
			if strings.Contains(o.Line, "test") {
				foundTest = true
				break
			}
		}
		assert.True(t, foundTest, "should have output containing 'test'")

		assert.Len(t, exited, 1)
		assert.Equal(t, 0, exited[0].ExitCode)
	})
}

func TestService_List(t *testing.T) {
	t.Run("tracks processes", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc1, _ := svc.Start(context.Background(), "echo", "1")
		proc2, _ := svc.Start(context.Background(), "echo", "2")

		<-proc1.Done()
		<-proc2.Done()

		list := svc.List()
		assert.Len(t, list, 2)
	})

	t.Run("get by id", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc, _ := svc.Start(context.Background(), "echo", "test")
		<-proc.Done()

		got, err := svc.Get(proc.ID)
		require.NoError(t, err)
		assert.Equal(t, proc.ID, got.ID)
	})

	t.Run("get not found", func(t *testing.T) {
		svc, _ := newTestService(t)

		_, err := svc.Get("nonexistent")
		assert.ErrorIs(t, err, ErrProcessNotFound)
	})
}

func TestService_Remove(t *testing.T) {
	t.Run("removes completed process", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc, _ := svc.Start(context.Background(), "echo", "test")
		<-proc.Done()

		err := svc.Remove(proc.ID)
		require.NoError(t, err)

		_, err = svc.Get(proc.ID)
		assert.ErrorIs(t, err, ErrProcessNotFound)
	})

	t.Run("cannot remove running process", func(t *testing.T) {
		svc, _ := newTestService(t)

		ctx, cancel := context.WithCancel(context.Background())
		defer cancel()

		proc, _ := svc.Start(ctx, "sleep", "10")

		err := svc.Remove(proc.ID)
		assert.Error(t, err)

		cancel()
		<-proc.Done()
	})
}

func TestService_Clear(t *testing.T) {
	t.Run("clears completed processes", func(t *testing.T) {
		svc, _ := newTestService(t)

		proc1, _ := svc.Start(context.Background(), "echo", "1")
		proc2, _ := svc.Start(context.Background(), "echo", "2")

		<-proc1.Done()
		<-proc2.Done()

		assert.Len(t, svc.List(), 2)

		svc.Clear()

		assert.Len(t, svc.List(), 0)
	})
}
@@ -1,89 +0,0 @@
// Package process provides process management with Core IPC integration.
//
// The process package enables spawning, monitoring, and controlling external
// processes with output streaming via the Core ACTION system.
//
// # Getting Started
//
//	// Register with Core
//	core, _ := framework.New(
//		framework.WithName("process", process.NewService(process.Options{})),
//	)
//
//	// Get service and run a process
//	svc, err := framework.ServiceFor[*process.Service](core, "process")
//	if err != nil {
//		return err
//	}
//	proc, err := svc.Start(ctx, "go", "test", "./...")
//
// # Listening for Events
//
// Process events are broadcast via Core.ACTION:
//
//	core.RegisterAction(func(c *framework.Core, msg framework.Message) error {
//		switch m := msg.(type) {
//		case process.ActionProcessOutput:
//			fmt.Print(m.Line)
//		case process.ActionProcessExited:
//			fmt.Printf("Exit code: %d\n", m.ExitCode)
//		}
//		return nil
//	})
package process

import "time"

// Status represents the process lifecycle state.
type Status string

const (
	// StatusPending indicates the process is queued but not yet started.
	StatusPending Status = "pending"
	// StatusRunning indicates the process is actively executing.
	StatusRunning Status = "running"
	// StatusExited indicates the process completed (check ExitCode).
	StatusExited Status = "exited"
	// StatusFailed indicates the process could not be started.
	StatusFailed Status = "failed"
	// StatusKilled indicates the process was terminated by signal.
	StatusKilled Status = "killed"
)

// Stream identifies the output source.
type Stream string

const (
	// StreamStdout is standard output.
	StreamStdout Stream = "stdout"
	// StreamStderr is standard error.
	StreamStderr Stream = "stderr"
)

// RunOptions configures process execution.
type RunOptions struct {
	// Command is the executable to run.
	Command string
	// Args are the command arguments.
	Args []string
	// Dir is the working directory (empty = current).
	Dir string
	// Env are additional environment variables (KEY=VALUE format).
	Env []string
	// DisableCapture disables output buffering.
	// By default, output is captured to a ring buffer.
	DisableCapture bool
}

// Info provides a snapshot of process state without internal fields.
type Info struct {
	ID        string        `json:"id"`
	Command   string        `json:"command"`
	Args      []string      `json:"args"`
	Dir       string        `json:"dir"`
	StartedAt time.Time     `json:"startedAt"`
	Status    Status        `json:"status"`
	ExitCode  int           `json:"exitCode"`
	Duration  time.Duration `json:"duration"`
	PID       int           `json:"pid"`
}
@@ -1,389 +0,0 @@
package ratelimit

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
	"path/filepath"
	"sync"
	"time"

	"gopkg.in/yaml.v3"
)

// ModelQuota defines the rate limits for a specific model.
type ModelQuota struct {
	MaxRPM int `yaml:"max_rpm"` // Requests per minute
	MaxTPM int `yaml:"max_tpm"` // Tokens per minute
	MaxRPD int `yaml:"max_rpd"` // Requests per day (0 = unlimited)
}

// TokenEntry records a token usage event.
type TokenEntry struct {
	Time  time.Time `yaml:"time"`
	Count int       `yaml:"count"`
}

// UsageStats tracks usage history for a model.
type UsageStats struct {
	Requests []time.Time  `yaml:"requests"` // Sliding window (1m)
	Tokens   []TokenEntry `yaml:"tokens"`   // Sliding window (1m)
	DayStart time.Time    `yaml:"day_start"`
	DayCount int          `yaml:"day_count"`
}

// RateLimiter manages rate limits across multiple models.
type RateLimiter struct {
	mu       sync.RWMutex
	Quotas   map[string]ModelQuota  `yaml:"quotas"`
	State    map[string]*UsageStats `yaml:"state"`
	filePath string
}

// New creates a new RateLimiter with default quotas.
func New() (*RateLimiter, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return nil, err
	}

	rl := &RateLimiter{
		Quotas:   make(map[string]ModelQuota),
		State:    make(map[string]*UsageStats),
		filePath: filepath.Join(home, ".core", "ratelimits.yaml"),
	}

	// Default quotas based on Tier 1 observations (Feb 2026)
	rl.Quotas["gemini-3-pro-preview"] = ModelQuota{MaxRPM: 150, MaxTPM: 1000000, MaxRPD: 1000}
	rl.Quotas["gemini-3-flash-preview"] = ModelQuota{MaxRPM: 150, MaxTPM: 1000000, MaxRPD: 1000}
	rl.Quotas["gemini-2.5-pro"] = ModelQuota{MaxRPM: 150, MaxTPM: 1000000, MaxRPD: 1000}
	rl.Quotas["gemini-2.0-flash"] = ModelQuota{MaxRPM: 150, MaxTPM: 1000000, MaxRPD: 0} // Unlimited RPD
	rl.Quotas["gemini-2.0-flash-lite"] = ModelQuota{MaxRPM: 0, MaxTPM: 0, MaxRPD: 0}    // Unlimited

	return rl, nil
}

// Load reads the state from disk.
func (rl *RateLimiter) Load() error {
	rl.mu.Lock()
	defer rl.mu.Unlock()

	data, err := os.ReadFile(rl.filePath)
	if os.IsNotExist(err) {
		return nil
	}
	if err != nil {
		return err
	}

	return yaml.Unmarshal(data, rl)
}

// Persist writes the state to disk.
func (rl *RateLimiter) Persist() error {
	rl.mu.RLock()
	defer rl.mu.RUnlock()

	data, err := yaml.Marshal(rl)
	if err != nil {
		return err
	}

	dir := filepath.Dir(rl.filePath)
	if err := os.MkdirAll(dir, 0755); err != nil {
		return err
	}

	return os.WriteFile(rl.filePath, data, 0644)
}

// prune removes entries older than the sliding window (1 minute).
// Caller must hold lock.
func (rl *RateLimiter) prune(model string) {
	stats, ok := rl.State[model]
	if !ok {
		return
	}

	now := time.Now()
	window := now.Add(-1 * time.Minute)

	// Prune requests
	validReqs := 0
	for _, t := range stats.Requests {
		if t.After(window) {
			stats.Requests[validReqs] = t
			validReqs++
		}
	}
	stats.Requests = stats.Requests[:validReqs]

	// Prune tokens
	validTokens := 0
	for _, t := range stats.Tokens {
		if t.Time.After(window) {
			stats.Tokens[validTokens] = t
			validTokens++
		}
	}
	stats.Tokens = stats.Tokens[:validTokens]

	// Reset daily counter if day has passed
	if now.Sub(stats.DayStart) >= 24*time.Hour {
		stats.DayStart = now
		stats.DayCount = 0
	}
}

// CanSend checks if a request can be sent without violating limits.
func (rl *RateLimiter) CanSend(model string, estimatedTokens int) bool {
	rl.mu.Lock()
	defer rl.mu.Unlock()

	quota, ok := rl.Quotas[model]
	if !ok {
		return true // Unknown models are allowed
	}

	// Unlimited check
	if quota.MaxRPM == 0 && quota.MaxTPM == 0 && quota.MaxRPD == 0 {
		return true
	}

	// Ensure state exists
	if _, ok := rl.State[model]; !ok {
		rl.State[model] = &UsageStats{
			DayStart: time.Now(),
		}
	}

	rl.prune(model)
	stats := rl.State[model]

	// Check RPD
	if quota.MaxRPD > 0 && stats.DayCount >= quota.MaxRPD {
		return false
	}

	// Check RPM
	if quota.MaxRPM > 0 && len(stats.Requests) >= quota.MaxRPM {
		return false
	}

	// Check TPM
	if quota.MaxTPM > 0 {
		currentTokens := 0
		for _, t := range stats.Tokens {
			currentTokens += t.Count
		}
		if currentTokens+estimatedTokens > quota.MaxTPM {
			return false
		}
	}

	return true
}

// RecordUsage records a successful API call.
func (rl *RateLimiter) RecordUsage(model string, promptTokens, outputTokens int) {
	rl.mu.Lock()
	defer rl.mu.Unlock()

	if _, ok := rl.State[model]; !ok {
		rl.State[model] = &UsageStats{
			DayStart: time.Now(),
		}
	}

	stats := rl.State[model]
	now := time.Now()

	stats.Requests = append(stats.Requests, now)
	stats.Tokens = append(stats.Tokens, TokenEntry{Time: now, Count: promptTokens + outputTokens})
	stats.DayCount++
}

// WaitForCapacity blocks until capacity is available or context is cancelled.
func (rl *RateLimiter) WaitForCapacity(ctx context.Context, model string, tokens int) error {
	ticker := time.NewTicker(1 * time.Second)
	defer ticker.Stop()

	for {
		if rl.CanSend(model, tokens) {
			return nil
		}

		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			// check again
		}
	}
}

// Reset clears stats for a model (or all if model is empty).
func (rl *RateLimiter) Reset(model string) {
	rl.mu.Lock()
	defer rl.mu.Unlock()

	if model == "" {
		rl.State = make(map[string]*UsageStats)
	} else {
		delete(rl.State, model)
	}
}

// ModelStats represents a snapshot of usage.
type ModelStats struct {
	RPM      int
	MaxRPM   int
	TPM      int
	MaxTPM   int
	RPD      int
	MaxRPD   int
	DayStart time.Time
}

// Stats returns current stats for a model.
func (rl *RateLimiter) Stats(model string) ModelStats {
	rl.mu.Lock()
	defer rl.mu.Unlock()

	rl.prune(model)

	stats := ModelStats{}
	quota, ok := rl.Quotas[model]
	if ok {
		stats.MaxRPM = quota.MaxRPM
		stats.MaxTPM = quota.MaxTPM
		stats.MaxRPD = quota.MaxRPD
	}

	if s, ok := rl.State[model]; ok {
		stats.RPM = len(s.Requests)
		stats.RPD = s.DayCount
		stats.DayStart = s.DayStart
		for _, t := range s.Tokens {
			stats.TPM += t.Count
		}
	}

	return stats
}

// AllStats returns stats for all tracked models.
func (rl *RateLimiter) AllStats() map[string]ModelStats {
	rl.mu.Lock()
	defer rl.mu.Unlock()

	result := make(map[string]ModelStats)

	// Collect all model names
	for m := range rl.Quotas {
		result[m] = ModelStats{}
	}
	for m := range rl.State {
		result[m] = ModelStats{}
	}

	now := time.Now()
	window := now.Add(-1 * time.Minute)

	for m := range result {
		// Prune inline
		if s, ok := rl.State[m]; ok {
			validReqs := 0
			for _, t := range s.Requests {
				if t.After(window) {
					s.Requests[validReqs] = t
					validReqs++
				}
			}
			s.Requests = s.Requests[:validReqs]

			validTokens := 0
			for _, t := range s.Tokens {
				if t.Time.After(window) {
					s.Tokens[validTokens] = t
					validTokens++
				}
			}
			s.Tokens = s.Tokens[:validTokens]

			if now.Sub(s.DayStart) >= 24*time.Hour {
				s.DayStart = now
				s.DayCount = 0
			}
		}

		ms := ModelStats{}
		if q, ok := rl.Quotas[m]; ok {
			ms.MaxRPM = q.MaxRPM
			ms.MaxTPM = q.MaxTPM
			ms.MaxRPD = q.MaxRPD
		}
		if s, ok := rl.State[m]; ok {
			ms.RPM = len(s.Requests)
			ms.RPD = s.DayCount
			ms.DayStart = s.DayStart
			for _, t := range s.Tokens {
				ms.TPM += t.Count
			}
		}
		result[m] = ms
	}

	return result
}

// CountTokens calls the Google API to count tokens for a prompt.
func CountTokens(apiKey, model, text string) (int, error) {
	url := fmt.Sprintf("https://generativelanguage.googleapis.com/v1beta/models/%s:countTokens", model)

	reqBody := map[string]any{
		"contents": []any{
			map[string]any{
				"parts": []any{
					map[string]string{"text": text},
				},
			},
		},
	}

	jsonBody, err := json.Marshal(reqBody)
	if err != nil {
		return 0, err
	}

	req, err := http.NewRequest(http.MethodPost, url, bytes.NewBuffer(jsonBody))
	if err != nil {
		return 0, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("x-goog-api-key", apiKey)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		body, _ := io.ReadAll(resp.Body)
		return 0, fmt.Errorf("API error %d: %s", resp.StatusCode, string(body))
	}

	var result struct {
		TotalTokens int `json:"totalTokens"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return 0, err
	}

	return result.TotalTokens, nil
}
@@ -1,176 +0,0 @@
package ratelimit

import (
	"context"
	"path/filepath"
	"testing"
	"time"
)

func TestCanSend_Good(t *testing.T) {
	rl, _ := New()
	rl.filePath = filepath.Join(t.TempDir(), "ratelimits.yaml")

	model := "test-model"
	rl.Quotas[model] = ModelQuota{MaxRPM: 10, MaxTPM: 1000, MaxRPD: 100}

	if !rl.CanSend(model, 100) {
		t.Errorf("Expected CanSend to return true for fresh state")
	}
}

func TestCanSend_RPMExceeded_Bad(t *testing.T) {
	rl, _ := New()
	model := "test-rpm"
	rl.Quotas[model] = ModelQuota{MaxRPM: 2, MaxTPM: 1000000, MaxRPD: 100}

	rl.RecordUsage(model, 10, 10)
	rl.RecordUsage(model, 10, 10)

	if rl.CanSend(model, 10) {
		t.Errorf("Expected CanSend to return false after exceeding RPM")
	}
}

func TestCanSend_TPMExceeded_Bad(t *testing.T) {
	rl, _ := New()
	model := "test-tpm"
	rl.Quotas[model] = ModelQuota{MaxRPM: 10, MaxTPM: 100, MaxRPD: 100}

	rl.RecordUsage(model, 50, 40) // 90 tokens used

	if rl.CanSend(model, 20) { // 90 + 20 = 110 > 100
		t.Errorf("Expected CanSend to return false when estimated tokens exceed TPM")
	}
}

func TestCanSend_RPDExceeded_Bad(t *testing.T) {
	rl, _ := New()
	model := "test-rpd"
	rl.Quotas[model] = ModelQuota{MaxRPM: 10, MaxTPM: 1000000, MaxRPD: 2}

	rl.RecordUsage(model, 10, 10)
	rl.RecordUsage(model, 10, 10)

	if rl.CanSend(model, 10) {
		t.Errorf("Expected CanSend to return false after exceeding RPD")
	}
}

func TestCanSend_UnlimitedModel_Good(t *testing.T) {
	rl, _ := New()
	model := "test-unlimited"
	rl.Quotas[model] = ModelQuota{MaxRPM: 0, MaxTPM: 0, MaxRPD: 0}

	// Should always be allowed
	for i := 0; i < 1000; i++ {
		rl.RecordUsage(model, 100, 100)
	}
	if !rl.CanSend(model, 999999) {
		t.Errorf("Expected unlimited model to always allow sends")
	}
}

func TestRecordUsage_PrunesOldEntries_Good(t *testing.T) {
	rl, _ := New()
	model := "test-prune"
	rl.Quotas[model] = ModelQuota{MaxRPM: 5, MaxTPM: 1000000, MaxRPD: 100}

	// Manually inject old data
	oldTime := time.Now().Add(-2 * time.Minute)
	rl.State[model] = &UsageStats{
		Requests: []time.Time{oldTime, oldTime, oldTime},
		Tokens: []TokenEntry{
			{Time: oldTime, Count: 100},
			{Time: oldTime, Count: 100},
		},
		DayStart: time.Now(),
	}

	// CanSend triggers prune
	if !rl.CanSend(model, 10) {
		t.Errorf("Expected CanSend to return true after pruning old entries")
	}

	stats := rl.State[model]
	if len(stats.Requests) != 0 {
		t.Errorf("Expected 0 requests after pruning old entries, got %d", len(stats.Requests))
	}
}

func TestPersistAndLoad_Good(t *testing.T) {
	tmpDir := t.TempDir()
	path := filepath.Join(tmpDir, "ratelimits.yaml")

	rl1, _ := New()
	rl1.filePath = path
	model := "persist-test"
	rl1.Quotas[model] = ModelQuota{MaxRPM: 50, MaxTPM: 5000, MaxRPD: 500}
	rl1.RecordUsage(model, 100, 100)

	if err := rl1.Persist(); err != nil {
		t.Fatalf("Persist failed: %v", err)
	}

	rl2, _ := New()
	rl2.filePath = path
	if err := rl2.Load(); err != nil {
		t.Fatalf("Load failed: %v", err)
	}

	stats := rl2.Stats(model)
	if stats.RPM != 1 {
		t.Errorf("Expected RPM 1 after load, got %d", stats.RPM)
	}
	if stats.TPM != 200 {
		t.Errorf("Expected TPM 200 after load, got %d", stats.TPM)
	}
}

func TestWaitForCapacity_Ugly(t *testing.T) {
	rl, _ := New()
	model := "wait-test"
	rl.Quotas[model] = ModelQuota{MaxRPM: 1, MaxTPM: 1000000, MaxRPD: 100}

	rl.RecordUsage(model, 10, 10) // Use up the 1 RPM

	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()

	err := rl.WaitForCapacity(ctx, model, 10)
	if err != context.DeadlineExceeded {
		t.Errorf("Expected DeadlineExceeded, got %v", err)
	}
}

func TestDefaultQuotas_Good(t *testing.T) {
	rl, _ := New()
	expected := []string{
		"gemini-3-pro-preview",
		"gemini-3-flash-preview",
		"gemini-2.0-flash",
	}
	for _, m := range expected {
		if _, ok := rl.Quotas[m]; !ok {
			t.Errorf("Expected default quota for %s", m)
		}
	}
}

func TestAllStats_Good(t *testing.T) {
	rl, _ := New()
	rl.RecordUsage("gemini-3-pro-preview", 1000, 500)

	all := rl.AllStats()
	if len(all) < 5 {
		t.Errorf("Expected at least 5 models in AllStats, got %d", len(all))
	}

	pro := all["gemini-3-pro-preview"]
	if pro.RPM != 1 {
		t.Errorf("Expected RPM 1 for pro, got %d", pro.RPM)
	}
	if pro.TPM != 1500 {
		t.Errorf("Expected TPM 1500 for pro, got %d", pro.TPM)
	}
}
@ -1,342 +0,0 @@
|
|||
// Package repos provides functionality for managing multi-repo workspaces.
// It reads a repos.yaml registry file that defines repositories, their types,
// dependencies, and metadata.
package repos

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"forge.lthn.ai/core/go-io"
	"gopkg.in/yaml.v3"
)

// Registry represents a collection of repositories defined in repos.yaml.
type Registry struct {
	Version  int              `yaml:"version"`
	Org      string           `yaml:"org"`
	BasePath string           `yaml:"base_path"`
	Repos    map[string]*Repo `yaml:"repos"`
	Defaults RegistryDefaults `yaml:"defaults"`
	medium   io.Medium        `yaml:"-"`
}

// RegistryDefaults contains default values applied to all repos.
type RegistryDefaults struct {
	CI      string `yaml:"ci"`
	License string `yaml:"license"`
	Branch  string `yaml:"branch"`
}

// RepoType indicates the role of a repository in the ecosystem.
type RepoType string

// Repository type constants for ecosystem classification.
const (
	// RepoTypeFoundation indicates core foundation packages.
	RepoTypeFoundation RepoType = "foundation"
	// RepoTypeModule indicates reusable module packages.
	RepoTypeModule RepoType = "module"
	// RepoTypeProduct indicates end-user product applications.
	RepoTypeProduct RepoType = "product"
	// RepoTypeTemplate indicates starter templates.
	RepoTypeTemplate RepoType = "template"
)

// Repo represents a single repository in the registry.
type Repo struct {
	Name        string   `yaml:"-"` // Set from map key
	Type        string   `yaml:"type"`
	DependsOn   []string `yaml:"depends_on"`
	Description string   `yaml:"description"`
	Docs        bool     `yaml:"docs"`
	CI          string   `yaml:"ci"`
	Domain      string   `yaml:"domain,omitempty"`
	Clone       *bool    `yaml:"clone,omitempty"` // nil = true, false = skip cloning

	// Computed fields
	Path     string    `yaml:"path,omitempty"` // Full path to repo directory (optional, defaults to base_path/name)
	registry *Registry `yaml:"-"`
}

// LoadRegistry reads and parses a repos.yaml file from the given medium.
// The path should be a valid path for the provided medium.
func LoadRegistry(m io.Medium, path string) (*Registry, error) {
	content, err := m.Read(path)
	if err != nil {
		return nil, fmt.Errorf("failed to read registry file: %w", err)
	}
	data := []byte(content)

	var reg Registry
	if err := yaml.Unmarshal(data, &reg); err != nil {
		return nil, fmt.Errorf("failed to parse registry file: %w", err)
	}

	reg.medium = m

	// Expand base path
	reg.BasePath = expandPath(reg.BasePath)

	// Set computed fields on each repo
	for name, repo := range reg.Repos {
		repo.Name = name
		if repo.Path == "" {
			repo.Path = filepath.Join(reg.BasePath, name)
		} else {
			repo.Path = expandPath(repo.Path)
		}
		repo.registry = &reg

		// Apply defaults if not set
		if repo.CI == "" {
			repo.CI = reg.Defaults.CI
		}
	}

	return &reg, nil
}

// FindRegistry searches for repos.yaml in common locations.
// It checks: current directory, parent directories, and home directory.
// This function is primarily intended for use with io.Local or other local-like filesystems.
func FindRegistry(m io.Medium) (string, error) {
	// Check current directory and parents
	dir, err := os.Getwd()
	if err != nil {
		return "", err
	}

	for {
		// Check repos.yaml (existing)
		candidate := filepath.Join(dir, "repos.yaml")
		if m.Exists(candidate) {
			return candidate, nil
		}
		// Check .core/repos.yaml (new)
		candidate = filepath.Join(dir, ".core", "repos.yaml")
		if m.Exists(candidate) {
			return candidate, nil
		}

		parent := filepath.Dir(dir)
		if parent == dir {
			break
		}
		dir = parent
	}

	// Check home directory common locations
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}

	commonPaths := []string{
		filepath.Join(home, "Code", "host-uk", ".core", "repos.yaml"),
		filepath.Join(home, "Code", "host-uk", "repos.yaml"),
		filepath.Join(home, ".config", "core", "repos.yaml"),
	}

	for _, p := range commonPaths {
		if m.Exists(p) {
			return p, nil
		}
	}

	return "", errors.New("repos.yaml not found")
}

// ScanDirectory creates a Registry by scanning a directory for git repos.
// This is used as a fallback when no repos.yaml is found.
// The dir should be a valid path for the provided medium.
func ScanDirectory(m io.Medium, dir string) (*Registry, error) {
	entries, err := m.List(dir)
	if err != nil {
		return nil, fmt.Errorf("failed to read directory: %w", err)
	}

	reg := &Registry{
		Version:  1,
		BasePath: dir,
		Repos:    make(map[string]*Repo),
		medium:   m,
	}

	// Try to detect org from git remote
	for _, entry := range entries {
		if !entry.IsDir() {
			continue
		}

		repoPath := filepath.Join(dir, entry.Name())
		gitPath := filepath.Join(repoPath, ".git")

		if !m.IsDir(gitPath) {
			continue // Not a git repo
		}

		repo := &Repo{
			Name:     entry.Name(),
			Path:     repoPath,
			Type:     "module", // Default type
			registry: reg,
		}

		reg.Repos[entry.Name()] = repo

		// Try to detect org from first repo's remote
		if reg.Org == "" {
			reg.Org = detectOrg(m, repoPath)
		}
	}

	return reg, nil
}

// detectOrg tries to extract the GitHub org from a repo's origin remote.
func detectOrg(m io.Medium, repoPath string) string {
	// Try to read git remote
	configPath := filepath.Join(repoPath, ".git", "config")
	content, err := m.Read(configPath)
	if err != nil {
		return ""
	}
	// Look for patterns like github.com:org/repo or github.com/org/repo
	for _, line := range strings.Split(content, "\n") {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, "url = ") {
			continue
		}
		url := strings.TrimPrefix(line, "url = ")

		// git@github.com:org/repo.git
		if strings.Contains(url, "github.com:") {
			parts := strings.Split(url, ":")
			if len(parts) >= 2 {
				orgRepo := strings.TrimSuffix(parts[1], ".git")
				orgParts := strings.Split(orgRepo, "/")
				if len(orgParts) >= 1 {
					return orgParts[0]
				}
			}
		}

		// https://github.com/org/repo.git
		if strings.Contains(url, "github.com/") {
			parts := strings.Split(url, "github.com/")
			if len(parts) >= 2 {
				orgRepo := strings.TrimSuffix(parts[1], ".git")
				orgParts := strings.Split(orgRepo, "/")
				if len(orgParts) >= 1 {
					return orgParts[0]
				}
			}
		}
	}

	return ""
}

// List returns all repos in the registry.
func (r *Registry) List() []*Repo {
	repos := make([]*Repo, 0, len(r.Repos))
	for _, repo := range r.Repos {
		repos = append(repos, repo)
	}
	return repos
}

// Get returns a repo by name.
func (r *Registry) Get(name string) (*Repo, bool) {
	repo, ok := r.Repos[name]
	return repo, ok
}

// ByType returns repos filtered by type.
func (r *Registry) ByType(t string) []*Repo {
	var repos []*Repo
	for _, repo := range r.Repos {
		if repo.Type == t {
			repos = append(repos, repo)
		}
	}
	return repos
}

// TopologicalOrder returns repos sorted by dependency order.
// Foundation repos come first, then modules, then products.
func (r *Registry) TopologicalOrder() ([]*Repo, error) {
	// Build dependency graph
	visited := make(map[string]bool)
	visiting := make(map[string]bool)
	var result []*Repo

	var visit func(name string) error
	visit = func(name string) error {
		if visited[name] {
			return nil
		}
		if visiting[name] {
			return fmt.Errorf("circular dependency detected: %s", name)
		}

		repo, ok := r.Repos[name]
		if !ok {
			return fmt.Errorf("unknown repo: %s", name)
		}

		visiting[name] = true
		for _, dep := range repo.DependsOn {
			if err := visit(dep); err != nil {
				return err
			}
		}
		visiting[name] = false
		visited[name] = true
		result = append(result, repo)
		return nil
	}

	for name := range r.Repos {
		if err := visit(name); err != nil {
			return nil, err
		}
	}

	return result, nil
}

// Exists checks if the repo directory exists on disk.
func (repo *Repo) Exists() bool {
	return repo.getMedium().IsDir(repo.Path)
}

// IsGitRepo checks if the repo directory contains a .git folder.
func (repo *Repo) IsGitRepo() bool {
	gitPath := filepath.Join(repo.Path, ".git")
	return repo.getMedium().IsDir(gitPath)
}

func (repo *Repo) getMedium() io.Medium {
	if repo.registry != nil && repo.registry.medium != nil {
		return repo.registry.medium
	}
	return io.Local
}

// expandPath expands ~ to home directory.
func expandPath(path string) string {
	if strings.HasPrefix(path, "~/") {
		home, err := os.UserHomeDir()
		if err != nil {
			return path
		}
		return filepath.Join(home, path[2:])
	}
	return path
}
@@ -1,486 +0,0 @@
package repos

import (
	"testing"

	"forge.lthn.ai/core/go-io"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// ── LoadRegistry ───────────────────────────────────────────────────

func TestLoadRegistry_Good(t *testing.T) {
	m := io.NewMockMedium()
	yaml := `
version: 1
org: host-uk
base_path: /tmp/repos
repos:
  core:
    type: foundation
    description: Core package
`
	_ = m.Write("/tmp/repos.yaml", yaml)

	reg, err := LoadRegistry(m, "/tmp/repos.yaml")
	assert.NoError(t, err)
	assert.NotNil(t, reg)
	assert.Equal(t, "host-uk", reg.Org)
	assert.Equal(t, "/tmp/repos", reg.BasePath)
	assert.Equal(t, m, reg.medium)

	repo, ok := reg.Get("core")
	assert.True(t, ok)
	assert.Equal(t, "core", repo.Name)
	assert.Equal(t, "/tmp/repos/core", repo.Path)
	assert.Equal(t, reg, repo.registry)
}

func TestLoadRegistry_Good_WithDefaults(t *testing.T) {
	m := io.NewMockMedium()
	yaml := `
version: 1
org: host-uk
base_path: /tmp/repos
defaults:
  ci: github-actions
  license: EUPL-1.2
  branch: main
repos:
  core-php:
    type: foundation
    description: Foundation
  core-admin:
    type: module
    description: Admin panel
`
	_ = m.Write("/tmp/repos.yaml", yaml)

	reg, err := LoadRegistry(m, "/tmp/repos.yaml")
	require.NoError(t, err)

	php, ok := reg.Get("core-php")
	require.True(t, ok)
	assert.Equal(t, "github-actions", php.CI)

	admin, ok := reg.Get("core-admin")
	require.True(t, ok)
	assert.Equal(t, "github-actions", admin.CI)
}

func TestLoadRegistry_Good_CustomRepoPath(t *testing.T) {
	m := io.NewMockMedium()
	yaml := `
version: 1
org: host-uk
base_path: /tmp/repos
repos:
  special:
    type: module
    path: /opt/special-repo
`
	_ = m.Write("/tmp/repos.yaml", yaml)

	reg, err := LoadRegistry(m, "/tmp/repos.yaml")
	require.NoError(t, err)

	repo, ok := reg.Get("special")
	require.True(t, ok)
	assert.Equal(t, "/opt/special-repo", repo.Path)
}

func TestLoadRegistry_Good_CIOverride(t *testing.T) {
	m := io.NewMockMedium()
	yaml := `
version: 1
org: test
base_path: /tmp
defaults:
  ci: default-ci
repos:
  a:
    type: module
  b:
    type: module
    ci: custom-ci
`
	_ = m.Write("/tmp/repos.yaml", yaml)

	reg, err := LoadRegistry(m, "/tmp/repos.yaml")
	require.NoError(t, err)

	a, _ := reg.Get("a")
	assert.Equal(t, "default-ci", a.CI)

	b, _ := reg.Get("b")
	assert.Equal(t, "custom-ci", b.CI)
}

func TestLoadRegistry_Bad_FileNotFound(t *testing.T) {
	m := io.NewMockMedium()
	_, err := LoadRegistry(m, "/nonexistent/repos.yaml")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "failed to read")
}

func TestLoadRegistry_Bad_InvalidYAML(t *testing.T) {
	m := io.NewMockMedium()
	_ = m.Write("/tmp/bad.yaml", "{{{{not yaml at all")

	_, err := LoadRegistry(m, "/tmp/bad.yaml")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "failed to parse")
}

// ── List / Get / ByType ────────────────────────────────────────────

func newTestRegistry(t *testing.T) *Registry {
	t.Helper()
	m := io.NewMockMedium()
	yaml := `
version: 1
org: host-uk
base_path: /tmp/repos
repos:
  core-php:
    type: foundation
    description: Foundation
  core-admin:
    type: module
    depends_on: [core-php]
    description: Admin
  core-tenant:
    type: module
    depends_on: [core-php]
    description: Tenancy
  core-bio:
    type: product
    depends_on: [core-php, core-tenant]
    description: Bio product
`
	_ = m.Write("/tmp/repos.yaml", yaml)
	reg, err := LoadRegistry(m, "/tmp/repos.yaml")
	require.NoError(t, err)
	return reg
}

func TestRegistry_List_Good(t *testing.T) {
	reg := newTestRegistry(t)
	repos := reg.List()
	assert.Len(t, repos, 4)
}

func TestRegistry_Get_Good(t *testing.T) {
	reg := newTestRegistry(t)
	repo, ok := reg.Get("core-php")
	assert.True(t, ok)
	assert.Equal(t, "core-php", repo.Name)
}

func TestRegistry_Get_Bad_NotFound(t *testing.T) {
	reg := newTestRegistry(t)
	_, ok := reg.Get("nonexistent")
	assert.False(t, ok)
}

func TestRegistry_ByType_Good(t *testing.T) {
	reg := newTestRegistry(t)

	foundations := reg.ByType("foundation")
	assert.Len(t, foundations, 1)
	assert.Equal(t, "core-php", foundations[0].Name)

	modules := reg.ByType("module")
	assert.Len(t, modules, 2)

	products := reg.ByType("product")
	assert.Len(t, products, 1)
}

func TestRegistry_ByType_Good_NoMatch(t *testing.T) {
	reg := newTestRegistry(t)
	templates := reg.ByType("template")
	assert.Empty(t, templates)
}

// ── TopologicalOrder ───────────────────────────────────────────────

func TestTopologicalOrder_Good(t *testing.T) {
	reg := newTestRegistry(t)
	order, err := TopologicalOrder(reg)
	require.NoError(t, err)
	assert.Len(t, order, 4)

	// core-php must come before everything that depends on it.
	phpIdx := -1
	for i, r := range order {
		if r.Name == "core-php" {
			phpIdx = i
			break
		}
	}
	require.GreaterOrEqual(t, phpIdx, 0, "core-php not found")

	for i, r := range order {
		for _, dep := range r.DependsOn {
			depIdx := -1
			for j, d := range order {
				if d.Name == dep {
					depIdx = j
					break
				}
			}
			assert.Less(t, depIdx, i, "%s should come before %s", dep, r.Name)
		}
	}
}

func TopologicalOrder(reg *Registry) ([]*Repo, error) {
	return reg.TopologicalOrder()
}

func TestTopologicalOrder_Bad_CircularDep(t *testing.T) {
	m := io.NewMockMedium()
	yaml := `
version: 1
org: test
base_path: /tmp
repos:
  a:
    type: module
    depends_on: [b]
  b:
    type: module
    depends_on: [a]
`
	_ = m.Write("/tmp/repos.yaml", yaml)
	reg, err := LoadRegistry(m, "/tmp/repos.yaml")
	require.NoError(t, err)

	_, err = reg.TopologicalOrder()
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "circular dependency")
}

func TestTopologicalOrder_Bad_UnknownDep(t *testing.T) {
	m := io.NewMockMedium()
	yaml := `
version: 1
org: test
base_path: /tmp
repos:
  a:
    type: module
    depends_on: [nonexistent]
`
	_ = m.Write("/tmp/repos.yaml", yaml)
	reg, err := LoadRegistry(m, "/tmp/repos.yaml")
	require.NoError(t, err)

	_, err = reg.TopologicalOrder()
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "unknown repo")
}

func TestTopologicalOrder_Good_NoDeps(t *testing.T) {
	m := io.NewMockMedium()
	yaml := `
version: 1
org: test
base_path: /tmp
repos:
  a:
    type: module
  b:
    type: module
`
	_ = m.Write("/tmp/repos.yaml", yaml)
	reg, err := LoadRegistry(m, "/tmp/repos.yaml")
	require.NoError(t, err)

	order, err := reg.TopologicalOrder()
	require.NoError(t, err)
	assert.Len(t, order, 2)
}

// ── ScanDirectory ──────────────────────────────────────────────────

func TestScanDirectory_Good(t *testing.T) {
	m := io.NewMockMedium()

	// Create mock repos with .git dirs.
	_ = m.EnsureDir("/workspace/repo-a/.git")
	_ = m.EnsureDir("/workspace/repo-b/.git")
	_ = m.EnsureDir("/workspace/not-a-repo") // No .git

	// Write a file (not a dir) at top level.
	_ = m.Write("/workspace/README.md", "hello")

	reg, err := ScanDirectory(m, "/workspace")
	require.NoError(t, err)

	assert.Len(t, reg.Repos, 2)

	a, ok := reg.Repos["repo-a"]
	assert.True(t, ok)
	assert.Equal(t, "/workspace/repo-a", a.Path)
	assert.Equal(t, "module", a.Type) // Default type.

	_, ok = reg.Repos["not-a-repo"]
	assert.False(t, ok)
}

func TestScanDirectory_Good_DetectsGitHubOrg(t *testing.T) {
	m := io.NewMockMedium()

	_ = m.EnsureDir("/workspace/my-repo/.git")
	_ = m.Write("/workspace/my-repo/.git/config", `[core]
	repositoryformatversion = 0
[remote "origin"]
	url = git@github.com:host-uk/my-repo.git
	fetch = +refs/heads/*:refs/remotes/origin/*
`)

	reg, err := ScanDirectory(m, "/workspace")
	require.NoError(t, err)
	assert.Equal(t, "host-uk", reg.Org)
}

func TestScanDirectory_Good_DetectsHTTPSOrg(t *testing.T) {
	m := io.NewMockMedium()

	_ = m.EnsureDir("/workspace/my-repo/.git")
	_ = m.Write("/workspace/my-repo/.git/config", `[remote "origin"]
	url = https://github.com/lethean-io/my-repo.git
`)

	reg, err := ScanDirectory(m, "/workspace")
	require.NoError(t, err)
	assert.Equal(t, "lethean-io", reg.Org)
}

func TestScanDirectory_Good_EmptyDir(t *testing.T) {
	m := io.NewMockMedium()
	_ = m.EnsureDir("/empty")

	reg, err := ScanDirectory(m, "/empty")
	require.NoError(t, err)
	assert.Empty(t, reg.Repos)
	assert.Equal(t, "", reg.Org)
}

func TestScanDirectory_Bad_InvalidDir(t *testing.T) {
	m := io.NewMockMedium()
	_, err := ScanDirectory(m, "/nonexistent")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "failed to read directory")
}

// ── detectOrg ──────────────────────────────────────────────────────

func TestDetectOrg_Good_SSHRemote(t *testing.T) {
	m := io.NewMockMedium()
	_ = m.Write("/repo/.git/config", `[remote "origin"]
	url = git@github.com:host-uk/core.git
`)
	assert.Equal(t, "host-uk", detectOrg(m, "/repo"))
}

func TestDetectOrg_Good_HTTPSRemote(t *testing.T) {
	m := io.NewMockMedium()
	_ = m.Write("/repo/.git/config", `[remote "origin"]
	url = https://github.com/snider/project.git
`)
	assert.Equal(t, "snider", detectOrg(m, "/repo"))
}

func TestDetectOrg_Bad_NoConfig(t *testing.T) {
	m := io.NewMockMedium()
	assert.Equal(t, "", detectOrg(m, "/nonexistent"))
}

func TestDetectOrg_Bad_NoRemote(t *testing.T) {
	m := io.NewMockMedium()
	_ = m.Write("/repo/.git/config", `[core]
	repositoryformatversion = 0
`)
	assert.Equal(t, "", detectOrg(m, "/repo"))
}

func TestDetectOrg_Bad_NonGitHubRemote(t *testing.T) {
	m := io.NewMockMedium()
	_ = m.Write("/repo/.git/config", `[remote "origin"]
	url = ssh://git@forge.lthn.ai:2223/core/go.git
`)
	assert.Equal(t, "", detectOrg(m, "/repo"))
}

// ── expandPath ─────────────────────────────────────────────────────

func TestExpandPath_Good_Tilde(t *testing.T) {
	got := expandPath("~/Code/repos")
	assert.NotContains(t, got, "~")
	assert.Contains(t, got, "Code/repos")
}

func TestExpandPath_Good_NoTilde(t *testing.T) {
	assert.Equal(t, "/absolute/path", expandPath("/absolute/path"))
	assert.Equal(t, "relative/path", expandPath("relative/path"))
}

// ── Repo.Exists / IsGitRepo ───────────────────────────────────────

func TestRepo_Exists_Good(t *testing.T) {
	m := io.NewMockMedium()
	reg := &Registry{
		medium:   m,
		BasePath: "/tmp/repos",
		Repos:    make(map[string]*Repo),
	}
	repo := &Repo{
		Name:     "core",
		Path:     "/tmp/repos/core",
		registry: reg,
	}

	assert.False(t, repo.Exists())

	_ = m.EnsureDir("/tmp/repos/core")
	assert.True(t, repo.Exists())
}

func TestRepo_IsGitRepo_Good(t *testing.T) {
	m := io.NewMockMedium()
	reg := &Registry{
		medium:   m,
		BasePath: "/tmp/repos",
		Repos:    make(map[string]*Repo),
	}
	repo := &Repo{
		Name:     "core",
		Path:     "/tmp/repos/core",
		registry: reg,
	}

	assert.False(t, repo.IsGitRepo())

	_ = m.EnsureDir("/tmp/repos/core/.git")
	assert.True(t, repo.IsGitRepo())
}

// ── getMedium fallback ─────────────────────────────────────────────

func TestGetMedium_Good_FallbackToLocal(t *testing.T) {
	repo := &Repo{Name: "orphan", Path: "/tmp/orphan"}
	// No registry set — should fall back to io.Local.
	m := repo.getMedium()
	assert.Equal(t, io.Local, m)
}

func TestGetMedium_Good_NilMediumFallback(t *testing.T) {
	reg := &Registry{} // medium is nil.
	repo := &Repo{Name: "test", registry: reg}
	m := repo.getMedium()
	assert.Equal(t, io.Local, m)
}
@@ -1,257 +0,0 @@
package session

import (
	"fmt"
	"html"
	"os"
	"strings"
	"time"
)

// RenderHTML generates a self-contained HTML timeline from a session.
func RenderHTML(sess *Session, outputPath string) error {
	f, err := os.Create(outputPath)
	if err != nil {
		return fmt.Errorf("create html: %w", err)
	}
	defer f.Close()

	duration := sess.EndTime.Sub(sess.StartTime)
	toolCount := 0
	errorCount := 0
	for _, e := range sess.Events {
		if e.Type == "tool_use" {
			toolCount++
			if !e.Success {
				errorCount++
			}
		}
	}

	fmt.Fprintf(f, `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Session %s</title>
<style>
:root {
  --bg: #0d1117; --bg2: #161b22; --bg3: #21262d;
  --fg: #c9d1d9; --dim: #8b949e; --accent: #58a6ff;
  --green: #3fb950; --red: #f85149; --yellow: #d29922;
  --border: #30363d; --font: 'SF Mono', 'Cascadia Code', 'JetBrains Mono', monospace;
}
* { box-sizing: border-box; margin: 0; padding: 0; }
body { background: var(--bg); color: var(--fg); font-family: var(--font); font-size: 13px; line-height: 1.5; }
.header { background: var(--bg2); border-bottom: 1px solid var(--border); padding: 16px 24px; position: sticky; top: 0; z-index: 10; }
.header h1 { font-size: 16px; font-weight: 600; color: var(--accent); }
.header .meta { color: var(--dim); font-size: 12px; margin-top: 4px; }
.header .stats span { display: inline-block; margin-right: 16px; }
.header .stats .err { color: var(--red); }
.search { margin-top: 8px; display: flex; gap: 8px; }
.search input { background: var(--bg3); border: 1px solid var(--border); border-radius: 6px; color: var(--fg); font-family: var(--font); font-size: 12px; padding: 6px 12px; width: 300px; outline: none; }
.search input:focus { border-color: var(--accent); }
.search select { background: var(--bg3); border: 1px solid var(--border); border-radius: 6px; color: var(--fg); font-family: var(--font); font-size: 12px; padding: 6px 8px; outline: none; }
.timeline { padding: 16px 24px; }
.event { border: 1px solid var(--border); border-radius: 8px; margin-bottom: 8px; overflow: hidden; transition: border-color 0.15s; }
.event:hover { border-color: var(--accent); }
.event.error { border-color: var(--red); }
.event.hidden { display: none; }
.event-header { display: flex; align-items: center; gap: 8px; padding: 8px 12px; cursor: pointer; user-select: none; background: var(--bg2); }
.event-header:hover { background: var(--bg3); }
.event-header .time { color: var(--dim); font-size: 11px; min-width: 70px; }
.event-header .tool { font-weight: 600; color: var(--accent); min-width: 60px; }
.event-header .tool.bash { color: var(--green); }
.event-header .tool.error { color: var(--red); }
.event-header .tool.user { color: var(--yellow); }
.event-header .tool.assistant { color: var(--dim); }
.event-header .input { flex: 1; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }
.event-header .dur { color: var(--dim); font-size: 11px; min-width: 50px; text-align: right; }
.event-header .status { font-size: 14px; min-width: 20px; text-align: center; }
.event-header .arrow { color: var(--dim); font-size: 10px; transition: transform 0.15s; min-width: 16px; }
.event.open .arrow { transform: rotate(90deg); }
.event-body { display: none; padding: 12px; background: var(--bg); border-top: 1px solid var(--border); }
.event.open .event-body { display: block; }
.event-body pre { white-space: pre-wrap; word-break: break-all; font-size: 12px; max-height: 400px; overflow-y: auto; }
.event-body .label { color: var(--dim); font-size: 11px; margin-bottom: 4px; text-transform: uppercase; letter-spacing: 0.5px; }
.event-body .section { margin-bottom: 12px; }
.event-body .output { color: var(--fg); }
.event-body .output.err { color: var(--red); }
</style>
</head>
<body>
<div class="header">
<h1>Session %s</h1>
<div class="meta">
<div class="stats">
<span>%s</span>
<span>Duration: %s</span>
<span>%d tool calls</span>`,
		shortID(sess.ID), shortID(sess.ID),
		sess.StartTime.Format("2006-01-02 15:04:05"),
		formatDuration(duration),
		toolCount)

	if errorCount > 0 {
		fmt.Fprintf(f, `
<span class="err">%d errors</span>`, errorCount)
	}

	fmt.Fprintf(f, `
</div>
</div>
<div class="search">
<input type="text" id="search" placeholder="Search commands, outputs..." oninput="filterEvents()">
<select id="filter" onchange="filterEvents()">
<option value="all">All events</option>
<option value="tool_use">Tool calls only</option>
<option value="errors">Errors only</option>
<option value="Bash">Bash only</option>
<option value="user">User messages</option>
</select>
</div>
</div>
<div class="timeline" id="timeline">
`)

	for i, evt := range sess.Events {
		toolClass := strings.ToLower(evt.Tool)
		if evt.Type == "user" {
			toolClass = "user"
		} else if evt.Type == "assistant" {
			toolClass = "assistant"
		}

		errorClass := ""
		if !evt.Success && evt.Type == "tool_use" {
			errorClass = " error"
		}

		statusIcon := ""
		if evt.Type == "tool_use" {
			if evt.Success {
				statusIcon = `<span style="color:var(--green)">✓</span>`
			} else {
				statusIcon = `<span style="color:var(--red)">✗</span>`
			}
		}

		toolLabel := evt.Tool
		if evt.Type == "user" {
			toolLabel = "User"
		} else if evt.Type == "assistant" {
			toolLabel = "Claude"
		}

		durStr := ""
		if evt.Duration > 0 {
			durStr = formatDuration(evt.Duration)
		}

		fmt.Fprintf(f, `<div class="event%s" data-type="%s" data-tool="%s" data-text="%s" id="evt-%d">
<div class="event-header" onclick="toggle(%d)">
<span class="arrow">▶</span>
<span class="time">%s</span>
<span class="tool %s">%s</span>
<span class="input">%s</span>
<span class="dur">%s</span>
<span class="status">%s</span>
</div>
<div class="event-body">
`,
			errorClass,
			evt.Type,
			evt.Tool,
			html.EscapeString(strings.ToLower(evt.Input+" "+evt.Output)),
			i,
			i,
			evt.Timestamp.Format("15:04:05"),
			toolClass,
			html.EscapeString(toolLabel),
			html.EscapeString(truncate(evt.Input, 120)),
			durStr,
			statusIcon)

		if evt.Input != "" {
			label := "Command"
			if evt.Type == "user" {
				label = "Message"
			} else if evt.Type == "assistant" {
				label = "Response"
			} else if evt.Tool == "Read" || evt.Tool == "Glob" || evt.Tool == "Grep" {
				label = "Target"
			} else if evt.Tool == "Edit" || evt.Tool == "Write" {
				label = "File"
			}
			fmt.Fprintf(f, `<div class="section"><div class="label">%s</div><pre>%s</pre></div>
`, label, html.EscapeString(evt.Input))
		}

		if evt.Output != "" {
			outClass := "output"
			if !evt.Success {
				outClass = "output err"
			}
			fmt.Fprintf(f, `<div class="section"><div class="label">Output</div><pre class="%s">%s</pre></div>
`, outClass, html.EscapeString(evt.Output))
		}

		fmt.Fprint(f, `</div>
</div>
`)
	}

	fmt.Fprint(f, `</div>
<script>
function toggle(i) {
  document.getElementById('evt-'+i).classList.toggle('open');
}
function filterEvents() {
  const q = document.getElementById('search').value.toLowerCase();
  const f = document.getElementById('filter').value;
  document.querySelectorAll('.event').forEach(el => {
    const type = el.dataset.type;
    const tool = el.dataset.tool;
    const text = el.dataset.text;
    let show = true;
    if (f === 'tool_use' && type !== 'tool_use') show = false;
    if (f === 'errors' && !el.classList.contains('error')) show = false;
    if (f === 'Bash' && tool !== 'Bash') show = false;
    if (f === 'user' && type !== 'user') show = false;
    if (q && !text.includes(q)) show = false;
    el.classList.toggle('hidden', !show);
  });
}
document.addEventListener('keydown', e => {
  if (e.key === '/' && document.activeElement.tagName !== 'INPUT') {
    e.preventDefault();
    document.getElementById('search').focus();
  }
});
</script>
</body>
</html>
`)

	return nil
}

func shortID(id string) string {
	if len(id) > 8 {
		return id[:8]
	}
	return id
}

func formatDuration(d time.Duration) string {
	if d < time.Second {
		return fmt.Sprintf("%dms", d.Milliseconds())
	}
	if d < time.Minute {
		return fmt.Sprintf("%.1fs", d.Seconds())
	}
	if d < time.Hour {
		return fmt.Sprintf("%dm%ds", int(d.Minutes()), int(d.Seconds())%60)
	}
	return fmt.Sprintf("%dh%dm", int(d.Hours()), int(d.Minutes())%60)
}
@ -1,194 +0,0 @@
package session

import (
	"os"
	"path/filepath"
	"strings"
	"testing"
	"time"
)

func TestRenderHTML_Good_BasicSession(t *testing.T) {
	dir := t.TempDir()
	out := filepath.Join(dir, "session.html")

	sess := &Session{
		ID:        "f3fb074c-8c72-4da6-a15a-85bae652ccaa",
		StartTime: time.Date(2026, 2, 24, 10, 0, 0, 0, time.UTC),
		EndTime:   time.Date(2026, 2, 24, 10, 5, 0, 0, time.UTC),
		Events: []Event{
			{
				Timestamp: time.Date(2026, 2, 24, 10, 0, 5, 0, time.UTC),
				Type:      "tool_use",
				Tool:      "Bash",
				Input:     "go test ./...",
				Output:    "ok forge.lthn.ai/core/go 1.2s",
				Duration:  time.Second,
				Success:   true,
			},
			{
				Timestamp: time.Date(2026, 2, 24, 10, 1, 0, 0, time.UTC),
				Type:      "tool_use",
				Tool:      "Read",
				Input:     "/tmp/test.go",
				Output:    "package main",
				Duration:  200 * time.Millisecond,
				Success:   true,
			},
			{
				Timestamp: time.Date(2026, 2, 24, 10, 2, 0, 0, time.UTC),
				Type:      "user",
				Input:     "looks good",
			},
		},
	}

	if err := RenderHTML(sess, out); err != nil {
		t.Fatal(err)
	}

	data, err := os.ReadFile(out)
	if err != nil {
		t.Fatal(err)
	}

	html := string(data)
	if !strings.Contains(html, "f3fb074c") {
		t.Fatal("missing session ID")
	}
	if !strings.Contains(html, "go test ./...") {
		t.Fatal("missing bash command")
	}
	if !strings.Contains(html, "2 tool calls") {
		t.Fatal("missing tool count")
	}
	if !strings.Contains(html, "filterEvents") {
		t.Fatal("missing JS filter function")
	}
}

func TestRenderHTML_Good_WithErrors(t *testing.T) {
	dir := t.TempDir()
	out := filepath.Join(dir, "errors.html")

	sess := &Session{
		ID:        "err-session",
		StartTime: time.Date(2026, 2, 24, 10, 0, 0, 0, time.UTC),
		EndTime:   time.Date(2026, 2, 24, 10, 1, 0, 0, time.UTC),
		Events: []Event{
			{
				Type: "tool_use", Tool: "Bash",
				Timestamp: time.Date(2026, 2, 24, 10, 0, 0, 0, time.UTC),
				Input:     "bad command", Output: "error", Success: false,
			},
		},
	}

	if err := RenderHTML(sess, out); err != nil {
		t.Fatal(err)
	}

	data, _ := os.ReadFile(out)
	html := string(data)
	if !strings.Contains(html, "1 errors") {
		t.Fatal("missing error count")
	}
	if !strings.Contains(html, `class="event error"`) {
		t.Fatal("missing error class")
	}
	if !strings.Contains(html, "✗") {
		t.Fatal("missing failure icon")
	}
}

func TestRenderHTML_Good_AssistantEvent(t *testing.T) {
	dir := t.TempDir()
	out := filepath.Join(dir, "asst.html")

	sess := &Session{
		ID:        "asst-test",
		StartTime: time.Date(2026, 2, 24, 10, 0, 0, 0, time.UTC),
		EndTime:   time.Date(2026, 2, 24, 10, 0, 5, 0, time.UTC),
		Events: []Event{
			{
				Type:      "assistant",
				Timestamp: time.Date(2026, 2, 24, 10, 0, 0, 0, time.UTC),
				Input:     "Let me check that.",
			},
		},
	}

	if err := RenderHTML(sess, out); err != nil {
		t.Fatal(err)
	}

	data, _ := os.ReadFile(out)
	if !strings.Contains(string(data), "Claude") {
		t.Fatal("missing Claude label for assistant")
	}
}

func TestRenderHTML_Good_EmptySession(t *testing.T) {
	dir := t.TempDir()
	out := filepath.Join(dir, "empty.html")

	sess := &Session{
		ID:        "empty",
		StartTime: time.Now(),
		EndTime:   time.Now(),
	}

	if err := RenderHTML(sess, out); err != nil {
		t.Fatal(err)
	}

	info, err := os.Stat(out)
	if err != nil {
		t.Fatal(err)
	}
	if info.Size() == 0 {
		t.Fatal("HTML file is empty")
	}
}

func TestRenderHTML_Bad_InvalidPath(t *testing.T) {
	sess := &Session{ID: "test", StartTime: time.Now(), EndTime: time.Now()}
	err := RenderHTML(sess, "/nonexistent/dir/out.html")
	if err == nil {
		t.Fatal("expected error for invalid path")
	}
}

func TestRenderHTML_Good_XSSEscaping(t *testing.T) {
	dir := t.TempDir()
	out := filepath.Join(dir, "xss.html")

	sess := &Session{
		ID:        "xss-test",
		StartTime: time.Now(),
		EndTime:   time.Now(),
		Events: []Event{
			{
				Type:      "tool_use",
				Tool:      "Bash",
				Timestamp: time.Now(),
				Input:     `echo "<script>alert('xss')</script>"`,
				Output:    `<img onerror=alert(1)>`,
				Success:   true,
			},
		},
	}

	if err := RenderHTML(sess, out); err != nil {
		t.Fatal(err)
	}

	data, _ := os.ReadFile(out)
	html := string(data)
	if strings.Contains(html, "<script>alert") {
		t.Fatal("XSS: unescaped script tag in HTML output")
	}
	if strings.Contains(html, "<img onerror") {
		t.Fatal("XSS: unescaped img tag in HTML output")
	}
}
@ -1,384 +0,0 @@
package session

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"slices"
	"sort"
	"strings"
	"time"
)

// Event represents a single action in a session timeline.
type Event struct {
	Timestamp time.Time
	Type      string // "tool_use", "user", "assistant", "error"
	Tool      string // "Bash", "Read", "Edit", "Write", "Grep", "Glob", etc.
	ToolID    string
	Input     string // Command, file path, or message text
	Output    string // Result text
	Duration  time.Duration
	Success   bool
	ErrorMsg  string
}

// Session holds parsed session metadata and events.
type Session struct {
	ID        string
	Path      string
	StartTime time.Time
	EndTime   time.Time
	Events    []Event
}

// rawEntry is the top-level structure of a Claude Code JSONL line.
type rawEntry struct {
	Type      string          `json:"type"`
	Timestamp string          `json:"timestamp"`
	SessionID string          `json:"sessionId"`
	Message   json.RawMessage `json:"message"`
	UserType  string          `json:"userType"`
}

type rawMessage struct {
	Role    string            `json:"role"`
	Content []json.RawMessage `json:"content"`
}

type contentBlock struct {
	Type      string          `json:"type"`
	Name      string          `json:"name,omitempty"`
	ID        string          `json:"id,omitempty"`
	Text      string          `json:"text,omitempty"`
	Input     json.RawMessage `json:"input,omitempty"`
	ToolUseID string          `json:"tool_use_id,omitempty"`
	Content   any             `json:"content,omitempty"`
	IsError   *bool           `json:"is_error,omitempty"`
}

type bashInput struct {
	Command     string `json:"command"`
	Description string `json:"description"`
	Timeout     int    `json:"timeout"`
}

type readInput struct {
	FilePath string `json:"file_path"`
	Offset   int    `json:"offset"`
	Limit    int    `json:"limit"`
}

type editInput struct {
	FilePath  string `json:"file_path"`
	OldString string `json:"old_string"`
	NewString string `json:"new_string"`
}

type writeInput struct {
	FilePath string `json:"file_path"`
	Content  string `json:"content"`
}

type grepInput struct {
	Pattern string `json:"pattern"`
	Path    string `json:"path"`
}

type globInput struct {
	Pattern string `json:"pattern"`
	Path    string `json:"path"`
}

type taskInput struct {
	Prompt       string `json:"prompt"`
	Description  string `json:"description"`
	SubagentType string `json:"subagent_type"`
}

// ListSessions returns all sessions found in the Claude projects directory.
func ListSessions(projectsDir string) ([]Session, error) {
	matches, err := filepath.Glob(filepath.Join(projectsDir, "*.jsonl"))
	if err != nil {
		return nil, fmt.Errorf("glob sessions: %w", err)
	}

	var sessions []Session
	for _, path := range matches {
		base := filepath.Base(path)
		id := strings.TrimSuffix(base, ".jsonl")

		info, err := os.Stat(path)
		if err != nil {
			continue
		}

		s := Session{
			ID:   id,
			Path: path,
		}

		// Quick scan for first and last timestamps
		f, err := os.Open(path)
		if err != nil {
			continue
		}

		scanner := bufio.NewScanner(f)
		scanner.Buffer(make([]byte, 1024*1024), 1024*1024)
		var firstTS, lastTS string
		for scanner.Scan() {
			var entry rawEntry
			if json.Unmarshal(scanner.Bytes(), &entry) != nil {
				continue
			}
			if entry.Timestamp == "" {
				continue
			}
			if firstTS == "" {
				firstTS = entry.Timestamp
			}
			lastTS = entry.Timestamp
		}
		f.Close()

		if firstTS != "" {
			s.StartTime, _ = time.Parse(time.RFC3339Nano, firstTS)
		}
		if lastTS != "" {
			s.EndTime, _ = time.Parse(time.RFC3339Nano, lastTS)
		}
		if s.StartTime.IsZero() {
			s.StartTime = info.ModTime()
		}

		sessions = append(sessions, s)
	}

	slices.SortFunc(sessions, func(a, b Session) int {
		return b.StartTime.Compare(a.StartTime) // descending
	})

	return sessions, nil
}

// ParseTranscript reads a JSONL session file and returns structured events.
func ParseTranscript(path string) (*Session, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, fmt.Errorf("open transcript: %w", err)
	}
	defer f.Close()

	base := filepath.Base(path)
	sess := &Session{
		ID:   strings.TrimSuffix(base, ".jsonl"),
		Path: path,
	}

	// Collect tool_use entries keyed by ID
	type toolUse struct {
		timestamp time.Time
		tool      string
		input     string
	}
	pendingTools := make(map[string]toolUse)

	scanner := bufio.NewScanner(f)
	scanner.Buffer(make([]byte, 4*1024*1024), 4*1024*1024)

	for scanner.Scan() {
		var entry rawEntry
		if err := json.Unmarshal(scanner.Bytes(), &entry); err != nil {
			continue
		}

		ts, _ := time.Parse(time.RFC3339Nano, entry.Timestamp)

		if sess.StartTime.IsZero() && !ts.IsZero() {
			sess.StartTime = ts
		}
		if !ts.IsZero() {
			sess.EndTime = ts
		}

		switch entry.Type {
		case "assistant":
			var msg rawMessage
			if json.Unmarshal(entry.Message, &msg) != nil {
				continue
			}
			for _, raw := range msg.Content {
				var block contentBlock
				if json.Unmarshal(raw, &block) != nil {
					continue
				}

				switch block.Type {
				case "text":
					if text := strings.TrimSpace(block.Text); text != "" {
						sess.Events = append(sess.Events, Event{
							Timestamp: ts,
							Type:      "assistant",
							Input:     truncate(text, 500),
						})
					}

				case "tool_use":
					inputStr := extractToolInput(block.Name, block.Input)
					pendingTools[block.ID] = toolUse{
						timestamp: ts,
						tool:      block.Name,
						input:     inputStr,
					}
				}
			}

		case "user":
			var msg rawMessage
			if json.Unmarshal(entry.Message, &msg) != nil {
				continue
			}
			for _, raw := range msg.Content {
				var block contentBlock
				if json.Unmarshal(raw, &block) != nil {
					continue
				}

				switch block.Type {
				case "tool_result":
					if tu, ok := pendingTools[block.ToolUseID]; ok {
						output := extractResultContent(block.Content)
						isError := block.IsError != nil && *block.IsError
						evt := Event{
							Timestamp: tu.timestamp,
							Type:      "tool_use",
							Tool:      tu.tool,
							ToolID:    block.ToolUseID,
							Input:     tu.input,
							Output:    truncate(output, 2000),
							Duration:  ts.Sub(tu.timestamp),
							Success:   !isError,
						}
						if isError {
							evt.ErrorMsg = truncate(output, 500)
						}
						sess.Events = append(sess.Events, evt)
						delete(pendingTools, block.ToolUseID)
					}

				case "text":
					if text := strings.TrimSpace(block.Text); text != "" {
						sess.Events = append(sess.Events, Event{
							Timestamp: ts,
							Type:      "user",
							Input:     truncate(text, 500),
						})
					}
				}
			}
		}
	}

	return sess, scanner.Err()
}

func extractToolInput(toolName string, raw json.RawMessage) string {
	if raw == nil {
		return ""
	}

	switch toolName {
	case "Bash":
		var inp bashInput
		if json.Unmarshal(raw, &inp) == nil {
			desc := inp.Description
			if desc != "" {
				desc = " # " + desc
			}
			return inp.Command + desc
		}
	case "Read":
		var inp readInput
		if json.Unmarshal(raw, &inp) == nil {
			return inp.FilePath
		}
	case "Edit":
		var inp editInput
		if json.Unmarshal(raw, &inp) == nil {
			return fmt.Sprintf("%s (edit)", inp.FilePath)
		}
	case "Write":
		var inp writeInput
		if json.Unmarshal(raw, &inp) == nil {
			return fmt.Sprintf("%s (%d bytes)", inp.FilePath, len(inp.Content))
		}
	case "Grep":
		var inp grepInput
		if json.Unmarshal(raw, &inp) == nil {
			path := inp.Path
			if path == "" {
				path = "."
			}
			return fmt.Sprintf("/%s/ in %s", inp.Pattern, path)
		}
	case "Glob":
		var inp globInput
		if json.Unmarshal(raw, &inp) == nil {
			return inp.Pattern
		}
	case "Task":
		var inp taskInput
		if json.Unmarshal(raw, &inp) == nil {
			desc := inp.Description
			if desc == "" {
				desc = truncate(inp.Prompt, 80)
			}
			return fmt.Sprintf("[%s] %s", inp.SubagentType, desc)
		}
	}

	// Fallback: show raw JSON keys
	var m map[string]any
	if json.Unmarshal(raw, &m) == nil {
		var parts []string
		for k := range m {
			parts = append(parts, k)
		}
		sort.Strings(parts)
		return strings.Join(parts, ", ")
	}

	return ""
}

func extractResultContent(content any) string {
	switch v := content.(type) {
	case string:
		return v
	case []any:
		var parts []string
		for _, item := range v {
			if m, ok := item.(map[string]any); ok {
				if text, ok := m["text"].(string); ok {
					parts = append(parts, text)
				}
			}
		}
		return strings.Join(parts, "\n")
	case map[string]any:
		if text, ok := v["text"].(string); ok {
			return text
		}
	}
	return fmt.Sprintf("%v", content)
}

func truncate(s string, max int) string {
	if len(s) <= max {
		return s
	}
	return s[:max] + "..."
}
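The parser above pairs each `tool_use` block from an assistant entry with the `tool_result` that arrives in a later user entry, buffering unmatched uses in a map keyed by tool-use ID. A minimal standalone sketch of that buffering scheme, with simplified stand-in types rather than the package's own structs:

```go
package main

import "fmt"

// pendingTool is a simplified stand-in for the parser's buffered tool_use data.
type pendingTool struct{ tool, input string }

// pairResult looks up a buffered tool_use by ID, consumes it, and produces a
// combined event string. It mirrors the map-and-delete pattern in
// ParseTranscript above.
func pairResult(pending map[string]pendingTool, toolUseID, output string) (string, bool) {
	tu, ok := pending[toolUseID]
	if !ok {
		return "", false
	}
	delete(pending, toolUseID) // each tool_use is consumed exactly once
	return fmt.Sprintf("%s(%s) -> %s", tu.tool, tu.input, output), true
}

func main() {
	pending := map[string]pendingTool{
		"tu_1": {tool: "Bash", input: "go test ./..."},
	}
	evt, ok := pairResult(pending, "tu_1", "ok")
	fmt.Println(ok, evt) // true Bash(go test ./...) -> ok

	_, ok = pairResult(pending, "tu_1", "ok")
	fmt.Println(ok) // false: already consumed
}
```

A result whose ID was never buffered (or was already consumed) is simply dropped, which is why malformed transcripts degrade to fewer events rather than errors.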
@ -1,498 +0,0 @@
package session

import (
	"encoding/json"
	"os"
	"path/filepath"
	"testing"
	"time"
)

// ── truncate ───────────────────────────────────────────────────────

func TestTruncate_Good_Short(t *testing.T) {
	if got := truncate("hello", 10); got != "hello" {
		t.Fatalf("expected hello, got %s", got)
	}
}

func TestTruncate_Good_Exact(t *testing.T) {
	if got := truncate("12345", 5); got != "12345" {
		t.Fatalf("expected 12345, got %s", got)
	}
}

func TestTruncate_Good_Long(t *testing.T) {
	got := truncate("hello world", 5)
	if got != "hello..." {
		t.Fatalf("expected hello..., got %s", got)
	}
}

func TestTruncate_Good_Empty(t *testing.T) {
	if got := truncate("", 10); got != "" {
		t.Fatalf("expected empty, got %s", got)
	}
}

// ── shortID ────────────────────────────────────────────────────────

func TestShortID_Good_Long(t *testing.T) {
	got := shortID("f3fb074c-8c72-4da6-a15a-85bae652ccaa")
	if got != "f3fb074c" {
		t.Fatalf("expected f3fb074c, got %s", got)
	}
}

func TestShortID_Good_Short(t *testing.T) {
	if got := shortID("abc"); got != "abc" {
		t.Fatalf("expected abc, got %s", got)
	}
}

func TestShortID_Good_ExactEight(t *testing.T) {
	if got := shortID("12345678"); got != "12345678" {
		t.Fatalf("expected 12345678, got %s", got)
	}
}

// ── formatDuration ─────────────────────────────────────────────────

func TestFormatDuration_Good_Milliseconds(t *testing.T) {
	got := formatDuration(500 * time.Millisecond)
	if got != "500ms" {
		t.Fatalf("expected 500ms, got %s", got)
	}
}

func TestFormatDuration_Good_Seconds(t *testing.T) {
	got := formatDuration(3500 * time.Millisecond)
	if got != "3.5s" {
		t.Fatalf("expected 3.5s, got %s", got)
	}
}

func TestFormatDuration_Good_Minutes(t *testing.T) {
	got := formatDuration(2*time.Minute + 30*time.Second)
	if got != "2m30s" {
		t.Fatalf("expected 2m30s, got %s", got)
	}
}

func TestFormatDuration_Good_Hours(t *testing.T) {
	got := formatDuration(1*time.Hour + 15*time.Minute)
	if got != "1h15m" {
		t.Fatalf("expected 1h15m, got %s", got)
	}
}

// ── extractToolInput ───────────────────────────────────────────────

func TestExtractToolInput_Good_Bash(t *testing.T) {
	raw := json.RawMessage(`{"command":"go test ./...","description":"run tests"}`)
	got := extractToolInput("Bash", raw)
	if got != "go test ./... # run tests" {
		t.Fatalf("unexpected: %s", got)
	}
}

func TestExtractToolInput_Good_BashNoDesc(t *testing.T) {
	raw := json.RawMessage(`{"command":"ls"}`)
	got := extractToolInput("Bash", raw)
	if got != "ls" {
		t.Fatalf("expected ls, got %s", got)
	}
}

func TestExtractToolInput_Good_Read(t *testing.T) {
	raw := json.RawMessage(`{"file_path":"/tmp/test.go"}`)
	got := extractToolInput("Read", raw)
	if got != "/tmp/test.go" {
		t.Fatalf("expected /tmp/test.go, got %s", got)
	}
}

func TestExtractToolInput_Good_Edit(t *testing.T) {
	raw := json.RawMessage(`{"file_path":"/tmp/test.go","old_string":"foo","new_string":"bar"}`)
	got := extractToolInput("Edit", raw)
	if got != "/tmp/test.go (edit)" {
		t.Fatalf("expected /tmp/test.go (edit), got %s", got)
	}
}

func TestExtractToolInput_Good_Write(t *testing.T) {
	raw := json.RawMessage(`{"file_path":"/tmp/out.go","content":"package main"}`)
	got := extractToolInput("Write", raw)
	if got != "/tmp/out.go (12 bytes)" {
		t.Fatalf("unexpected: %s", got)
	}
}

func TestExtractToolInput_Good_Grep(t *testing.T) {
	raw := json.RawMessage(`{"pattern":"TODO","path":"/src"}`)
	got := extractToolInput("Grep", raw)
	if got != "/TODO/ in /src" {
		t.Fatalf("unexpected: %s", got)
	}
}

func TestExtractToolInput_Good_GrepNoPath(t *testing.T) {
	raw := json.RawMessage(`{"pattern":"TODO"}`)
	got := extractToolInput("Grep", raw)
	if got != "/TODO/ in ." {
		t.Fatalf("unexpected: %s", got)
	}
}

func TestExtractToolInput_Good_Glob(t *testing.T) {
	raw := json.RawMessage(`{"pattern":"**/*.go"}`)
	got := extractToolInput("Glob", raw)
	if got != "**/*.go" {
		t.Fatalf("unexpected: %s", got)
	}
}

func TestExtractToolInput_Good_Task(t *testing.T) {
	raw := json.RawMessage(`{"prompt":"investigate the bug","description":"debug helper","subagent_type":"Explore"}`)
	got := extractToolInput("Task", raw)
	if got != "[Explore] debug helper" {
		t.Fatalf("unexpected: %s", got)
	}
}

func TestExtractToolInput_Good_TaskNoDesc(t *testing.T) {
	raw := json.RawMessage(`{"prompt":"investigate the bug","subagent_type":"Explore"}`)
	got := extractToolInput("Task", raw)
	if got != "[Explore] investigate the bug" {
		t.Fatalf("unexpected: %s", got)
	}
}

func TestExtractToolInput_Good_UnknownTool(t *testing.T) {
	raw := json.RawMessage(`{"alpha":"1","beta":"2"}`)
	got := extractToolInput("CustomTool", raw)
	if got != "alpha, beta" {
		t.Fatalf("unexpected: %s", got)
	}
}

func TestExtractToolInput_Good_NilInput(t *testing.T) {
	got := extractToolInput("Bash", nil)
	if got != "" {
		t.Fatalf("expected empty, got %s", got)
	}
}

func TestExtractToolInput_Bad_InvalidJSON(t *testing.T) {
	raw := json.RawMessage(`not json`)
	got := extractToolInput("Bash", raw)
	// Falls through to the fallback, which also fails to parse, so it returns empty.
	if got != "" {
		t.Fatalf("expected empty, got %s", got)
	}
}

// ── extractResultContent ───────────────────────────────────────────

func TestExtractResultContent_Good_String(t *testing.T) {
	got := extractResultContent("hello")
	if got != "hello" {
		t.Fatalf("expected hello, got %s", got)
	}
}

func TestExtractResultContent_Good_Slice(t *testing.T) {
	input := []any{
		map[string]any{"text": "line1"},
		map[string]any{"text": "line2"},
	}
	got := extractResultContent(input)
	if got != "line1\nline2" {
		t.Fatalf("unexpected: %s", got)
	}
}

func TestExtractResultContent_Good_Map(t *testing.T) {
	input := map[string]any{"text": "content"}
	got := extractResultContent(input)
	if got != "content" {
		t.Fatalf("expected content, got %s", got)
	}
}

func TestExtractResultContent_Good_MapNoText(t *testing.T) {
	input := map[string]any{"data": 42}
	got := extractResultContent(input)
	if got == "" {
		t.Fatal("expected non-empty fallback")
	}
}

func TestExtractResultContent_Good_Other(t *testing.T) {
	got := extractResultContent(42)
	if got != "42" {
		t.Fatalf("expected 42, got %s", got)
	}
}

// ── ParseTranscript ────────────────────────────────────────────────

func writeJSONL(t *testing.T, path string, entries []any) {
	t.Helper()
	f, err := os.Create(path)
	if err != nil {
		t.Fatal(err)
	}
	defer f.Close()
	enc := json.NewEncoder(f)
	for _, e := range entries {
		if err := enc.Encode(e); err != nil {
			t.Fatal(err)
		}
	}
}

func TestParseTranscript_Good_BasicFlow(t *testing.T) {
	dir := t.TempDir()
	path := filepath.Join(dir, "test-session.jsonl")

	ts1 := time.Date(2026, 2, 24, 10, 0, 0, 0, time.UTC)
	ts2 := time.Date(2026, 2, 24, 10, 0, 1, 0, time.UTC)
	ts3 := time.Date(2026, 2, 24, 10, 0, 2, 0, time.UTC)

	entries := []any{
		map[string]any{
			"type": "assistant", "timestamp": ts1.Format(time.RFC3339Nano),
			"message": map[string]any{
				"role": "assistant",
				"content": []any{
					map[string]any{
						"type": "tool_use", "id": "tu_1", "name": "Bash",
						"input": map[string]any{"command": "go test ./...", "description": "run tests"},
					},
				},
			},
		},
		map[string]any{
			"type": "user", "timestamp": ts2.Format(time.RFC3339Nano),
			"message": map[string]any{
				"role": "user",
				"content": []any{
					map[string]any{
						"type": "tool_result", "tool_use_id": "tu_1",
						"content": "ok forge.lthn.ai/core/go 1.2s",
					},
				},
			},
		},
		map[string]any{
			"type": "user", "timestamp": ts3.Format(time.RFC3339Nano),
			"message": map[string]any{
				"role": "user",
				"content": []any{
					map[string]any{
						"type": "text", "text": "nice work",
					},
				},
			},
		},
	}

	writeJSONL(t, path, entries)

	sess, err := ParseTranscript(path)
	if err != nil {
		t.Fatal(err)
	}

	if sess.ID != "test-session" {
		t.Fatalf("expected test-session, got %s", sess.ID)
	}
	if len(sess.Events) != 2 {
		t.Fatalf("expected 2 events, got %d", len(sess.Events))
	}

	// Tool use event.
	tool := sess.Events[0]
	if tool.Type != "tool_use" {
		t.Fatalf("expected tool_use, got %s", tool.Type)
	}
	if tool.Tool != "Bash" {
		t.Fatalf("expected Bash, got %s", tool.Tool)
	}
	if !tool.Success {
		t.Fatal("expected success")
	}
	if tool.Duration != time.Second {
		t.Fatalf("expected 1s duration, got %s", tool.Duration)
	}

	// User message.
	user := sess.Events[1]
	if user.Type != "user" {
		t.Fatalf("expected user, got %s", user.Type)
	}
	if user.Input != "nice work" {
		t.Fatalf("unexpected input: %s", user.Input)
	}
}

func TestParseTranscript_Good_ToolError(t *testing.T) {
	dir := t.TempDir()
	path := filepath.Join(dir, "err-session.jsonl")

	ts1 := time.Date(2026, 2, 24, 10, 0, 0, 0, time.UTC)
	ts2 := time.Date(2026, 2, 24, 10, 0, 1, 0, time.UTC)
	isError := true

	entries := []any{
		map[string]any{
			"type": "assistant", "timestamp": ts1.Format(time.RFC3339Nano),
			"message": map[string]any{
				"role": "assistant",
				"content": []any{
					map[string]any{
						"type": "tool_use", "id": "tu_err", "name": "Bash",
						"input": map[string]any{"command": "rm -rf /"},
					},
				},
			},
		},
		map[string]any{
			"type": "user", "timestamp": ts2.Format(time.RFC3339Nano),
			"message": map[string]any{
				"role": "user",
				"content": []any{
					map[string]any{
						"type": "tool_result", "tool_use_id": "tu_err",
						"content": "permission denied", "is_error": &isError,
					},
				},
			},
		},
	}

	writeJSONL(t, path, entries)

	sess, err := ParseTranscript(path)
	if err != nil {
		t.Fatal(err)
	}

	if len(sess.Events) != 1 {
		t.Fatalf("expected 1 event, got %d", len(sess.Events))
	}
	if sess.Events[0].Success {
		t.Fatal("expected failure")
	}
	if sess.Events[0].ErrorMsg != "permission denied" {
		t.Fatalf("unexpected error: %s", sess.Events[0].ErrorMsg)
	}
}

func TestParseTranscript_Good_AssistantText(t *testing.T) {
	dir := t.TempDir()
	path := filepath.Join(dir, "asst.jsonl")

	ts := time.Date(2026, 2, 24, 10, 0, 0, 0, time.UTC)
	entries := []any{
		map[string]any{
			"type": "assistant", "timestamp": ts.Format(time.RFC3339Nano),
			"message": map[string]any{
				"role": "assistant",
				"content": []any{
					map[string]any{"type": "text", "text": "Let me check that."},
				},
			},
		},
	}

	writeJSONL(t, path, entries)

	sess, err := ParseTranscript(path)
	if err != nil {
		t.Fatal(err)
	}
	if len(sess.Events) != 1 {
		t.Fatalf("expected 1 event, got %d", len(sess.Events))
	}
	if sess.Events[0].Type != "assistant" {
		t.Fatalf("expected assistant, got %s", sess.Events[0].Type)
	}
}

func TestParseTranscript_Bad_MissingFile(t *testing.T) {
	_, err := ParseTranscript("/nonexistent/path.jsonl")
	if err == nil {
		t.Fatal("expected error for missing file")
	}
}

func TestParseTranscript_Good_EmptyFile(t *testing.T) {
	dir := t.TempDir()
	path := filepath.Join(dir, "empty.jsonl")
	os.WriteFile(path, []byte{}, 0644)

	sess, err := ParseTranscript(path)
	if err != nil {
		t.Fatal(err)
	}
	if len(sess.Events) != 0 {
		t.Fatalf("expected 0 events, got %d", len(sess.Events))
	}
}

func TestParseTranscript_Good_MalformedLines(t *testing.T) {
	dir := t.TempDir()
	path := filepath.Join(dir, "bad.jsonl")
	os.WriteFile(path, []byte("not json\n{also bad\n"), 0644)

	sess, err := ParseTranscript(path)
	if err != nil {
		t.Fatal(err)
	}
	if len(sess.Events) != 0 {
		t.Fatalf("expected 0 events from bad lines, got %d", len(sess.Events))
	}
}

// ── ListSessions ───────────────────────────────────────────────────

func TestListSessions_Good(t *testing.T) {
	dir := t.TempDir()

	ts1 := time.Date(2026, 2, 24, 10, 0, 0, 0, time.UTC)
	ts2 := time.Date(2026, 2, 24, 11, 0, 0, 0, time.UTC)

	writeJSONL(t, filepath.Join(dir, "sess-a.jsonl"), []any{
		map[string]any{"type": "assistant", "timestamp": ts1.Format(time.RFC3339Nano),
			"message": map[string]any{"role": "assistant", "content": []any{}}},
	})
	writeJSONL(t, filepath.Join(dir, "sess-b.jsonl"), []any{
		map[string]any{"type": "assistant", "timestamp": ts2.Format(time.RFC3339Nano),
			"message": map[string]any{"role": "assistant", "content": []any{}}},
	})

	sessions, err := ListSessions(dir)
	if err != nil {
		t.Fatal(err)
	}
	if len(sessions) != 2 {
		t.Fatalf("expected 2 sessions, got %d", len(sessions))
	}
	// Sorted newest first.
	if sessions[0].ID != "sess-b" {
		t.Fatalf("expected sess-b first, got %s", sessions[0].ID)
	}
}

func TestListSessions_Good_EmptyDir(t *testing.T) {
	dir := t.TempDir()
	sessions, err := ListSessions(dir)
	if err != nil {
		t.Fatal(err)
	}
	if len(sessions) != 0 {
		t.Fatalf("expected 0, got %d", len(sessions))
	}
}
@ -1,54 +0,0 @@
|
|||
package session

import (
	"path/filepath"
	"strings"
	"time"
)

// SearchResult represents a match found in a session transcript.
type SearchResult struct {
	SessionID string
	Timestamp time.Time
	Tool      string
	Match     string
}

// Search finds events matching the query across all sessions in the directory.
func Search(projectsDir, query string) ([]SearchResult, error) {
	matches, err := filepath.Glob(filepath.Join(projectsDir, "*.jsonl"))
	if err != nil {
		return nil, err
	}

	var results []SearchResult
	query = strings.ToLower(query)

	for _, path := range matches {
		sess, err := ParseTranscript(path)
		if err != nil {
			continue
		}

		for _, evt := range sess.Events {
			if evt.Type != "tool_use" {
				continue
			}
			text := strings.ToLower(evt.Input + " " + evt.Output)
			if strings.Contains(text, query) {
				matchCtx := evt.Input
				if matchCtx == "" {
					matchCtx = truncate(evt.Output, 120)
				}
				results = append(results, SearchResult{
					SessionID: sess.ID,
					Timestamp: evt.Timestamp,
					Tool:      evt.Tool,
					Match:     matchCtx,
				})
			}
		}
	}

	return results, nil
}
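The filter at the heart of `Search` — lowercase both sides, then substring-match only `tool_use` events — can be sketched standalone. This is a minimal re-implementation for illustration, not the package's API: `toolEvent` and `matchEvents` are hypothetical names standing in for the package's `Event` type and the inner loop above.

```go
package main

import (
	"fmt"
	"strings"
)

// toolEvent is a hypothetical, minimal stand-in for the package's Event type,
// carrying only the fields the matching logic reads.
type toolEvent struct {
	Type, Tool, Input, Output string
}

// matchEvents mirrors Search's core filter: lowercase the query and the
// combined input+output text, keep only tool_use events containing the query.
func matchEvents(events []toolEvent, query string) []toolEvent {
	query = strings.ToLower(query)
	var hits []toolEvent
	for _, evt := range events {
		if evt.Type != "tool_use" {
			continue
		}
		text := strings.ToLower(evt.Input + " " + evt.Output)
		if strings.Contains(text, query) {
			hits = append(hits, evt)
		}
	}
	return hits
}

func main() {
	events := []toolEvent{
		{Type: "tool_use", Tool: "Bash", Input: "GO TEST ./...", Output: "ok"},
		{Type: "text", Input: "go test mentioned in prose"}, // skipped: not tool_use
	}
	hits := matchEvents(events, "go test")
	fmt.Println(len(hits), hits[0].Tool) // 1 Bash
}
```

Note that matching spans input and output concatenated, so a query can hit a command's result even when the command itself does not contain it.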
@@ -1,172 +0,0 @@
package session

import (
	"path/filepath"
	"testing"
	"time"
)

func TestSearch_Good_MatchFound(t *testing.T) {
	dir := t.TempDir()

	ts1 := time.Date(2026, 2, 24, 10, 0, 0, 0, time.UTC)
	ts2 := time.Date(2026, 2, 24, 10, 0, 1, 0, time.UTC)

	writeJSONL(t, filepath.Join(dir, "search-test.jsonl"), []any{
		map[string]any{
			"type": "assistant", "timestamp": ts1.Format(time.RFC3339Nano),
			"message": map[string]any{
				"role": "assistant",
				"content": []any{
					map[string]any{
						"type": "tool_use", "id": "tu_1", "name": "Bash",
						"input": map[string]any{"command": "go test ./..."},
					},
				},
			},
		},
		map[string]any{
			"type": "user", "timestamp": ts2.Format(time.RFC3339Nano),
			"message": map[string]any{
				"role": "user",
				"content": []any{
					map[string]any{
						"type": "tool_result", "tool_use_id": "tu_1",
						"content": "ok forge.lthn.ai/core/go 1.2s",
					},
				},
			},
		},
	})

	results, err := Search(dir, "go test")
	if err != nil {
		t.Fatal(err)
	}
	if len(results) != 1 {
		t.Fatalf("expected 1 result, got %d", len(results))
	}
	if results[0].Tool != "Bash" {
		t.Fatalf("expected Bash, got %s", results[0].Tool)
	}
}

func TestSearch_Good_CaseInsensitive(t *testing.T) {
	dir := t.TempDir()

	ts1 := time.Date(2026, 2, 24, 10, 0, 0, 0, time.UTC)
	ts2 := time.Date(2026, 2, 24, 10, 0, 1, 0, time.UTC)

	writeJSONL(t, filepath.Join(dir, "case.jsonl"), []any{
		map[string]any{
			"type": "assistant", "timestamp": ts1.Format(time.RFC3339Nano),
			"message": map[string]any{
				"role": "assistant",
				"content": []any{
					map[string]any{
						"type": "tool_use", "id": "tu_2", "name": "Bash",
						"input": map[string]any{"command": "GO TEST"},
					},
				},
			},
		},
		map[string]any{
			"type": "user", "timestamp": ts2.Format(time.RFC3339Nano),
			"message": map[string]any{
				"role": "user",
				"content": []any{
					map[string]any{
						"type": "tool_result", "tool_use_id": "tu_2",
						"content": "ok",
					},
				},
			},
		},
	})

	results, err := Search(dir, "go test")
	if err != nil {
		t.Fatal(err)
	}
	if len(results) != 1 {
		t.Fatal("case insensitive search should match")
	}
}

func TestSearch_Good_NoMatch(t *testing.T) {
	dir := t.TempDir()

	ts1 := time.Date(2026, 2, 24, 10, 0, 0, 0, time.UTC)
	ts2 := time.Date(2026, 2, 24, 10, 0, 1, 0, time.UTC)

	writeJSONL(t, filepath.Join(dir, "nomatch.jsonl"), []any{
		map[string]any{
			"type": "assistant", "timestamp": ts1.Format(time.RFC3339Nano),
			"message": map[string]any{
				"role": "assistant",
				"content": []any{
					map[string]any{
						"type": "tool_use", "id": "tu_3", "name": "Bash",
						"input": map[string]any{"command": "ls"},
					},
				},
			},
		},
		map[string]any{
			"type": "user", "timestamp": ts2.Format(time.RFC3339Nano),
			"message": map[string]any{
				"role": "user",
				"content": []any{
					map[string]any{
						"type": "tool_result", "tool_use_id": "tu_3",
						"content": "file.txt",
					},
				},
			},
		},
	})

	results, err := Search(dir, "nonexistent query")
	if err != nil {
		t.Fatal(err)
	}
	if len(results) != 0 {
		t.Fatalf("expected 0 results, got %d", len(results))
	}
}

func TestSearch_Good_EmptyDir(t *testing.T) {
	dir := t.TempDir()
	results, err := Search(dir, "anything")
	if err != nil {
		t.Fatal(err)
	}
	if len(results) != 0 {
		t.Fatalf("expected 0, got %d", len(results))
	}
}

func TestSearch_Good_SkipsNonToolEvents(t *testing.T) {
	dir := t.TempDir()

	ts := time.Date(2026, 2, 24, 10, 0, 0, 0, time.UTC)
	writeJSONL(t, filepath.Join(dir, "skip.jsonl"), []any{
		map[string]any{
			"type": "user", "timestamp": ts.Format(time.RFC3339Nano),
			"message": map[string]any{
				"role": "user",
				"content": []any{
					map[string]any{"type": "text", "text": "go test should find this"},
				},
			},
		},
	})

	results, err := Search(dir, "go test")
	if err != nil {
		t.Fatal(err)
	}
	if len(results) != 0 {
		t.Fatal("search should only match tool_use events")
	}
}
@@ -1,128 +0,0 @@
package session

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// RenderMP4 generates an MP4 video from session events using VHS (charmbracelet).
func RenderMP4(sess *Session, outputPath string) error {
	if _, err := exec.LookPath("vhs"); err != nil {
		return errors.New("vhs not installed (go install github.com/charmbracelet/vhs@latest)")
	}

	tape := generateTape(sess, outputPath)

	tmpFile, err := os.CreateTemp("", "session-*.tape")
	if err != nil {
		return fmt.Errorf("create tape: %w", err)
	}
	defer os.Remove(tmpFile.Name())

	if _, err := tmpFile.WriteString(tape); err != nil {
		tmpFile.Close()
		return fmt.Errorf("write tape: %w", err)
	}
	tmpFile.Close()

	cmd := exec.Command("vhs", tmpFile.Name())
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("vhs render: %w", err)
	}

	return nil
}

func generateTape(sess *Session, outputPath string) string {
	var b strings.Builder

	b.WriteString(fmt.Sprintf("Output %s\n", outputPath))
	b.WriteString("Set FontSize 16\n")
	b.WriteString("Set Width 1400\n")
	b.WriteString("Set Height 800\n")
	b.WriteString("Set TypingSpeed 30ms\n")
	b.WriteString("Set Theme \"Catppuccin Mocha\"\n")
	b.WriteString("Set Shell bash\n")
	b.WriteString("\n")

	// Title frame
	id := sess.ID
	if len(id) > 8 {
		id = id[:8]
	}
	b.WriteString(fmt.Sprintf("Type \"# Session %s | %s\"\n",
		id, sess.StartTime.Format("2006-01-02 15:04")))
	b.WriteString("Enter\n")
	b.WriteString("Sleep 2s\n")
	b.WriteString("\n")

	for _, evt := range sess.Events {
		if evt.Type != "tool_use" {
			continue
		}

		switch evt.Tool {
		case "Bash":
			cmd := extractCommand(evt.Input)
			if cmd == "" {
				continue
			}
			// Show the command
			b.WriteString(fmt.Sprintf("Type %q\n", "$ "+cmd))
			b.WriteString("Enter\n")

			// Show abbreviated output
			output := evt.Output
			if len(output) > 200 {
				output = output[:200] + "..."
			}
			if output != "" {
				for _, line := range strings.Split(output, "\n") {
					if line == "" {
						continue
					}
					b.WriteString(fmt.Sprintf("Type %q\n", line))
					b.WriteString("Enter\n")
				}
			}

			// Status indicator
			if !evt.Success {
				b.WriteString("Type \"# ✗ FAILED\"\n")
			} else {
				b.WriteString("Type \"# ✓ OK\"\n")
			}
			b.WriteString("Enter\n")
			b.WriteString("Sleep 1s\n")
			b.WriteString("\n")

		case "Read", "Edit", "Write":
			b.WriteString(fmt.Sprintf("Type %q\n",
				fmt.Sprintf("# %s: %s", evt.Tool, truncate(evt.Input, 80))))
			b.WriteString("Enter\n")
			b.WriteString("Sleep 500ms\n")

		case "Task":
			b.WriteString(fmt.Sprintf("Type %q\n",
				fmt.Sprintf("# Agent: %s", truncate(evt.Input, 80))))
			b.WriteString("Enter\n")
			b.WriteString("Sleep 1s\n")
		}
	}

	b.WriteString("Sleep 3s\n")
	return b.String()
}

func extractCommand(input string) string {
	// Remove description suffix (after " # ")
	if idx := strings.Index(input, " # "); idx > 0 {
		return input[:idx]
	}
	return input
}