# go-ml
ML inference backends, scoring engine, and agent orchestrator for the Lethean network.
## Overview
go-ml provides pluggable inference backends, a multi-suite scoring engine with ethics-aware probes, GGUF model management, and a concurrent worker pipeline for batch evaluation.
- Module: `forge.lthn.ai/core/go-ml`
- Size: ~7,500 LOC (41 files, 6 test files)
## Key Features
- Pluggable backends: MLX (Metal GPU), llama.cpp, HTTP (Ollama/vLLM/OpenAI-compatible)
- Scoring engine: Heuristic, semantic (LLM judge), exact match, ethics probes
- Agent orchestrator: Multi-model scoring runs with concurrent worker pool
- Data pipeline: DuckDB storage, Parquet I/O, GGUF model inventory
## Dependencies

| Module | Purpose |
| --- | --- |
| core/go-mlx | Native Metal GPU inference |
| core/go | Framework services, process management |
| go-duckdb | Embedded analytics database |
| parquet-go | Columnar data format |
## Pages
- Backends — Inference backend interface and implementations
- Scoring — Multi-suite scoring engine and probes
- Agent — LLM agent orchestrator and worker pipeline