The ReadScorerOutput error was silently discarded during the resume
merge, risking partial data loss if the file changed between check and
read (TOCTOU). Also clean up the compare command's construction to
pass RunE directly to NewCommand.
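A minimal sketch of the error-propagation fix, with hypothetical names (`readScorerOutput`, `mergeResume`) standing in for the real functions:

```go
package main

import (
	"errors"
	"fmt"
)

// ScorerOutput is a hypothetical stand-in for the real scorer output type.
type ScorerOutput struct{ Scores map[string]float64 }

var errChanged = errors.New("file changed during read")

// readScorerOutput simulates ReadScorerOutput failing because the file
// changed between the existence check and the read (TOCTOU).
func readScorerOutput(path string) (*ScorerOutput, error) {
	return nil, errChanged
}

// mergeResume propagates the read error instead of discarding it, so a
// partial read can no longer silently drop data during resume.
func mergeResume(path string, into map[string]float64) error {
	out, err := readScorerOutput(path)
	if err != nil {
		return fmt.Errorf("resume merge: %w", err)
	}
	for k, v := range out.Scores {
		into[k] = v
	}
	return nil
}

func main() {
	err := mergeResume("scores.json", map[string]float64{})
	fmt.Println(err != nil) // the error now surfaces instead of vanishing
}
```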
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
All 28 commands are now accessible as exported lem.Run* functions.
Prerequisite for CLI framework migration.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Package declares itself as 'ml', so the named import alias is
unnecessary. Go resolves the package name from the declaration,
not the module path.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace inference.LoadModel() with ml.NewMLXBackend(), which wraps
the same Metal model with memory management (SetCacheLimit,
SetMemoryLimit). Replace the raw iter.Seq token loop with
backend.Chat(), which returns Result{Text, Metrics}. Add runtime.GC()
between probes to prevent an incremental memory leak.
Reference: go-ml/cmd/cmd_ab.go memory management pattern.
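The probe loop shape, as a minimal sketch: `runProbe` is a hypothetical stand-in for a `backend.Chat()` call, and the `Result` fields mirror the shapes named above.

```go
package main

import (
	"fmt"
	"runtime"
)

// Result mirrors the Result{Text, Metrics} shape described above;
// the field types here are assumptions.
type Result struct {
	Text    string
	Metrics map[string]float64
}

// runProbe is a hypothetical stand-in for backend.Chat on one probe prompt.
func runProbe(prompt string) Result {
	return Result{Text: "ok", Metrics: map[string]float64{"tokens": 1}}
}

func main() {
	probes := []string{"probe-a", "probe-b", "probe-c"}
	for _, p := range probes {
		res := runProbe(p)
		fmt.Println(p, res.Text)
		// Force a collection between probes so memory released by the
		// backend is reclaimed instead of accumulating across runs.
		runtime.GC()
	}
}
```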
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
CacheLimit (8GB) and MemoryLimit (16GB) in DistillConfig control
mlx.SetCacheLimit/SetMemoryLimit before the model loads. These are
conservative defaults for a 1B model on a 96GB machine.
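The config shape this describes, sketched with the quoted defaults (field types are assumptions):

```go
package main

import "fmt"

// DistillConfig carries the two limits described above; defaults are the
// conservative values quoted for a 1B model on a 96GB machine.
type DistillConfig struct {
	CacheLimit  uint64 // bytes, handed to mlx.SetCacheLimit
	MemoryLimit uint64 // bytes, handed to mlx.SetMemoryLimit
}

func defaultDistillConfig() DistillConfig {
	return DistillConfig{
		CacheLimit:  8 << 30,  // 8 GiB
		MemoryLimit: 16 << 30, // 16 GiB
	}
}

func main() {
	cfg := defaultDistillConfig()
	// In the real flow these are applied before the model loads:
	//   mlx.SetCacheLimit(cfg.CacheLimit)
	//   mlx.SetMemoryLimit(cfg.MemoryLimit)
	fmt.Println(cfg.CacheLimit, cfg.MemoryLimit)
}
```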
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Move training/lem/ (probes, lessons, eval sets) into git so the
full curriculum is publicly releasable. Update .core/ai configs
and distill.go to use repo-relative paths instead of /Volumes/Data/.
Co-Authored-By: Virgil <virgil@lethean.io>
Go client for the LEM distributed inference API (BugSETI/Agentic).
Workers register via Forgejo PAT auth, pull prompt batches, run local
inference (MLX/vLLM/llama.cpp), and submit results. Credits are
tracked as a Phase 1 stub for the Phase 2 blockchain LEM token.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Inspired by BugSETI architecture — system tray with WebView2 windows,
Docker Compose stack (Forgejo + InfluxDB + inference proxy), and
scoring agent integration. Builds as signed native binary on macOS.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Go scoring daemon that polls M3 for unscored LoRA checkpoints,
converts MLX→PEFT, runs 23 binary capability probes via an
OpenAI-compatible API, and pushes results to InfluxDB. Zero Python deps.
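Since each probe is binary, a checkpoint's capability score reduces to the fraction of probes passed. A minimal sketch (the `probeResult` record is hypothetical):

```go
package main

import "fmt"

// probeResult is a hypothetical record for one binary capability probe.
type probeResult struct {
	Name   string
	Passed bool
}

// capabilityScore returns the fraction of probes passed.
func capabilityScore(results []probeResult) float64 {
	if len(results) == 0 {
		return 0
	}
	passed := 0
	for _, r := range results {
		if r.Passed {
			passed++
		}
	}
	return float64(passed) / float64(len(results))
}

func main() {
	// 23 probes, alternating pass/fail for illustration (12 of 23 pass).
	results := make([]probeResult, 23)
	for i := range results {
		results[i] = probeResult{Name: fmt.Sprintf("probe-%02d", i), Passed: i%2 == 0}
	}
	fmt.Printf("%.4f\n", capabilityScore(results))
}
```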
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Ingests benchmark data (content scores, capability scores, training
curves) from JSONL files and mlx_lm logs into InfluxDB. Batched
writes, iteration extraction from checkpoint labels.
Also adds github.com/hupe1980/go-huggingface for future HF sync.
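The iteration extraction can be sketched as a trailing-digits match; the label formats shown are assumptions:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// iterRe matches a trailing run of digits in a checkpoint label,
// e.g. "adapters-0001200" -> 1200.
var iterRe = regexp.MustCompile(`(\d+)$`)

// extractIteration pulls the training iteration out of a checkpoint
// label; ok is false when the label carries no trailing number.
func extractIteration(label string) (n int, ok bool) {
	m := iterRe.FindStringSubmatch(label)
	if m == nil {
		return 0, false
	}
	n, err := strconv.Atoi(m[1])
	if err != nil {
		return 0, false
	}
	return n, true
}

func main() {
	for _, label := range []string{"adapters-0001200", "checkpoint_300", "final"} {
		n, ok := extractIteration(label)
		fmt.Println(label, n, ok)
	}
}
```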
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Vi identity is a separate training concern. Seed conversations now
contain only philosophical/mindfulness content for the R300 calm phase.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Ports conversational_training.py to Go with InfluxDB reporting.
24 built-in seed conversations (Vi identity, philosophy, mindfulness).
Supports extra JSONL files and golden set conversion to chat format.
Also fixes the InfluxDB client to accept 204 No Content on writes.
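The status-check fix amounts to one predicate, sketched here (function name is hypothetical): the InfluxDB v2 write API returns 204 No Content on success, so a client accepting only 200 reports every successful write as a failure.

```go
package main

import (
	"fmt"
	"net/http"
)

// writeOK reports whether an InfluxDB write response is a success.
// 204 No Content is the normal success code for the v2 write API.
func writeOK(status int) bool {
	return status == http.StatusOK || status == http.StatusNoContent
}

func main() {
	fmt.Println(writeOK(200), writeOK(204), writeOK(500))
}
```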
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
All scoring/influx/export/expand logic moves to pkg/lem as an
importable package. main.go is now a thin CLI dispatcher.
This lets new commands import the shared library directly —
ready for converting Python scripts to Go subcommands.
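The thin-dispatcher shape, sketched with hypothetical stand-ins for the exported lem.Run* entry points:

```go
package main

import (
	"fmt"
	"os"
)

// Hypothetical stand-ins for exported pkg/lem entry points (lem.Run*).
func runScore(args []string) error  { fmt.Println("score", args); return nil }
func runExport(args []string) error { fmt.Println("export", args); return nil }

// commands maps subcommand names to lem.Run*-style functions, keeping
// main.go a thin dispatcher with no scoring/export logic of its own.
var commands = map[string]func([]string) error{
	"score":  runScore,
	"export": runExport,
}

func dispatch(args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("usage: lem <command> [args]")
	}
	run, ok := commands[args[0]]
	if !ok {
		return fmt.Errorf("unknown command %q", args[0])
	}
	return run(args[1:])
}

func main() {
	if err := dispatch(os.Args[1:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```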
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>