feat: add model discovery test and update TODO

Discover() finds 20 models across /Volumes/Data/lem — Gemma3 (1B/4B/
12B/27B), DeepSeek R1, Llama 3.1, GPT-OSS. Mark quantisation awareness
and inference metrics complete in TODO.

Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Snider 2026-02-19 23:37:37 +00:00
parent ceb966b66b
commit dd49b4afb6
2 changed files with 27 additions and 2 deletions


@@ -46,8 +46,8 @@ Implementation plan: `docs/plans/2026-02-19-backend-abstraction-plan.md`
 ## Phase 5: Ecosystem Integration (Virgil wishlist)
 - [x] **Batch inference API** — ✅ `Classify` (prefill-only, fast path) and `BatchGenerate` (autoregressive) implemented. Added `ForwardMasked` to InternalModel interface, threaded attention masks through Gemma3 and Qwen3 decoders. Mask: `[N, 1, L, L]` combining causal + padding (0=attend, -inf=ignore). Right-padded, sorted by length. Gemma3-1B 4-bit: **152 prompts/s** classify (4 prompts), BatchGenerate produces coherent per-prompt output. Types (`ClassifyResult`, `BatchResult`, `WithLogits`) in go-inference. 6 new tests (3 mask unit, 3 model). Design doc: `docs/plans/2026-02-19-batch-inference-design.md`.
-- [ ] **Inference metrics** — Expose tokens/sec, peak memory, GPU utilisation as structured data. LEM Lab dashboard and go-ai scoring engine both want this.
-- [ ] **Model quantisation awareness** — MLX supports 4-bit and 8-bit quantised models. The loader already handles quantised safetensors (GroupSize, Bits in config).
+- [x] **Inference metrics** — ✅ `GenerateMetrics` type in go-inference with `Metrics()` on `TextModel`. Captures: prefill/decode timing, token counts, throughput (tok/s), peak and active GPU memory. Instrumented Generate, Classify, and BatchGenerate. Gemma3-1B 4-bit: prefill 246 tok/s, decode 82 tok/s, peak 6.2 GB. 1 new test.
+- [x] **Model quantisation awareness** — ✅ `ModelInfo` type in go-inference with `Info()` on `TextModel`. Exposes architecture, vocab size, layer count, hidden dimension, quantisation bits and group size. Loader already handles quantised safetensors transparently. 1 new test.
 - [ ] **Embed-friendly model loading** — Add `Discover(baseDir)` that scans for available models and returns metadata.
 - [ ] **mlxlm/ backend** — Python subprocess wrapper via `core/go/pkg/process`. Implements `mlx.Backend` for mlx_lm compatibility.


@@ -443,6 +443,31 @@ func TestLlama_Inference(t *testing.T) {
 	}
 }
+
+// --- Discover tests ---
+
+func TestDiscover(t *testing.T) {
+	// Scan the safetensors directory for available models.
+	baseDir := "/Volumes/Data/lem"
+	if _, err := os.Stat(baseDir); err != nil {
+		t.Skipf("model directory not available: %s", baseDir)
+	}
+	models, err := inference.Discover(baseDir)
+	if err != nil {
+		t.Fatalf("Discover: %v", err)
+	}
+	if len(models) == 0 {
+		t.Skip("no models found")
+	}
+	for _, m := range models {
+		t.Logf("Found: %s (type=%s, quant=%d-bit, files=%d)",
+			m.Path, m.ModelType, m.QuantBits, m.NumFiles)
+	}
+	t.Logf("Total: %d models discovered", len(models))
+}
+
+// --- ModelInfo tests ---
+
 func TestModelInfo(t *testing.T) {