From dd49b4afb6cb3d75d65c9546151fa9763ea0db67 Mon Sep 17 00:00:00 2001
From: Snider
Date: Thu, 19 Feb 2026 23:37:37 +0000
Subject: [PATCH] feat: add model discovery test and update TODO
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Discover() finds 20 models across /Volumes/Data/lem — Gemma3
(1B/4B/12B/27B), DeepSeek R1, Llama 3.1, GPT-OSS. Mark quantisation
awareness and inference metrics complete in TODO.

Co-Authored-By: Virgil
Co-Authored-By: Claude Opus 4.6
---
 TODO.md     |  4 ++--
 mlx_test.go | 25 +++++++++++++++++++++++++
 2 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/TODO.md b/TODO.md
index ac9a404..6beb151 100644
--- a/TODO.md
+++ b/TODO.md
@@ -46,8 +46,8 @@ Implementation plan: `docs/plans/2026-02-19-backend-abstraction-plan.md`
 
 ## Phase 5: Ecosystem Integration (Virgil wishlist)
 
 - [x] **Batch inference API** — ✅ `Classify` (prefill-only, fast path) and `BatchGenerate` (autoregressive) implemented. Added `ForwardMasked` to InternalModel interface, threaded attention masks through Gemma3 and Qwen3 decoders. Mask: `[N, 1, L, L]` combining causal + padding (0=attend, -inf=ignore). Right-padded, sorted by length. Gemma3-1B 4-bit: **152 prompts/s** classify (4 prompts), BatchGenerate produces coherent per-prompt output. Types (`ClassifyResult`, `BatchResult`, `WithLogits`) in go-inference. 6 new tests (3 mask unit, 3 model). Design doc: `docs/plans/2026-02-19-batch-inference-design.md`.
-- [ ] **Inference metrics** — Expose tokens/sec, peak memory, GPU utilisation as structured data. LEM Lab dashboard and go-ai scoring engine both want this.
-- [ ] **Model quantisation awareness** — MLX supports 4-bit and 8-bit quantised models. The loader already handles quantised safetensors (GroupSize, Bits in config).
+- [x] **Inference metrics** — ✅ `GenerateMetrics` type in go-inference with `Metrics()` on `TextModel`. Captures: prefill/decode timing, token counts, throughput (tok/s), peak and active GPU memory. Instrumented Generate, Classify, and BatchGenerate. Gemma3-1B 4-bit: prefill 246 tok/s, decode 82 tok/s, peak 6.2 GB. 1 new test.
+- [x] **Model quantisation awareness** — ✅ `ModelInfo` type in go-inference with `Info()` on `TextModel`. Exposes architecture, vocab size, layer count, hidden dimension, quantisation bits and group size. Loader already handles quantised safetensors transparently. 1 new test.
 - [ ] **Embed-friendly model loading** — Add `Discover(baseDir)` that scans for available models and returns metadata.
 - [ ] **mlxlm/ backend** — Python subprocess wrapper via `core/go/pkg/process`. Implements `mlx.Backend` for mlx_lm compatibility.

diff --git a/mlx_test.go b/mlx_test.go
index a00eed7..bfa0919 100644
--- a/mlx_test.go
+++ b/mlx_test.go
@@ -443,6 +443,31 @@ func TestLlama_Inference(t *testing.T) {
 	}
 }
 
+// --- Discover tests ---
+
+func TestDiscover(t *testing.T) {
+	// Scan the safetensors directory for available models.
+	baseDir := "/Volumes/Data/lem"
+	if _, err := os.Stat(baseDir); err != nil {
+		t.Skipf("model directory not available: %s", baseDir)
+	}
+
+	models, err := inference.Discover(baseDir)
+	if err != nil {
+		t.Fatalf("Discover: %v", err)
+	}
+
+	if len(models) == 0 {
+		t.Skip("no models found")
+	}
+
+	for _, m := range models {
+		t.Logf("Found: %s (type=%s, quant=%d-bit, files=%d)",
+			m.Path, m.ModelType, m.QuantBits, m.NumFiles)
+	}
+	t.Logf("Total: %d models discovered", len(models))
+}
+
 // --- ModelInfo tests ---
 
 func TestModelInfo(t *testing.T) {