Commit graph

9 commits

Author · SHA1 · Message · Date
Snider
dd49b4afb6 feat: add model discovery test and update TODO
Discover() finds 20 models across /Volumes/Data/lem — Gemma3 (1B/4B/
12B/27B), DeepSeek R1, Llama 3.1, GPT-OSS. Mark quantisation awareness
and inference metrics complete in TODO.

Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 23:37:37 +00:00
Snider
ceb966b66b feat(metal): expose model metadata via Info()
Return architecture, vocab size, layer count, hidden dimension, and
quantisation config (bits + group size) for loaded models.

Gemma3-1B 4-bit: arch=gemma3, vocab=262144, layers=26, hidden=1152,
quant=4-bit/group64.

Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 23:36:23 +00:00
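The metadata fields listed above can be pictured as a struct like the following. The type and field names are illustrative stand-ins, not the actual go-mlx `Info()` return type; only the set of fields comes from the commit.

```go
package main

import "fmt"

// ModelInfo is a hypothetical shape for the metadata Info() exposes.
type ModelInfo struct {
	Architecture string
	VocabSize    int
	Layers       int
	HiddenDim    int
	QuantBits    int // 0 when the model is not quantised
	QuantGroup   int
}

// String renders the metadata in the same style the commit quotes.
func (i ModelInfo) String() string {
	s := fmt.Sprintf("arch=%s, vocab=%d, layers=%d, hidden=%d",
		i.Architecture, i.VocabSize, i.Layers, i.HiddenDim)
	if i.QuantBits > 0 {
		s += fmt.Sprintf(", quant=%d-bit/group%d", i.QuantBits, i.QuantGroup)
	}
	return s
}

func main() {
	info := ModelInfo{"gemma3", 262144, 26, 1152, 4, 64}
	fmt.Println(info) // the Gemma3-1B example from the commit message
}
```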
Snider
a44e9f5789 feat(metal): add inference metrics (timing, throughput, memory)
Instrument Generate, Classify, and BatchGenerate with:
- Prefill/decode timing (separate phases)
- Token counts (prompt + generated)
- Throughput (tok/s for each phase)
- Peak and active GPU memory via Metal allocator

Wire through metalAdapter.Metrics() to go-inference interface.
Test validates all fields populated after generation.

Gemma3-1B 4-bit on M3 Ultra: prefill 246 tok/s, decode 82 tok/s,
peak 6.2 GB GPU memory.

Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 23:34:40 +00:00
Snider
5644857034 feat(metal): implement batch inference (Classify, BatchGenerate)
- Add ForwardMasked to InternalModel, Gemma3 and Qwen3 architectures
- Thread attention mask through decoder layers and SDPA calls
- Use ScaledDotProductAttentionWithMask when explicit mask provided
- Create batch.go with padded batching, mask construction, Classify
  (prefill-only) and BatchGenerate (autoregressive) implementations
- Wire Classify/BatchGenerate through metalAdapter to go-inference
- Tests: mask unit tests (shape, values, multi-batch), Classify with
  4 prompts (152 prompts/s), WithLogits, BatchGenerate with 2 prompts

Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 23:28:15 +00:00
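The "padded batching, mask construction" step above can be sketched as follows: pad every sequence to the batch maximum and emit an additive attention mask (0 for real tokens, a large negative value for padding, so masked positions vanish after softmax). The padding side and mask layout are assumptions of this sketch; the real batch.go may arrange them differently.

```go
package main

import "fmt"

const maskNegInf = float32(-1e9) // added to logits so padded keys score ~0 after softmax

// padBatch left-pads token sequences to a common length and builds the
// matching additive attention mask, one row per sequence.
func padBatch(seqs [][]int, padID int) (padded [][]int, mask [][]float32) {
	maxLen := 0
	for _, s := range seqs {
		if len(s) > maxLen {
			maxLen = len(s)
		}
	}
	for _, s := range seqs {
		row := make([]int, maxLen)
		m := make([]float32, maxLen)
		pad := maxLen - len(s)
		for i := 0; i < pad; i++ {
			row[i] = padID
			m[i] = maskNegInf
		}
		copy(row[pad:], s)
		padded = append(padded, row)
		mask = append(mask, m)
	}
	return padded, mask
}

func main() {
	padded, mask := padBatch([][]int{{1, 2, 3}, {4}}, 0)
	fmt.Println(padded) // second row left-padded to length 3
	fmt.Println(mask)   // padding positions carry the large negative bias
}
```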
Snider
19c4823b04 feat(metal): add Llama 3 model support (Llama 3.1 8B validated)
Llama shares the Qwen3 loader (same decoder: pre-norm, SwiGLU, GQA).
Model type now detected from config.json model_type field instead of
weight-only heuristic. Llama 3 chat template and EOS token added.
Model tests now clear Metal GPU cache between runs.

Llama 3.1 8B Instruct 4-bit: 30 tok/s on M3 Ultra.

Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 23:06:43 +00:00
Snider
535b04d5d6 feat(metal): add Qwen2 model support (DeepSeek R1 validated)
Qwen2 and Qwen3 share the same architecture — Qwen3 adds Q/K RMS
normalization which Qwen2 lacks. The loader auto-detects the variant
from weight presence and reports the correct ModelType().

- Add "qwen2" to architecture dispatch in model.go
- Make Q/K norm optional in attention forward (nil-safe check)
- Store detected model type on Qwen3Model struct
- Add "qwen2" to chat template routing
- DeepSeek R1 7B (4-bit): 27 tok/s on M3 Ultra
- 2 new tests: inference + chat

Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 21:55:56 +00:00
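The "nil-safe check" above is the classic optional-layer pattern: Qwen3 populates the Q/K norm, Qwen2 leaves it nil, and the shared forward path skips it when absent. The types below are simplified stand-ins for the real go-mlx tensors and norm layer.

```go
package main

import "fmt"

// Norm stands in for an RMSNorm layer (here just a scalar scale).
type Norm struct{ scale float32 }

func (n *Norm) Apply(x []float32) []float32 {
	out := make([]float32, len(x))
	for i, v := range x {
		out[i] = v * n.scale
	}
	return out
}

// applyOptionalNorm is the nil-safe pattern: Qwen2 passes nil and the
// input flows through unchanged; Qwen3 supplies a norm and it runs.
func applyOptionalNorm(n *Norm, x []float32) []float32 {
	if n == nil {
		return x
	}
	return n.Apply(x)
}

func main() {
	x := []float32{1, 2}
	fmt.Println(applyOptionalNorm(nil, x))      // Qwen2 path: unchanged
	fmt.Println(applyOptionalNorm(&Norm{2}, x)) // Qwen3 path: normalised
}
```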
Snider
18e8dca9f8 feat(metal): validate Gemma3-1B inference end-to-end (Phase 2)
- Fix model_type "gemma3_text" not matched in architecture dispatch
- Fix GPT-2 BPE false detection on large SentencePiece vocabs (Gemma3
  262K vocab contains Ġ but uses ▁ for spaces — check "Ġthe" not bare "Ġ")
- Add TestGemma3_1B_Inference: greedy decode, 46 tok/s, coherent output
- Add TestGemma3_1B_Chat: validates chat template formatting
- Add TestGemma3_1B_ContextCancel: validates ctx.Done() stops generation

4-bit quantised Gemma3-1B loads in ~700ms, generates at 46 tok/s on M3 Ultra.

Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 21:44:28 +00:00
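The tokenizer-detection fix above can be sketched as follows: instead of asking whether the vocab contains the byte-level marker `Ġ` at all (which a 262K-entry SentencePiece vocab can accidentally include), probe for a merged GPT-2 BPE token such as `"Ġthe"`. The exact probe token is an assumption of this sketch.

```go
package main

import "fmt"

// looksLikeGPT2BPE reports whether a vocab appears to be GPT-2 style
// byte-level BPE. Probing for "Ġthe" rather than a bare "Ġ" avoids the
// false positive the commit describes on large SentencePiece vocabs.
func looksLikeGPT2BPE(vocab map[string]int) bool {
	_, ok := vocab["Ġthe"]
	return ok
}

func main() {
	gpt2 := map[string]int{"Ġthe": 262, "Ġof": 286}
	// SentencePiece-style vocab: spaces use ▁, but a stray Ġ entry exists.
	spm := map[string]int{"▁the": 279, "Ġ": 5011}
	fmt.Println(looksLikeGPT2BPE(gpt2)) // detected as GPT-2 BPE
	fmt.Println(looksLikeGPT2BPE(spm))  // correctly rejected
}
```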
Snider
bff97ccf19 feat(api): migrate to go-inference shared interfaces
Replace local TextModel, Backend, Token, Message, and option types with
forge.lthn.ai/core/go-inference. go-mlx is now a pure backend that
registers "metal" into the shared inference registry via init().

Deleted: textmodel.go, options.go, backend.go
Updated: register_metal.go (implements inference.Backend with Available()),
  mlx_test.go (uses inference.* types, 4 new tests), go.mod,
  internal/metal/generate.go (added RepeatPenalty)

159 tests passing (148 internal/metal + 11 root).

Co-Authored-By: Virgil <virgil@lethean.io>
2026-02-19 20:15:42 +00:00
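The registry pattern above, where go-mlx becomes a pure backend that registers "metal" from `init()`, looks roughly like this. The `Backend` interface and `Register` function here are illustrative stand-ins, not the real forge.lthn.ai/core/go-inference API.

```go
package main

import "fmt"

// Backend is a minimal stand-in for the shared inference interface.
type Backend interface {
	Available() bool
}

var registry = map[string]Backend{}

// Register adds a backend under a name; last registration wins.
func Register(name string, b Backend) { registry[name] = b }

type metalBackend struct{}

// Available would probe for Metal support; always true in this sketch.
func (metalBackend) Available() bool { return true }

// init runs on package import, so merely importing the backend package
// makes "metal" visible in the shared registry.
func init() {
	Register("metal", metalBackend{})
}

func main() {
	b, ok := registry["metal"]
	fmt.Println(ok, b.Available())
}
```

The point of the `init()` hook is that callers depend only on the shared interface package; importing the go-mlx module for side effects is enough to make the "metal" backend selectable.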
Snider
eb8dee31bf test(api): integration tests for public LoadModel + Generate
Tests: MetalAvailable, DefaultBackend, GetBackend, LoadModel error
paths, options, defaults. Model-dependent test skips when model
not available on disk.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-02-19 20:05:14 +00:00