Instrument Generate, Classify, and BatchGenerate with:
- Prefill/decode timing (separate phases)
- Token counts (prompt + generated)
- Throughput (tok/s for each phase)
- Peak and active GPU memory via Metal allocator
Wire through metalAdapter.Metrics() to the go-inference interface.
A test validates that all fields are populated after generation.
Gemma3-1B 4-bit on M3 Ultra: prefill 246 tok/s, decode 82 tok/s,
peak 6.2 GB GPU memory.
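
A rough sketch of the shape such a metrics value can take; the field
and method names here are illustrative, not the actual go-inference
surface:

    package metrics

    import "time"

    // GenerationMetrics captures one generation run. Field names are
    // assumptions for illustration.
    type GenerationMetrics struct {
        PromptTokens    int           // tokens consumed during prefill
        GeneratedTokens int           // tokens produced during decode
        PrefillTime     time.Duration // wall time of the prefill phase
        DecodeTime      time.Duration // wall time of the decode phase
        PeakMemory      uint64        // peak GPU bytes per the Metal allocator
        ActiveMemory    uint64        // active GPU bytes after generation
    }

    // PrefillTokSec returns prompt-processing throughput in tok/s.
    func (m GenerationMetrics) PrefillTokSec() float64 {
        if m.PrefillTime <= 0 {
            return 0
        }
        return float64(m.PromptTokens) / m.PrefillTime.Seconds()
    }

    // DecodeTokSec returns generation throughput in tok/s.
    func (m GenerationMetrics) DecodeTokSec() float64 {
        if m.DecodeTime <= 0 {
            return 0
        }
        return float64(m.GeneratedTokens) / m.DecodeTime.Seconds()
    }
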
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

- Add ForwardMasked to InternalModel, Gemma3 and Qwen3 architectures
- Thread attention mask through decoder layers and SDPA calls
- Use ScaledDotProductAttentionWithMask when explicit mask provided
- Create batch.go with padded batching, mask construction (sketched
below), Classify (prefill-only) and BatchGenerate (autoregressive)
implementations
- Wire Classify/BatchGenerate through metalAdapter to go-inference
- Tests: mask unit tests (shape, values, multi-batch), Classify with
4 prompts (152 prompts/s), WithLogits, BatchGenerate with 2 prompts
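
Roughly, the padded-batch mask construction (a sketch; the padding
side, function name, and shapes are assumptions, not the actual
batch.go code):

    package batch

    // BuildPaddedBatch right-pads token sequences to a common length
    // and returns an additive attention mask: 0 for real tokens, a
    // large negative value for padding, so softmax gives padded
    // positions ~zero weight.
    func BuildPaddedBatch(seqs [][]int32, padID int32) (tokens [][]int32, mask [][]float32) {
        maxLen := 0
        for _, s := range seqs {
            if len(s) > maxLen {
                maxLen = len(s)
            }
        }
        const negInf = float32(-1e9)
        tokens = make([][]int32, len(seqs))
        mask = make([][]float32, len(seqs))
        for i, s := range seqs {
            row := make([]int32, maxLen)
            m := make([]float32, maxLen) // zero-initialized: unmasked
            copy(row, s)
            for j := len(s); j < maxLen; j++ {
                row[j] = padID
                m[j] = negInf // padding position: masked out
            }
            tokens[i] = row
            mask[i] = m
        }
        return tokens, mask
    }
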
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Llama shares the Qwen3 loader (same decoder: pre-norm, SwiGLU, GQA).
Model type is now detected from the config.json model_type field
instead of a weight-only heuristic. Llama 3 chat template and EOS
token added.
Model tests now clear Metal GPU cache between runs.
Llama 3.1 8B Instruct 4-bit: 30 tok/s on M3 Ultra.
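
The detection step, sketched (the struct and switch arms are
assumptions about the loader, not its exact code):

    package model

    import (
        "encoding/json"
        "fmt"
        "os"
        "path/filepath"
    )

    type hfConfig struct {
        ModelType string `json:"model_type"`
    }

    // DetectArch reads config.json from a model directory and maps
    // its model_type field to an architecture loader.
    func DetectArch(modelDir string) (string, error) {
        raw, err := os.ReadFile(filepath.Join(modelDir, "config.json"))
        if err != nil {
            return "", err
        }
        var cfg hfConfig
        if err := json.Unmarshal(raw, &cfg); err != nil {
            return "", err
        }
        switch cfg.ModelType {
        case "llama", "qwen2", "qwen3":
            // Llama and Qwen2 reuse the Qwen3 decoder loader.
            return "qwen3", nil
        case "gemma3":
            return "gemma3", nil
        default:
            return "", fmt.Errorf("unsupported model_type %q", cfg.ModelType)
        }
    }
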
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Qwen2 and Qwen3 share the same architecture, except that Qwen3 adds
Q/K RMS normalization, which Qwen2 lacks. The loader auto-detects
the variant
from weight presence and reports the correct ModelType().
- Add "qwen2" to architecture dispatch in model.go
- Make Q/K norm optional in attention forward (nil-safe check; see
the sketch after this list)
- Store detected model type on Qwen3Model struct
- Add "qwen2" to chat template routing
- DeepSeek R1 7B (4-bit): 27 tok/s on M3 Ultra
- 2 new tests: inference + chat
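
The nil-safe check, sketched (type and method names are illustrative):

    package attn

    // RMSNorm stands in for the real normalization layer.
    type RMSNorm struct{}

    func (n *RMSNorm) Forward(x []float32) []float32 {
        // real normalization elided
        return x
    }

    type Attention struct {
        QNorm *RMSNorm // non-nil for Qwen3, nil for Qwen2
        KNorm *RMSNorm
    }

    // applyQKNorm normalizes queries and keys only when the layers
    // exist, letting Qwen2 (no Q/K norm) and Qwen3 share one forward
    // path.
    func (a *Attention) applyQKNorm(q, k []float32) ([]float32, []float32) {
        if a.QNorm != nil {
            q = a.QNorm.Forward(q)
        }
        if a.KNorm != nil {
            k = a.KNorm.Forward(k)
        }
        return q, k
    }
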
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Replace local TextModel, Backend, Token, Message, and option types with
forge.lthn.ai/core/go-inference. go-mlx is now a pure backend that
registers "metal" into the shared inference registry via init().
Deleted: textmodel.go, options.go, backend.go
Updated: register_metal.go (implements inference.Backend with Available()),
mlx_test.go (uses inference.* types, 4 new tests), go.mod,
internal/metal/generate.go (added RepeatPenalty)
159 tests passing (148 internal/metal + 11 root).
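
The self-registration pattern, sketched with a local stand-in for the
shared registry (the real Backend interface and registry live in
go-inference; the names here are assumptions):

    package metal

    import "sync"

    // Backend stands in for the shared inference.Backend interface.
    type Backend interface {
        Available() bool
    }

    var (
        mu       sync.RWMutex
        registry = map[string]Backend{}
    )

    // Register adds a backend under a name.
    func Register(name string, b Backend) {
        mu.Lock()
        defer mu.Unlock()
        registry[name] = b
    }

    type metalBackend struct{}

    // Available reports whether a Metal device is usable; the real
    // implementation probes the GPU, this stub always says no.
    func (metalBackend) Available() bool { return false }

    // init runs on import, so importing the backend package is all it
    // takes to make "metal" resolvable by name from the registry.
    func init() {
        Register("metal", metalBackend{})
    }
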
Co-Authored-By: Virgil <virgil@lethean.io>

Tests: MetalAvailable, DefaultBackend, GetBackend, LoadModel error
paths, options, defaults. Model-dependent tests skip when the model
is not available on disk.
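
The skip pattern, roughly (the path and env var are hypothetical):

    package mlx_test

    import (
        "os"
        "testing"
    )

    func TestGenerateWithLocalModel(t *testing.T) {
        dir := os.Getenv("MLX_MODEL_DIR") // hypothetical override
        if dir == "" {
            dir = "testdata/gemma3-1b-4bit"
        }
        if _, err := os.Stat(dir); err != nil {
            t.Skipf("model not available on disk at %s: %v", dir, err)
        }
        // load the model and exercise generation here
    }
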
Co-Authored-By: Virgil <virgil@lethean.io>