Native Apple Metal GPU inference via mlx-c bindings
Llama shares the Qwen3 loader (same decoder: pre-norm, SwiGLU, GQA). The model type is now detected from the `model_type` field in config.json instead of a weight-only heuristic. Adds the Llama 3 chat template and EOS token. Model tests now clear the Metal GPU cache between runs.

Llama 3.1 8B Instruct 4-bit: 30 tok/s on M3 Ultra.

Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
| Changed paths |
|---|
| cpp |
| docs/plans |
| internal/metal |
| .gitignore |
| CLAUDE.md |
| CMakeLists.txt |
| FINDINGS.md |
| go.mod |
| mlx.go |
| mlx_stub.go |
| mlx_test.go |
| register_metal.go |
| TODO.md |