Native Apple Metal GPU inference via mlx-c bindings
Snider e3fbc221ce feat(metal): add mixed precision training via LoRAConfig.DType (Phase 3)
LoRA A/B matrices can now be created in BFloat16 or Float16 for mixed
precision training. DType field added to LoRAConfig, passed through
ApplyLoRA and NewLoRALinear. MLX auto-promotes for cross-dtype ops.
BFloat16 validated: loss 7.15→6.29, matches Float32 accuracy with
half param memory.

Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 23:13:49 +00:00
| Name | Last commit | Date |
| --- | --- | --- |
| cpp | fix(metal): address 4 minor code review items | 2026-02-19 21:36:40 +00:00 |
| docs/plans | fix(metal): address 4 minor code review items | 2026-02-19 21:36:40 +00:00 |
| internal/metal | feat(metal): add mixed precision training via LoRAConfig.DType (Phase 3) | 2026-02-19 23:13:49 +00:00 |
| .gitignore | chore: gitignore dist/ (CMake install output) | 2026-02-19 19:30:23 +00:00 |
| CLAUDE.md | feat(api): migrate to go-inference shared interfaces | 2026-02-19 20:15:42 +00:00 |
| CMakeLists.txt | feat: extract go-mlx from go-ai as standalone Metal inference package | 2026-02-19 17:57:37 +00:00 |
| FINDINGS.md | fix(metal): address 4 minor code review items | 2026-02-19 21:36:40 +00:00 |
| go.mod | feat(api): migrate to go-inference shared interfaces | 2026-02-19 20:15:42 +00:00 |
| mlx.go | feat(api): migrate to go-inference shared interfaces | 2026-02-19 20:15:42 +00:00 |
| mlx_stub.go | feat: extract go-mlx from go-ai as standalone Metal inference package | 2026-02-19 17:57:37 +00:00 |
| mlx_test.go | feat(metal): add Llama 3 model support (Llama 3.1 8B validated) | 2026-02-19 23:06:43 +00:00 |
| register_metal.go | fix(metal): address 3 critical code review items | 2026-02-19 21:24:10 +00:00 |
| TODO.md | feat(metal): add mixed precision training via LoRAConfig.DType (Phase 3) | 2026-02-19 23:13:49 +00:00 |