go-mlx/internal
Snider e3fbc221ce feat(metal): add mixed precision training via LoRAConfig.DType (Phase 3)
LoRA A/B matrices can now be created in BFloat16 or Float16 for mixed-precision
training. A new DType field on LoRAConfig is passed through ApplyLoRA and
NewLoRALinear; MLX automatically promotes operands in cross-dtype ops.
BFloat16 validated on a training run: loss 7.15→6.29, matching Float32
accuracy with half the parameter memory.
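The plumbing described above can be sketched as follows. This is an illustrative stand-in, not the actual go-mlx code: the Dtype constants, the LoRAConfig fields other than DType, and the NewLoRALinear signature are assumptions made for the sketch; only the names LoRAConfig, DType, ApplyLoRA, and NewLoRALinear come from the commit message.

```go
package main

import "fmt"

// Dtype mirrors an MLX element type. These names are illustrative
// assumptions, not the actual go-mlx constants.
type Dtype int

const (
	Float32 Dtype = iota
	Float16
	BFloat16
)

func (d Dtype) String() string {
	return [...]string{"float32", "float16", "bfloat16"}[d]
}

// LoRAConfig sketches the config described in the commit: the new
// DType field selects the precision of the LoRA A/B matrices.
type LoRAConfig struct {
	Rank  int
	Alpha float32
	DType Dtype // precision for the A/B adapter matrices
}

// NewLoRALinear is a hypothetical stand-in for the real constructor:
// it would allocate A (rank x in) and B (out x rank) in cfg.DType,
// while the frozen base weights stay in their original precision.
func NewLoRALinear(in, out int, cfg LoRAConfig) string {
	return fmt.Sprintf("A[%dx%d]/%s B[%dx%d]/%s",
		cfg.Rank, in, cfg.DType, out, cfg.Rank, cfg.DType)
}

func main() {
	cfg := LoRAConfig{Rank: 8, Alpha: 16, DType: BFloat16}
	fmt.Println(NewLoRALinear(512, 512, cfg))
}
```

Because the base weights remain Float32 while the adapters are BFloat16, the forward pass mixes dtypes; per the commit, MLX's automatic promotion handles the cross-dtype matmuls without explicit casts.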

Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 23:13:49 +00:00