Remove mlx/ directory (now lives at forge.lthn.ai/core/go-mlx).
Update ml/backend_mlx.go imports to reference the new module.
Add replace directive for local development.
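The replace directive could look like the following go.mod fragment; the module path is from the commit, but the local checkout path `../go-mlx` is an assumption about the developer's layout:

```go
// go.mod — sketch only; the relative path to the local clone is hypothetical.
replace forge.lthn.ai/core/go-mlx => ../go-mlx
```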
Co-Authored-By: Virgil <virgil@lethean.io>
LoRA: low-rank adaptation with trainable A/B matrices, Kaiming-normal
init, and safetensors save/load. AdamW: decoupled-weight-decay optimizer
with positional moment tracking for gradient-replaced params.
14 tests passing, including an end-to-end LoRA+AdamW training loop.
Co-Authored-By: Virgil <virgil@lethean.io>