- Fix model_type "gemma3_text" not matched in architecture dispatch
- Fix GPT-2 BPE false detection on large SentencePiece vocabs (Gemma3's 262K vocab contains Ġ as a token but uses ▁ for spaces; check for "Ġthe", not a bare "Ġ")
- Add TestGemma3_1B_Inference: greedy decode, 46 tok/s, coherent output
- Add TestGemma3_1B_Chat: validates chat template formatting
- Add TestGemma3_1B_ContextCancel: validates ctx.Done() stops generation

4-bit quantised Gemma3-1B loads in ~700ms and generates at 46 tok/s on an M3 Ultra.

Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
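The tokenizer false-detection fix can be sketched as follows. This is a hedged illustration, not the code from this change: `detectTokenizer` and the sample vocab maps are hypothetical. The point it demonstrates is that a very large SentencePiece vocab may contain the byte-level BPE space marker "Ġ" as an ordinary token, so probing for the bare marker misclassifies it; probing for a common merged token such as "Ġthe" only matches genuine GPT-2-style BPE vocabs.

```go
package main

import "fmt"

// detectTokenizer guesses the tokenizer family from the vocabulary.
// Probing for a merged token like "Ġthe" is reliable because only
// GPT-2-style byte-level BPE vocabs contain it, whereas a bare "Ġ"
// can appear as a stray token in large SentencePiece vocabs.
func detectTokenizer(vocab map[string]int) string {
	if _, ok := vocab["Ġthe"]; ok {
		return "gpt2-bpe"
	}
	if _, ok := vocab["▁the"]; ok {
		return "sentencepiece"
	}
	return "unknown"
}

func main() {
	// A SentencePiece-style vocab that happens to contain a bare "Ġ",
	// as Gemma3's 262K-entry vocab does.
	spVocab := map[string]int{"▁the": 1, "Ġ": 2}
	fmt.Println(detectTokenizer(spVocab)) // sentencepiece

	bpeVocab := map[string]int{"Ġthe": 1}
	fmt.Println(detectTokenizer(bpeVocab)) // gpt2-bpe
}
```

With the old check (bare "Ġ"), the first vocab would have been misclassified as GPT-2 BPE.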