Return architecture, vocab size, layer count, hidden dimension, and quantisation config (bits + group size) for loaded models. Gemma3-1B 4-bit: arch=gemma3, vocab=262144, layers=26, hidden=1152, quant=4-bit/group64.

Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
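The metadata described above can be sketched as a small accessor. This is a minimal illustration, not the project's actual API: the attribute names (`config`, `quant`) and key names are assumptions; only the Gemma3-1B 4-bit values come from the description.

```python
def model_info(model):
    """Collect architecture and quantisation details from a loaded model.

    `model` is assumed to expose `config` and `quant` dicts; both the
    attribute names and the return shape are illustrative assumptions.
    """
    return {
        "arch": model.config["model_type"],
        "vocab": model.config["vocab_size"],
        "layers": model.config["num_hidden_layers"],
        "hidden": model.config["hidden_size"],
        "quant": {
            "bits": model.quant["bits"],
            "group_size": model.quant["group_size"],
        },
    }


# Stand-in object with the Gemma3-1B 4-bit values from the description.
class _Demo:
    config = {
        "model_type": "gemma3",
        "vocab_size": 262144,
        "num_hidden_layers": 26,
        "hidden_size": 1152,
    }
    quant = {"bits": 4, "group_size": 64}


info = model_info(_Demo())
```

With the stand-in values, `info` reproduces the summary line: arch=gemma3, vocab=262144, layers=26, hidden=1152, quant=4-bit/group64.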