Claude | 68bc7300aa | 2026-02-19 19:57:14 +00:00

docs: Phase 0 complete — environment validated, llama-server built

- ROCm 7.2, gfx1100 (corrected from gfx1101), kernel 6.17
- llama-server built with HIP from llama.cpp 11c325c
- Gemma3-4B baseline: 109 tok/s decode, 396 tok/s prefill
- Critical: iGPU crash requires HIP_VISIBLE_DEVICES=0
- All Phase 0 tasks marked done

Co-Authored-By: Virgil <virgil@lethean.io>
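The iGPU workaround noted above can be sketched as a launch wrapper. This is a minimal sketch, not from the repo: the model path, `-ngl` value, and port are placeholders, and the actual `llama-server` invocation is shown rather than executed.

```shell
# Hide the iGPU from ROCm so only the discrete GPU (device 0) is
# visible to HIP; without this, llama-server crashes probing the iGPU.
export HIP_VISIBLE_DEVICES=0

# Illustrative launch (model path, layer count, and port are placeholders):
echo ./llama-server -m models/gemma-3-4b.gguf -ngl 99 --port 8080
```

`HIP_VISIBLE_DEVICES` takes a comma-separated list of device indices, so on a machine where the discrete GPU enumerates second it would be `1` instead of `0`.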
Snider | aa42cff417 | 2026-02-19 19:39:40 +00:00

feat: scaffold go-rocm AMD GPU inference package

Implements inference.Backend via llama-server subprocess (llama.cpp + HIP/ROCm).
Targets RX 7800 XT (gfx1101, RDNA 3, 16GB VRAM).

Includes:
- Backend registration with build tags (linux/amd64)
- Stub backend.go with llama-server lifecycle outline
- CLAUDE.md with build instructions for llama.cpp + ROCm
- TODO.md with 5-phase task queue
- FINDINGS.md with hardware specs, VRAM budget, design rationale

Co-Authored-By: Virgil <virgil@lethean.io>