go-rocm

AMD ROCm GPU inference for Linux. llama.cpp + HIP backend for RDNA 3.

Latest commit 68bc7300aa by Claude:
docs: Phase 0 complete — environment validated, llama-server built
- ROCm 7.2, gfx1100 (corrected from gfx1101), kernel 6.17
- llama-server built with HIP from llama.cpp 11c325c
- Gemma3-4B baseline: 109 tok/s decode, 396 tok/s prefill
- Critical: iGPU crash requires HIP_VISIBLE_DEVICES=0
- All Phase 0 tasks marked done

Co-Authored-By: Virgil <virgil@lethean.io>
2026-02-19 19:57:14 +00:00
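The build and workaround noted in the commit above can be sketched as shell commands. The `-DGGML_HIP=ON` and `AMDGPU_TARGETS` flags follow llama.cpp's documented HIP build path; the model filename and port are placeholders, not values from this repo:

```shell
# Build llama.cpp with the HIP backend for RDNA 3 (gfx1100).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1100 -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j

# HIP_VISIBLE_DEVICES=0 restricts HIP to the discrete GPU, avoiding the
# iGPU crash called out in the commit message. Model path is a placeholder.
HIP_VISIBLE_DEVICES=0 ./build/bin/llama-server -m /path/to/gemma3-4b.gguf --port 8080
```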
File              Last commit                                                          Date
backend.go        feat: scaffold go-rocm AMD GPU inference package                     2026-02-19 19:39:40 +00:00
CLAUDE.md         docs: Phase 0 complete — environment validated, llama-server built   2026-02-19 19:57:14 +00:00
FINDINGS.md       docs: Phase 0 complete — environment validated, llama-server built   2026-02-19 19:57:14 +00:00
go.mod            feat: scaffold go-rocm AMD GPU inference package                     2026-02-19 19:39:40 +00:00
README.md         Initial commit                                                       2026-02-19 19:35:55 +00:00
register_rocm.go  feat: scaffold go-rocm AMD GPU inference package                     2026-02-19 19:39:40 +00:00
rocm.go           feat: scaffold go-rocm AMD GPU inference package                     2026-02-19 19:39:40 +00:00
rocm_stub.go      feat: scaffold go-rocm AMD GPU inference package                     2026-02-19 19:39:40 +00:00
TODO.md           docs: Phase 0 complete — environment validated, llama-server built   2026-02-19 19:57:14 +00:00
