go-rocm

AMD ROCm GPU inference for Linux. llama.cpp + HIP backend for RDNA 3.
Latest commit (Claude, 34f02fdcd8), 2026-02-19 21:54:34 +00:00:

docs: Phase 2 complete — robustness features implemented

All 5 Phase 2 items done: crash detection, port retry, graceful shutdown
verification, VRAM monitoring, and concurrent requests. Concurrency
findings documented.

Co-Authored-By: Virgil <virgil@lethean.io>
File | Last commit | Date
docs/plans | docs: Phase 2 robustness implementation plan | 2026-02-19 21:31:24 +00:00
internal/llamacpp | fix: guard response body lifecycle in SSE streaming client | 2026-02-19 21:04:02 +00:00
backend.go | feat: retry port selection in startServer on process failure | 2026-02-19 21:40:05 +00:00
CLAUDE.md | docs: Phase 0 complete — environment validated, llama-server built | 2026-02-19 19:57:14 +00:00
FINDINGS.md | docs: Phase 2 complete — robustness features implemented | 2026-02-19 21:54:34 +00:00
go.mod | feat: llamacpp health check client | 2026-02-19 20:50:36 +00:00
go.sum | feat: llamacpp health check client | 2026-02-19 20:50:36 +00:00
model.go | test: graceful shutdown and concurrent request integration tests | 2026-02-19 21:50:47 +00:00
README.md | Initial commit | 2026-02-19 19:35:55 +00:00
register_rocm.go | feat: scaffold go-rocm AMD GPU inference package | 2026-02-19 19:39:40 +00:00
rocm.go | feat: VRAM monitoring via sysfs with dGPU auto-detection | 2026-02-19 21:45:02 +00:00
rocm_integration_test.go | test: graceful shutdown and concurrent request integration tests | 2026-02-19 21:50:47 +00:00
rocm_stub.go | feat: VRAM monitoring via sysfs with dGPU auto-detection | 2026-02-19 21:45:02 +00:00
server.go | fix: only retry startServer on process exit, not timeout | 2026-02-19 21:43:06 +00:00
server_test.go | feat: retry port selection in startServer on process failure | 2026-02-19 21:40:05 +00:00
TODO.md | docs: Phase 2 complete — robustness features implemented | 2026-02-19 21:54:34 +00:00
vram.go | fix: clamp VRAM Free to prevent uint64 underflow | 2026-02-19 21:48:19 +00:00
vram_test.go | feat: VRAM monitoring via sysfs with dGPU auto-detection | 2026-02-19 21:45:02 +00:00
