Claude 9aa7f624ba
feat: server lifecycle and helpers for llama-server subprocess
Adds server.go with the process lifecycle layer that manages spawning
llama-server, waiting for readiness, and graceful shutdown. Includes
three helper functions (findLlamaServer, freePort, serverEnv) and the
full startServer/waitReady/stop lifecycle. The serverEnv function
critically filters HIP_VISIBLE_DEVICES to mask the Ryzen 9 iGPU,
which crashes llama-server if it is not excluded.

Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 21:08:07 +00:00
| Path | Last commit | Date |
|------|-------------|------|
| docs/plans | docs: incorporate Charon review — safer serverEnv() filtering | 2026-02-19 20:47:16 +00:00 |
| internal/llamacpp | fix: guard response body lifecycle in SSE streaming client | 2026-02-19 21:04:02 +00:00 |
| backend.go | feat: scaffold go-rocm AMD GPU inference package | 2026-02-19 19:39:40 +00:00 |
| CLAUDE.md | docs: Phase 0 complete — environment validated, llama-server built | 2026-02-19 19:57:14 +00:00 |
| FINDINGS.md | docs: Phase 1 plan review — approved with notes | 2026-02-19 20:44:37 +00:00 |
| go.mod | feat: llamacpp health check client | 2026-02-19 20:50:36 +00:00 |
| go.sum | feat: llamacpp health check client | 2026-02-19 20:50:36 +00:00 |
| README.md | Initial commit | 2026-02-19 19:35:55 +00:00 |
| register_rocm.go | feat: scaffold go-rocm AMD GPU inference package | 2026-02-19 19:39:40 +00:00 |
| rocm.go | feat: scaffold go-rocm AMD GPU inference package | 2026-02-19 19:39:40 +00:00 |
| rocm_stub.go | feat: scaffold go-rocm AMD GPU inference package | 2026-02-19 19:39:40 +00:00 |
| server.go | feat: server lifecycle and helpers for llama-server subprocess | 2026-02-19 21:08:07 +00:00 |
| server_test.go | feat: server lifecycle and helpers for llama-server subprocess | 2026-02-19 21:08:07 +00:00 |
| TODO.md | docs: Phase 0 complete — environment validated, llama-server built | 2026-02-19 19:57:14 +00:00 |

go-rocm

AMD ROCm GPU inference for Linux. llama.cpp + HIP backend for RDNA 3.