
go-rocm

AMD ROCm GPU inference for Linux via a managed llama-server subprocess. Implements the inference.Backend and inference.TextModel interfaces from go-inference for AMD RDNA 3+ GPUs (validated on RX 7800 XT with ROCm 7.2). Uses llama-server's OpenAI-compatible streaming API rather than direct HIP CGO bindings, giving access to 50+ GGUF model architectures with GPU crash isolation. Includes a GGUF v2/v3 binary metadata parser, sysfs VRAM monitoring, and model discovery. Platform-restricted: linux/amd64 only; a safe stub compiles everywhere else.

Module: forge.lthn.ai/core/go-rocm · Licence: EUPL-1.2 · Language: Go 1.25

Quick Start

import (
    "context"
    "fmt"
    "log"

    "forge.lthn.ai/core/go-inference"
    _ "forge.lthn.ai/core/go-rocm" // registers "rocm" backend via init()
)

// Requires llama-server compiled with HIP/ROCm on PATH.
model, err := inference.LoadModel("/path/to/model.gguf")
if err != nil {
    log.Fatal(err)
}
defer model.Close()

ctx := context.Background()
for tok := range model.Generate(ctx, "Hello", inference.WithMaxTokens(256)) {
    fmt.Print(tok.Text)
}

Documentation

  • Architecture — subprocess design rationale, inference flow, server lifecycle, GGUF parser, VRAM monitoring
  • Development Guide — prerequisites (ROCm, llama-server build), test commands, benchmarks
  • Project History — completed phases, commit hashes, known limitations
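The VRAM monitor described in the architecture docs reads totals and usage from sysfs as unsigned byte counts, which makes the free-space subtraction prone to uint64 underflow when a stale reading reports used > total. A minimal sketch of the clamping idea (`freeVRAM` is an illustrative helper, not this module's API):

```go
package main

import "fmt"

// freeVRAM computes free VRAM from total and used byte counts, clamping to
// zero so a transient used > total reading cannot wrap the uint64 around
// to a huge bogus value.
func freeVRAM(total, used uint64) uint64 {
	if used >= total {
		return 0
	}
	return total - used
}

func main() {
	fmt.Println(freeVRAM(16<<30, 3<<30))  // 13 GiB free, in bytes
	fmt.Println(freeVRAM(16<<30, 17<<30)) // clamped to 0
}
```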

Build & Test

go test ./...                             # unit tests, no GPU required
go test -tags rocm ./...                  # integration tests and benchmarks (GPU required)
go build ./...

Licence

European Union Public Licence 1.2 — see LICENCE for details.