Commit graph

7 commits

Author SHA1 Message Date
Snider
4669cc503d
refactor: replace fmt.Errorf/errors.New with coreerr.E()
Some checks failed
Security Scan / security (push) Successful in 8s
Test / Vet & Build (push) Failing after 23s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-03-16 21:08:52 +00:00
Claude
b03f357f5d
feat: implement Classify, BatchGenerate, Info, Metrics on rocmModel
Some checks failed
Security Scan / security (push) Successful in 10s
Test / Vet & Build (push) Failing after 34s
Brings rocmModel into compliance with the updated inference.TextModel
interface from go-inference.

- Classify: simulates prefill-only via max_tokens=1, temperature=0
- BatchGenerate: sequential autoregressive per prompt via /v1/completions
- Info: populates ModelInfo from GGUF metadata (architecture, layers, quant)
- Metrics: captures timing + VRAM usage via sysfs after each operation
- Refactors duplicate server-exit error handling into setServerExitErr()
- Adds timing instrumentation to existing Generate and Chat methods

Co-Authored-By: Virgil <virgil@lethean.io>
2026-02-24 18:50:37 +00:00
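A minimal sketch of the prefill-only trick this commit describes: Classify can reuse llama-server's OpenAI-compatible /v1/completions endpoint by forcing a single deterministic token. The `classifyBody` helper name and the exact struct are assumptions; `max_tokens` and `temperature` are the standard field names that endpoint accepts.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// completionRequest mirrors the OpenAI-style body that llama-server's
// /v1/completions endpoint accepts; only the fields used here are shown.
type completionRequest struct {
	Prompt      string  `json:"prompt"`
	MaxTokens   int     `json:"max_tokens"`
	Temperature float64 `json:"temperature"`
}

// classifyBody is a hypothetical helper: it builds a prefill-only request
// by decoding exactly one token at temperature 0, per the commit message.
func classifyBody(prompt string) ([]byte, error) {
	return json.Marshal(completionRequest{
		Prompt:      prompt,
		MaxTokens:   1, // prefill-only: generate a single token
		Temperature: 0, // deterministic output
	})
}

func main() {
	b, _ := classifyBody("Is this spam? Answer yes or no:")
	fmt.Println(string(b))
}
```

BatchGenerate, per the same commit, simply loops this kind of request over each prompt sequentially rather than batching at the server.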
Claude
72120bb200
feat: pass --parallel N to llama-server for concurrent inference slots
Co-Authored-By: Virgil <virgil@lethean.io>
2026-02-19 23:13:19 +00:00
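`--parallel` is llama-server's real flag for the number of concurrent inference slots; how go-rocm assembles its argument list is not shown, so `serverArgs` below is a hypothetical sketch of that wiring.

```go
package main

import (
	"fmt"
	"strconv"
)

// serverArgs is a hypothetical sketch of how the backend might build the
// llama-server command line; --parallel is the real flag for the number
// of concurrent inference slots.
func serverArgs(modelPath string, port, parallel int) []string {
	args := []string{
		"--model", modelPath,
		"--port", strconv.Itoa(port),
	}
	if parallel > 1 {
		// One slot per concurrent request; note llama-server divides the
		// context window across slots.
		args = append(args, "--parallel", strconv.Itoa(parallel))
	}
	return args
}

func main() {
	fmt.Println(serverArgs("/models/llama3.gguf", 8080, 4))
}
```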
Claude
2c77f6f968
feat: use GGUF metadata for model type and context window auto-detection
Replaces filename-based guessModelType with GGUF header parsing.
Caps default context at 4096 to prevent VRAM exhaustion on models
with 128K+ native context.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-02-19 22:23:07 +00:00
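The capping policy this commit describes is small enough to sketch directly: take the native context length read from the GGUF header, but clamp the default to 4096 so a 128K-context model does not allocate a huge KV cache (and exhaust the 16GB of VRAM) by default. The function name is an assumption.

```go
package main

import "fmt"

// defaultCtxCap is the cap named in the commit: large enough for typical
// use, small enough to keep the KV cache from exhausting VRAM.
const defaultCtxCap = 4096

// effectiveContext sketches the described policy: honor the GGUF-reported
// native context, capped at 4096 by default.
func effectiveContext(nativeCtx int) int {
	if nativeCtx > defaultCtxCap {
		return defaultCtxCap
	}
	return nativeCtx
}

func main() {
	fmt.Println(effectiveContext(131072)) // 128K-native model -> 4096
	fmt.Println(effectiveContext(2048))   // small model keeps native size
}
```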
Claude
c50a8e9e9b
feat: retry port selection in startServer on process failure
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 21:40:05 +00:00
Claude
1d8d65f55b
feat: Backend Available() and LoadModel() with GPU detection
Replace stub backend with real implementation: Available() checks
/dev/kfd and llama-server presence, LoadModel() wires up server
lifecycle to return a rocmModel. Add guessModelType() for architecture
detection from GGUF filenames (handles hyphenated variants like
Llama-3). Add TestAvailable and TestGuessModelType.

Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 21:12:02 +00:00
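The two checks this commit names are both standard-library one-liners, so a parameterised sketch is straightforward: ROCm exposes its compute node at /dev/kfd, and the llama-server binary must be on PATH. `availableAt` and `guessModelType` below are assumptions about shape; the real Available() hard-codes the path and binary name, and the real filename rules are not shown.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// availableAt is a parameterised sketch of Available(): the ROCm compute
// node exists and the server binary is on PATH.
func availableAt(devPath, binary string) bool {
	if _, err := os.Stat(devPath); err != nil {
		return false
	}
	_, err := exec.LookPath(binary)
	return err == nil
}

// guessModelType sketches filename-based architecture detection, covering
// the hyphenated variants the commit mentions (e.g. "Llama-3"); the exact
// rule set in go-rocm is an assumption.
func guessModelType(filename string) string {
	name := strings.ToLower(filename)
	name = strings.ReplaceAll(name, "-", "") // "Llama-3" -> "llama3"
	switch {
	case strings.Contains(name, "llama3"):
		return "llama3"
	case strings.Contains(name, "mistral"):
		return "mistral"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(availableAt("/dev/kfd", "llama-server"))
	fmt.Println(guessModelType("Llama-3-8B-Instruct.Q4_K_M.gguf"))
}
```

Note the later GGUF-metadata commit supersedes this filename heuristic, which is exactly why it was parameterised behind guessModelType in the first place.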
Snider
aa42cff417
feat: scaffold go-rocm AMD GPU inference package
Implements inference.Backend via llama-server subprocess (llama.cpp + HIP/ROCm).
Targets RX 7800 XT (gfx1101, RDNA 3, 16GB VRAM).

Includes:
- Backend registration with build tags (linux/amd64)
- Stub backend.go with llama-server lifecycle outline
- CLAUDE.md with build instructions for llama.cpp + ROCm
- TODO.md with 5-phase task queue
- FINDINGS.md with hardware specs, VRAM budget, design rationale

Co-Authored-By: Virgil <virgil@lethean.io>
2026-02-19 19:39:40 +00:00
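The "backend registration with build tags" item can be sketched minimally: a `//go:build linux && amd64` constraint keeps the ROCm backend out of other platforms' builds, and an init() hook registers it. The Register function and Backend interface here are hypothetical stand-ins for whatever go-inference actually exposes, and Available() is stubbed false as in this scaffold commit.

```go
//go:build linux && amd64

// A minimal sketch of the scaffold's registration pattern. Register and
// Backend are hypothetical stand-ins for the go-inference API.
package main

import "fmt"

// Backend is a stand-in for the go-inference backend interface.
type Backend interface{ Available() bool }

var backends = map[string]Backend{} // stand-in registry

// Register adds a backend under a name; go-inference's real hook may differ.
func Register(name string, b Backend) { backends[name] = b }

type rocmBackend struct{}

// Available is stubbed false, matching the scaffold; a later commit wires
// in the real /dev/kfd + llama-server check.
func (rocmBackend) Available() bool { return false }

func init() {
	// Runs only on linux/amd64 builds, thanks to the build constraint above.
	Register("rocm", rocmBackend{})
}

func main() {
	_, ok := backends["rocm"]
	fmt.Println(ok)
}
```

Because the constraint lives on the file, callers on other platforms see no rocm entry at all rather than a backend that fails at runtime.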