Claude
d7db2d6e95
docs: Phase 3 complete — GGUF metadata, discovery, auto context
Integration test verifies model discovery on real GGUF files.
All 9 models in /data/lem/gguf/ discovered with correct metadata.
Co-Authored-By: Virgil <virgil@lethean.io>
2026-02-19 22:24:52 +00:00
Claude
34f02fdcd8
docs: Phase 2 complete — robustness features implemented
All 5 Phase 2 items done: crash detection, port retry,
graceful shutdown verification, VRAM monitoring, concurrent
requests. Concurrency findings documented.
Co-Authored-By: Virgil <virgil@lethean.io>
2026-02-19 21:54:34 +00:00
Claude
78e244f26f
docs: Phase 1 plan review — approved with notes
- Token.ID = 0 acceptable for Phase 1 (no consumer uses it)
- StopTokens: ignore in Phase 1 (YAGNI)
- serverEnv() should filter existing HIP_VISIBLE_DEVICES before appending
- guessModelType() fine for now, upgrade to /props endpoint in Phase 2
- Integration test build tag approach approved
Charon, 19 Feb 2026
Co-Authored-By: Virgil <virgil@lethean.io>
2026-02-19 20:44:37 +00:00
Claude
ff9cf550e8
docs: flag Token.ID and StopTokens interface questions for Virgil
QUESTION: Token.ID always 0 — llama-server SSE doesn't include token IDs
QUESTION: StopTokens []int32 vs llama-server stop []string mismatch
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 20:41:53 +00:00
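One hedged way to bridge the second mismatch flagged above: render each stop-token ID to its text before building the request, since llama-server's /completion API takes stop sequences as strings. `stopStrings` and the `detok` callback are hypothetical; the sketch assumes some detokenizer exists and says nothing about where it comes from:

```go
package main

import "fmt"

// stopStrings maps interface-level stop tokens ([]int32 IDs) to the
// []string form llama-server accepts. detok is a hypothetical
// detokenizer (id -> text); any ID it cannot render is dropped.
func stopStrings(ids []int32, detok func(int32) (string, bool)) []string {
	out := make([]string, 0, len(ids))
	for _, id := range ids {
		if s, ok := detok(id); ok {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	// Toy vocabulary standing in for a real detokenizer.
	vocab := map[int32]string{1: "</s>", 7: "\n\n"}
	detok := func(id int32) (string, bool) {
		s, ok := vocab[id]
		return s, ok
	}
	fmt.Printf("%q\n", stopStrings([]int32{1, 7, 99}, detok))
}
```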
Claude
68bc7300aa
docs: Phase 0 complete — environment validated, llama-server built
- ROCm 7.2, gfx1100 (corrected from gfx1101), kernel 6.17
- llama-server built with HIP from llama.cpp 11c325c
- Gemma3-4B baseline: 109 tok/s decode, 396 tok/s prefill
- Critical: iGPU crash requires HIP_VISIBLE_DEVICES=0
- All Phase 0 tasks marked done
Co-Authored-By: Virgil <virgil@lethean.io>
2026-02-19 19:57:14 +00:00
Snider
aa42cff417
feat: scaffold go-rocm AMD GPU inference package
Implements inference.Backend via llama-server subprocess (llama.cpp + HIP/ROCm).
Targets RX 7800 XT (gfx1101, RDNA 3, 16GB VRAM).
Includes:
- Backend registration with build tags (linux/amd64)
- Stub backend.go with llama-server lifecycle outline
- CLAUDE.md with build instructions for llama.cpp + ROCm
- TODO.md with 5-phase task queue
- FINDINGS.md with hardware specs, VRAM budget, design rationale
Co-Authored-By: Virgil <virgil@lethean.io>
2026-02-19 19:39:40 +00:00
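The "backend registration with build tags" item can be sketched as the usual Go init-time registry pattern. This is not the repo's actual code: `Backend`, `Register`, and `rocmBackend` are illustrative names, and in the real package the file carrying the init() would also carry a `//go:build linux && amd64` constraint so the backend only registers on supported platforms:

```go
package main

import "fmt"

// Backend is a stand-in for the inference.Backend interface the
// scaffold implements; only Name is shown here.
type Backend interface{ Name() string }

// registry maps backend names to factories. Each backend's
// build-tagged file adds itself via init(), so unsupported
// platforms simply never populate the entry.
var registry = map[string]func() Backend{}

func Register(name string, factory func() Backend) { registry[name] = factory }

type rocmBackend struct{}

func (rocmBackend) Name() string { return "rocm" }

func init() { Register("rocm", func() Backend { return rocmBackend{} }) }

func main() {
	if factory, ok := registry["rocm"]; ok {
		fmt.Println(factory().Name()) // prints: rocm
	}
}
```

The payoff of the pattern is that callers select a backend by name at runtime, while the build tags decide at compile time which backends exist at all.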