docs: archive completed backend-result-type plan
Move backend-result-type plan to docs/plans/completed/ with summary. Unified Result struct now used across all ML backends.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit is contained in:
parent
0cf35221e6
commit
f413111f08
2 changed files with 38 additions and 0 deletions
38
docs/plans/completed/backend-result-type.md
Normal file
@@ -0,0 +1,38 @@
# Backend Result Type — Completion Summary

**Completed:** 22 February 2026
**Module:** `forge.lthn.ai/core/go-ml`
**Status:** Complete — unified Result struct across all backends

## What Was Built

Refactored Generate/Chat return types across all ML backends (HTTP, Llama, MLX adapter) to use a unified `Result` struct carrying both generated text and inference metrics.
### Result struct

```go
type Result struct {
	Text    string
	Metrics Metrics
}

type Metrics struct {
	PromptTokens     int
	CompletionTokens int
	TotalTokens      int
	LatencyMs        float64
	TokensPerSecond  float64
}
```
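To make the relationship between the metrics fields concrete, here is a minimal, self-contained sketch. The field values and the derivation of `TotalTokens` and `TokensPerSecond` are illustrative assumptions, not taken from the module's actual backends:

```go
package main

import "fmt"

// Local mirrors of the Result and Metrics shapes above; the values and
// the TotalTokens/TokensPerSecond arithmetic are illustrative only.
type Metrics struct {
	PromptTokens     int
	CompletionTokens int
	TotalTokens      int
	LatencyMs        float64
	TokensPerSecond  float64
}

type Result struct {
	Text    string
	Metrics Metrics
}

func main() {
	// A backend would measure these during inference; the numbers are made up.
	m := Metrics{PromptTokens: 12, CompletionTokens: 48, LatencyMs: 600}
	m.TotalTokens = m.PromptTokens + m.CompletionTokens
	m.TokensPerSecond = float64(m.CompletionTokens) / (m.LatencyMs / 1000)

	res := Result{Text: "generated text", Metrics: m}
	fmt.Println(res.Metrics.TotalTokens, res.Metrics.TokensPerSecond)
	// prints: 60 80
}
```

Carrying the metrics alongside the text means a caller gets token counts and throughput from the same call, without a second request to the backend.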
### Backends updated

- HTTP backend (Ollama, OpenAI-compatible endpoints)
- Llama backend (llama.cpp via CGo)
- MLX adapter (delegates to go-mlx)

All tests updated to use the new return type. No breaking changes to the public `ml.Generate()` / `ml.Chat()` API — the Result struct is returned where previously only a string was.