# TODO.md -- go-ratelimit
Dispatched from core/go orchestration. Pick up tasks in order.
## Phase 0: Hardening & Test Coverage
- Expand test coverage -- `ratelimit_test.go` rewritten with testify. Tests for: `CanSend()` at exact limits (RPM, TPM, RPD boundaries), `RecordUsage()` with concurrent goroutines, `WaitForCapacity()` timeout and immediate-capacity paths, `prune()` sliding-window edge cases, daily reset logic (24h boundary), YAML persistence (save + reload), corrupt/unreadable state file recovery, `Reset()` single/all/nonexistent, `Stats()` known/unknown/quota-only models, `AllStats()` with pruning and daily reset.
- Race condition tests -- `go test -race ./...` with 20 goroutines calling `CanSend()` + `RecordUsage()` + `Stats()` concurrently. Additional tests: concurrent `Reset()` + `RecordUsage()` + `AllStats()`, concurrent multi-model access (5 models), concurrent `Persist()` + `Load()` filesystem race, concurrent `AllStats()` + `RecordUsage()`, concurrent `WaitForCapacity()` + `RecordUsage()`. All pass clean.
- Benchmarks -- 7 benchmarks: `BenchmarkCanSend` (1000-entry window), `BenchmarkRecordUsage`, `BenchmarkCanSendConcurrent` (parallel), `BenchmarkCanSendWithPrune` (500 old + 500 new), `BenchmarkStats` (1000 entries), `BenchmarkAllStats` (5 models x 200 entries), `BenchmarkPersist` (YAML I/O). Zero allocs on hot paths.
- `go vet ./...` clean -- no warnings.
- Coverage: 95.1% (up from 77.1%). Remaining uncovered: `CountTokens` success path (hardcoded Google URL), `yaml.Marshal` error path in `Persist()`, `os.UserHomeDir` error path in `NewWithConfig`.
## Phase 1: Generalise Beyond Gemini
- Provider-agnostic config -- Added `Provider` type, `ProviderProfile`, `Config` struct, and a `NewWithConfig()` constructor. Quotas are no longer hardcoded in `New()`.
- Quota profiles -- `DefaultProfiles()` returns pre-configured profiles for Gemini, OpenAI (gpt-4o, o1, o3-mini), Anthropic (claude-opus-4, claude-sonnet-4, claude-haiku-3.5), and Local (empty, user-configurable).
- Configurable defaults -- `Config` accepts `FilePath`, a `Providers` list, and an explicit `Quotas` map. Explicit quotas override provider defaults. YAML-serialisable.
- Backward compatibility -- `New()` delegates to `NewWithConfig(Config{Providers: []Provider{ProviderGemini}})`. Existing API unchanged. `TestNewBackwardCompatibility` verifies exact parity.
- Runtime configuration -- `SetQuota()` and `AddProvider()` allow modifying quotas after construction. Both are mutex-protected.
## Phase 2: Persistent State
- Currently stores state in a YAML file -- not safe for multi-process access
- Consider SQLite for concurrent read/write safety (WAL mode)
- Add state recovery on restart (reload sliding window from persisted data)
## Phase 3: Integration
- Wire into go-ml backends for automatic rate limiting on inference calls
- Wire into go-ai facade so all providers share a unified rate limit layer
- Add metrics export (requests/minute, tokens/minute, rejections) for monitoring
## Workflow
- Virgil in core/go writes tasks here after research
- This repo's dedicated session picks up tasks in phase order
- Mark `[x]` when done, note the commit hash