Compare commits
feature/pe ... main
2 commits

| Author | SHA1 | Date |
|---|---|---|
| | 544a3bf5ad | |
| | fa998619dc | |

14 changed files with 790 additions and 113 deletions
@@ -1,37 +0,0 @@
# Performance Audit

## Database Performance

Not applicable. This is a library and does not have a database.

## Memory Usage

- **Memory Leaks:** No memory leaks were identified. Memory usage scales predictably with the size of the dataset and the complexity of the queries.

- **Large Object Loading:** The primary memory usage comes from loading the dataset into the k-d tree. For very large datasets this could be a concern, but it is an inherent part of the library's design. No unnecessary large objects are loaded.

- **Cache Efficiency:** The linear backend has poor cache efficiency for large datasets because it must scan all points for every query. The gonum backend has better cache efficiency: the spatial partitioning of the k-d tree allows it to prune large parts of the search space.

- **Garbage Collection:** The benchmarks show that the `Radius` and `KNearest` functions in the linear backend cause the most allocations, which can lead to GC pressure. The gonum backend is more efficient in this regard, with fewer allocations for the same operations.

## Concurrency

- **Blocking Operations:** The library's operations are CPU-bound and will block the calling goroutine. This is expected behavior for a data structure library.

- **Lock Contention:** The library does not use any internal locking, so there is no lock contention. However, this also means the `KDTree` is not safe for concurrent use. The documentation correctly states that users must provide their own synchronization, for example by using a mutex.

- **Thread Pool Sizing:** Not applicable. The library does not manage its own thread pool.

- **Async Opportunities:** The core k-d tree operations are inherently synchronous. It is possible to wrap the library's functions in goroutines to run queries in parallel, but this is left to the user to implement; the library itself does not offer any async APIs.
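A rough sketch of that external-synchronization approach follows; the `kdTree` stand-in type and the `Nearest` signature are assumptions made so the example compiles, not the library's documented API.

```go
package main

import (
	"fmt"
	"sync"
)

// kdTree is a stand-in for the library's KDTree, present only so this
// sketch compiles; the real constructor and query signatures may differ.
type kdTree struct{ points [][]float64 }

func (t *kdTree) Nearest(q []float64) []float64 { return t.points[0] }

// safeTree guards a shared tree with an RWMutex: many goroutines may
// query concurrently under the read lock, while a rebuild would take
// the exclusive write lock.
type safeTree struct {
	mu   sync.RWMutex
	tree *kdTree
}

func (s *safeTree) Nearest(q []float64) []float64 {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.tree.Nearest(q)
}

func main() {
	st := &safeTree{tree: &kdTree{points: [][]float64{{1, 2}}}}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // run several queries in parallel
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = st.Nearest([]float64{0, 0})
		}()
	}
	wg.Wait()
	fmt.Println("queries done")
}
```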

## API Performance

Not applicable. This is a library and does not expose a network API.

## Build/Deploy Performance

- **Build Time:** The build process is fast and efficient. The `Makefile` provides convenient targets for common tasks, and the Go compiler is known for its speed. No build performance issues were identified.

- **Asset Size:** As this is a library, there are no assets to consider. The compiled code size is minimal. The WASM module is the only distributable artifact, and its size is reasonable for its functionality.

- **Cold Start:** Not applicable. This is a library and does not have a cold start time.
docs/plans/2026-02-16-math-expansion-design.md (Normal file, 46 lines)

@@ -0,0 +1,46 @@
# Poindexter Math Expansion

**Date:** 2026-02-16
**Status:** Approved

## Context

Poindexter serves as the math pillar (alongside Borg = data, Enchantrix = encryption) in the Lethean ecosystem. It currently provides KD-Tree spatial queries, 5 distance metrics, sorting utilities, and normalization helpers.

Analysis of math operations scattered across core/go, core/go-ai, and core/mining revealed common patterns that Poindexter should centralize: descriptive statistics, scaling/interpolation, approximate equality, weighted scoring, and signal generation.

## New Modules

### stats.go — Descriptive statistics
Sum, Mean, Variance, StdDev, MinMax, IsUnderrepresented.
Consumers: ml/coverage.go, lab/handler/chart.go

### scale.go — Normalization and interpolation
Lerp, InverseLerp, Remap, RoundToN, Clamp, MinMaxScale.
Consumers: lab/handler/chart.go, i18n/numbers.go

### epsilon.go — Approximate equality
ApproxEqual, ApproxZero.
Consumers: ml/exact.go

### score.go — Weighted composite scoring
Factor type, WeightedScore, Ratio, Delta, DeltaPercent.
Consumers: ml/heuristic.go, ml/compare.go

### signal.go — Time-series primitives
RampUp, SineWave, Oscillate, Noise (seeded RNG).
Consumers: mining/simulated_miner.go

## Constraints

- Zero external dependencies (WASM-compilable)
- Pure Go, stdlib only (math, math/rand)
- Same package (`poindexter`), flat structure
- Table-driven tests for every function
- No changes to existing files

## Not In Scope

- MLX tensor ops (hardware-accelerated, stays in go-ai)
- DNS tools migration to go-netops (separate PR)
- gonum backend integration (future work)
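As a quick usage sketch of the scoring and epsilon helpers listed above: the example below assumes the module imports under its benchmark-reported path `github.com/Snider/Poindexter` with the alias shown; the factor values are made up.

```go
package poindexter_test

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func Example_newHelpers() {
	// Weighted composite score: each factor contributes Value * Weight.
	score := poindexter.WeightedScore([]poindexter.Factor{
		{Value: 2, Weight: 2},  // e.g. an engagement signal
		{Value: 3, Weight: -5}, // e.g. a compliance penalty
	})
	fmt.Println(score)

	// Tolerant float comparison instead of ==.
	fmt.Println(poindexter.ApproxEqual(0.1+0.2, 0.3, 1e-9))

	// Output:
	// -11
	// true
}
```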
epsilon.go (Normal file, 14 lines)

@@ -0,0 +1,14 @@
package poindexter

import "math"

// ApproxEqual returns true if the absolute difference between a and b
// is less than epsilon.
func ApproxEqual(a, b, epsilon float64) bool {
	return math.Abs(a-b) < epsilon
}

// ApproxZero returns true if the absolute value of v is less than epsilon.
func ApproxZero(v, epsilon float64) bool {
	return math.Abs(v) < epsilon
}
epsilon_test.go (Normal file, 50 lines)

@@ -0,0 +1,50 @@
package poindexter

import "testing"

func TestApproxEqual(t *testing.T) {
	tests := []struct {
		name    string
		a, b    float64
		epsilon float64
		want    bool
	}{
		{"equal", 1.0, 1.0, 0.01, true},
		{"close", 1.0, 1.005, 0.01, true},
		{"not_close", 1.0, 1.02, 0.01, false},
		{"negative", -1.0, -1.005, 0.01, true},
		{"zero", 0, 0.0001, 0.001, true},
		{"at_boundary", 1.0, 1.01, 0.01, false},
		{"large_epsilon", 100, 200, 150, true},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := ApproxEqual(tt.a, tt.b, tt.epsilon)
			if got != tt.want {
				t.Errorf("ApproxEqual(%v, %v, %v) = %v, want %v", tt.a, tt.b, tt.epsilon, got, tt.want)
			}
		})
	}
}

func TestApproxZero(t *testing.T) {
	tests := []struct {
		name    string
		v       float64
		epsilon float64
		want    bool
	}{
		{"zero", 0, 0.01, true},
		{"small_pos", 0.005, 0.01, true},
		{"small_neg", -0.005, 0.01, true},
		{"not_zero", 0.02, 0.01, false},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := ApproxZero(tt.v, tt.epsilon)
			if got != tt.want {
				t.Errorf("ApproxZero(%v, %v) = %v, want %v", tt.v, tt.epsilon, got, tt.want)
			}
		})
	}
}
@@ -1,42 +0,0 @@
goos: linux
goarch: amd64
pkg: github.com/Snider/Poindexter
cpu: Intel(R) Xeon(R) Processor @ 2.30GHz
BenchmarkNearest_Linear_Uniform_100k_2D-4          1384      850417 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Gonum_Uniform_100k_2D-4         825416        1422 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Linear_Uniform_100k_4D-4          1111     1108445 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Gonum_Uniform_100k_4D-4         159350       16747 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Linear_Clustered_100k_2D-4        1156      897493 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Gonum_Clustered_100k_2D-4       164542        7957 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Linear_Clustered_100k_4D-4        1093     1068889 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Gonum_Clustered_100k_4D-4          679     1839479 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Linear_Uniform_1k_2D-4          140626        8470 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Gonum_Uniform_1k_2D-4          1417580       721.6 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Linear_Uniform_10k_2D-4          14460       83143 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Gonum_Uniform_10k_2D-4         1372161       864.0 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Linear_Uniform_1k_4D-4          112540       11183 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Gonum_Uniform_1k_4D-4           352273        3433 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Linear_Uniform_10k_4D-4          10000      106582 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Gonum_Uniform_10k_4D-4          183109        6641 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Linear_Clustered_1k_2D-4        140745       11324 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Gonum_Clustered_1k_2D-4         593008        2362 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Linear_Clustered_10k_2D-4        14334      101665 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Gonum_Clustered_10k_2D-4        258667        4285 ns/op         0 B/op     0 allocs/op
BenchmarkKNN10_Linear_Uniform_10k_2D-4              393     2625727 ns/op    164496 B/op     6 allocs/op
BenchmarkKNN10_Gonum_Uniform_10k_2D-4            172296        6853 ns/op      1384 B/op    12 allocs/op
BenchmarkKNN10_Linear_Clustered_10k_2D-4            454     2595619 ns/op    164496 B/op     6 allocs/op
BenchmarkKNN10_Gonum_Clustered_10k_2D-4           97267       11278 ns/op      1384 B/op    12 allocs/op
BenchmarkRadiusMid_Linear_Uniform_10k_2D-4          236     4931056 ns/op    959204 B/op   123 allocs/op
BenchmarkRadiusMid_Gonum_Uniform_10k_2D-4           223     5436124 ns/op   1025664 B/op   129 allocs/op
BenchmarkRadiusMid_Linear_Clustered_10k_2D-4        199     5687141 ns/op   1232172 B/op   165 allocs/op
BenchmarkRadiusMid_Gonum_Clustered_10k_2D-4         182     6186214 ns/op   1315417 B/op   179 allocs/op
BenchmarkNearest_1k_2D-4                        1000000        1070 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_10k_2D-4                       1908055        1276 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_1k_4D-4                         249206        4226 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_10k_4D-4                        185893        5574 ns/op         0 B/op     0 allocs/op
BenchmarkKNearest10_1k_2D-4                      192446        5614 ns/op      1384 B/op    12 allocs/op
BenchmarkKNearest10_10k_2D-4                     171120       10207 ns/op      1384 B/op    12 allocs/op
BenchmarkRadiusMid_1k_2D-4                         2348      526415 ns/op     84118 B/op    18 allocs/op
BenchmarkRadiusMid_10k_2D-4                         122    10288116 ns/op   1036096 B/op   218 allocs/op
PASS
ok  	github.com/Snider/Poindexter	70.252s
@@ -1,34 +0,0 @@
goos: linux
goarch: amd64
pkg: github.com/Snider/Poindexter
cpu: Intel(R) Xeon(R) Processor @ 2.30GHz
BenchmarkNearest_Linear_Uniform_1k_2D-4          138124        8534 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Gonum_Uniform_1k_2D-4           133792        8428 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Linear_Uniform_10k_2D-4          10000      122322 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Gonum_Uniform_10k_2D-4           13287       87229 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Linear_Uniform_1k_4D-4          119668       10099 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Gonum_Uniform_1k_4D-4           120369       10518 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Linear_Uniform_10k_4D-4          12187       95500 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Gonum_Uniform_10k_4D-4           12282      101452 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Linear_Clustered_1k_2D-4        141176        8635 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Gonum_Clustered_1k_2D-4         141950        9332 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Linear_Clustered_10k_2D-4        13855      100933 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_Gonum_Clustered_10k_2D-4         10000      104974 ns/op         0 B/op     0 allocs/op
BenchmarkKNN10_Linear_Uniform_10k_2D-4              447     2664549 ns/op    164497 B/op     6 allocs/op
BenchmarkKNN10_Gonum_Uniform_10k_2D-4               448     2678659 ns/op    164496 B/op     6 allocs/op
BenchmarkKNN10_Linear_Clustered_10k_2D-4            451     2655975 ns/op    164496 B/op     6 allocs/op
BenchmarkKNN10_Gonum_Clustered_10k_2D-4             429     2796159 ns/op    164496 B/op     6 allocs/op
BenchmarkRadiusMid_Linear_Uniform_10k_2D-4          205     5708833 ns/op    961263 B/op   138 allocs/op
BenchmarkRadiusMid_Gonum_Uniform_10k_2D-4           196     5334473 ns/op    961862 B/op   143 allocs/op
BenchmarkRadiusMid_Linear_Clustered_10k_2D-4        177     9435880 ns/op   1233949 B/op   182 allocs/op
BenchmarkRadiusMid_Gonum_Clustered_10k_2D-4         163     6559096 ns/op   1235333 B/op   196 allocs/op
BenchmarkNearest_1k_2D-4                         116074        8685 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_10k_2D-4                         14332       91255 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_1k_4D-4                         108560       11050 ns/op         0 B/op     0 allocs/op
BenchmarkNearest_10k_4D-4                         10000      112694 ns/op         0 B/op     0 allocs/op
BenchmarkKNearest10_1k_2D-4                        4704      253934 ns/op     17032 B/op     6 allocs/op
BenchmarkKNearest10_10k_2D-4                        458     2664017 ns/op    164495 B/op     6 allocs/op
BenchmarkRadiusMid_1k_2D-4                         3313      336997 ns/op     77568 B/op    16 allocs/op
BenchmarkRadiusMid_10k_2D-4                         204     6112449 ns/op    969521 B/op   141 allocs/op
PASS
ok  	github.com/Snider/Poindexter	47.769s
scale.go (Normal file, 61 lines)

@@ -0,0 +1,61 @@
package poindexter

import "math"

// Lerp performs linear interpolation between a and b.
// t=0 returns a, t=1 returns b, t=0.5 returns the midpoint.
func Lerp(t, a, b float64) float64 {
	return a + t*(b-a)
}

// InverseLerp returns where v falls between a and b as a fraction [0,1].
// Returns 0 if a == b.
func InverseLerp(v, a, b float64) float64 {
	if a == b {
		return 0
	}
	return (v - a) / (b - a)
}

// Remap maps v from the range [inMin, inMax] to [outMin, outMax].
// Equivalent to Lerp(InverseLerp(v, inMin, inMax), outMin, outMax).
func Remap(v, inMin, inMax, outMin, outMax float64) float64 {
	return Lerp(InverseLerp(v, inMin, inMax), outMin, outMax)
}

// RoundToN rounds f to the given number of decimal places.
func RoundToN(f float64, decimals int) float64 {
	mul := math.Pow(10, float64(decimals))
	return math.Round(f*mul) / mul
}

// Clamp restricts v to the range [min, max].
func Clamp(v, min, max float64) float64 {
	if v < min {
		return min
	}
	if v > max {
		return max
	}
	return v
}

// ClampInt restricts v to the range [min, max].
func ClampInt(v, min, max int) int {
	if v < min {
		return min
	}
	if v > max {
		return max
	}
	return v
}

// MinMaxScale normalizes v into [0,1] given its range [min, max].
// Returns 0 if min == max.
func MinMaxScale(v, min, max float64) float64 {
	if min == max {
		return 0
	}
	return (v - min) / (max - min)
}
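A small consumer-side sketch of the scaling helpers above, in the spirit of the chart use case named in the design doc; the numbers and the chart scenario are illustrative, and the import alias is assumed.

```go
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	// Map a raw sample from its observed range onto a 0-100 chart axis,
	// clamp any out-of-range value, and round for display.
	raw, lo, hi := 7.37, 2.0, 12.0
	y := poindexter.Clamp(poindexter.Remap(raw, lo, hi, 0, 100), 0, 100)
	fmt.Println(poindexter.RoundToN(y, 1)) // 53.7
}
```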
scale_test.go (Normal file, 148 lines)

@@ -0,0 +1,148 @@
package poindexter

import (
	"math"
	"testing"
)

func TestLerp(t *testing.T) {
	tests := []struct {
		name     string
		t_, a, b float64
		want     float64
	}{
		{"start", 0, 10, 20, 10},
		{"end", 1, 10, 20, 20},
		{"mid", 0.5, 10, 20, 15},
		{"quarter", 0.25, 0, 100, 25},
		{"extrapolate", 2, 0, 10, 20},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := Lerp(tt.t_, tt.a, tt.b)
			if math.Abs(got-tt.want) > 1e-9 {
				t.Errorf("Lerp(%v, %v, %v) = %v, want %v", tt.t_, tt.a, tt.b, got, tt.want)
			}
		})
	}
}

func TestInverseLerp(t *testing.T) {
	tests := []struct {
		name    string
		v, a, b float64
		want    float64
	}{
		{"start", 10, 10, 20, 0},
		{"end", 20, 10, 20, 1},
		{"mid", 15, 10, 20, 0.5},
		{"equal_range", 5, 5, 5, 0},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := InverseLerp(tt.v, tt.a, tt.b)
			if math.Abs(got-tt.want) > 1e-9 {
				t.Errorf("InverseLerp(%v, %v, %v) = %v, want %v", tt.v, tt.a, tt.b, got, tt.want)
			}
		})
	}
}

func TestRemap(t *testing.T) {
	tests := []struct {
		name                            string
		v, inMin, inMax, outMin, outMax float64
		want                            float64
	}{
		{"identity", 5, 0, 10, 0, 10, 5},
		{"scale_up", 5, 0, 10, 0, 100, 50},
		{"reverse", 3, 0, 10, 10, 0, 7},
		{"offset", 0, 0, 1, 100, 200, 100},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := Remap(tt.v, tt.inMin, tt.inMax, tt.outMin, tt.outMax)
			if math.Abs(got-tt.want) > 1e-9 {
				t.Errorf("Remap = %v, want %v", got, tt.want)
			}
		})
	}
}

func TestRoundToN(t *testing.T) {
	tests := []struct {
		name     string
		f        float64
		decimals int
		want     float64
	}{
		{"zero_dec", 3.456, 0, 3},
		{"one_dec", 3.456, 1, 3.5},
		{"two_dec", 3.456, 2, 3.46},
		{"three_dec", 3.4564, 3, 3.456},
		{"negative", -2.555, 2, -2.56},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := RoundToN(tt.f, tt.decimals)
			if math.Abs(got-tt.want) > 1e-9 {
				t.Errorf("RoundToN(%v, %v) = %v, want %v", tt.f, tt.decimals, got, tt.want)
			}
		})
	}
}

func TestClamp(t *testing.T) {
	tests := []struct {
		name        string
		v, min, max float64
		want        float64
	}{
		{"within", 5, 0, 10, 5},
		{"below", -5, 0, 10, 0},
		{"above", 15, 0, 10, 10},
		{"at_min", 0, 0, 10, 0},
		{"at_max", 10, 0, 10, 10},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := Clamp(tt.v, tt.min, tt.max)
			if got != tt.want {
				t.Errorf("Clamp(%v, %v, %v) = %v, want %v", tt.v, tt.min, tt.max, got, tt.want)
			}
		})
	}
}

func TestClampInt(t *testing.T) {
	if got := ClampInt(5, 0, 10); got != 5 {
		t.Errorf("ClampInt(5, 0, 10) = %v, want 5", got)
	}
	if got := ClampInt(-1, 0, 10); got != 0 {
		t.Errorf("ClampInt(-1, 0, 10) = %v, want 0", got)
	}
	if got := ClampInt(15, 0, 10); got != 10 {
		t.Errorf("ClampInt(15, 0, 10) = %v, want 10", got)
	}
}

func TestMinMaxScale(t *testing.T) {
	tests := []struct {
		name        string
		v, min, max float64
		want        float64
	}{
		{"mid", 5, 0, 10, 0.5},
		{"at_min", 0, 0, 10, 0},
		{"at_max", 10, 0, 10, 1},
		{"equal_range", 5, 5, 5, 0},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := MinMaxScale(tt.v, tt.min, tt.max)
			if math.Abs(got-tt.want) > 1e-9 {
				t.Errorf("MinMaxScale(%v, %v, %v) = %v, want %v", tt.v, tt.min, tt.max, got, tt.want)
			}
		})
	}
}
score.go (Normal file, 40 lines)

@@ -0,0 +1,40 @@
package poindexter

// Factor is a value–weight pair for composite scoring.
type Factor struct {
	Value  float64
	Weight float64
}

// WeightedScore computes the weighted sum of factors.
// Each factor contributes Value * Weight to the total.
// Returns 0 for empty slices.
func WeightedScore(factors []Factor) float64 {
	var total float64
	for _, f := range factors {
		total += f.Value * f.Weight
	}
	return total
}

// Ratio returns part/whole safely. Returns 0 if whole is 0.
func Ratio(part, whole float64) float64 {
	if whole == 0 {
		return 0
	}
	return part / whole
}

// Delta returns the difference new_ - old.
func Delta(old, new_ float64) float64 {
	return new_ - old
}

// DeltaPercent returns the percentage change from old to new_.
// Returns 0 if old is 0.
func DeltaPercent(old, new_ float64) float64 {
	if old == 0 {
		return 0
	}
	return (new_ - old) / old * 100
}
score_test.go (Normal file, 86 lines)

@@ -0,0 +1,86 @@
package poindexter

import (
	"math"
	"testing"
)

func TestWeightedScore(t *testing.T) {
	tests := []struct {
		name    string
		factors []Factor
		want    float64
	}{
		{"empty", nil, 0},
		{"single", []Factor{{Value: 5, Weight: 2}}, 10},
		{"multiple", []Factor{
			{Value: 3, Weight: 2},  // 6
			{Value: 1, Weight: -5}, // -5
		}, 1},
		{"lek_heuristic", []Factor{
			{Value: 2, Weight: 2},   // engagement × 2
			{Value: 1, Weight: 3},   // creative × 3
			{Value: 1, Weight: 1.5}, // first person × 1.5
			{Value: 3, Weight: -5},  // compliance × -5
		}, -6.5},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := WeightedScore(tt.factors)
			if math.Abs(got-tt.want) > 1e-9 {
				t.Errorf("WeightedScore = %v, want %v", got, tt.want)
			}
		})
	}
}

func TestRatio(t *testing.T) {
	tests := []struct {
		name        string
		part, whole float64
		want        float64
	}{
		{"half", 5, 10, 0.5},
		{"full", 10, 10, 1},
		{"zero_whole", 5, 0, 0},
		{"zero_part", 0, 10, 0},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := Ratio(tt.part, tt.whole)
			if math.Abs(got-tt.want) > 1e-9 {
				t.Errorf("Ratio(%v, %v) = %v, want %v", tt.part, tt.whole, got, tt.want)
			}
		})
	}
}

func TestDelta(t *testing.T) {
	if got := Delta(10, 15); got != 5 {
		t.Errorf("Delta(10, 15) = %v, want 5", got)
	}
	if got := Delta(15, 10); got != -5 {
		t.Errorf("Delta(15, 10) = %v, want -5", got)
	}
}

func TestDeltaPercent(t *testing.T) {
	tests := []struct {
		name      string
		old, new_ float64
		want      float64
	}{
		{"increase", 100, 150, 50},
		{"decrease", 100, 75, -25},
		{"zero_old", 0, 10, 0},
		{"no_change", 50, 50, 0},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := DeltaPercent(tt.old, tt.new_)
			if math.Abs(got-tt.want) > 1e-9 {
				t.Errorf("DeltaPercent(%v, %v) = %v, want %v", tt.old, tt.new_, got, tt.want)
			}
		})
	}
}
signal.go (Normal file, 57 lines)

@@ -0,0 +1,57 @@
package poindexter

import (
	"math"
	"math/rand"
)

// RampUp returns a linear ramp from 0 to 1 over the given duration.
// The result is clamped to [0, 1].
func RampUp(elapsed, duration float64) float64 {
	if duration <= 0 {
		return 1
	}
	return Clamp(elapsed/duration, 0, 1)
}

// SineWave returns a sine value with the given period and amplitude.
// Output range is [-amplitude, amplitude].
func SineWave(t, period, amplitude float64) float64 {
	if period == 0 {
		return 0
	}
	return math.Sin(t/period*2*math.Pi) * amplitude
}

// Oscillate modulates a base value with a sine wave.
// Returns base * (1 + sin(t/period*2π) * amplitude).
func Oscillate(base, t, period, amplitude float64) float64 {
	if period == 0 {
		return base
	}
	return base * (1 + math.Sin(t/period*2*math.Pi)*amplitude)
}

// Noise generates seeded pseudo-random values.
type Noise struct {
	rng *rand.Rand
}

// NewNoise creates a seeded noise generator.
func NewNoise(seed int64) *Noise {
	return &Noise{rng: rand.New(rand.NewSource(seed))}
}

// Float64 returns a random value in [-variance, variance].
func (n *Noise) Float64(variance float64) float64 {
	return (n.rng.Float64() - 0.5) * 2 * variance
}

// Int returns a random integer in [0, max).
// Returns 0 if max <= 0.
func (n *Noise) Int(max int) int {
	if max <= 0 {
		return 0
	}
	return n.rng.Intn(max)
}
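A sketch of how a consumer along the lines of mining/simulated_miner.go might compose these primitives into a synthetic metric; the scenario, constants, and import alias are assumptions for illustration.

```go
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	noise := poindexter.NewNoise(42) // seeded, so runs are reproducible
	base := 1000.0                   // illustrative steady-state value

	for t := 0.0; t <= 120; t += 30 {
		ramp := poindexter.RampUp(t, 60)                 // warm up over the first 60s
		wave := poindexter.Oscillate(base, t, 120, 0.05) // ±5% swing, 120s period
		v := ramp*wave + noise.Float64(base*0.01)        // ±1% jitter
		fmt.Printf("t=%3.0f value=%8.2f\n", t, v)
	}
}
```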
signal_test.go (Normal file, 103 lines)

@@ -0,0 +1,103 @@
package poindexter

import (
	"math"
	"testing"
)

func TestRampUp(t *testing.T) {
	tests := []struct {
		name              string
		elapsed, duration float64
		want              float64
	}{
		{"start", 0, 30, 0},
		{"mid", 15, 30, 0.5},
		{"end", 30, 30, 1},
		{"over", 60, 30, 1},
		{"negative", -5, 30, 0},
		{"zero_duration", 10, 0, 1},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := RampUp(tt.elapsed, tt.duration)
			if math.Abs(got-tt.want) > 1e-9 {
				t.Errorf("RampUp(%v, %v) = %v, want %v", tt.elapsed, tt.duration, got, tt.want)
			}
		})
	}
}

func TestSineWave(t *testing.T) {
	// At t=0, sin(0) = 0
	if got := SineWave(0, 10, 5); math.Abs(got) > 1e-9 {
		t.Errorf("SineWave(0, 10, 5) = %v, want 0", got)
	}
	// At t=period/4, sin(π/2) = 1, so result = amplitude
	got := SineWave(2.5, 10, 5)
	if math.Abs(got-5) > 1e-9 {
		t.Errorf("SineWave(2.5, 10, 5) = %v, want 5", got)
	}
	// Zero period returns 0
	if got := SineWave(5, 0, 5); got != 0 {
		t.Errorf("SineWave(5, 0, 5) = %v, want 0", got)
	}
}

func TestOscillate(t *testing.T) {
	// At t=0, sin(0)=0, so result = base * (1 + 0) = base
	got := Oscillate(100, 0, 10, 0.05)
	if math.Abs(got-100) > 1e-9 {
		t.Errorf("Oscillate(100, 0, 10, 0.05) = %v, want 100", got)
	}
	// At t=period/4, sin=1, so result = base * (1 + amplitude)
	got = Oscillate(100, 2.5, 10, 0.05)
	if math.Abs(got-105) > 1e-9 {
		t.Errorf("Oscillate(100, 2.5, 10, 0.05) = %v, want 105", got)
	}
	// Zero period returns base
	if got := Oscillate(100, 5, 0, 0.05); got != 100 {
		t.Errorf("Oscillate(100, 5, 0, 0.05) = %v, want 100", got)
	}
}

func TestNoise(t *testing.T) {
	n := NewNoise(42)

	// Float64 should be within [-variance, variance]
	for i := 0; i < 1000; i++ {
		v := n.Float64(0.1)
		if v < -0.1 || v > 0.1 {
			t.Fatalf("Float64(0.1) = %v, outside [-0.1, 0.1]", v)
		}
	}

	// Int should be within [0, max)
	n2 := NewNoise(42)
	for i := 0; i < 1000; i++ {
		v := n2.Int(10)
		if v < 0 || v >= 10 {
			t.Fatalf("Int(10) = %v, outside [0, 10)", v)
		}
	}

	// Int with zero max returns 0
	if got := n.Int(0); got != 0 {
		t.Errorf("Int(0) = %v, want 0", got)
	}
	if got := n.Int(-1); got != 0 {
		t.Errorf("Int(-1) = %v, want 0", got)
	}
}

func TestNoiseDeterministic(t *testing.T) {
	n1 := NewNoise(123)
	n2 := NewNoise(123)
	for i := 0; i < 100; i++ {
		a := n1.Float64(1.0)
		b := n2.Float64(1.0)
		if a != b {
			t.Fatalf("iteration %d: different values for same seed: %v != %v", i, a, b)
		}
	}
}
stats.go (Normal file, 63 lines)

@@ -0,0 +1,63 @@
package poindexter

import "math"

// Sum returns the sum of all values. Returns 0 for empty slices.
func Sum(data []float64) float64 {
	var s float64
	for _, v := range data {
		s += v
	}
	return s
}

// Mean returns the arithmetic mean. Returns 0 for empty slices.
func Mean(data []float64) float64 {
	if len(data) == 0 {
		return 0
	}
	return Sum(data) / float64(len(data))
}

// Variance returns the population variance. Returns 0 for empty slices.
func Variance(data []float64) float64 {
	if len(data) == 0 {
		return 0
	}
	m := Mean(data)
	var ss float64
	for _, v := range data {
		d := v - m
		ss += d * d
	}
	return ss / float64(len(data))
}

// StdDev returns the population standard deviation.
func StdDev(data []float64) float64 {
	return math.Sqrt(Variance(data))
}

// MinMax returns the minimum and maximum values.
// Returns (0, 0) for empty slices.
func MinMax(data []float64) (min, max float64) {
	if len(data) == 0 {
		return 0, 0
	}
	min, max = data[0], data[0]
	for _, v := range data[1:] {
		if v < min {
			min = v
		}
		if v > max {
			max = v
		}
	}
	return min, max
}

// IsUnderrepresented returns true if val is below threshold fraction of avg.
// For example, IsUnderrepresented(3, 10, 0.5) returns true because 3 < 10*0.5.
func IsUnderrepresented(val, avg, threshold float64) bool {
	return val < avg*threshold
}
stats_test.go (Normal file, 122 lines)

@@ -0,0 +1,122 @@
package poindexter

import (
	"math"
	"testing"
)

func TestSum(t *testing.T) {
	tests := []struct {
		name string
		data []float64
		want float64
	}{
		{"empty", nil, 0},
		{"single", []float64{5}, 5},
		{"multiple", []float64{1, 2, 3, 4, 5}, 15},
		{"negative", []float64{-1, -2, 3}, 0},
		{"floats", []float64{0.1, 0.2, 0.3}, 0.6},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := Sum(tt.data)
			if math.Abs(got-tt.want) > 1e-9 {
				t.Errorf("Sum(%v) = %v, want %v", tt.data, got, tt.want)
			}
		})
	}
}

func TestMean(t *testing.T) {
	tests := []struct {
		name string
		data []float64
		want float64
	}{
		{"empty", nil, 0},
		{"single", []float64{5}, 5},
		{"multiple", []float64{2, 4, 6}, 4},
		{"floats", []float64{1.5, 2.5}, 2},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := Mean(tt.data)
			if math.Abs(got-tt.want) > 1e-9 {
				t.Errorf("Mean(%v) = %v, want %v", tt.data, got, tt.want)
			}
		})
	}
}

func TestVariance(t *testing.T) {
	tests := []struct {
		name string
		data []float64
		want float64
	}{
		{"empty", nil, 0},
		{"constant", []float64{5, 5, 5}, 0},
		{"simple", []float64{2, 4, 4, 4, 5, 5, 7, 9}, 4},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := Variance(tt.data)
			if math.Abs(got-tt.want) > 1e-9 {
				t.Errorf("Variance(%v) = %v, want %v", tt.data, got, tt.want)
			}
		})
	}
}

func TestStdDev(t *testing.T) {
	got := StdDev([]float64{2, 4, 4, 4, 5, 5, 7, 9})
	if math.Abs(got-2) > 1e-9 {
		t.Errorf("StdDev = %v, want 2", got)
	}
}

func TestMinMax(t *testing.T) {
	tests := []struct {
		name    string
		data    []float64
		wantMin float64
		wantMax float64
	}{
		{"empty", nil, 0, 0},
		{"single", []float64{3}, 3, 3},
		{"ordered", []float64{1, 2, 3}, 1, 3},
		{"reversed", []float64{3, 2, 1}, 1, 3},
		{"negative", []float64{-5, 0, 5}, -5, 5},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			gotMin, gotMax := MinMax(tt.data)
			if gotMin != tt.wantMin || gotMax != tt.wantMax {
				t.Errorf("MinMax(%v) = (%v, %v), want (%v, %v)", tt.data, gotMin, gotMax, tt.wantMin, tt.wantMax)
			}
		})
	}
}

func TestIsUnderrepresented(t *testing.T) {
	tests := []struct {
		name      string
		val       float64
		avg       float64
		threshold float64
		want      bool
	}{
		{"below", 3, 10, 0.5, true},
		{"at", 5, 10, 0.5, false},
		{"above", 7, 10, 0.5, false},
		{"zero_avg", 0, 0, 0.5, false},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := IsUnderrepresented(tt.val, tt.avg, tt.threshold)
			if got != tt.want {
				t.Errorf("IsUnderrepresented(%v, %v, %v) = %v, want %v", tt.val, tt.avg, tt.threshold, got, tt.want)
			}
		})
	}
}