Compare commits

..

8 commits

Author SHA1 Message Date
Snider
520d0f5728 fix: tidy deps after dappco.re migration
Some checks failed
Security Scan / security (push) Has been cancelled
Test / test (push) Has been cancelled
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 16:25:15 +01:00
Snider
c823c46bb2 fix: migrate module paths from forge.lthn.ai to dappco.re
Some checks are pending
Security Scan / security (push) Waiting to run
Test / test (push) Waiting to run
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 16:21:13 +01:00
Virgil
56bd30d3d2 fix(node): add load-or-create identity helper and TTL-aware deduplication
Some checks failed
Security Scan / security (push) Has been cancelled
Test / test (push) Has been cancelled
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-03 05:10:11 +00:00
Virgil
3eeaf90d38 fix(node): enforce private key file permissions
All checks were successful
Security Scan / security (push) Successful in 9s
Test / test (push) Successful in 1m24s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-01 09:52:42 +00:00
Virgil
d5a962996b fix(peer): allow empty peer names
All checks were successful
Security Scan / security (push) Successful in 10s
Test / test (push) Successful in 1m23s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-01 06:35:43 +00:00
Virgil
572970d255 fix(peer): reject empty peer names
Some checks failed
Security Scan / security (push) Successful in 12s
Test / test (push) Failing after 46s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-01 06:27:20 +00:00
Virgil
ee623a7343 feat(node): persist peer allowlist
Some checks failed
Security Scan / security (push) Successful in 9s
Test / test (push) Failing after 55s
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-01 06:03:11 +00:00
Virgil
8d1caa3a59 fix(node): support timestamped remote log queries
All checks were successful
Security Scan / security (push) Successful in 8s
Test / test (push) Successful in 3m38s
Expose a Since-aware remote log helper on the controller and plumb the filter through the worker's miner log lookup so the payload field is honoured end-to-end.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-01 05:39:56 +00:00
60 changed files with 3144 additions and 4025 deletions

@@ -4,7 +4,7 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
## Project
`go-p2p` is the P2P networking layer for the Lethean network. Module path: `dappco.re/go/core/p2p`
`go-p2p` is the P2P networking layer for the Lethean network. Module path: `forge.lthn.ai/core/go-p2p`
## Prerequisites
@@ -40,7 +40,7 @@ logging/ — Structured levelled logger with component scoping (stdlib only)
### Data flow
1. **Identity** (`identity.go`) — X25519 keypair via Borg STMF. Shared secrets derived via X25519 ECDH + SHA-256.
1. **Identity** (`identity.go`) — Ed25519 keypair via Borg STMF. Shared secrets derived via X25519 ECDH + SHA-256.
2. **Transport** (`transport.go`) — WebSocket server/client (gorilla/websocket). Handshake exchanges `NodeIdentity` + HMAC-SHA256 challenge-response. Post-handshake messages are Borg SMSG-encrypted. Includes deduplication (5-min TTL), rate limiting (token bucket: 100 burst/50 per sec), and MaxConns enforcement.
3. **Dispatcher** (`dispatcher.go`) — Routes verified UEPS packets to intent handlers. Threat circuit breaker drops packets with `ThreatScore > 50,000` before routing.
4. **Controller** (`controller.go`) — Issues requests to remote peers using a pending-map pattern (`map[string]chan *Message`). Auto-connects to peers on demand.
@@ -75,13 +75,13 @@ type ProfileManager interface {
- UK English (colour, organisation, centre, behaviour, recognise)
- All parameters and return types explicitly annotated
- Tests use `testify` assert/require; prefer table-driven subtests with `t.Run()` when multiple related cases share one shape
- Tests use `testify` assert/require; table-driven subtests with `t.Run()`
- Test name suffixes: `_Good` (happy path), `_Bad` (expected errors), `_Ugly` (panic/edge cases)
- Licence: EUPL-1.2 — new files need `// SPDX-License-Identifier: EUPL-1.2`
- Security-first: do not weaken HMAC, challenge-response, Zip Slip defence, or rate limiting
- Use `logging` package only — no `fmt.Println` or `log.Printf` in library code
- Error handling: use `core.E()` from `dappco.re/go/core` — never `fmt.Errorf` or `errors.New` in library code
- File I/O: use `dappco.re/go/core` filesystem helpers (package-level adapters in `node/` backed by `core.Fs`) — never `os.ReadFile`/`os.WriteFile` in library code (exception: `os.OpenFile` for streaming writes where filesystem helpers cannot preserve tar header mode bits)
- Error handling: use `coreerr.E()` from `go-log` — never `fmt.Errorf` or `errors.New` in library code
- File I/O: use `coreio.Local` from `go-io` — never `os.ReadFile`/`os.WriteFile` in library code (exception: `os.OpenFile` for streaming writes where `coreio` lacks support)
- Hot-path debug logging uses sampling pattern: `if counter.Add(1)%interval == 0`
### Transport test helper

@@ -1,11 +0,0 @@
<!-- SPDX-License-Identifier: EUPL-1.2 -->
# CODEX.md
Codex-compatible entrypoint for this repository.
- Treat `CLAUDE.md` as the authoritative local conventions file for commands, architecture notes, coding standards, and commit format.
- Current module path: `dappco.re/go/core/p2p`.
- Verification baseline: `go build ./...`, `go vet ./...`, and `go test ./...`.
- Use conventional commits with `Co-Authored-By: Virgil <virgil@lethean.io>`.
- If `.core/reference/docs/RFC.md` is absent in the checkout, report that gap explicitly and use the local docs under `docs/` plus the code as the available reference set.

@@ -1,52 +1,30 @@
[![Go Reference](https://pkg.go.dev/badge/dappco.re/go/core/p2p.svg)](https://pkg.go.dev/dappco.re/go/core/p2p)
[![License: EUPL-1.2](https://img.shields.io/badge/License-EUPL--1.2-blue.svg)](CONTRIBUTING.md#license)
[![Go Reference](https://pkg.go.dev/badge/forge.lthn.ai/core/go-p2p.svg)](https://pkg.go.dev/forge.lthn.ai/core/go-p2p)
[![License: EUPL-1.2](https://img.shields.io/badge/License-EUPL--1.2-blue.svg)](LICENSE.md)
[![Go Version](https://img.shields.io/badge/Go-1.26-00ADD8?style=flat&logo=go)](go.mod)
# go-p2p
P2P mesh networking layer for the Lethean network. Provides X25519 node identity, an encrypted WebSocket transport with HMAC-SHA256 challenge-response handshake, KD-tree peer selection across four dimensions (latency, hops, geography, reliability score), UEPS wire protocol (RFC-021) TLV packet builder and reader, UEPS intent routing with a threat circuit breaker, and TIM deployment bundle encryption with Zip Slip and decompression-bomb defences.
P2P mesh networking layer for the Lethean network. Provides Ed25519 node identity, an encrypted WebSocket transport with HMAC-SHA256 challenge-response handshake, KD-tree peer selection across four dimensions (latency, hops, geography, reliability score), UEPS wire protocol (RFC-021) TLV packet builder and reader, UEPS intent routing with a threat circuit breaker, and TIM deployment bundle encryption with Zip Slip and decompression-bomb defences.
**Module**: `dappco.re/go/core/p2p`
**Module**: `forge.lthn.ai/core/go-p2p`
**Licence**: EUPL-1.2
**Language**: Go 1.26
**Language**: Go 1.25
## Quick Start
```go
import (
"log"
"dappco.re/go/core/p2p/node"
"dappco.re/go/core/p2p/ueps"
"forge.lthn.ai/core/go-p2p/node"
"forge.lthn.ai/core/go-p2p/ueps"
)
nm, err := node.NewNodeManager()
if err != nil {
log.Fatal(err)
}
if !nm.HasIdentity() {
if err := nm.GenerateIdentity("worker-1", node.RoleWorker); err != nil {
log.Fatal(err)
}
}
// Start a P2P node
identity, _ := node.LoadOrCreateIdentity()
transport := node.NewTransport(identity, node.TransportConfig{ListenAddr: ":9091"})
transport.Start(ctx)
registry, err := node.NewPeerRegistry()
if err != nil {
log.Fatal(err)
}
transport := node.NewTransport(nm, registry, node.DefaultTransportConfig())
if err := transport.Start(); err != nil {
log.Fatal(err)
}
payload := []byte(`{"job":"hashrate"}`)
sharedSecret := make([]byte, 32)
pkt, err := ueps.NewBuilder(node.IntentCompute, payload).MarshalAndSign(sharedSecret)
if err != nil {
log.Fatal(err)
}
_ = pkt
// Build a UEPS packet
pkt, _ := ueps.NewBuilder(ueps.IntentCompute, payload).MarshalAndSign(sharedSecret)
```
## Documentation
@@ -66,4 +44,4 @@ go build ./...
## Licence
European Union Public Licence 1.2 — see [CONTRIBUTING](CONTRIBUTING.md#license) for details.
European Union Public Licence 1.2 — see [LICENCE](LICENCE) for details.

@@ -1,57 +1,129 @@
# Session Brief: core/go-p2p
**Repo**: `forge.lthn.ai/core/go-p2p`
**Module**: `dappco.re/go/core/p2p`
**Status**: `go build ./...`, `go vet ./...`, and `go test ./...` pass on 2026-03-27.
**Primary references**: `CLAUDE.md`, `docs/architecture.md`, `docs/development.md`
**Repo**: `forge.lthn.ai/core/go-p2p` (clone at `/tmp/core-go-p2p`)
**Module**: `forge.lthn.ai/core/go-p2p`
**Status**: 16 Go files, ~2,500 LOC, node tests PASS (42% coverage), ueps has NO TESTS
**Wiki**: https://forge.lthn.ai/core/go-p2p/wiki (6 pages)
## What This Is
P2P networking layer for the Lethean network. The repository currently consists of four Go packages:
P2P networking layer for the Lethean network. Three packages:
- `node/` — P2P mesh: identity, transport, peer registry, messages, protocol helpers, worker/controller logic, dispatcher, and deployment bundles
- `node/levin/` — standalone CryptoNote Levin binary protocol support
- `ueps/` — UEPS TLV wire protocol with HMAC-SHA256 integrity verification
- `logging/` — structured levelled logger with component scoping
### node/ — P2P Mesh (14 files)
- **Identity**: Ed25519 keypair generation, PEM serialisation, challenge-response auth
- **Transport**: Encrypted WebSocket connections via gorilla/websocket + Borg (encrypted blob storage)
- **Peers**: Registry with scoring, persistence, auth modes (open/allowlist), name validation
- **Messages**: Typed protocol messages (handshake, ping, stats, miner control, deploy, logs)
- **Protocol**: Response handler with validation and typed parsing
- **Worker**: Command handler (ping, stats, miner start/stop, deploy profiles, get logs)
- **Dispatcher**: UEPS packet routing skeleton with threat circuit breaker
- **Controller**: Remote node operations (connect, command, disconnect)
- **Bundle**: Service factory for Core framework DI registration
### ueps/ — Wire Protocol (2 files, NO TESTS)
- **PacketBuilder**: Constructs signed UEPS frames with TLV encoding
- **ReadAndVerify**: Parses and verifies HMAC-SHA256 integrity
- TLV tags: 0x01-0x05 (header fields), 0x06 (HMAC), 0xFF (payload marker)
- Header: Version, CurrentLayer, TargetLayer, IntentID, ThreatScore
### logging/ — Structured Logger (1 file)
- Simple levelled logger (INFO/WARN/ERROR/DEBUG) with key-value pairs
## Current State
| Area | Status |
|------|--------|
| Build | PASS |
| Vet | PASS |
| Tests | PASS |
| `logging/` | Has direct unit coverage |
| `ueps/` | Has round-trip, malformed packet, and coverage-path tests |
| `node/transport` | Has real WebSocket handshake and integration tests |
| `node/controller` | Has request/response, auto-connect, ping, and miner-control tests |
| `node/dispatcher` | Has routing, threshold, and concurrency tests |
| `node/levin` | Has protocol encode/decode coverage |
## Key Behaviours
- **Identity** — X25519 keypair generation via Borg STMF, persisted through XDG paths
- **Transport** — WebSocket mesh with challenge-response authentication, SMSG encryption, deduplication, rate limiting, and keepalive handling
- **Peer registry** — KD-tree selection across latency, hops, geography, and reliability score
- **Controller/worker** — request/response messaging for stats, miner control, logs, and deployment
- **Dispatcher** — UEPS intent routing with a threat circuit breaker at `ThreatScore > 50000`
- **Bundles** — TIM-based profile and miner bundle handling with defensive tar extraction
| node/ tests | PASS — 42% statement coverage |
| ueps/ tests | NONE — zero test files |
| logging/ tests | NONE |
| go vet | Clean |
| TODOs/FIXMEs | None found |
| Identity (Ed25519) | Well tested — keypair, challenge-response, deterministic sigs |
| PeerRegistry | Well tested — add/remove, scoring, persistence, auth modes, name validation |
| Messages | Well tested — all 15 message types, serialisation, error codes |
| Worker | Well tested — ping, stats, miner, deploy, logs handlers |
| Transport | NOT tested — WebSocket + Borg encryption |
| Controller | NOT tested — remote node operations |
| Dispatcher | NOT tested — UEPS routing skeleton |
## Dependencies
- `dappco.re/go/core` v0.8.0-alpha.1
- `forge.lthn.ai/Snider/Borg` v0.3.1
- `forge.lthn.ai/Snider/Poindexter` v0.0.3
- `github.com/adrg/xdg` v0.5.3
- `github.com/google/uuid` v1.6.0
- `github.com/Snider/Borg` v0.2.0 (encrypted blob storage)
- `github.com/Snider/Enchantrix` v0.0.2 (secure environment)
- `github.com/Snider/Poindexter` (secure pointer)
- `github.com/gorilla/websocket` v1.5.3
- `github.com/google/uuid` v1.6.0
- `github.com/ProtonMail/go-crypto` v1.3.0
- `github.com/adrg/xdg` v0.5.3
- `github.com/stretchr/testify` v1.11.1
- `golang.org/x/crypto` v0.45.0
## Priority Work
### High (coverage gaps)
1. **UEPS tests** — Zero tests for the wire protocol. This is the consent-gated TLV protocol from RFC-021. Need: builder round-trip, HMAC verification, malformed packet rejection, boundary conditions (max ThreatScore, empty payload, oversized payload).
2. **Transport tests** — WebSocket connection, Borg encryption handshake, reconnection logic.
3. **Controller tests** — Connect/command/disconnect flow.
4. **Dispatcher tests** — UEPS routing, threat circuit breaker (ThreatScore > 50000 drops).
### Medium (hardening)
5. **Increase node/ coverage** from 42% to 70%+ — focus on transport.go, controller.go, dispatcher.go
6. **Benchmarks** — Peer scoring, UEPS marshal/unmarshal, identity key generation
7. **Integration test** — Full node-to-node handshake over localhost WebSocket
### Low (completeness)
8. **Logging tests** — Simple but should have coverage
9. **Peer discovery** — Currently manual. Add mDNS or DHT discovery
10. **Connection pooling** — Transport creates fresh connections; add pool for controller
## File Map
```
/tmp/core-go-p2p/
├── node/
│ ├── bundle.go + bundle_test.go — Core DI factory
│ ├── identity.go + identity_test.go — Ed25519 keypair, PEM, challenge-response
│ ├── message.go + message_test.go — Protocol message types
│ ├── peer.go + peer_test.go — Registry, scoring, auth
│ ├── protocol.go + protocol_test.go — Response validation, typed parsing
│ ├── worker.go + worker_test.go — Command handlers
│ ├── transport.go (NO TEST) — WebSocket + Borg encryption
│ ├── controller.go (NO TEST) — Remote node operations
│ ├── dispatcher.go (NO TEST) — UEPS routing skeleton
│ └── logging.go — Package-level logger setup
├── ueps/
│ ├── ueps.go (NO TEST) — PacketBuilder, ReadAndVerify, TLV
│ └── types.go (NO TEST) — UEPSHeader, ParsedPacket, intent IDs
├── logging/
│ └── logger.go (NO TEST) — Levelled structured logger
├── go.mod
└── go.sum
```
## Key Interfaces
```go
// node/message.go — 15 message types
const (
MsgHandshake MsgHandshakeAck MsgPing MsgPong
MsgDisconnect MsgGetStats MsgStats MsgStartMiner
MsgStopMiner MsgMinerAck MsgDeploy MsgDeployAck
MsgGetLogs MsgLogs MsgError
)
// ueps/types.go — UEPS header
type UEPSHeader struct {
Version uint8 // 0x09
CurrentLayer uint8
TargetLayer uint8
IntentID uint8 // 0x01=Handshake, 0x20=Compute, 0x30=Rehab, 0xFF=Extended
ThreatScore uint16
}
```
## Conventions
- UK English in comments, logs, and docs
- `core.E()` for library error wrapping and sentinel definitions
- `core.Fs` adapters for library file I/O in `node/`
- `testify` in tests; prefer `t.Run()` tables for related cases
- EUPL-1.2 SPDX identifiers on new files
- Conventional commits with `Co-Authored-By: Virgil <virgil@lethean.io>`
- UK English
- Tests: testify assert/require
- Licence: EUPL-1.2
- Lethean codenames: Borg (Secure/Blob), Poindexter (Secure/Pointer), Enchantrix (Secure/Environment)

@@ -1,440 +0,0 @@
# RFC-025: Agent Experience (AX) Design Principles
- **Status:** Draft
- **Authors:** Snider, Cladius
- **Date:** 2026-03-19
- **Applies to:** All Core ecosystem packages (CoreGO, CorePHP, CoreTS, core-agent)
## Abstract
Agent Experience (AX) is a design paradigm for software systems where the primary code consumer is an AI agent, not a human developer. AX sits alongside User Experience (UX) and Developer Experience (DX) as the third era of interface design.
This RFC establishes AX as a formal design principle for the Core ecosystem and defines the conventions that follow from it.
## Motivation
As of early 2026, AI agents write, review, and maintain the majority of code in the Core ecosystem. The original author has not manually edited code (outside of Core struct design) since October 2025. Code is processed semantically — agents reason about intent, not characters.
Design patterns inherited from the human-developer era optimise for the wrong consumer:
- **Short names** save keystrokes but increase semantic ambiguity
- **Functional option chains** are fluent for humans but opaque for agents tracing configuration
- **Error-at-every-call-site** produces 50% boilerplate that obscures intent
- **Generic type parameters** force agents to carry type context that the runtime already has
- **Panic-hiding conventions** (`Must*`) create implicit control flow that agents must special-case
AX acknowledges this shift and provides principles for designing code, APIs, file structures, and conventions that serve AI agents as first-class consumers.
## The Three Eras
| Era | Primary Consumer | Optimises For | Key Metric |
|-----|-----------------|---------------|------------|
| UX | End users | Discoverability, forgiveness, visual clarity | Task completion time |
| DX | Developers | Typing speed, IDE support, convention familiarity | Time to first commit |
| AX | AI agents | Predictability, composability, semantic navigation | Correct-on-first-pass rate |
AX does not replace UX or DX. End users still need good UX. Developers still need good DX. But when the primary code author and maintainer is an AI agent, the codebase should be designed for that consumer first.
## Principles
### 1. Predictable Names Over Short Names
Names are tokens that agents pattern-match across languages and contexts. Abbreviations introduce mapping overhead.
```
Config not Cfg
Service not Srv
Embed not Emb
Error not Err (as a subsystem name; err for local variables is fine)
Options not Opts
```
**Rule:** If a name would require a comment to explain, it is too short.
**Exception:** Industry-standard abbreviations that are universally understood (`HTTP`, `URL`, `ID`, `IPC`, `I18n`) are acceptable. The test: would an agent trained on any mainstream language recognise it without context?
### 2. Comments as Usage Examples
The function signature tells WHAT. The comment shows HOW with real values.
```go
// Detect the project type from files present
setup.Detect("/path/to/project")
// Set up a workspace with auto-detected template
setup.Run(setup.Options{Path: ".", Template: "auto"})
// Scaffold a PHP module workspace
setup.Run(setup.Options{Path: "./my-module", Template: "php"})
```
**Rule:** If a comment restates what the type signature already says, delete it. If a comment shows a concrete usage with realistic values, keep it.
**Rationale:** Agents learn from examples more effectively than from descriptions. A comment like "Run executes the setup process" adds zero information. A comment like `setup.Run(setup.Options{Path: ".", Template: "auto"})` teaches an agent exactly how to call the function.
### 3. Path Is Documentation
File and directory paths should be self-describing. An agent navigating the filesystem should understand what it is looking at without reading a README.
```
flow/deploy/to/homelab.yaml — deploy TO the homelab
flow/deploy/from/github.yaml — deploy FROM GitHub
flow/code/review.yaml — code review flow
template/file/go/struct.go.tmpl — Go struct file template
template/dir/workspace/php/ — PHP workspace scaffold
```
**Rule:** If an agent needs to read a file to understand what a directory contains, the directory naming has failed.
**Corollary:** The unified path convention (folder structure = HTTP route = CLI command = test path) is AX-native. One path, every surface.
### 4. Templates Over Freeform
When an agent generates code from a template, the output is constrained to known-good shapes. When an agent writes freeform, the output varies.
```go
// Template-driven — consistent output
lib.RenderFile("php/action", data)
lib.ExtractDir("php", targetDir, data)
// Freeform — variance in output
"write a PHP action class that..."
```
**Rule:** For any code pattern that recurs, provide a template. Templates are guardrails for agents.
**Scope:** Templates apply to file generation, workspace scaffolding, config generation, and commit messages. They do NOT apply to novel logic — agents should write business logic freeform with the domain knowledge available.
### 5. Declarative Over Imperative
Agents reason better about declarations of intent than sequences of operations.
```yaml
# Declarative — agent sees what should happen
steps:
- name: build
flow: tools/docker-build
with:
context: "{{ .app_dir }}"
image_name: "{{ .image_name }}"
- name: deploy
flow: deploy/with/docker
with:
host: "{{ .host }}"
```
```go
// Imperative — agent must trace execution
cmd := exec.Command("docker", "build", "--platform", "linux/amd64", "-t", imageName, ".")
cmd.Dir = appDir
if err := cmd.Run(); err != nil {
return fmt.Errorf("docker build: %w", err)
}
```
**Rule:** Orchestration, configuration, and pipeline logic should be declarative (YAML/JSON). Implementation logic should be imperative (Go/PHP/TS). The boundary is: if an agent needs to compose or modify the logic, make it declarative.
### 6. Universal Types (Core Primitives)
Every component in the ecosystem accepts and returns the same primitive types. An agent processing any level of the tree sees identical shapes.
```go
// Universal contract
setup.Run(core.Options{Path: ".", Template: "auto"})
brain.New(core.Options{Name: "openbrain"})
deploy.Run(core.Options{Flow: "deploy/to/homelab"})
// Fractal — Core itself is a Service
core.New(core.Options{
Services: []core.Service{
process.New(core.Options{Name: "process"}),
brain.New(core.Options{Name: "brain"}),
},
})
```
**Core primitive types:**
| Type | Purpose |
|------|---------|
| `core.Options` | Input configuration (what you want) |
| `core.Config` | Runtime settings (what is active) |
| `core.Data` | Embedded or stored content |
| `core.Service` | A managed component with lifecycle |
| `core.Result[T]` | Return value with OK/fail state |
**What this replaces:**
| Go Convention | Core AX | Why |
|--------------|---------|-----|
| `func With*(v) Option` | `core.Options{Field: v}` | Struct literal is parseable; option chain requires tracing |
| `func Must*(v) T` | `core.Result[T]` | No hidden panics; errors flow through Core |
| `func *For[T](c) T` | `c.Service("name")` | String lookup is greppable; generics require type context |
| `val, err :=` everywhere | Single return via `core.Result` | Intent not obscured by error handling |
| `_ = err` | Never needed | Core handles all errors internally |
### 7. Directory as Semantics
The directory structure tells an agent the intent before it reads a word. Top-level directories are semantic categories, not organisational bins.
```
plans/
├── code/ # Pure primitives — read for WHAT exists
├── project/ # Products — read for WHAT we're building and WHY
└── rfc/ # Contracts — read for constraints and rules
```
**Rule:** An agent should know what kind of document it's reading from the path alone. `code/core/go/io/RFC.md` = a lib primitive spec. `project/ofm/RFC.md` = a product spec that cross-references code/. `rfc/snider/borg/RFC-BORG-006-SMSG-FORMAT.md` = an immutable contract for the Borg SMSG protocol.
**Corollary:** The three-way split (code/project/rfc) extends principle 3 (Path Is Documentation) from files to entire subtrees. The path IS the metadata.
### 8. Lib Never Imports Consumer
Dependency flows one direction. Libraries define primitives. Consumers compose from them. A new feature in a consumer can never break a library.
```
code/core/go/* → lib tier (stable foundation)
code/core/agent/ → consumer tier (composes from go/*)
code/core/cli/ → consumer tier (composes from go/*)
code/core/gui/ → consumer tier (composes from go/*)
```
**Rule:** If package A is in `go/` and package B is in the consumer tier, B may import A but A must never import B. The repo naming convention enforces this: `go-{name}` = lib, bare `{name}` = consumer.
**Why this matters for agents:** When an agent is dispatched to implement a feature in `core/agent`, it can freely import from `go-io`, `go-scm`, `go-process`. But if an agent is dispatched to `go-io`, it knows its changes are foundational — every consumer depends on it, so the contract must not break.
### 9. Issues Are N+(rounds) Deep
Problems in code and specs are layered. Surface issues mask deeper issues. Fixing the surface reveals the next layer. This is not a failure mode — it is the discovery process.
```
Pass 1: Find 16 issues (surface — naming, imports, obvious errors)
Pass 2: Find 11 issues (structural — contradictions, missing types)
Pass 3: Find 5 issues (architectural — signature mismatches, registration gaps)
Pass 4: Find 4 issues (contract — cross-spec API mismatches)
Pass 5: Find 2 issues (mechanical — path format, nil safety)
Pass N: Findings are trivial → spec/code is complete
```
**Rule:** Iteration is required, not a failure. Each pass sees what the previous pass could not, because the context changed. An agent dispatched with the same task on the same repo will find different things each time — this is correct behaviour.
**Corollary:** The cheapest model should do the most passes (surface work). The frontier model should arrive last, when only deep issues remain. Tiered iteration: grunt model grinds → mid model pre-warms → frontier model polishes.
**Anti-pattern:** One-shot generation expecting valid output. No model, no human, produces correct-on-first-pass for non-trivial work. Expecting it wastes the first pass on surface issues that a cheaper pass would have caught.
### 10. CLI Tests as Artifact Validation
Unit tests verify the code. CLI tests verify the binary. The directory structure IS the command structure — path maps to command, Taskfile runs the test.
```
tests/cli/
├── core/
│ └── lint/
│ ├── Taskfile.yaml ← test `core-lint` (root)
│ ├── run/
│ │ ├── Taskfile.yaml ← test `core-lint run`
│ │ └── fixtures/
│ ├── go/
│ │ ├── Taskfile.yaml ← test `core-lint go`
│ │ └── fixtures/
│ └── security/
│ ├── Taskfile.yaml ← test `core-lint security`
│ └── fixtures/
```
**Rule:** Every CLI command has a matching `tests/cli/{path}/Taskfile.yaml`. The Taskfile runs the compiled binary against fixtures with known inputs and validates the output. If the CLI test passes, the underlying actions work — because CLI commands call actions, MCP tools call actions, API endpoints call actions. Test the CLI, trust the rest.
**Pattern:**
```yaml
# tests/cli/core/lint/go/Taskfile.yaml
version: '3'
tasks:
test:
cmds:
- core-lint go --output json fixtures/ > /tmp/result.json
- jq -e '.findings | length > 0' /tmp/result.json
- jq -e '.summary.passed == false' /tmp/result.json
```
**Why this matters for agents:** An agent can validate its own work by running `task test` in the matching `tests/cli/` directory. No test framework, no mocking, no setup — just the binary, fixtures, and `jq` assertions. The agent builds the binary, runs the test, sees the result. If it fails, the agent can read the fixture, read the output, and fix the code.
**Corollary:** Fixtures are planted bugs. Each fixture file has a known issue that the linter must find. If the linter doesn't find it, the test fails. Fixtures are the spec for what the tool must detect — they ARE the test cases, not descriptions of test cases.
## Applying AX to Existing Patterns
### File Structure
```
# AX-native: path describes content
core/agent/
├── go/ # Go source
├── php/ # PHP source
├── ui/ # Frontend source
├── claude/ # Claude Code plugin
└── codex/ # Codex plugin
# Not AX: generic names requiring README
src/
├── lib/
├── utils/
└── helpers/
```
### Error Handling
```go
// AX-native: errors are infrastructure, not application logic
svc := c.Service("brain")
cfg := c.Config().Get("database.host")
// Errors logged by Core. Code reads like a spec.
// Not AX: errors dominate the code
svc, err := c.ServiceFor[brain.Service]()
if err != nil {
return fmt.Errorf("get brain service: %w", err)
}
cfg, err := c.Config().Get("database.host")
if err != nil {
_ = err // silenced because "it'll be fine"
}
```
### API Design
```go
// AX-native: one shape, every surface
core.New(core.Options{
Name: "my-app",
Services: []core.Service{...},
Config: core.Config{...},
})
// Not AX: multiple patterns for the same thing
core.New(
core.WithName("my-app"),
core.WithService(factory1),
core.WithService(factory2),
core.WithConfig(cfg),
)
```
## The Plans Convention — AX Development Lifecycle
The `plans/` directory structure encodes a development methodology designed for how generative AI actually works: iterative refinement across structured phases, not one-shot generation.
### The Three-Way Split
```
plans/
├── project/ # 1. WHAT and WHY — start here
├── rfc/ # 2. CONSTRAINTS — immutable contracts
└── code/ # 3. HOW — implementation specs
```
Each directory is a phase. Work flows from project → rfc → code. Each transition forces a refinement pass — you cannot write a code spec without discovering gaps in the project spec, and you cannot write an RFC without discovering assumptions in both.
**Three places for data that can't be written simultaneously = three guaranteed iterations of "actually, this needs changing."** Refinement is baked into the structure, not bolted on as a review step.
### Phase 1: Project (Vision)
Start with `project/`. No code exists yet. Define:
- What the product IS and who it serves
- What existing primitives it consumes (cross-ref to `code/`)
- What constraints it operates under (cross-ref to `rfc/`)
This is where creativity lives. Map features to building blocks. Connect systems. The project spec is integrative — it references everything else.
### Phase 2: RFC (Contracts)
Extract the immutable rules into `rfc/`. These are constraints that don't change with implementation:
- Wire formats, protocols, hash algorithms
- Security properties that must hold
- Compatibility guarantees
RFCs are numbered per component (`RFC-BORG-006-SMSG-FORMAT.md`) and never modified after acceptance. If the contract changes, write a new RFC.
### Phase 3: Code (Implementation Specs)
Define the implementation in `code/`. Each component gets an RFC.md that an agent can implement from:
- Struct definitions (the DTOs — see principle 6)
- Method signatures and behaviour
- Error conditions and edge cases
- Cross-references to other code/ specs
The code spec IS the product. Write the spec → dispatch to an agent → review output → iterate.
### Pre-Launch: Alignment Protocol
Before dispatching for implementation, verify spec-model alignment:
```
1. REVIEW — The implementation model (Codex/Jules) reads the spec
and reports missing elements. This surfaces the delta between
the model's training and the spec's assumptions.
"I need X, Y, Z to implement this" is the model saying
"I hear you but I'm missing context" — without asking.
2. ADJUST — Update the spec to close the gaps. Add examples,
clarify ambiguities, provide the context the model needs.
This is shared alignment, not compromise.
3. VERIFY — A different model (or sub-agent) reviews the adjusted
spec without the planner's bias. Fresh eyes on the contract.
"Does this make sense to someone who wasn't in the room?"
4. READY — When the review findings are trivial or deployment-
related (not architectural), the spec is ready to dispatch.
```
### Implementation: Iterative Dispatch
Same prompt, multiple runs. Each pass sees deeper because the context evolved:
```
Round 1: Build features (the obvious gaps)
Round 2: Write tests (verify what was built)
Round 3: Harden security (what can go wrong?)
Round 4: Next RFC section (what's still missing?)
Round N: Findings are trivial → implementation is complete
```
Re-running is not failure. It is the process. Each pass changes the codebase, which changes what the next pass can see. The iteration IS the refinement.
### Post-Implementation: Auto-Documentation
The QA/verify chain produces artefacts that feed forward:
- Test results document the contract (what works, what doesn't)
- Coverage reports surface untested paths
- Diff summaries prep the changelog for the next release
- Doc site updates from the spec (the spec IS the documentation)
The output of one cycle is the input to the next. The plans repo stays current because the specs drive the code, not the other way round.
## Compatibility
AX conventions are valid, idiomatic Go/PHP/TS. They do not require language extensions, code generation, or non-standard tooling. An AX-designed codebase compiles, tests, and deploys with standard toolchains.
The conventions diverge from community patterns (functional options, Must/For, etc.) but do not violate language specifications. This is a style choice, not a fork.
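The divergence can be sketched with a minimal, hypothetical example: where community Go leans on functional options, AX prefers a plain config struct passed whole (the DTO pattern referenced in principle 6). All names here are invented for illustration, not taken from a Core package.

```go
package main

import "fmt"

// Hypothetical illustration of the AX preference for plain DTO structs
// over the functional-options pattern. Field and type names are invented
// for this sketch.
type ServerConfig struct {
	ListenAddr string
	MaxConns   int
}

// NewServer takes the whole config struct: one call site, one shape,
// no option closures for an agent (or reader) to chase.
func NewServer(cfg ServerConfig) string {
	return fmt.Sprintf("server on %s (max %d)", cfg.ListenAddr, cfg.MaxConns)
}

func main() {
	fmt.Println(NewServer(ServerConfig{ListenAddr: ":9091", MaxConns: 100}))
}
```

The struct is valid, idiomatic Go; nothing about it requires generation or non-standard tooling, which is the point of the compatibility claim above.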
## Adoption
AX applies to all new code in the Core ecosystem. Existing code migrates incrementally as it is touched — no big-bang rewrite.
Priority order:
1. **Public APIs** (package-level functions, struct constructors)
2. **File structure** (path naming, template locations)
3. **Internal fields** (struct field names, local variables)
## References
- dAppServer unified path convention (2024)
- CoreGO DTO pattern refactor (2026-03-18)
- Core primitives design (2026-03-19)
- Go Proverbs, Rob Pike (2015) — AX provides an updated lens
## Changelog
- 2026-03-19: Initial draft


@ -1,6 +1,6 @@
# Architecture — go-p2p
`go-p2p` is the P2P networking layer for the Lethean network. Module path: `dappco.re/go/core/p2p`.
`go-p2p` is the P2P networking layer for the Lethean network. Module path: `forge.lthn.ai/core/go-p2p`.
## Package Structure
@ -17,7 +17,7 @@ go-p2p/
### identity.go — Node Identity
Each node holds an X25519 keypair generated via Borg STMF. The private key is stored at `~/.local/share/lethean-desktop/node/private.key` (mode 0600) and the public identity JSON at `~/.config/lethean-desktop/node.json`.
Each node holds an Ed25519 keypair generated via Borg STMF (X25519 curve). The private key is stored at `~/.local/share/lethean-desktop/node/private.key` (mode 0600) and the public identity JSON at `~/.config/lethean-desktop/node.json`.
`NodeIdentity` carries:
- `ID` — 32-character hex string derived from SHA-256 of the public key (first 16 bytes)
@ -36,9 +36,9 @@ The `Transport` manages a WebSocket server (gorilla/websocket) and outbound conn
| Field | Default | Purpose |
|-------|---------|---------|
| `ListenAddress` | `:9091` | HTTP bind address |
| `WebSocketPath` | `/ws` | WebSocket endpoint |
| `MaxConnections` | 100 | Maximum concurrent connections |
| `ListenAddr` | `:9091` | HTTP bind address |
| `WSPath` | `/ws` | WebSocket endpoint |
| `MaxConns` | 100 | Maximum concurrent connections |
| `MaxMessageSize` | 1 MB | Read limit per message |
| `PingInterval` | 30 s | Keepalive ping period |
| `PongTimeout` | 10 s | Maximum time to wait for pong |
@ -56,11 +56,11 @@ The `Transport` manages a WebSocket server (gorilla/websocket) and outbound conn
**Rate limiting**: Each `PeerConnection` holds a `PeerRateLimiter` (token bucket: 100 burst, 50 tokens/second refill). Messages from rate-limited peers are dropped in the read loop.
**MaxConnections enforcement**: The handler tracks `pendingHandshakeCount` (atomic counter) during the handshake phase in addition to established connections, preventing races where a surge of simultaneous inbounds could exceed the limit.
**MaxConns enforcement**: The handler tracks `pendingConns` (atomic counter) during the handshake phase in addition to established connections, preventing races where a surge of simultaneous inbounds could exceed the limit.
**Keepalive**: A goroutine per connection ticks at `PingInterval`. If `LastActivity` has not been updated within `PingInterval + PongTimeout`, the connection is removed.
**Graceful close**: `GracefulClose` sends `MsgDisconnect` before closing the underlying WebSocket. Write deadlines are managed exclusively inside `Send()` under `writeMutex` to prevent the race (P2P-RACE-1) where a bare `SetWriteDeadline` call could race with concurrent sends.
**Graceful close**: `GracefulClose` sends `MsgDisconnect` before closing the underlying WebSocket. Write deadlines are managed exclusively inside `Send()` under `writeMu` to prevent the race (P2P-RACE-1) where a bare `SetWriteDeadline` call could race with concurrent sends.
**Buffer pool**: `MarshalJSON` uses a `sync.Pool` of `bytes.Buffer` (initial capacity 1 KB, maximum pooled size 64 KB) to reduce allocation pressure in the message serialisation hot path. HTML escaping is disabled to match `json.Marshal` semantics.
@ -70,20 +70,20 @@ The `Transport` manages a WebSocket server (gorilla/websocket) and outbound conn
**Peer fields persisted**:
- `ID`, `Name`, `PublicKey`, `Address`, `Role`, `AddedAt`, `LastSeen`
- `PingMilliseconds`, `Hops`, `GeographicKilometres`, `Score` (float64, 0–100)
- `PingMS`, `Hops`, `GeoKM`, `Score` (float64, 0–100)
**KD-tree dimensions** (lower is better in all axes):
| Dimension | Weight | Rationale |
|-----------|--------|-----------|
| `PingMilliseconds` | 1.0 | Latency dominates interactive performance |
| `PingMS` | 1.0 | Latency dominates interactive performance |
| `Hops` | 0.7 | Network hop count (routing cost) |
| `GeographicKilometres` | 0.2 | Geographic distance (minor factor) |
| `GeoKM` | 0.2 | Geographic distance (minor factor) |
| `100 - Score` | 1.2 | Reliability (inverted so lower = better peer) |
`SelectOptimalPeer()` queries the tree for the point nearest to the origin (ideal: zero latency, zero hops, zero distance, maximum score). `SelectNearestPeers(n)` returns the n best.
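The weighted nearest-to-origin query can be sketched conceptually. The real implementation uses a KD-tree (Poindexter); this linear scan only illustrates the distance metric with the documented weights, and applying the weights to squared terms is an assumption of the sketch.

```go
package main

import (
	"fmt"
	"math"
)

// Conceptual sketch of weighted nearest-to-origin peer selection.
// Lower is better on every axis; the score axis is inverted (100 - Score)
// so that the ideal peer sits at the origin.
type peer struct {
	Name   string
	PingMS float64
	Hops   float64
	GeoKM  float64
	Score  float64 // 0-100, higher is better
}

func distance(p peer) float64 {
	d := 1.0*p.PingMS*p.PingMS +
		0.7*p.Hops*p.Hops +
		0.2*p.GeoKM*p.GeoKM +
		1.2*(100-p.Score)*(100-p.Score)
	return math.Sqrt(d)
}

func selectOptimal(peers []peer) peer {
	best := peers[0]
	for _, p := range peers[1:] {
		if distance(p) < distance(best) {
			best = p
		}
	}
	return best
}

func main() {
	peers := []peer{
		{Name: "far-but-reliable", PingMS: 120, Hops: 6, GeoKM: 8000, Score: 95},
		{Name: "near-and-reliable", PingMS: 20, Hops: 2, GeoKM: 300, Score: 90},
	}
	fmt.Println(selectOptimal(peers).Name)
}
```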
**Persistence**: Writes are debounced with a 5-second coalesce window (`scheduleSave`). The actual write uses an atomic rename pattern (write to `.tmp`, then rename) to prevent partial file corruption. `Close()` flushes any pending dirty state synchronously.
**Persistence**: Writes are debounced with a 5-second coalesce window (`scheduleSave`). The actual write uses an atomic rename pattern (write to `.tmp`, then `os.Rename`) to prevent partial file corruption. `Close()` flushes any pending dirty state synchronously.
**Auth modes**:
- `PeerAuthOpen` — any connecting peer is accepted (default).
@ -98,7 +98,7 @@ The `Transport` manages a WebSocket server (gorilla/websocket) and outbound conn
| Timeout | -3.0 (floored at 0) |
| Default (new peer) | 50.0 |
**Peer name validation**: Names must be 1–64 characters, start and end with an alphanumeric character, and contain only alphanumeric, hyphen, underscore, or space characters.
**Peer name validation**: Empty names are permitted. Non-empty names must be 1–64 characters, start and end with an alphanumeric character, and contain only alphanumeric, hyphen, underscore, or space characters.
### message.go — Protocol Messages
@ -179,7 +179,7 @@ Auto-connect: if the target peer is not yet connected, `sendRequest` calls `tran
|----------|-------|---------|
| `IntentHandshake` | `0x01` | Connection establishment |
| `IntentCompute` | `0x20` | Compute job request |
| `IntentPauseExecution` | `0x30` | Benevolent intervention (pause execution) |
| `IntentRehab` | `0x30` | Benevolent intervention (pause execution) |
| `IntentCustom` | `0xFF` | Application-level sub-protocols |
**Sentinel errors**:
@ -209,10 +209,10 @@ The Unified Encrypted Packet Structure defines a TLV-encoded binary frame authen
[0x04][len][IntentID] Header: Semantic routing token
[0x05][0x02][ThreatScore] Header: uint16, big-endian
[0x06][0x20][HMAC-SHA256] Signature: 32 bytes, covers header TLVs + payload data
[0xFF][len][...payload...] Data: length-prefixed payload
[0xFF][...payload...] Data: no length prefix (relies on external framing)
```
**HMAC coverage**: The signature is computed over the serialised header TLVs (tags 0x01–0x05) concatenated with the raw payload bytes. The HMAC TLV itself (tag 0x06) and the payload TLV header (tag `0xFF` plus the 2-byte length) are excluded from the signed data.
**HMAC coverage**: The signature is computed over the serialised header TLVs (tags 0x01–0x05) concatenated with the raw payload bytes. The HMAC TLV itself (tag 0x06) and the payload tag byte (0xFF) are excluded from the signed data.
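The coverage rule can be sketched with the standard library: sign the serialised header TLVs plus the raw payload bytes, never the HMAC TLV itself. Tag constants mirror the documented wire format; the helper names and the subset of header tags used are ours, for illustration only.

```go
package main

import (
	"bytes"
	"crypto/hmac"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// tlv encodes one field as 1-byte tag, 2-byte big-endian length, value.
func tlv(tag byte, value []byte) []byte {
	var b bytes.Buffer
	b.WriteByte(tag)
	binary.Write(&b, binary.BigEndian, uint16(len(value)))
	b.Write(value)
	return b.Bytes()
}

// sign builds the signed-data buffer (header TLVs + raw payload bytes)
// and returns the 32-byte HMAC-SHA256 carried in the 0x06 TLV. The HMAC
// TLV itself is excluded from the signed data by construction.
func sign(sharedSecret, payload []byte) []byte {
	var signed bytes.Buffer
	signed.Write(tlv(0x01, []byte{0x09}))       // TagVersion (IPv9)
	signed.Write(tlv(0x04, []byte{0x01}))       // TagIntent: handshake
	signed.Write(tlv(0x05, []byte{0x00, 0x00})) // TagThreatScore
	signed.Write(payload)                       // raw payload bytes only
	mac := hmac.New(sha256.New, sharedSecret)
	mac.Write(signed.Bytes())
	return mac.Sum(nil)
}

func main() {
	fmt.Println(len(sign([]byte("secret"), []byte("hello")))) // 32
}
```

Verification on the receiving side would rebuild the same buffer from the decoded fields and compare with `hmac.Equal`.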
### PacketBuilder
@ -220,7 +220,9 @@ The Unified Encrypted Packet Structure defines a TLV-encoded binary frame authen
### ReadAndVerify
`ReadAndVerify(r *bufio.Reader, sharedSecret)` reads a stream, decodes the TLV fields in order, reconstructs the signed data buffer, and verifies the HMAC with `hmac.Equal`. Unknown TLV tags are accumulated into the signed data buffer (forward-compatible extension mechanism) but their semantics are ignored. The payload TLV is length-prefixed like every other field, so UEPS frames are self-delimiting.
`ReadAndVerify(r *bufio.Reader, sharedSecret)` reads a stream, decodes the TLV fields in order, reconstructs the signed data buffer, and verifies the HMAC with `hmac.Equal`. Unknown TLV tags are accumulated into the signed data buffer (forward-compatible extension mechanism) but their semantics are ignored.
**Known limitation**: Tag 0xFF carries no length prefix. The reader calls `io.ReadAll` on the remaining stream, which requires external TCP framing (e.g. a 4-byte length prefix on the enclosing connection) to delimit the packet boundary. The packet is not self-delimiting.
## logging/ — Structured Logger
@ -236,7 +238,7 @@ A global logger instance is available via `logging.Debug(...)`, `logging.Info(..
|----------|------------|
| `Transport.conns` | `sync.RWMutex` |
| `Transport.handler` | `sync.RWMutex` |
| `PeerConnection` writes | `sync.Mutex` (`writeMutex`) |
| `PeerConnection` writes | `sync.Mutex` (`writeMu`) |
| `PeerConnection` close | `sync.Once` (`closeOnce`) |
| `PeerRegistry.peers` + KD-tree | `sync.RWMutex` |
| `PeerRegistry.allowedPublicKeys` | separate `sync.RWMutex` |
@ -244,7 +246,7 @@ A global logger instance is available via `logging.Debug(...)`, `logging.Info(..
| `Controller.pending` | `sync.RWMutex` |
| `MessageDeduplicator.seen` | `sync.RWMutex` |
| `Dispatcher.handlers` | `sync.RWMutex` |
| `Transport.pendingHandshakeCount` | `atomic.Int32` |
| `Transport.pendingConns` | `atomic.Int32` |
The codebase is verified race-free under `go test -race`.
@ -253,8 +255,8 @@ The codebase is verified race-free under `go test -race`.
```
node/ ──► ueps/
node/ ──► logging/
node/ ──► forge.lthn.ai/Snider/Borg (STMF crypto, SMSG encryption, TIM)
node/ ──► forge.lthn.ai/Snider/Poindexter (KD-tree peer selection)
node/ ──► github.com/Snider/Borg (STMF crypto, SMSG encryption, TIM)
node/ ──► github.com/Snider/Poindexter (KD-tree peer selection)
node/ ──► github.com/gorilla/websocket
node/ ──► github.com/google/uuid
ueps/ ──► (stdlib only)
```
@ -2,7 +2,7 @@
## Prerequisites
- Go 1.26 or later (the module declares `go 1.26.0`)
- Go 1.25 or later (the module declares `go 1.25.5`)
- Network access to `forge.lthn.ai` for private dependencies (Borg, Poindexter, Enchantrix)
- SSH key configured for `git@forge.lthn.ai:2223` (HTTPS auth is not supported on Forge)
@ -43,7 +43,7 @@ go vet ./...
### Table-Driven Subtests
Prefer table-driven subtests with `t.Run()` when multiple related cases share the same structure. Use clear case names and keep setup and verification consistent across the table.
All tests use table-driven subtests with `t.Run()`. A test that does not follow this pattern should be refactored before merging.
```go
func TestFoo(t *testing.T) {
@ -177,12 +177,12 @@ All parameters and return types must carry explicit type annotations. Avoid `int
### Error Handling
- Never discard errors silently.
- Wrap library errors with context using `core.E("operation", "context", err)`.
- Wrap errors with context using `fmt.Errorf("context: %w", err)`.
- Return typed sentinel errors for conditions callers need to inspect programmatically.
### Licence Header
Every new file must carry the EUPL-1.2 licence identifier. The project is licensed under EUPL-1.2; do not include the full licence text in each file. A short SPDX identifier comment at the top is sufficient for new files:
Every new file must carry the EUPL-1.2 licence identifier. The module's `LICENSE` file governs the package. Do not include the full licence text in each file; a short SPDX identifier comment at the top is sufficient for new files:
```go
// SPDX-License-Identifier: EUPL-1.2
@ -233,7 +233,7 @@ Examples:
```
feat(dispatcher): implement UEPS threat circuit breaker
test(transport): add keepalive timeout and MaxConnections enforcement tests
test(transport): add keepalive timeout and MaxConns enforcement tests
fix(peer): prevent data race in GracefulClose (P2P-RACE-1)
```


@ -20,9 +20,9 @@ type Peer struct {
LastSeen time.Time `json:"lastSeen"`
// Poindexter metrics (updated dynamically)
PingMilliseconds float64 `json:"pingMs"` // Latency in milliseconds
PingMS float64 `json:"pingMs"` // Latency in milliseconds
Hops int `json:"hops"` // Network hop count
GeographicKilometres float64 `json:"geoKm"` // Geographic distance in kilometres
GeoKM float64 `json:"geoKm"` // Geographic distance in kilometres
Score float64 `json:"score"` // Reliability score 0--100
Connected bool `json:"-"` // Not persisted
@ -83,9 +83,9 @@ The registry maintains a 4-dimensional KD-tree for optimal peer selection. Each
| Dimension | Source | Weight | Direction |
|-----------|--------|--------|-----------|
| Latency | `PingMilliseconds` | 1.0 | Lower is better |
| Latency | `PingMS` | 1.0 | Lower is better |
| Hops | `Hops` | 0.7 | Lower is better |
| Geographic distance | `GeographicKilometres` | 0.2 | Lower is better |
| Geographic distance | `GeoKM` | 0.2 | Lower is better |
| Reliability | `100 - Score` | 1.2 | Inverted so lower is better |
The score dimension is inverted so that the "ideal peer" target point `[0, 0, 0, 0]` represents zero latency, zero hops, zero distance, and maximum reliability (score 100).
@ -146,7 +146,7 @@ This also updates `LastSeen` and triggers a KD-tree rebuild.
```go
// Create
registry, err := node.NewPeerRegistry() // XDG paths
registry, err := node.NewPeerRegistryFromPath(path) // Custom path (testing)
registry, err := node.NewPeerRegistryWithPath(path) // Custom path (testing)
// CRUD
err := registry.AddPeer(peer)
@ -177,7 +177,7 @@ Peers are persisted to `~/.config/lethean-desktop/peers.json` as a JSON array.
### Debounced Writes
To avoid excessive disk I/O, saves are debounced with a 5-second coalesce interval. Multiple mutations within that window produce a single disk write. The write uses an atomic rename pattern (write to `.tmp`, then rename) to prevent corruption on crash.
To avoid excessive disk I/O, saves are debounced with a 5-second coalesce interval. Multiple mutations within that window produce a single disk write. The write uses an atomic rename pattern (write to `.tmp`, then `os.Rename`) to prevent corruption on crash.
```go
// Flush pending changes on shutdown
```
@ -10,10 +10,10 @@ Implemented the complete test suite for the UEPS binary framing layer. Tests cov
- PacketBuilder round-trip: basic, binary payload, elevated threat score, large payload
- HMAC verification: payload tampering detected, header tampering detected, wrong shared secret detected
- Boundary conditions: nil payload, empty slice payload, `uint16` max ThreatScore (65,535), TLV value exceeding 65,535 bytes (`writeTLV` error path)
- Boundary conditions: nil payload, empty slice payload, `uint16` max ThreatScore (65,535), TLV value exceeding 255 bytes (`writeTLV` error path)
- Stream robustness: truncated packets detected at multiple cut points (EOF mid-tag, mid-length, mid-value), missing HMAC tag, unknown TLV tags skipped and included in signed data
The remaining gap from 100% coverage at the time was the payload read-error path, which required a contrived broken reader to exercise.
The 11.5% gap from 100% coverage is the reader's `io.ReadAll` error path, which requires a contrived broken `io.Reader` to exercise.
### Phase 2 — Transport Tests
@ -28,10 +28,10 @@ Tests covered:
- Encrypted message round-trip: SMSG encrypt on one side, decrypt on other
- Message deduplication: duplicate UUID dropped silently
- Rate limiting: burst of more than 100 messages, subsequent drops after token bucket empties
- MaxConnections enforcement: 503 HTTP rejection when limit is reached
- MaxConns enforcement: 503 HTTP rejection when limit is reached
- Keepalive timeout: connection cleaned up after `PingInterval + PongTimeout` elapses
- Graceful close: `MsgDisconnect` sent before underlying WebSocket close
- Concurrent sends: no data races under `go test -race` (`writeMutex` protects all writes)
- Concurrent sends: no data races under `go test -race` (`writeMu` protects all writes)
### Phase 3 — Controller Tests
@ -86,13 +86,15 @@ Three integration tests (`TestIntegration_*`) exercise the full stack end-to-end
## Known Limitations
### UEPS Payload Framing (Resolved)
### UEPS 0xFF Payload Not Self-Delimiting
The `TagPayload` (0xFF) field now uses the same 2-byte length prefix as the other TLVs. `ReadAndVerify` reads that explicit length, so UEPS packets are self-delimiting and can be chained in a stream without relying on an outer framing layer.
The `TagPayload` (0xFF) field carries no length prefix. `ReadAndVerify` calls `io.ReadAll` on the remaining stream, which means the packet format relies on external TCP framing to delimit the packet boundary. The enclosing transport must provide a length-prefixed frame before calling `ReadAndVerify`. This is noted in comments in both `packet.go` and `reader.go` but no solution is implemented.
Consequence: UEPS packets cannot be chained in a raw stream without an outer framing protocol. The current WebSocket transport encapsulates each UEPS frame in a single WebSocket message, which provides the necessary boundary implicitly.
### No Resource Cleanup on Some Error Paths
`transport.handleWebSocketUpgrade` does not clean up on handshake timeout (the `pendingHandshakeCount` counter is decremented correctly via `defer`, but the underlying WebSocket connection may linger briefly before the read deadline fires). `transport.Connect` does not clean up the temporary connection object on handshake failure (the raw WebSocket `conn` is closed, but there is no registry or metrics cleanup for the partially constructed `PeerConnection`).
`transport.handleWSUpgrade` does not clean up on handshake timeout (the `pendingConns` counter is decremented correctly via `defer`, but the underlying WebSocket connection may linger briefly before the read deadline fires). `transport.Connect` does not clean up the temporary connection object on handshake failure (the raw WebSocket `conn` is closed, but there is no registry or metrics cleanup for the partially constructed `PeerConnection`).
These are low-severity gaps. They do not cause goroutine leaks under the current implementation because the connection's read loop is not started until after a successful handshake.
@ -104,9 +106,9 @@ The originally identified risk — that `transport.OnMessage(c.handleResponse)`
### P2P-RACE-1 — GracefulClose Data Race (Phase 3)
`GracefulClose` previously called `pc.WebSocketConnection.SetWriteDeadline()` outside of `writeMutex`, racing with concurrent `Send()` calls that also set the write deadline. Detected by `go test -race`.
`GracefulClose` previously called `pc.Conn.SetWriteDeadline()` outside of `writeMu`, racing with concurrent `Send()` calls that also set the write deadline. Detected by `go test -race`.
Fix: removed the bare `SetWriteDeadline` call from `GracefulClose`. The method now relies entirely on `Send()`, which manages write deadlines under `writeMutex`. This is documented in a comment in `transport.go` to prevent the pattern from being reintroduced.
Fix: removed the bare `SetWriteDeadline` call from `GracefulClose`. The method now relies entirely on `Send()`, which manages write deadlines under `writeMu`. This is documented in a comment in `transport.go` to prevent the pattern from being reintroduced.
## Wiki Corrections (19 February 2026)


@ -39,13 +39,13 @@ Paths follow XDG base directories via `github.com/adrg/xdg`. The private key is
### Creating an Identity
```go
nodeManager, err := node.NewNodeManager()
nm, err := node.NewNodeManager()
if err != nil {
log.Fatal(err)
}
// Generate a new identity (persists key and config to disk)
err = nodeManager.GenerateIdentity("eu-controller-01", node.RoleController)
err = nm.GenerateIdentity("eu-controller-01", node.RoleController)
```
Internally this calls `stmf.GenerateKeyPair()` from the Borg library to produce the X25519 keypair.
@ -53,7 +53,7 @@ Internally this calls `stmf.GenerateKeyPair()` from the Borg library to produce
### Custom Paths (Testing)
```go
nodeManager, err := node.NewNodeManagerFromPaths(
nm, err := node.NewNodeManagerWithPaths(
"/tmp/test/private.key",
"/tmp/test/node.json",
)
@ -62,8 +62,8 @@ nodeManager, err := node.NewNodeManagerFromPaths(
### Checking and Retrieving Identity
```go
if nodeManager.HasIdentity() {
identity := nodeManager.GetIdentity() // Returns a copy
if nm.HasIdentity() {
identity := nm.GetIdentity() // Returns a copy
fmt.Println(identity.ID, identity.Name)
}
```
@ -73,7 +73,7 @@ if nodeManager.HasIdentity() {
### Deriving Shared Secrets
```go
sharedSecret, err := nodeManager.DeriveSharedSecret(peerPublicKeyBase64)
sharedSecret, err := nm.DeriveSharedSecret(peerPublicKeyBase64)
```
This performs X25519 ECDH with the peer's public key and hashes the result with SHA-256, producing a 32-byte symmetric key. The same shared secret is derived independently by both sides (no secret is transmitted).
@ -81,7 +81,7 @@ This performs X25519 ECDH with the peer's public key and hashes the result with
### Deleting an Identity
```go
err := nodeManager.Delete() // Removes key and config from disk, clears in-memory state
err := nm.Delete() // Removes key and config from disk, clears in-memory state
```
## Challenge-Response Authentication


@ -7,7 +7,7 @@ description: P2P mesh networking layer for the Lethean network.
P2P networking layer for the Lethean network. Encrypted WebSocket mesh with UEPS wire protocol.
**Module:** `dappco.re/go/core/p2p`
**Module:** `forge.lthn.ai/core/go-p2p`
**Go:** 1.26
**Licence:** EUPL-1.2


@ -5,7 +5,7 @@ description: UEPS intent-based packet routing with threat circuit breaker.
# Intent Routing
The `Dispatcher` routes verified UEPS packets to registered intent handlers. Before routing, it enforces a threat circuit breaker that blocks packets with elevated threat scores and returns sentinel errors to the caller.
The `Dispatcher` routes verified UEPS packets to registered intent handlers. Before routing, it enforces a threat circuit breaker that silently drops packets with elevated threat scores.
**File:** `node/dispatcher.go`
@ -74,8 +74,8 @@ Dropped packets are logged at WARN level with the threat score, threshold, inten
### Design Rationale
- **High-threat packets are not dispatched**. The dispatcher logs them and returns `ErrThreatScoreExceeded` to the caller; the sender still receives no protocol-level response.
- **Unknown intents are not forwarded**. The dispatcher logs them and returns `ErrUnknownIntent`, avoiding back-pressure on the transport layer.
- **High-threat packets are dropped silently** (from the sender's perspective) rather than returning an error, consistent with the "don't even parse the payload" philosophy.
- **Unknown intents are dropped**, not forwarded, to avoid back-pressure on the transport layer. They are logged at WARN level for debugging.
- **Handler errors propagate** to the caller, allowing upstream code to record failures.
## Intent Constants
@ -84,7 +84,7 @@ Dropped packets are logged at WARN level with the threat score, threshold, inten
const (
IntentHandshake byte = 0x01 // Connection establishment / hello
IntentCompute byte = 0x20 // Compute job request
IntentPauseExecution byte = 0x30
IntentRehab byte = 0x30 // Benevolent intervention (pause execution)
IntentCustom byte = 0xFF // Extended / application-level sub-protocols
)
```
@ -100,13 +100,12 @@ const (
```go
var (
ErrThreatScoreExceeded = core.E(
"Dispatcher.Dispatch",
core.Sprintf("packet rejected: threat score exceeds safety threshold (%d)", ThreatScoreThreshold),
nil,
ErrThreatScoreExceeded = fmt.Errorf(
"packet rejected: threat score exceeds safety threshold (%d)",
ThreatScoreThreshold,
)
ErrUnknownIntent = core.E("Dispatcher.Dispatch", "packet dropped: unknown intent", nil)
ErrNilPacket = core.E("Dispatcher.Dispatch", "nil packet", nil)
ErrUnknownIntent = errors.New("packet dropped: unknown intent")
ErrNilPacket = errors.New("dispatch: nil packet")
)
```


@ -11,11 +11,11 @@ The `Transport` manages encrypted WebSocket connections between nodes. After an
```go
type TransportConfig struct {
ListenAddress string // ":9091" default
WebSocketPath string // "/ws" -- WebSocket endpoint path
ListenAddr string // ":9091" default
WSPath string // "/ws" -- WebSocket endpoint path
TLSCertPath string // Optional TLS for wss://
TLSKeyPath string
MaxConnections int // Maximum concurrent connections (default 100)
MaxConns int // Maximum concurrent connections (default 100)
MaxMessageSize int64 // Maximum message size in bytes (default 1MB)
PingInterval time.Duration // Keepalive interval (default 30s)
PongTimeout time.Duration // Pong wait timeout (default 10s)
@ -25,18 +25,18 @@ type TransportConfig struct {
Sensible defaults via `DefaultTransportConfig()`:
```go
transportConfig := node.DefaultTransportConfig()
// ListenAddress: ":9091", WebSocketPath: "/ws", MaxConnections: 100
cfg := node.DefaultTransportConfig()
// ListenAddr: ":9091", WSPath: "/ws", MaxConns: 100
// MaxMessageSize: 1MB, PingInterval: 30s, PongTimeout: 10s
```
## Creating and Starting
```go
transport := node.NewTransport(nodeManager, peerRegistry, transportConfig)
transport := node.NewTransport(nodeManager, peerRegistry, cfg)
// Set message handler before Start() to avoid races
transport.OnMessage(func(peerConnection *node.PeerConnection, msg *node.Message) {
transport.OnMessage(func(conn *node.PeerConnection, msg *node.Message) {
// Handle incoming messages
})
@ -86,8 +86,8 @@ Each active connection is wrapped in a `PeerConnection`:
```go
type PeerConnection struct {
Peer *Peer // Remote peer identity
WebSocketConnection *websocket.Conn // Underlying WebSocket
Peer *Peer // Remote peer identity
Conn *websocket.Conn // Underlying WebSocket
SharedSecret []byte // From X25519 ECDH
LastActivity time.Time
}
@ -96,15 +96,15 @@ type PeerConnection struct {
### Sending Messages
```go
err := peerConnection.Send(msg)
err := peerConn.Send(msg)
```
`Send()` serialises the message to JSON, encrypts it with SMSG, sets a 10-second write deadline, and writes as a binary WebSocket frame. A `writeMutex` serialises concurrent writes.
`Send()` serialises the message to JSON, encrypts it with SMSG, sets a 10-second write deadline, and writes as a binary WebSocket frame. A `writeMu` mutex serialises concurrent writes.
### Graceful Close
```go
err := peerConnection.GracefulClose("shutting down", node.DisconnectShutdown)
err := peerConn.GracefulClose("shutting down", node.DisconnectShutdown)
```
Sends a `disconnect` message (best-effort) before closing the connection. Uses `sync.Once` to ensure the connection is only closed once.
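The `sync.Once` guard can be sketched as follows. Types and names are simplified for illustration; the point is that however many goroutines race into close, the close body runs exactly once.

```go
package main

import (
	"fmt"
	"sync"
)

// conn is a simplified stand-in for a peer connection with a
// once-guarded close, as described above.
type conn struct {
	closeOnce sync.Once
	closed    int
}

// Close is safe to call from any number of goroutines; the body
// inside Do executes at most once.
func (c *conn) Close() {
	c.closeOnce.Do(func() { c.closed++ })
}

func main() {
	c := &conn{}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Close()
		}()
	}
	wg.Wait()
	fmt.Println(c.closed) // 1
}
```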
@ -123,9 +123,9 @@ const (
## Incoming Connections
The transport exposes an HTTP handler at the configured `WebSocketPath` that upgrades to WebSocket. Origin checks restrict browser clients to `localhost`, `127.0.0.1`, and `::1`; non-browser clients (no `Origin` header) are allowed.
The transport exposes an HTTP handler at the configured `WSPath` that upgrades to WebSocket. Origin checks restrict browser clients to `localhost`, `127.0.0.1`, and `::1`; non-browser clients (no `Origin` header) are allowed.
The `MaxConnections` limit is enforced before the WebSocket upgrade, counting both established and pending (mid-handshake) connections. Excess connections receive HTTP 503.
The `MaxConns` limit is enforced before the WebSocket upgrade, counting both established and pending (mid-handshake) connections. Excess connections receive HTTP 503.
## Message Deduplication
@ -166,7 +166,7 @@ err = transport.Send(peerID, msg)
err = transport.Broadcast(msg)
// Query connections
count := transport.ConnectedPeerCount()
count := transport.ConnectedPeers()
conn := transport.GetConnection(peerID)
// Iterate over all connections


@ -7,7 +7,7 @@ description: TLV-encoded wire protocol with HMAC-SHA256 integrity verification (
The `ueps` package implements the Universal Encrypted Payload System -- a consent-gated TLV (Type-Length-Value) wire protocol with HMAC-SHA256 integrity verification. This is the low-level binary protocol that sits beneath the JSON-over-WebSocket mesh layer.
**Package:** `dappco.re/go/core/p2p/ueps`
**Package:** `forge.lthn.ai/core/go-p2p/ueps`
## TLV Format
@ -25,8 +25,8 @@ Each field is encoded as a 1-byte tag, 2-byte big-endian length (uint16), and va
| Tag | Constant | Value Size | Description |
|-----|----------|------------|-------------|
| `0x01` | `TagVersion` | 1 byte | Protocol version (default `0x09` for IPv9) |
| `0x02` | `TagCurrentLayer` | 1 byte | Current network layer |
| `0x03` | `TagTargetLayer` | 1 byte | Target network layer |
| `0x02` | `TagCurrentLay` | 1 byte | Current network layer |
| `0x03` | `TagTargetLay` | 1 byte | Target network layer |
| `0x04` | `TagIntent` | 1 byte | Semantic intent token (routes the packet) |
| `0x05` | `TagThreatScore` | 2 bytes | Threat score (0--65535, big-endian uint16) |
| `0x06` | `TagHMAC` | 32 bytes | HMAC-SHA256 signature |
@ -156,7 +156,7 @@ Reserved intent values:
|----|----------|---------|
| `0x01` | `IntentHandshake` | Connection establishment / hello |
| `0x20` | `IntentCompute` | Compute job request |
| `0x30` | `IntentPauseExecution` | Benevolent intervention (pause execution) |
| `0x30` | `IntentRehab` | Benevolent intervention (pause execution) |
| `0xFF` | `IntentCustom` | Extended / application-level sub-protocols |
## Threat Score

go.mod

@ -3,7 +3,8 @@ module dappco.re/go/core/p2p
go 1.26.0
require (
dappco.re/go/core v0.8.0-alpha.1
dappco.re/go/core/io v0.2.0
dappco.re/go/core/log v0.1.0
forge.lthn.ai/Snider/Borg v0.3.1
forge.lthn.ai/Snider/Poindexter v0.0.3
github.com/adrg/xdg v0.5.3
@ -14,11 +15,11 @@ require (
require (
forge.lthn.ai/Snider/Enchantrix v0.0.4 // indirect
forge.lthn.ai/core/go-log v0.0.4 // indirect
github.com/ProtonMail/go-crypto v1.4.0 // indirect
github.com/cloudflare/circl v1.6.3 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/klauspost/compress v1.18.4 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
golang.org/x/crypto v0.49.0 // indirect
golang.org/x/sys v0.42.0 // indirect

go.sum

@ -1,18 +1,21 @@
dappco.re/go/core v0.8.0-alpha.1 h1:gj7+Scv+L63Z7wMxbJYHhaRFkHJo2u4MMPuUSv/Dhtk=
dappco.re/go/core v0.8.0-alpha.1/go.mod h1:f2/tBZ3+3IqDrg2F5F598llv0nmb/4gJVCFzM5geE4A=
dappco.re/go/core/io v0.2.0 h1:zuudgIiTsQQ5ipVt97saWdGLROovbEB/zdVyy9/l+I4=
dappco.re/go/core/io v0.2.0/go.mod h1:1QnQV6X9LNgFKfm8SkOtR9LLaj3bDcsOIeJOOyjbL5E=
dappco.re/go/core/log v0.1.0 h1:pa71Vq2TD2aoEUQWFKwNcaJ3GBY8HbaNGqtE688Unyc=
dappco.re/go/core/log v0.1.0/go.mod h1:Nkqb8gsXhZAO8VLpx7B8i1iAmohhzqA20b9Zr8VUcJs=
forge.lthn.ai/Snider/Borg v0.3.1 h1:gfC1ZTpLoZai07oOWJiVeQ8+qJYK8A795tgVGJHbVL8=
forge.lthn.ai/Snider/Borg v0.3.1/go.mod h1:Z7DJD0yHXsxSyM7Mjl6/g4gH1NBsIz44Bf5AFlV76Wg=
forge.lthn.ai/Snider/Enchantrix v0.0.4 h1:biwpix/bdedfyc0iVeK15awhhJKH6TEMYOTXzHXx5TI=
forge.lthn.ai/Snider/Enchantrix v0.0.4/go.mod h1:OGCwuVeZPq3OPe2h6TX/ZbgEjHU6B7owpIBeXQGbSe0=
forge.lthn.ai/Snider/Poindexter v0.0.3 h1:cx5wRhuLRKBM8riIZyNVAT2a8rwRhn1dodFBktocsVE=
forge.lthn.ai/Snider/Poindexter v0.0.3/go.mod h1:ddzGia98k3HKkR0gl58IDzqz+MmgW2cQJOCNLfuWPpo=
forge.lthn.ai/core/go-log v0.0.4 h1:KTuCEPgFmuM8KJfnyQ8vPOU1Jg654W74h8IJvfQMfv0=
forge.lthn.ai/core/go-log v0.0.4/go.mod h1:r14MXKOD3LF/sI8XUJQhRk/SZHBE7jAFVuCfgkXoZPw=
github.com/ProtonMail/go-crypto v1.4.0 h1:Zq/pbM3F5DFgJiMouxEdSVY44MVoQNEKp5d5QxIQceQ=
github.com/ProtonMail/go-crypto v1.4.0/go.mod h1:e1OaTyu5SYVrO9gKOEhTc+5UcXtTUa+P3uLudwcgPqo=
github.com/adrg/xdg v0.5.3 h1:xRnxJXne7+oWDatRhR1JLnvuccuIeCoBu2rtuLqQB78=
github.com/adrg/xdg v0.5.3/go.mod h1:nlTsY+NNiCBGCK2tpm09vRqfVzrc2fLmXGpBLF0zlTQ=
github.com/cloudflare/circl v1.6.3 h1:9GPOhQGF9MCYUeXyMYlqTR6a5gTrgR/fBLXvUgtVcg8=
github.com/cloudflare/circl v1.6.3/go.mod h1:2eXP6Qfat4O/Yhh8BznvKnJ+uzEoTQ6jVKJRn81BiS4=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=


@ -1,27 +1,33 @@
// logger := New(DefaultConfig())
// Package logging provides structured logging with log levels and fields.
package logging
import (
"fmt"
"io"
"maps"
"os"
"strings"
"sync"
"syscall"
"time"
core "dappco.re/go/core"
coreerr "dappco.re/go/core/log"
)
// level := LevelInfo
// Level represents the severity of a log message.
type Level int
const (
// LevelDebug is the most verbose log level.
LevelDebug Level = iota
// LevelInfo is for general informational messages.
LevelInfo
// LevelWarn is for warning messages.
LevelWarn
// LevelError is for error messages.
LevelError
)
// label := LevelWarn.String()
// String returns the string representation of the log level.
func (l Level) String() string {
switch l {
case LevelDebug:
@ -37,44 +43,44 @@ func (l Level) String() string {
}
}
// logger := New(DefaultConfig())
// Logger provides structured logging with configurable output and level.
type Logger struct {
mu sync.RWMutex
mu sync.Mutex
output io.Writer
level Level
component string
}
// config := Config{Output: io.Discard, Level: LevelDebug, Component: "sync"}
// Config holds configuration for creating a new Logger.
type Config struct {
Output io.Writer
Level Level
Component string
}
// config := DefaultConfig()
// DefaultConfig returns the default logger configuration.
func DefaultConfig() Config {
return Config{
Output: defaultOutput,
Output: os.Stderr,
Level: LevelInfo,
Component: "",
}
}
// logger := New(DefaultConfig())
func New(config Config) *Logger {
if config.Output == nil {
config.Output = defaultOutput
// New creates a new Logger with the given configuration.
func New(cfg Config) *Logger {
if cfg.Output == nil {
cfg.Output = os.Stderr
}
return &Logger{
output: config.Output,
level: config.Level,
component: config.Component,
output: cfg.Output,
level: cfg.Level,
component: cfg.Component,
}
}
// transportLogger := logger.ComponentLogger("transport")
func (l *Logger) ComponentLogger(component string) *Logger {
// WithComponent returns a new Logger with the specified component name.
func (l *Logger) WithComponent(component string) *Logger {
return &Logger{
output: l.output,
level: l.level,
@ -82,36 +88,25 @@ func (l *Logger) ComponentLogger(component string) *Logger {
}
}
// logger.SetLevel(LevelDebug)
// SetLevel sets the minimum log level.
func (l *Logger) SetLevel(level Level) {
l.mu.Lock()
defer l.mu.Unlock()
l.level = level
}
// level := logger.GetLevel()
// GetLevel returns the current log level.
func (l *Logger) GetLevel() Level {
l.mu.RLock()
defer l.mu.RUnlock()
l.mu.Lock()
defer l.mu.Unlock()
return l.level
}
// fields := Fields{"peer_id": "node-1", "attempt": 2}
// Fields represents key-value pairs for structured logging.
type Fields map[string]any
type stderrWriter struct{}
func (stderrWriter) Write(p []byte) (int, error) {
written, err := syscall.Write(syscall.Stderr, p)
if err != nil {
return written, core.E("logging.stderrWriter.Write", "failed to write log line", err)
}
return written, nil
}
var defaultOutput io.Writer = stderrWriter{}
func (l *Logger) log(level Level, message string, fields Fields) {
// log writes a log message at the specified level.
func (l *Logger) log(level Level, msg string, fields Fields) {
l.mu.Lock()
defer l.mu.Unlock()
@ -119,7 +114,8 @@ func (l *Logger) log(level Level, message string, fields Fields) {
return
}
sb := core.NewBuilder()
// Build the log line
var sb strings.Builder
timestamp := time.Now().Format("2006/01/02 15:04:05")
sb.WriteString(timestamp)
sb.WriteString(" [")
@ -133,63 +129,64 @@ func (l *Logger) log(level Level, message string, fields Fields) {
}
sb.WriteString(" ")
sb.WriteString(message)
sb.WriteString(msg)
// Add fields if present
if len(fields) > 0 {
sb.WriteString(" |")
for k, v := range fields {
sb.WriteString(" ")
sb.WriteString(k)
sb.WriteString("=")
sb.WriteString(core.Sprint(v))
sb.WriteString(fmt.Sprintf("%v", v))
}
}
sb.WriteString("\n")
_, _ = l.output.Write([]byte(sb.String()))
fmt.Fprint(l.output, sb.String())
}
// Debug("connected", Fields{"peer_id": "node-1"})
func (l *Logger) Debug(message string, fields ...Fields) {
l.log(LevelDebug, message, mergeFields(fields))
// Debug logs a debug message.
func (l *Logger) Debug(msg string, fields ...Fields) {
l.log(LevelDebug, msg, mergeFields(fields))
}
// Info("worker started", Fields{"component": "transport"})
func (l *Logger) Info(message string, fields ...Fields) {
l.log(LevelInfo, message, mergeFields(fields))
// Info logs an informational message.
func (l *Logger) Info(msg string, fields ...Fields) {
l.log(LevelInfo, msg, mergeFields(fields))
}
// Warn("peer rate limited", Fields{"peer_id": "node-1"})
func (l *Logger) Warn(message string, fields ...Fields) {
l.log(LevelWarn, message, mergeFields(fields))
// Warn logs a warning message.
func (l *Logger) Warn(msg string, fields ...Fields) {
l.log(LevelWarn, msg, mergeFields(fields))
}
// Error("send failed", Fields{"peer_id": "node-1"})
func (l *Logger) Error(message string, fields ...Fields) {
l.log(LevelError, message, mergeFields(fields))
// Error logs an error message.
func (l *Logger) Error(msg string, fields ...Fields) {
l.log(LevelError, msg, mergeFields(fields))
}
// Debugf("connected peer %s", "node-1")
// Debugf logs a formatted debug message.
func (l *Logger) Debugf(format string, args ...any) {
l.log(LevelDebug, core.Sprintf(format, args...), nil)
l.log(LevelDebug, fmt.Sprintf(format, args...), nil)
}
// Infof("worker %s ready", "node-1")
// Infof logs a formatted informational message.
func (l *Logger) Infof(format string, args ...any) {
l.log(LevelInfo, core.Sprintf(format, args...), nil)
l.log(LevelInfo, fmt.Sprintf(format, args...), nil)
}
// Warnf("peer %s is slow", "node-1")
// Warnf logs a formatted warning message.
func (l *Logger) Warnf(format string, args ...any) {
l.log(LevelWarn, core.Sprintf(format, args...), nil)
l.log(LevelWarn, fmt.Sprintf(format, args...), nil)
}
// Errorf("peer %s failed", "node-1")
// Errorf logs a formatted error message.
func (l *Logger) Errorf(format string, args ...any) {
l.log(LevelError, core.Sprintf(format, args...), nil)
l.log(LevelError, fmt.Sprintf(format, args...), nil)
}
// fields := mergeFields([]Fields{{"peer_id": "node-1"}, {"attempt": 2}})
// mergeFields combines multiple Fields maps into one.
func mergeFields(fields []Fields) Fields {
if len(fields) == 0 {
return nil
@ -201,75 +198,79 @@ func mergeFields(fields []Fields) Fields {
return result
}
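The tests later in this diff assert that `mergeFields` lets later field maps override earlier ones. A standalone sketch of that override semantics (the helper name `mergeLater` is hypothetical; the real body is partly elided by the hunk), using the `maps` package this file now imports:

```go
package main

import (
	"fmt"
	"maps"
)

// Fields mirrors the logging package's Fields type.
type Fields map[string]any

// mergeLater applies the maps in order, so on a key collision
// the value from the later map wins.
func mergeLater(fields ...Fields) Fields {
	if len(fields) == 0 {
		return nil
	}
	result := make(Fields)
	for _, f := range fields {
		maps.Copy(result, f)
	}
	return result
}

func main() {
	merged := mergeLater(Fields{"peer_id": "node-1", "attempt": 1}, Fields{"attempt": 2})
	fmt.Println(merged["peer_id"], merged["attempt"]) // attempt: 2 overrides 1
}
```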
// --- Global logger for convenience ---
var (
globalLogger = New(DefaultConfig())
globalMu sync.RWMutex
)
// SetGlobal(New(DefaultConfig()))
// SetGlobal sets the global logger instance.
func SetGlobal(l *Logger) {
globalMu.Lock()
defer globalMu.Unlock()
globalLogger = l
}
// logger := GetGlobal()
// GetGlobal returns the global logger instance.
func GetGlobal() *Logger {
globalMu.RLock()
defer globalMu.RUnlock()
return globalLogger
}
// SetGlobalLevel(LevelDebug)
// SetGlobalLevel sets the log level of the global logger.
func SetGlobalLevel(level Level) {
globalMu.RLock()
defer globalMu.RUnlock()
globalLogger.SetLevel(level)
}
// Debug("connected", Fields{"peer_id": "node-1"})
func Debug(message string, fields ...Fields) {
GetGlobal().Debug(message, fields...)
// Global convenience functions that use the global logger
// Debug logs a debug message using the global logger.
func Debug(msg string, fields ...Fields) {
GetGlobal().Debug(msg, fields...)
}
// Info("worker started", Fields{"component": "transport"})
func Info(message string, fields ...Fields) {
GetGlobal().Info(message, fields...)
// Info logs an informational message using the global logger.
func Info(msg string, fields ...Fields) {
GetGlobal().Info(msg, fields...)
}
// Warn("peer rate limited", Fields{"peer_id": "node-1"})
func Warn(message string, fields ...Fields) {
GetGlobal().Warn(message, fields...)
// Warn logs a warning message using the global logger.
func Warn(msg string, fields ...Fields) {
GetGlobal().Warn(msg, fields...)
}
// Error("send failed", Fields{"peer_id": "node-1"})
func Error(message string, fields ...Fields) {
GetGlobal().Error(message, fields...)
// Error logs an error message using the global logger.
func Error(msg string, fields ...Fields) {
GetGlobal().Error(msg, fields...)
}
// Debugf("connected peer %s", "node-1")
// Debugf logs a formatted debug message using the global logger.
func Debugf(format string, args ...any) {
GetGlobal().Debugf(format, args...)
}
// Infof("worker %s ready", "node-1")
// Infof logs a formatted informational message using the global logger.
func Infof(format string, args ...any) {
GetGlobal().Infof(format, args...)
}
// Warnf("peer %s is slow", "node-1")
// Warnf logs a formatted warning message using the global logger.
func Warnf(format string, args ...any) {
GetGlobal().Warnf(format, args...)
}
// Errorf("peer %s failed", "node-1")
// Errorf logs a formatted error message using the global logger.
func Errorf(format string, args ...any) {
GetGlobal().Errorf(format, args...)
}
// level, err := ParseLevel("warn")
// ParseLevel parses a string into a log level.
func ParseLevel(s string) (Level, error) {
switch core.Upper(s) {
switch strings.ToUpper(s) {
case "DEBUG":
return LevelDebug, nil
case "INFO":
@ -279,6 +280,6 @@ func ParseLevel(s string) (Level, error) {
case "ERROR":
return LevelError, nil
default:
return LevelInfo, core.E("logging.ParseLevel", "unknown log level: "+s, nil)
return LevelInfo, coreerr.E("logging.ParseLevel", "unknown log level: "+s, nil)
}
}


@ -2,13 +2,11 @@ package logging
import (
"bytes"
"sync"
"strings"
"testing"
core "dappco.re/go/core"
)
func TestLogger_Levels_Good(t *testing.T) {
func TestLoggerLevels(t *testing.T) {
var buf bytes.Buffer
logger := New(Config{
Output: &buf,
@ -23,29 +21,29 @@ func TestLogger_Levels_Good(t *testing.T) {
// Info should appear
logger.Info("info message")
if !core.Contains(buf.String(), "[INFO]") {
if !strings.Contains(buf.String(), "[INFO]") {
t.Error("Info message should appear")
}
if !core.Contains(buf.String(), "info message") {
if !strings.Contains(buf.String(), "info message") {
t.Error("Info message content should appear")
}
buf.Reset()
// Warn should appear
logger.Warn("warn message")
if !core.Contains(buf.String(), "[WARN]") {
if !strings.Contains(buf.String(), "[WARN]") {
t.Error("Warn message should appear")
}
buf.Reset()
// Error should appear
logger.Error("error message")
if !core.Contains(buf.String(), "[ERROR]") {
if !strings.Contains(buf.String(), "[ERROR]") {
t.Error("Error message should appear")
}
}
func TestLogger_DebugLevel_Good(t *testing.T) {
func TestLoggerDebugLevel(t *testing.T) {
var buf bytes.Buffer
logger := New(Config{
Output: &buf,
@ -53,12 +51,12 @@ func TestLogger_DebugLevel_Good(t *testing.T) {
})
logger.Debug("debug message")
if !core.Contains(buf.String(), "[DEBUG]") {
if !strings.Contains(buf.String(), "[DEBUG]") {
t.Error("Debug message should appear at Debug level")
}
}
func TestLogger_WithFields_Good(t *testing.T) {
func TestLoggerWithFields(t *testing.T) {
var buf bytes.Buffer
logger := New(Config{
Output: &buf,
@ -68,15 +66,15 @@ func TestLogger_WithFields_Good(t *testing.T) {
logger.Info("test message", Fields{"key": "value", "num": 42})
output := buf.String()
if !core.Contains(output, "key=value") {
if !strings.Contains(output, "key=value") {
t.Error("Field key=value should appear")
}
if !core.Contains(output, "num=42") {
if !strings.Contains(output, "num=42") {
t.Error("Field num=42 should appear")
}
}
func TestLogger_ConfigComponent_Good(t *testing.T) {
func TestLoggerWithComponent(t *testing.T) {
var buf bytes.Buffer
logger := New(Config{
Output: &buf,
@ -87,33 +85,28 @@ func TestLogger_ConfigComponent_Good(t *testing.T) {
logger.Info("test message")
output := buf.String()
if !core.Contains(output, "[TestComponent]") {
if !strings.Contains(output, "[TestComponent]") {
t.Error("Component name should appear in log")
}
}
func TestLogger_ComponentLogger_Good(t *testing.T) {
func TestLoggerDerivedComponent(t *testing.T) {
var buf bytes.Buffer
parent := New(Config{
Output: &buf,
Level: LevelInfo,
})
child := parent.ComponentLogger("ChildComponent")
child := parent.WithComponent("ChildComponent")
child.Info("child message")
secondaryLogger := parent.ComponentLogger("SecondaryComponent")
secondaryLogger.Info("secondary message")
output := buf.String()
if !core.Contains(output, "[ChildComponent]") {
if !strings.Contains(output, "[ChildComponent]") {
t.Error("Derived component name should appear")
}
if !core.Contains(output, "[SecondaryComponent]") {
t.Error("Secondary component should preserve the component name")
}
}
func TestLogger_Formatted_Good(t *testing.T) {
func TestLoggerFormatted(t *testing.T) {
var buf bytes.Buffer
logger := New(Config{
Output: &buf,
@ -123,12 +116,12 @@ func TestLogger_Formatted_Good(t *testing.T) {
logger.Infof("formatted %s %d", "string", 123)
output := buf.String()
if !core.Contains(output, "formatted string 123") {
if !strings.Contains(output, "formatted string 123") {
t.Errorf("Formatted message should appear, got: %s", output)
}
}
func TestLogger_SetLevel_Good(t *testing.T) {
func TestSetLevel(t *testing.T) {
var buf bytes.Buffer
logger := New(Config{
Output: &buf,
@ -144,17 +137,17 @@ func TestLogger_SetLevel_Good(t *testing.T) {
// Change to Info level
logger.SetLevel(LevelInfo)
logger.Info("should appear now")
if !core.Contains(buf.String(), "should appear now") {
if !strings.Contains(buf.String(), "should appear now") {
t.Error("Info should appear after level change")
}
// Verify Level
// Verify GetLevel
if logger.GetLevel() != LevelInfo {
t.Error("GetLevel should return LevelInfo")
}
}
func TestLogger_ParseLevel_Good(t *testing.T) {
func TestParseLevel(t *testing.T) {
tests := []struct {
input string
expected Level
@ -187,7 +180,7 @@ func TestLogger_ParseLevel_Good(t *testing.T) {
}
}
func TestLogger_GlobalLogger_Good(t *testing.T) {
func TestGlobalLogger(t *testing.T) {
var buf bytes.Buffer
logger := New(Config{
Output: &buf,
@ -197,7 +190,7 @@ func TestLogger_GlobalLogger_Good(t *testing.T) {
SetGlobal(logger)
Info("global test")
if !core.Contains(buf.String(), "global test") {
if !strings.Contains(buf.String(), "global test") {
t.Error("Global logger should write message")
}
@ -212,7 +205,7 @@ func TestLogger_GlobalLogger_Good(t *testing.T) {
SetGlobal(New(DefaultConfig()))
}
func TestLogger_LevelString_Good(t *testing.T) {
func TestLevelString(t *testing.T) {
tests := []struct {
level Level
expected string
@ -231,7 +224,7 @@ func TestLogger_LevelString_Good(t *testing.T) {
}
}
func TestLogger_MergeFields_Good(t *testing.T) {
func TestMergeFields(t *testing.T) {
// Empty fields
result := mergeFields(nil)
if result != nil {
@ -267,35 +260,3 @@ func TestLogger_MergeFields_Good(t *testing.T) {
t.Error("Later fields should override earlier ones")
}
}
func TestLogger_ParseLevel_Bad(t *testing.T) {
_, err := ParseLevel("bogus")
if err == nil {
t.Error("ParseLevel should return an error for an unrecognised level string")
}
}
func TestLogger_ConcurrentWrite_Ugly(t *testing.T) {
var buf bytes.Buffer
logger := New(Config{
Output: &buf,
Level: LevelDebug,
})
const goroutines = 50
var wg sync.WaitGroup
wg.Add(goroutines)
for i := range goroutines {
go func(n int) {
defer wg.Done()
logger.Infof("concurrent message %d", n)
}(i)
}
wg.Wait()
// Only assert no panics / races occurred; output ordering is non-deterministic.
if buf.Len() == 0 {
t.Error("expected concurrent log writes to produce output")
}
}


@ -1,44 +0,0 @@
// SPDX-License-Identifier: EUPL-1.2
package node
import (
"io/fs"
"testing"
core "dappco.re/go/core"
"github.com/stretchr/testify/require"
)
func testJoinPath(parts ...string) string {
return core.JoinPath(parts...)
}
func testNodeManagerPaths(dir string) (string, string) {
return testJoinPath(dir, "private.key"), testJoinPath(dir, "node.json")
}
func testWriteFile(t *testing.T, path string, content []byte, mode fs.FileMode) {
t.Helper()
require.NoError(t, filesystemResultError(localFileSystem.WriteMode(path, string(content), mode)))
}
func testReadFile(t *testing.T, path string) []byte {
t.Helper()
content, err := filesystemRead(path)
require.NoError(t, err)
return []byte(content)
}
func testJSONMarshal(t *testing.T, v any) []byte {
t.Helper()
result := core.JSONMarshal(v)
require.True(t, result.OK, "marshal should succeed: %v", result.Value)
return result.Value.([]byte)
}
func testJSONUnmarshal(t *testing.T, data []byte, target any) {
t.Helper()
result := core.JSONUnmarshal(data, target)
require.True(t, result.OK, "unmarshal should succeed: %v", result.Value)
}


@ -2,10 +2,11 @@ package node
import (
"encoding/base64"
"encoding/json"
"path/filepath"
"testing"
"time"
core "dappco.re/go/core"
"forge.lthn.ai/Snider/Borg/pkg/smsg"
)
@ -15,7 +16,10 @@ func BenchmarkIdentityGenerate(b *testing.B) {
b.ReportAllocs()
for b.Loop() {
dir := b.TempDir()
nm, err := NewNodeManagerFromPaths(testNodeManagerPaths(dir))
nm, err := NewNodeManagerWithPaths(
filepath.Join(dir, "private.key"),
filepath.Join(dir, "node.json"),
)
if err != nil {
b.Fatalf("create node manager: %v", err)
}
@ -30,10 +34,10 @@ func BenchmarkDeriveSharedSecret(b *testing.B) {
dir1 := b.TempDir()
dir2 := b.TempDir()
nm1, _ := NewNodeManagerFromPaths(testJoinPath(dir1, "k"), testJoinPath(dir1, "n"))
nm1, _ := NewNodeManagerWithPaths(filepath.Join(dir1, "k"), filepath.Join(dir1, "n"))
nm1.GenerateIdentity("node1", RoleDual)
nm2, _ := NewNodeManagerFromPaths(testJoinPath(dir2, "k"), testJoinPath(dir2, "n"))
nm2, _ := NewNodeManagerWithPaths(filepath.Join(dir2, "k"), filepath.Join(dir2, "n"))
nm2.GenerateIdentity("node2", RoleDual)
peerPubKey := nm2.GetIdentity().PublicKey
@ -73,7 +77,7 @@ func BenchmarkMessageSerialise(b *testing.B) {
b.ResetTimer()
for b.Loop() {
msg, err := NewMessage(MessageStats, "sender-id", "receiver-id", payload)
msg, err := NewMessage(MsgStats, "sender-id", "receiver-id", payload)
if err != nil {
b.Fatalf("create message: %v", err)
}
@ -84,8 +88,8 @@ func BenchmarkMessageSerialise(b *testing.B) {
}
var restored Message
if result := core.JSONUnmarshal(data, &restored); !result.OK {
b.Fatalf("unmarshal message: %v", result.Value)
if err := json.Unmarshal(data, &restored); err != nil {
b.Fatalf("unmarshal message: %v", err)
}
}
}
@ -98,7 +102,7 @@ func BenchmarkMessageCreateOnly(b *testing.B) {
b.ResetTimer()
for b.Loop() {
_, err := NewMessage(MessagePing, "sender", "receiver", payload)
_, err := NewMessage(MsgPing, "sender", "receiver", payload)
if err != nil {
b.Fatalf("create message: %v", err)
}
@ -132,8 +136,9 @@ func BenchmarkMarshalJSON(b *testing.B) {
b.Run("Stdlib", func(b *testing.B) {
b.ReportAllocs()
for b.Loop() {
if result := core.JSONMarshal(data); !result.OK {
b.Fatal(result.Value)
_, err := json.Marshal(data)
if err != nil {
b.Fatal(err)
}
}
})
@ -145,10 +150,10 @@ func BenchmarkSMSGEncryptDecrypt(b *testing.B) {
dir1 := b.TempDir()
dir2 := b.TempDir()
nm1, _ := NewNodeManagerFromPaths(testJoinPath(dir1, "k"), testJoinPath(dir1, "n"))
nm1, _ := NewNodeManagerWithPaths(filepath.Join(dir1, "k"), filepath.Join(dir1, "n"))
nm1.GenerateIdentity("node1", RoleDual)
nm2, _ := NewNodeManagerFromPaths(testJoinPath(dir2, "k"), testJoinPath(dir2, "n"))
nm2, _ := NewNodeManagerWithPaths(filepath.Join(dir2, "k"), filepath.Join(dir2, "n"))
nm2.GenerateIdentity("node2", RoleDual)
sharedSecret, _ := nm1.DeriveSharedSecret(nm2.GetIdentity().PublicKey)
@ -197,7 +202,7 @@ func BenchmarkChallengeSignVerify(b *testing.B) {
// BenchmarkPeerScoring measures KD-tree rebuild and peer selection.
func BenchmarkPeerScoring(b *testing.B) {
dir := b.TempDir()
reg, err := NewPeerRegistryFromPath(testJoinPath(dir, "peers.json"))
reg, err := NewPeerRegistryWithPath(filepath.Join(dir, "peers.json"))
if err != nil {
b.Fatalf("create registry: %v", err)
}
@ -206,13 +211,13 @@ func BenchmarkPeerScoring(b *testing.B) {
// Add 50 peers with varied metrics
for i := range 50 {
peer := &Peer{
ID: testJoinPath("peer", string(rune('A'+i%26)), string(rune('0'+i/26))),
Name: "peer",
PingMilliseconds: float64(i*10 + 5),
Hops: i%5 + 1,
GeographicKilometres: float64(i * 100),
Score: float64(50 + i%50),
AddedAt: time.Now(),
ID: filepath.Join("peer", string(rune('A'+i%26)), string(rune('0'+i/26))),
Name: "peer",
PingMS: float64(i*10 + 5),
Hops: i%5 + 1,
GeoKM: float64(i * 100),
Score: float64(50 + i%50),
AddedAt: time.Now(),
}
// Bypass AddPeer's duplicate check by adding directly
reg.mu.Lock()


@ -1,51 +0,0 @@
package node
import (
"bytes"
"sync"
core "dappco.re/go/core"
)
// bufferPool provides reusable byte buffers for JSON encoding in hot paths.
// This reduces allocation overhead in message serialization.
var bufferPool = sync.Pool{
New: func() any {
return bytes.NewBuffer(make([]byte, 0, 1024))
},
}
func getBuffer() *bytes.Buffer {
buffer := bufferPool.Get().(*bytes.Buffer)
buffer.Reset()
return buffer
}
func putBuffer(buffer *bytes.Buffer) {
// Don't pool buffers that grew too large (>64KB)
if buffer.Cap() <= 65536 {
bufferPool.Put(buffer)
}
}
// MarshalJSON encodes a value to JSON using Core's JSON primitive and then
// restores the historical no-EscapeHTML behaviour expected by the node package.
// Returns a copy of the encoded bytes (safe to use after the function returns).
//
// data, err := MarshalJSON(value)
func MarshalJSON(value any) ([]byte, error) {
encoded := core.JSONMarshal(value)
if !encoded.OK {
return nil, encoded.Value.(error)
}
data := encoded.Value.([]byte)
data = bytes.ReplaceAll(data, []byte(`\u003c`), []byte("<"))
data = bytes.ReplaceAll(data, []byte(`\u003e`), []byte(">"))
data = bytes.ReplaceAll(data, []byte(`\u0026`), []byte("&"))
// Return a copy since callers may retain the slice after subsequent calls.
out := make([]byte, len(data))
copy(out, data)
return out, nil
}

node/bufpool.go Normal file

@ -0,0 +1,55 @@
package node
import (
"bytes"
"encoding/json"
"sync"
)
// bufferPool provides reusable byte buffers for JSON encoding.
// This reduces allocation overhead in hot paths like message serialization.
var bufferPool = sync.Pool{
New: func() any {
return bytes.NewBuffer(make([]byte, 0, 1024))
},
}
// getBuffer retrieves a buffer from the pool.
func getBuffer() *bytes.Buffer {
buf := bufferPool.Get().(*bytes.Buffer)
buf.Reset()
return buf
}
// putBuffer returns a buffer to the pool.
func putBuffer(buf *bytes.Buffer) {
// Don't pool buffers that grew too large (>64KB)
if buf.Cap() <= 65536 {
bufferPool.Put(buf)
}
}
// MarshalJSON encodes a value to JSON using a pooled buffer.
// Returns a copy of the encoded bytes (safe to use after the function returns).
func MarshalJSON(v any) ([]byte, error) {
buf := getBuffer()
defer putBuffer(buf)
enc := json.NewEncoder(buf)
// Don't escape HTML characters (unlike json.Marshal, which escapes <, >, and &)
enc.SetEscapeHTML(false)
if err := enc.Encode(v); err != nil {
return nil, err
}
// json.Encoder.Encode adds a newline; remove it to match json.Marshal
data := buf.Bytes()
if len(data) > 0 && data[len(data)-1] == '\n' {
data = data[:len(data)-1]
}
// Return a copy since the buffer will be reused
result := make([]byte, len(data))
copy(result, data)
return result, nil
}
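Because `MarshalJSON` disables HTML escaping, its output deliberately differs from `json.Marshal`, which escapes `<`, `>`, and `&` as `\u003c`, `\u003e`, and `\u0026`. A self-contained sketch of the same pooled-encoder pattern showing that difference (names are illustrative, not the package's exported API):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"sync"
)

// pool of reusable buffers, as in node/bufpool.go above.
var pool = sync.Pool{New: func() any { return bytes.NewBuffer(make([]byte, 0, 1024)) }}

// marshalNoEscape encodes v without HTML escaping, strips the trailing
// newline json.Encoder appends, and returns a copy safe to retain.
func marshalNoEscape(v any) ([]byte, error) {
	buf := pool.Get().(*bytes.Buffer)
	buf.Reset()
	defer pool.Put(buf)
	enc := json.NewEncoder(buf)
	enc.SetEscapeHTML(false)
	if err := enc.Encode(v); err != nil {
		return nil, err
	}
	data := bytes.TrimSuffix(buf.Bytes(), []byte("\n"))
	out := make([]byte, len(data))
	copy(out, data)
	return out, nil
}

func main() {
	in := map[string]string{"html": "<b>&</b>"}
	pooled, _ := marshalNoEscape(in)
	std, _ := json.Marshal(in)
	fmt.Printf("pooled: %s\nstdlib: %s\n", pooled, std)
}
```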


@ -2,17 +2,17 @@ package node
import (
"bytes"
"encoding/json"
"sync"
"testing"
core "dappco.re/go/core"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// --- buffer_pool.go tests ---
// --- bufpool.go tests ---
func TestBufferPool_Buffer_ReturnsResetBuffer_Good(t *testing.T) {
func TestGetBuffer_ReturnsResetBuffer(t *testing.T) {
t.Run("buffer is initially empty", func(t *testing.T) {
buf := getBuffer()
defer putBuffer(buf)
@ -33,7 +33,7 @@ func TestBufferPool_Buffer_ReturnsResetBuffer_Good(t *testing.T) {
})
}
func TestBufferPool_PutBuffer_DiscardsOversizedBuffers_Good(t *testing.T) {
func TestPutBuffer_DiscardsOversizedBuffers(t *testing.T) {
t.Run("buffer at 64KB limit is pooled", func(t *testing.T) {
buf := getBuffer()
buf.Grow(65536)
@ -59,7 +59,7 @@ func TestBufferPool_PutBuffer_DiscardsOversizedBuffers_Good(t *testing.T) {
})
}
func TestBufferPool_BufferIndependence_Good(t *testing.T) {
func TestBufPool_BufferIndependence(t *testing.T) {
buf1 := getBuffer()
buf2 := getBuffer()
@ -77,7 +77,7 @@ func TestBufferPool_BufferIndependence_Good(t *testing.T) {
putBuffer(buf2)
}
func TestBufferPool_MarshalJSON_BasicTypes_Good(t *testing.T) {
func TestMarshalJSON_BasicTypes(t *testing.T) {
tests := []struct {
name string
input any
@ -121,7 +121,8 @@ func TestBufferPool_MarshalJSON_BasicTypes_Good(t *testing.T) {
got, err := MarshalJSON(tt.input)
require.NoError(t, err)
expected := testJSONMarshal(t, tt.input)
expected, err := json.Marshal(tt.input)
require.NoError(t, err)
assert.JSONEq(t, string(expected), string(got),
"MarshalJSON output should match json.Marshal")
@ -129,7 +130,7 @@ func TestBufferPool_MarshalJSON_BasicTypes_Good(t *testing.T) {
}
}
func TestBufferPool_MarshalJSON_NoTrailingNewline_Good(t *testing.T) {
func TestMarshalJSON_NoTrailingNewline(t *testing.T) {
data, err := MarshalJSON(map[string]string{"key": "value"})
require.NoError(t, err)
@ -137,7 +138,7 @@ func TestBufferPool_MarshalJSON_NoTrailingNewline_Good(t *testing.T) {
"MarshalJSON should strip the trailing newline added by json.Encoder")
}
func TestBufferPool_MarshalJSON_HTMLEscaping_Good(t *testing.T) {
func TestMarshalJSON_HTMLEscaping(t *testing.T) {
input := map[string]string{"html": "<script>alert('xss')</script>"}
data, err := MarshalJSON(input)
require.NoError(t, err)
@ -146,7 +147,7 @@ func TestBufferPool_MarshalJSON_HTMLEscaping_Good(t *testing.T) {
"HTML characters should not be escaped when EscapeHTML is false")
}
func TestBufferPool_MarshalJSON_ReturnsCopy_Good(t *testing.T) {
func TestMarshalJSON_ReturnsCopy(t *testing.T) {
data1, err := MarshalJSON("first")
require.NoError(t, err)
@ -161,7 +162,7 @@ func TestBufferPool_MarshalJSON_ReturnsCopy_Good(t *testing.T) {
"returned slice should be a copy and not be mutated by subsequent calls")
}
func TestBufferPool_MarshalJSON_ReturnsIndependentCopy_Good(t *testing.T) {
func TestMarshalJSON_ReturnsIndependentCopy(t *testing.T) {
data1, err := MarshalJSON(map[string]string{"first": "call"})
require.NoError(t, err)
@ -174,13 +175,13 @@ func TestBufferPool_MarshalJSON_ReturnsIndependentCopy_Good(t *testing.T) {
"second result should contain its own data")
}
func TestBufferPool_MarshalJSON_InvalidValue_Bad(t *testing.T) {
func TestMarshalJSON_InvalidValue(t *testing.T) {
ch := make(chan int)
_, err := MarshalJSON(ch)
assert.Error(t, err, "marshalling an unserialisable type should return an error")
}
func TestBufferPool_ConcurrentAccess_Ugly(t *testing.T) {
func TestBufferPool_ConcurrentAccess(t *testing.T) {
const goroutines = 100
const iterations = 50
@ -205,7 +206,7 @@ func TestBufferPool_ConcurrentAccess_Ugly(t *testing.T) {
wg.Wait()
}
func TestBufferPool_MarshalJSON_ConcurrentSafety_Ugly(t *testing.T) {
func TestMarshalJSON_ConcurrentSafety(t *testing.T) {
const goroutines = 50
var wg sync.WaitGroup
@ -222,8 +223,8 @@ func TestBufferPool_MarshalJSON_ConcurrentSafety_Ugly(t *testing.T) {
if err == nil {
var parsed PingPayload
if result := core.JSONUnmarshal(data, &parsed); !result.OK {
err = result.Value.(error)
err = json.Unmarshal(data, &parsed)
if err != nil {
errs[idx] = err
return
}
@ -241,7 +242,7 @@ func TestBufferPool_MarshalJSON_ConcurrentSafety_Ugly(t *testing.T) {
}
}
func TestBufferPool_ReuseAfterReset_Ugly(t *testing.T) {
func TestBufferPool_ReuseAfterReset(t *testing.T) {
buf := getBuffer()
buf.Write(make([]byte, 4096))
putBuffer(buf)


@ -5,28 +5,29 @@ import (
"bytes"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"io"
"io/fs"
"os"
"path/filepath"
"strings"
core "dappco.re/go/core"
coreio "dappco.re/go/core/io"
coreerr "dappco.re/go/core/log"
"forge.lthn.ai/Snider/Borg/pkg/datanode"
"forge.lthn.ai/Snider/Borg/pkg/tim"
)
// bundleType := BundleProfile
// BundleType defines the type of deployment bundle.
type BundleType string
const (
// BundleProfile contains a profile JSON payload.
BundleProfile BundleType = "profile"
// BundleMiner contains a miner binary and optional profile data.
BundleMiner BundleType = "miner"
// BundleFull contains the full deployment payload.
BundleFull BundleType = "full"
BundleProfile BundleType = "profile" // Just config/profile JSON
BundleMiner BundleType = "miner" // Miner binary + config
BundleFull BundleType = "full" // Everything (miner + profiles + config)
)
// bundle := &Bundle{Type: BundleProfile, Name: "xmrig", Data: []byte("{}")}
// Bundle represents a deployment bundle for P2P transfer.
type Bundle struct {
Type BundleType `json:"type"`
Name string `json:"name"`
@ -34,7 +35,7 @@ type Bundle struct {
Checksum string `json:"checksum"` // SHA-256 of Data
}
// manifest := BundleManifest{Name: "xmrig", Type: BundleMiner}
// BundleManifest describes the contents of a bundle.
type BundleManifest struct {
Type BundleType `json:"type"`
Name string `json:"name"`
@ -44,19 +45,22 @@ type BundleManifest struct {
CreatedAt string `json:"createdAt"`
}
// bundle, err := CreateProfileBundle(profileJSON, "xmrig-default", "password")
// CreateProfileBundle creates an encrypted bundle containing a mining profile.
func CreateProfileBundle(profileJSON []byte, name string, password string) (*Bundle, error) {
timBundle, err := tim.New()
// Create a TIM with just the profile config
t, err := tim.New()
if err != nil {
return nil, core.E("CreateProfileBundle", "failed to create TIM", err)
return nil, coreerr.E("CreateProfileBundle", "failed to create TIM", err)
}
timBundle.Config = profileJSON
t.Config = profileJSON
stimData, err := timBundle.ToSigil(password)
// Encrypt to STIM format
stimData, err := t.ToSigil(password)
if err != nil {
return nil, core.E("CreateProfileBundle", "failed to encrypt bundle", err)
return nil, coreerr.E("CreateProfileBundle", "failed to encrypt bundle", err)
}
// Calculate checksum
checksum := calculateChecksum(stimData)
return &Bundle{
@ -67,7 +71,7 @@ func CreateProfileBundle(profileJSON []byte, name string, password string) (*Bun
}, nil
}
// bundle, err := CreateProfileBundleUnencrypted(profileJSON, "xmrig-default")
// CreateProfileBundleUnencrypted creates a plain JSON bundle (for testing or trusted networks).
func CreateProfileBundleUnencrypted(profileJSON []byte, name string) (*Bundle, error) {
checksum := calculateChecksum(profileJSON)
@ -79,38 +83,44 @@ func CreateProfileBundleUnencrypted(profileJSON []byte, name string) (*Bundle, e
}, nil
}
// bundle, err := CreateMinerBundle("/srv/miners/xmrig", profileJSON, "xmrig", "password")
// CreateMinerBundle creates an encrypted bundle containing a miner binary and optional profile.
func CreateMinerBundle(minerPath string, profileJSON []byte, name string, password string) (*Bundle, error) {
minerContent, err := filesystemRead(minerPath)
// Read miner binary
minerContent, err := coreio.Local.Read(minerPath)
if err != nil {
return nil, core.E("CreateMinerBundle", "failed to read miner binary", err)
return nil, coreerr.E("CreateMinerBundle", "failed to read miner binary", err)
}
minerData := []byte(minerContent)
// Create a tarball with the miner binary
tarData, err := createTarball(map[string][]byte{
core.PathBase(minerPath): minerData,
filepath.Base(minerPath): minerData,
})
if err != nil {
return nil, core.E("CreateMinerBundle", "failed to create tarball", err)
return nil, coreerr.E("CreateMinerBundle", "failed to create tarball", err)
}
dataNode, err := datanode.FromTar(tarData)
// Create DataNode from tarball
dn, err := datanode.FromTar(tarData)
if err != nil {
return nil, core.E("CreateMinerBundle", "failed to create datanode", err)
return nil, coreerr.E("CreateMinerBundle", "failed to create datanode", err)
}
timBundle, err := tim.FromDataNode(dataNode)
// Create TIM from DataNode
t, err := tim.FromDataNode(dn)
if err != nil {
return nil, core.E("CreateMinerBundle", "failed to create TIM", err)
return nil, coreerr.E("CreateMinerBundle", "failed to create TIM", err)
}
// Set profile as config if provided
if profileJSON != nil {
timBundle.Config = profileJSON
t.Config = profileJSON
}
stimData, err := timBundle.ToSigil(password)
// Encrypt to STIM format
stimData, err := t.ToSigil(password)
if err != nil {
return nil, core.E("CreateMinerBundle", "failed to encrypt bundle", err)
return nil, coreerr.E("CreateMinerBundle", "failed to encrypt bundle", err)
}
checksum := calculateChecksum(stimData)
@ -123,58 +133,67 @@ func CreateMinerBundle(minerPath string, profileJSON []byte, name string, passwo
}, nil
}
// profileJSON, err := ExtractProfileBundle(bundle, "password")
// ExtractProfileBundle decrypts and extracts a profile bundle.
func ExtractProfileBundle(bundle *Bundle, password string) ([]byte, error) {
// Verify checksum first
if calculateChecksum(bundle.Data) != bundle.Checksum {
return nil, core.E("ExtractProfileBundle", "checksum mismatch - bundle may be corrupted", nil)
return nil, coreerr.E("ExtractProfileBundle", "checksum mismatch - bundle may be corrupted", nil)
}
// If it's unencrypted JSON, just return it
if isJSON(bundle.Data) {
return bundle.Data, nil
}
timBundle, err := tim.FromSigil(bundle.Data, password)
// Decrypt STIM format
t, err := tim.FromSigil(bundle.Data, password)
if err != nil {
return nil, core.E("ExtractProfileBundle", "failed to decrypt bundle", err)
return nil, coreerr.E("ExtractProfileBundle", "failed to decrypt bundle", err)
}
return timBundle.Config, nil
return t.Config, nil
}
// minerPath, profileJSON, err := ExtractMinerBundle(bundle, "password", "/srv/miners")
// ExtractMinerBundle decrypts and extracts a miner bundle, returning the miner path and profile.
func ExtractMinerBundle(bundle *Bundle, password string, destDir string) (string, []byte, error) {
// Verify checksum
if calculateChecksum(bundle.Data) != bundle.Checksum {
return "", nil, core.E("ExtractMinerBundle", "checksum mismatch - bundle may be corrupted", nil)
return "", nil, coreerr.E("ExtractMinerBundle", "checksum mismatch - bundle may be corrupted", nil)
}
timBundle, err := tim.FromSigil(bundle.Data, password)
// Decrypt STIM format
t, err := tim.FromSigil(bundle.Data, password)
if err != nil {
return "", nil, core.E("ExtractMinerBundle", "failed to decrypt bundle", err)
return "", nil, coreerr.E("ExtractMinerBundle", "failed to decrypt bundle", err)
}
tarData, err := timBundle.RootFS.ToTar()
// Convert rootfs to tarball and extract
tarData, err := t.RootFS.ToTar()
if err != nil {
return "", nil, core.E("ExtractMinerBundle", "failed to convert rootfs to tar", err)
return "", nil, coreerr.E("ExtractMinerBundle", "failed to convert rootfs to tar", err)
}
// Extract tarball to destination
minerPath, err := extractTarball(tarData, destDir)
if err != nil {
return "", nil, core.E("ExtractMinerBundle", "failed to extract tarball", err)
return "", nil, coreerr.E("ExtractMinerBundle", "failed to extract tarball", err)
}
return minerPath, timBundle.Config, nil
return minerPath, t.Config, nil
}
// ok := VerifyBundle(bundle)
// VerifyBundle checks if a bundle's checksum is valid.
func VerifyBundle(bundle *Bundle) bool {
return calculateChecksum(bundle.Data) == bundle.Checksum
}
// calculateChecksum computes SHA-256 checksum of data.
func calculateChecksum(data []byte) string {
hash := sha256.Sum256(data)
return hex.EncodeToString(hash[:])
}
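The checksum helpers above pair a SHA-256 digest with hex encoding. A minimal standalone sketch of the same technique (function names here are illustrative, not this package's API):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// checksum returns the hex-encoded SHA-256 digest of data,
// mirroring the calculateChecksum pattern above.
func checksum(data []byte) string {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:])
}

// verify recomputes the digest and compares it to the stored value,
// as VerifyBundle does against Bundle.Checksum.
func verify(data []byte, want string) bool {
	return checksum(data) == want
}

func main() {
	data := []byte(`{"name":"xmrig"}`)
	sum := checksum(data)
	fmt.Println(len(sum), verify(data, sum)) // 64 hex chars, true
}
```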
// isJSON checks if data starts with JSON characters.
func isJSON(data []byte) bool {
if len(data) == 0 {
return false
@ -183,78 +202,73 @@ func isJSON(data []byte) bool {
return data[0] == '{' || data[0] == '['
}
// createTarball creates a tar archive from a map of filename -> content.
func createTarball(files map[string][]byte) ([]byte, error) {
var buf bytes.Buffer
tarWriter := tar.NewWriter(&buf)
tw := tar.NewWriter(&buf)
createdDirectories := make(map[string]bool)
// Track directories we've created
dirs := make(map[string]bool)
for name, content := range files {
dir := core.PathDir(name)
if dir != "." && !createdDirectories[dir] {
header := &tar.Header{
// Create parent directories if needed
dir := filepath.Dir(name)
if dir != "." && !dirs[dir] {
hdr := &tar.Header{
Name: dir + "/",
Mode: 0755,
Typeflag: tar.TypeDir,
}
if err := tarWriter.WriteHeader(header); err != nil {
if err := tw.WriteHeader(hdr); err != nil {
return nil, err
}
createdDirectories[dir] = true
dirs[dir] = true
}
// Binaries in miners/ and non-JSON content get executable permissions.
// Determine file mode (executable for binaries in miners/)
mode := int64(0644)
if core.PathDir(name) == "miners" || !isJSON(content) {
if filepath.Dir(name) == "miners" || !isJSON(content) {
mode = 0755
}
header := &tar.Header{
hdr := &tar.Header{
Name: name,
Mode: mode,
Size: int64(len(content)),
}
if err := tarWriter.WriteHeader(header); err != nil {
if err := tw.WriteHeader(hdr); err != nil {
return nil, err
}
if _, err := tarWriter.Write(content); err != nil {
if _, err := tw.Write(content); err != nil {
return nil, err
}
}
if err := tarWriter.Close(); err != nil {
if err := tw.Close(); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
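createTarball builds the archive entirely in memory via `archive/tar` over a `bytes.Buffer`. A self-contained sketch of that round trip, under the simplifying assumption of a single mode for all entries (the real function picks 0644 or 0755 per file):

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
	"io"
)

// tarFiles writes each name/content pair into an in-memory tar archive,
// marking entries executable when exec is true (as bundle.go does for
// binaries under miners/).
func tarFiles(files map[string][]byte, exec bool) ([]byte, error) {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	mode := int64(0644)
	if exec {
		mode = 0755
	}
	for name, content := range files {
		hdr := &tar.Header{Name: name, Mode: mode, Size: int64(len(content))}
		if err := tw.WriteHeader(hdr); err != nil {
			return nil, err
		}
		if _, err := tw.Write(content); err != nil {
			return nil, err
		}
	}
	// Close flushes the trailing blocks; skipping it truncates the archive.
	if err := tw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// listTar returns entry names in archive order.
func listTar(data []byte) ([]string, error) {
	tr := tar.NewReader(bytes.NewReader(data))
	var names []string
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		names = append(names, hdr.Name)
	}
	return names, nil
}

func main() {
	data, _ := tarFiles(map[string][]byte{"miner": []byte("bin")}, true)
	names, _ := listTar(data)
	fmt.Println(names)
}
```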
// extractTarball extracts a tar archive to a directory, returns first executable found.
func extractTarball(tarData []byte, destDir string) (string, error) {
// Ensure destDir is an absolute, clean path for security checks
absDestDir := destDir
pathSeparator := core.Env("DS")
if pathSeparator == "" {
pathSeparator = "/"
}
if !core.PathIsAbs(absDestDir) {
cwd := core.Env("DIR_CWD")
if cwd == "" {
return "", core.E("extractTarball", "failed to resolve destination directory", nil)
}
absDestDir = core.CleanPath(core.Concat(cwd, pathSeparator, absDestDir), pathSeparator)
} else {
absDestDir = core.CleanPath(absDestDir, pathSeparator)
absDestDir, err := filepath.Abs(destDir)
if err != nil {
return "", coreerr.E("extractTarball", "failed to resolve destination directory", err)
}
absDestDir = filepath.Clean(absDestDir)
if err := filesystemEnsureDir(absDestDir); err != nil {
if err := coreio.Local.EnsureDir(absDestDir); err != nil {
return "", err
}
tarReader := tar.NewReader(bytes.NewReader(tarData))
tr := tar.NewReader(bytes.NewReader(tarData))
var firstExecutable string
for {
header, err := tarReader.Next()
hdr, err := tr.Next()
if err == io.EOF {
break
}
@ -263,58 +277,61 @@ func extractTarball(tarData []byte, destDir string) (string, error) {
}
// Security: Sanitize the tar entry name to prevent path traversal (Zip Slip)
cleanName := core.CleanPath(header.Name, "/")
cleanName := filepath.Clean(hdr.Name)
// Reject absolute paths
if core.PathIsAbs(cleanName) {
return "", core.E("extractTarball", "invalid tar entry: absolute path not allowed: "+header.Name, nil)
if filepath.IsAbs(cleanName) {
return "", coreerr.E("extractTarball", "invalid tar entry: absolute path not allowed: "+hdr.Name, nil)
}
// Reject paths that escape the destination directory
if core.HasPrefix(cleanName, "../") || cleanName == ".." {
return "", core.E("extractTarball", "invalid tar entry: path traversal attempt: "+header.Name, nil)
if strings.HasPrefix(cleanName, ".."+string(os.PathSeparator)) || cleanName == ".." {
return "", coreerr.E("extractTarball", "invalid tar entry: path traversal attempt: "+hdr.Name, nil)
}
// Build the full path and verify it's within destDir
fullPath := core.CleanPath(core.Concat(absDestDir, pathSeparator, cleanName), pathSeparator)
fullPath := filepath.Join(absDestDir, cleanName)
fullPath = filepath.Clean(fullPath)
// Final security check: ensure the path is still within destDir
allowedPrefix := core.Concat(absDestDir, pathSeparator)
if absDestDir == pathSeparator {
allowedPrefix = absDestDir
}
if !core.HasPrefix(fullPath, allowedPrefix) && fullPath != absDestDir {
return "", core.E("extractTarball", "invalid tar entry: path escape attempt: "+header.Name, nil)
if !strings.HasPrefix(fullPath, absDestDir+string(os.PathSeparator)) && fullPath != absDestDir {
return "", coreerr.E("extractTarball", "invalid tar entry: path escape attempt: "+hdr.Name, nil)
}
switch header.Typeflag {
switch hdr.Typeflag {
case tar.TypeDir:
if err := filesystemEnsureDir(fullPath); err != nil {
if err := coreio.Local.EnsureDir(fullPath); err != nil {
return "", err
}
case tar.TypeReg:
// Ensure parent directory exists
if err := filesystemEnsureDir(core.PathDir(fullPath)); err != nil {
if err := coreio.Local.EnsureDir(filepath.Dir(fullPath)); err != nil {
return "", err
}
// os.OpenFile is used deliberately here instead of coreio.Local.Create/Write
// because coreio hardcodes file permissions (0644) and we need to preserve
// the tar header's mode bits — executable binaries require 0755.
f, err := os.OpenFile(fullPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
if err != nil {
return "", coreerr.E("extractTarball", "failed to create file "+hdr.Name, err)
}
// Limit file size to prevent decompression bombs (100MB max per file)
const maxFileSize int64 = 100 * 1024 * 1024
limitedReader := io.LimitReader(tarReader, maxFileSize+1)
content, err := io.ReadAll(limitedReader)
limitedReader := io.LimitReader(tr, maxFileSize+1)
written, err := io.Copy(f, limitedReader)
f.Close()
if err != nil {
return "", core.E("extractTarball", "failed to write file "+header.Name, err)
return "", coreerr.E("extractTarball", "failed to write file "+hdr.Name, err)
}
if int64(len(content)) > maxFileSize {
filesystemDelete(fullPath)
return "", core.E("extractTarball", "file "+header.Name+" exceeds maximum size", nil)
}
if err := filesystemResultError(localFileSystem.WriteMode(fullPath, string(content), fs.FileMode(header.Mode))); err != nil {
return "", core.E("extractTarball", "failed to create file "+header.Name, err)
if written > maxFileSize {
coreio.Local.Delete(fullPath)
return "", coreerr.E("extractTarball", "file "+hdr.Name+" exceeds maximum size", nil)
}
// Track first executable
if header.Mode&0111 != 0 && firstExecutable == "" {
if hdr.Mode&0111 != 0 && firstExecutable == "" {
firstExecutable = fullPath
}
// Explicitly ignore symlinks and hard links to prevent symlink attacks
@ -327,27 +344,18 @@ func extractTarball(tarData []byte, destDir string) (string, error) {
return firstExecutable, nil
}
// err := StreamBundle(bundle, writer)
// StreamBundle writes a bundle to a writer (for large transfers).
func StreamBundle(bundle *Bundle, w io.Writer) error {
result := core.JSONMarshal(bundle)
if !result.OK {
return result.Value.(error)
}
_, err := w.Write(result.Value.([]byte))
return err
encoder := json.NewEncoder(w)
return encoder.Encode(bundle)
}
// bundle, err := ReadBundle(reader)
// ReadBundle reads a bundle from a reader.
func ReadBundle(r io.Reader) (*Bundle, error) {
var buf bytes.Buffer
if _, err := io.Copy(&buf, r); err != nil {
return nil, err
}
var bundle Bundle
result := core.JSONUnmarshal(buf.Bytes(), &bundle)
if !result.OK {
return nil, result.Value.(error)
decoder := json.NewDecoder(r)
if err := decoder.Decode(&bundle); err != nil {
return nil, err
}
return &bundle, nil
}


@ -3,10 +3,12 @@ package node
import (
"archive/tar"
"bytes"
"os"
"path/filepath"
"testing"
)
func TestBundle_CreateProfileBundleUnencrypted_Good(t *testing.T) {
func TestCreateProfileBundleUnencrypted(t *testing.T) {
profileJSON := []byte(`{"name":"test-profile","minerType":"xmrig","config":{}}`)
bundle, err := CreateProfileBundleUnencrypted(profileJSON, "test-profile")
@ -31,7 +33,7 @@ func TestBundle_CreateProfileBundleUnencrypted_Good(t *testing.T) {
}
}
func TestBundle_VerifyBundle_Good(t *testing.T) {
func TestVerifyBundle(t *testing.T) {
t.Run("ValidChecksum", func(t *testing.T) {
bundle, _ := CreateProfileBundleUnencrypted([]byte(`{"test":"data"}`), "test")
@ -59,7 +61,7 @@ func TestBundle_VerifyBundle_Good(t *testing.T) {
})
}
func TestBundle_CreateProfileBundle_Good(t *testing.T) {
func TestCreateProfileBundle(t *testing.T) {
profileJSON := []byte(`{"name":"encrypted-profile","minerType":"xmrig"}`)
password := "test-password-123"
@ -88,7 +90,7 @@ func TestBundle_CreateProfileBundle_Good(t *testing.T) {
}
}
func TestBundle_ExtractProfileBundle_Good(t *testing.T) {
func TestExtractProfileBundle(t *testing.T) {
t.Run("UnencryptedBundle", func(t *testing.T) {
originalJSON := []byte(`{"name":"plain","config":{}}`)
bundle, _ := CreateProfileBundleUnencrypted(originalJSON, "plain")
@ -140,7 +142,7 @@ func TestBundle_ExtractProfileBundle_Good(t *testing.T) {
})
}
func TestBundle_TarballFunctions_Good(t *testing.T) {
func TestTarballFunctions(t *testing.T) {
t.Run("CreateAndExtractTarball", func(t *testing.T) {
files := map[string][]byte{
"file1.txt": []byte("content of file 1"),
@ -158,7 +160,8 @@ func TestBundle_TarballFunctions_Good(t *testing.T) {
}
// Extract to temp directory
tmpDir := t.TempDir()
tmpDir, _ := os.MkdirTemp("", "tarball-test")
defer os.RemoveAll(tmpDir)
firstExec, err := extractTarball(tarData, tmpDir)
if err != nil {
@ -167,7 +170,12 @@ func TestBundle_TarballFunctions_Good(t *testing.T) {
// Check files exist
for name, content := range files {
data := testReadFile(t, testJoinPath(tmpDir, name))
path := filepath.Join(tmpDir, name)
data, err := os.ReadFile(path)
if err != nil {
t.Errorf("failed to read extracted file %s: %v", name, err)
continue
}
if !bytes.Equal(data, content) {
t.Errorf("content mismatch for %s", name)
@ -181,7 +189,7 @@ func TestBundle_TarballFunctions_Good(t *testing.T) {
})
}
func TestBundle_StreamAndReadBundle_Good(t *testing.T) {
func TestStreamAndReadBundle(t *testing.T) {
original, _ := CreateProfileBundleUnencrypted([]byte(`{"streaming":"test"}`), "stream-test")
// Stream to buffer
@ -210,7 +218,7 @@ func TestBundle_StreamAndReadBundle_Good(t *testing.T) {
}
}
func TestBundle_CalculateChecksum_Good(t *testing.T) {
func TestCalculateChecksum(t *testing.T) {
t.Run("Deterministic", func(t *testing.T) {
data := []byte("test data for checksum")
@ -248,7 +256,7 @@ func TestBundle_CalculateChecksum_Good(t *testing.T) {
})
}
func TestBundle_IsJSON_Good(t *testing.T) {
func TestIsJSON(t *testing.T) {
tests := []struct {
data []byte
expected bool
@ -271,7 +279,7 @@ func TestBundle_IsJSON_Good(t *testing.T) {
}
}
func TestBundle_Types_Good(t *testing.T) {
func TestBundleTypes(t *testing.T) {
types := []BundleType{
BundleProfile,
BundleMiner,
@ -287,11 +295,16 @@ func TestBundle_Types_Good(t *testing.T) {
}
}
func TestBundle_CreateMinerBundle_Good(t *testing.T) {
func TestCreateMinerBundle(t *testing.T) {
// Create a temp "miner binary"
tmpDir := t.TempDir()
minerPath := testJoinPath(tmpDir, "test-miner")
testWriteFile(t, minerPath, []byte("fake miner binary content"), 0o755)
tmpDir, _ := os.MkdirTemp("", "miner-bundle-test")
defer os.RemoveAll(tmpDir)
minerPath := filepath.Join(tmpDir, "test-miner")
err := os.WriteFile(minerPath, []byte("fake miner binary content"), 0755)
if err != nil {
t.Fatalf("failed to create test miner: %v", err)
}
profileJSON := []byte(`{"profile":"data"}`)
password := "miner-password"
@ -310,7 +323,8 @@ func TestBundle_CreateMinerBundle_Good(t *testing.T) {
}
// Extract and verify
extractDir := t.TempDir()
extractDir, _ := os.MkdirTemp("", "miner-extract-test")
defer os.RemoveAll(extractDir)
extractedPath, extractedProfile, err := ExtractMinerBundle(bundle, password, extractDir)
if err != nil {
@ -327,7 +341,10 @@ func TestBundle_CreateMinerBundle_Good(t *testing.T) {
// If we got an extracted path, verify its content
if extractedPath != "" {
minerData := testReadFile(t, extractedPath)
minerData, err := os.ReadFile(extractedPath)
if err != nil {
t.Fatalf("failed to read extracted miner: %v", err)
}
if string(minerData) != "fake miner binary content" {
t.Error("miner content mismatch")
@ -337,7 +354,7 @@ func TestBundle_CreateMinerBundle_Good(t *testing.T) {
// --- Additional coverage tests for bundle.go ---
func TestBundle_ExtractTarball_PathTraversal_Bad(t *testing.T) {
func TestExtractTarball_PathTraversal(t *testing.T) {
t.Run("AbsolutePath", func(t *testing.T) {
// Create a tarball with an absolute path entry
tarData, err := createTarballWithCustomName("/etc/passwd", []byte("malicious"))
@ -429,8 +446,8 @@ func TestBundle_ExtractTarball_PathTraversal_Bad(t *testing.T) {
}
// Verify symlink was not created
linkPath := testJoinPath(tmpDir, "link")
if filesystemExists(linkPath) {
linkPath := filepath.Join(tmpDir, "link")
if _, statErr := os.Lstat(linkPath); !os.IsNotExist(statErr) {
t.Error("symlink should not be created")
}
})
@ -464,7 +481,10 @@ func TestBundle_ExtractTarball_PathTraversal_Bad(t *testing.T) {
}
// Verify directory and file exist
data := testReadFile(t, testJoinPath(tmpDir, "mydir", "file.txt"))
data, err := os.ReadFile(filepath.Join(tmpDir, "mydir", "file.txt"))
if err != nil {
t.Fatalf("failed to read extracted file: %v", err)
}
if !bytes.Equal(data, content) {
t.Error("content mismatch")
}
@ -511,7 +531,7 @@ func createTarballWithSymlink(name, target string) ([]byte, error) {
return buf.Bytes(), nil
}
func TestBundle_ExtractMinerBundle_ChecksumMismatch_Bad(t *testing.T) {
func TestExtractMinerBundle_ChecksumMismatch(t *testing.T) {
bundle := &Bundle{
Type: BundleMiner,
Name: "bad-bundle",
@ -525,17 +545,17 @@ func TestBundle_ExtractMinerBundle_ChecksumMismatch_Bad(t *testing.T) {
}
}
func TestBundle_CreateMinerBundle_NonExistentFile_Bad(t *testing.T) {
func TestCreateMinerBundle_NonExistentFile(t *testing.T) {
_, err := CreateMinerBundle("/non/existent/miner", nil, "test", "password")
if err == nil {
t.Error("expected error for non-existent miner file")
}
}
func TestBundle_CreateMinerBundle_NilProfile_Ugly(t *testing.T) {
func TestCreateMinerBundle_NilProfile(t *testing.T) {
tmpDir := t.TempDir()
minerPath := testJoinPath(tmpDir, "miner")
testWriteFile(t, minerPath, []byte("binary"), 0o755)
minerPath := filepath.Join(tmpDir, "miner")
os.WriteFile(minerPath, []byte("binary"), 0755)
bundle, err := CreateMinerBundle(minerPath, nil, "nil-profile", "pass")
if err != nil {
@ -546,7 +566,7 @@ func TestBundle_CreateMinerBundle_NilProfile_Ugly(t *testing.T) {
}
}
func TestBundle_ReadBundle_InvalidJSON_Bad(t *testing.T) {
func TestReadBundle_InvalidJSON(t *testing.T) {
reader := bytes.NewReader([]byte("not json"))
_, err := ReadBundle(reader)
if err == nil {
@ -554,7 +574,7 @@ func TestBundle_ReadBundle_InvalidJSON_Bad(t *testing.T) {
}
}
func TestBundle_StreamBundle_EmptyBundle_Ugly(t *testing.T) {
func TestStreamBundle_EmptyBundle(t *testing.T) {
bundle := &Bundle{
Type: BundleProfile,
Name: "empty",
@ -578,7 +598,7 @@ func TestBundle_StreamBundle_EmptyBundle_Ugly(t *testing.T) {
}
}
func TestBundle_CreateTarball_MultipleDirs_Good(t *testing.T) {
func TestCreateTarball_MultipleDirs(t *testing.T) {
files := map[string][]byte{
"dir1/file1.txt": []byte("content1"),
"dir2/file2.txt": []byte("content2"),
@ -596,7 +616,11 @@ func TestBundle_CreateTarball_MultipleDirs_Good(t *testing.T) {
}
for name, content := range files {
data := testReadFile(t, testJoinPath(tmpDir, name))
data, err := os.ReadFile(filepath.Join(tmpDir, name))
if err != nil {
t.Errorf("failed to read %s: %v", name, err)
continue
}
if !bytes.Equal(data, content) {
t.Errorf("content mismatch for %s", name)
}


@ -2,32 +2,33 @@ package node
import (
"context"
"encoding/json"
"sync"
"time"
core "dappco.re/go/core"
coreerr "dappco.re/go/core/log"
"dappco.re/go/core/p2p/logging"
)
// controller := NewController(nodeManager, peerRegistry, transport)
// Controller manages remote peer operations from a controller node.
type Controller struct {
nodeManager *NodeManager
peerRegistry *PeerRegistry
transport *Transport
mutex sync.RWMutex
node *NodeManager
peers *PeerRegistry
transport *Transport
mu sync.RWMutex
// Pending requests awaiting responses.
pendingRequests map[string]chan *Message // message ID -> response channel
// Pending requests awaiting responses
pending map[string]chan *Message // message ID -> response channel
}
// controller := NewController(nodeManager, peerRegistry, transport)
func NewController(nodeManager *NodeManager, peerRegistry *PeerRegistry, transport *Transport) *Controller {
// NewController creates a new Controller instance.
func NewController(node *NodeManager, peers *PeerRegistry, transport *Transport) *Controller {
c := &Controller{
nodeManager: nodeManager,
peerRegistry: peerRegistry,
transport: transport,
pendingRequests: make(map[string]chan *Message),
node: node,
peers: peers,
transport: transport,
pending: make(map[string]chan *Message),
}
// Register message handler for responses
@ -36,107 +37,114 @@ func NewController(nodeManager *NodeManager, peerRegistry *PeerRegistry, transpo
return c
}
func (c *Controller) handleResponse(_ *PeerConnection, message *Message) {
if message.ReplyTo == "" {
// handleResponse processes incoming messages that are responses to our requests.
func (c *Controller) handleResponse(conn *PeerConnection, msg *Message) {
if msg.ReplyTo == "" {
return // Not a response, let worker handle it
}
c.mutex.Lock()
responseChannel, hasPendingRequest := c.pendingRequests[message.ReplyTo]
if hasPendingRequest {
delete(c.pendingRequests, message.ReplyTo)
c.mu.Lock()
ch, exists := c.pending[msg.ReplyTo]
if exists {
delete(c.pending, msg.ReplyTo)
}
c.mutex.Unlock()
c.mu.Unlock()
if hasPendingRequest && responseChannel != nil {
if exists && ch != nil {
select {
case responseChannel <- message:
case ch <- msg:
default:
// Late duplicate response; drop it.
// Channel full or closed
}
}
}
func (c *Controller) sendRequest(peerID string, message *Message, timeout time.Duration) (*Message, error) {
resolvedPeerID := peerID
// sendRequest sends a message and waits for a response.
func (c *Controller) sendRequest(peerID string, msg *Message, timeout time.Duration) (*Message, error) {
actualPeerID := peerID
// Auto-connect if not already connected
if c.transport.GetConnection(peerID) == nil {
peer := c.peerRegistry.GetPeer(peerID)
peer := c.peers.GetPeer(peerID)
if peer == nil {
return nil, core.E("Controller.sendRequest", "peer not found: "+peerID, nil)
return nil, coreerr.E("Controller.sendRequest", "peer not found: "+peerID, nil)
}
conn, err := c.transport.Connect(peer)
if err != nil {
return nil, core.E("Controller.sendRequest", "failed to connect to peer", err)
return nil, coreerr.E("Controller.sendRequest", "failed to connect to peer", err)
}
resolvedPeerID = conn.Peer.ID
message.To = resolvedPeerID
// Use the real peer ID after handshake (it may have changed)
actualPeerID = conn.Peer.ID
// Update the message destination
msg.To = actualPeerID
}
responseChannel := make(chan *Message, 1)
// Create response channel
respCh := make(chan *Message, 1)
c.mutex.Lock()
c.pendingRequests[message.ID] = responseChannel
c.mutex.Unlock()
c.mu.Lock()
c.pending[msg.ID] = respCh
c.mu.Unlock()
// Clean up on exit. Deleting the pending entry is enough because
// handleResponse only routes through the map.
// Clean up on exit - ensure channel is closed and removed from map
defer func() {
c.mutex.Lock()
delete(c.pendingRequests, message.ID)
c.mutex.Unlock()
c.mu.Lock()
delete(c.pending, msg.ID)
c.mu.Unlock()
close(respCh) // Close channel to allow garbage collection
}()
if err := c.transport.Send(resolvedPeerID, message); err != nil {
return nil, core.E("Controller.sendRequest", "failed to send message", err)
// Send the message
if err := c.transport.Send(actualPeerID, msg); err != nil {
return nil, coreerr.E("Controller.sendRequest", "failed to send message", err)
}
// Wait for response
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
select {
case response := <-responseChannel:
return response, nil
case resp := <-respCh:
return resp, nil
case <-ctx.Done():
return nil, core.E("Controller.sendRequest", "request timeout", nil)
return nil, coreerr.E("Controller.sendRequest", "request timeout", nil)
}
}
// stats, err := controller.GetRemoteStats("worker-1")
// GetRemoteStats requests miner statistics from a remote peer.
func (c *Controller) GetRemoteStats(peerID string) (*StatsPayload, error) {
identity := c.nodeManager.GetIdentity()
identity := c.node.GetIdentity()
if identity == nil {
return nil, ErrorIdentityNotInitialized
return nil, ErrIdentityNotInitialized
}
requestMessage, err := NewMessage(MessageGetStats, identity.ID, peerID, nil)
msg, err := NewMessage(MsgGetStats, identity.ID, peerID, nil)
if err != nil {
return nil, core.E("Controller.GetRemoteStats", "failed to create message", err)
return nil, coreerr.E("Controller.GetRemoteStats", "failed to create message", err)
}
response, err := c.sendRequest(peerID, requestMessage, 10*time.Second)
resp, err := c.sendRequest(peerID, msg, 10*time.Second)
if err != nil {
return nil, err
}
var stats StatsPayload
if err := ParseResponse(response, MessageStats, &stats); err != nil {
if err := ParseResponse(resp, MsgStats, &stats); err != nil {
return nil, err
}
return &stats, nil
}
// err := controller.StartRemoteMiner("worker-1", "xmrig", "profile-1", nil)
func (c *Controller) StartRemoteMiner(peerID, minerType, profileID string, configOverride RawMessage) error {
identity := c.nodeManager.GetIdentity()
// StartRemoteMiner requests a remote peer to start a miner with a given profile.
func (c *Controller) StartRemoteMiner(peerID, minerType, profileID string, configOverride json.RawMessage) error {
identity := c.node.GetIdentity()
if identity == nil {
return ErrorIdentityNotInitialized
return ErrIdentityNotInitialized
}
if minerType == "" {
return core.E("Controller.StartRemoteMiner", "miner type is required", nil)
return coreerr.E("Controller.StartRemoteMiner", "miner type is required", nil)
}
payload := StartMinerPayload{
@ -145,98 +153,106 @@ func (c *Controller) StartRemoteMiner(peerID, minerType, profileID string, confi
Config: configOverride,
}
requestMessage, err := NewMessage(MessageStartMiner, identity.ID, peerID, payload)
msg, err := NewMessage(MsgStartMiner, identity.ID, peerID, payload)
if err != nil {
return core.E("Controller.StartRemoteMiner", "failed to create message", err)
return coreerr.E("Controller.StartRemoteMiner", "failed to create message", err)
}
response, err := c.sendRequest(peerID, requestMessage, 30*time.Second)
resp, err := c.sendRequest(peerID, msg, 30*time.Second)
if err != nil {
return err
}
var ack MinerAckPayload
if err := ParseResponse(response, MessageMinerAck, &ack); err != nil {
if err := ParseResponse(resp, MsgMinerAck, &ack); err != nil {
return err
}
if !ack.Success {
return core.E("Controller.StartRemoteMiner", "miner start failed: "+ack.Error, nil)
return coreerr.E("Controller.StartRemoteMiner", "miner start failed: "+ack.Error, nil)
}
return nil
}
// err := controller.StopRemoteMiner("worker-1", "xmrig-0")
// StopRemoteMiner requests a remote peer to stop a miner.
func (c *Controller) StopRemoteMiner(peerID, minerName string) error {
identity := c.nodeManager.GetIdentity()
identity := c.node.GetIdentity()
if identity == nil {
return ErrorIdentityNotInitialized
return ErrIdentityNotInitialized
}
payload := StopMinerPayload{
MinerName: minerName,
}
requestMessage, err := NewMessage(MessageStopMiner, identity.ID, peerID, payload)
msg, err := NewMessage(MsgStopMiner, identity.ID, peerID, payload)
if err != nil {
return core.E("Controller.StopRemoteMiner", "failed to create message", err)
return coreerr.E("Controller.StopRemoteMiner", "failed to create message", err)
}
response, err := c.sendRequest(peerID, requestMessage, 30*time.Second)
resp, err := c.sendRequest(peerID, msg, 30*time.Second)
if err != nil {
return err
}
var ack MinerAckPayload
if err := ParseResponse(response, MessageMinerAck, &ack); err != nil {
if err := ParseResponse(resp, MsgMinerAck, &ack); err != nil {
return err
}
if !ack.Success {
return core.E("Controller.StopRemoteMiner", "miner stop failed: "+ack.Error, nil)
return coreerr.E("Controller.StopRemoteMiner", "miner stop failed: "+ack.Error, nil)
}
return nil
}
// logs, err := controller.GetRemoteLogs("worker-1", "xmrig-0", 100)
// GetRemoteLogs requests console logs from a remote miner.
func (c *Controller) GetRemoteLogs(peerID, minerName string, lines int) ([]string, error) {
identity := c.nodeManager.GetIdentity()
return c.GetRemoteLogsSince(peerID, minerName, lines, time.Time{})
}
// GetRemoteLogsSince requests console logs from a remote miner after a point in time.
func (c *Controller) GetRemoteLogsSince(peerID, minerName string, lines int, since time.Time) ([]string, error) {
identity := c.node.GetIdentity()
if identity == nil {
return nil, ErrorIdentityNotInitialized
return nil, ErrIdentityNotInitialized
}
payload := LogsRequestPayload{
payload := GetLogsPayload{
MinerName: minerName,
Lines: lines,
}
requestMessage, err := NewMessage(MessageGetLogs, identity.ID, peerID, payload)
if err != nil {
return nil, core.E("Controller.GetRemoteLogs", "failed to create message", err)
if !since.IsZero() {
payload.Since = since.UnixMilli()
}
response, err := c.sendRequest(peerID, requestMessage, 10*time.Second)
msg, err := NewMessage(MsgGetLogs, identity.ID, peerID, payload)
if err != nil {
return nil, coreerr.E("Controller.GetRemoteLogsSince", "failed to create message", err)
}
resp, err := c.sendRequest(peerID, msg, 10*time.Second)
if err != nil {
return nil, err
}
var logs LogsPayload
if err := ParseResponse(response, MessageLogs, &logs); err != nil {
if err := ParseResponse(resp, MsgLogs, &logs); err != nil {
return nil, err
}
return logs.Lines, nil
}
// statsByPeerID := controller.GetAllStats()
// GetAllStats fetches stats from all connected peers.
func (c *Controller) GetAllStats() map[string]*StatsPayload {
results := make(map[string]*StatsPayload)
var mu sync.Mutex
var wg sync.WaitGroup
for peer := range c.peerRegistry.ConnectedPeers() {
for peer := range c.peers.ConnectedPeers() {
wg.Add(1)
go func(p *Peer) {
defer wg.Done()
@ -259,11 +275,11 @@ func (c *Controller) GetAllStats() map[string]*StatsPayload {
return results
}
// rttMilliseconds, err := controller.PingPeer("worker-1")
// PingPeer sends a ping to a peer and updates metrics.
func (c *Controller) PingPeer(peerID string) (float64, error) {
identity := c.nodeManager.GetIdentity()
identity := c.node.GetIdentity()
if identity == nil {
return 0, ErrorIdentityNotInitialized
return 0, ErrIdentityNotInitialized
}
sentAt := time.Now()
@ -271,48 +287,48 @@ func (c *Controller) PingPeer(peerID string) (float64, error) {
SentAt: sentAt.UnixMilli(),
}
requestMessage, err := NewMessage(MessagePing, identity.ID, peerID, payload)
msg, err := NewMessage(MsgPing, identity.ID, peerID, payload)
if err != nil {
return 0, core.E("Controller.PingPeer", "failed to create message", err)
return 0, coreerr.E("Controller.PingPeer", "failed to create message", err)
}
response, err := c.sendRequest(peerID, requestMessage, 5*time.Second)
resp, err := c.sendRequest(peerID, msg, 5*time.Second)
if err != nil {
return 0, err
}
if err := ValidateResponse(response, MessagePong); err != nil {
if err := ValidateResponse(resp, MsgPong); err != nil {
return 0, err
}
// Calculate round-trip time in milliseconds.
rtt := time.Since(sentAt).Seconds() * 1000
// Calculate round-trip time
rtt := time.Since(sentAt).Seconds() * 1000 // Convert to ms
// Update peer metrics
peer := c.peerRegistry.GetPeer(peerID)
peer := c.peers.GetPeer(peerID)
if peer != nil {
c.peerRegistry.UpdateMetrics(peerID, rtt, peer.GeographicKilometres, peer.Hops)
c.peers.UpdateMetrics(peerID, rtt, peer.GeoKM, peer.Hops)
}
return rtt, nil
}
// err := controller.ConnectToPeer("worker-1")
// ConnectToPeer establishes a connection to a peer.
func (c *Controller) ConnectToPeer(peerID string) error {
peer := c.peerRegistry.GetPeer(peerID)
peer := c.peers.GetPeer(peerID)
if peer == nil {
return core.E("Controller.ConnectToPeer", "peer not found: "+peerID, nil)
return coreerr.E("Controller.ConnectToPeer", "peer not found: "+peerID, nil)
}
_, err := c.transport.Connect(peer)
return err
}
// err := controller.DisconnectFromPeer("worker-1")
// DisconnectFromPeer closes connection to a peer.
func (c *Controller) DisconnectFromPeer(peerID string) error {
conn := c.transport.GetConnection(peerID)
if conn == nil {
return core.E("Controller.DisconnectFromPeer", "peer not connected: "+peerID, nil)
return coreerr.E("Controller.DisconnectFromPeer", "peer not connected: "+peerID, nil)
}
return conn.Close()


@ -1,15 +1,18 @@
package node
import (
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"net/url"
"path/filepath"
"strings"
"sync"
"sync/atomic"
"testing"
"time"
core "dappco.re/go/core"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@ -24,7 +27,7 @@ func setupControllerPair(t *testing.T) (*Controller, *Worker, *testTransportPair
// Server side: register a Worker to handle incoming requests.
worker := NewWorker(tp.ServerNode, tp.Server)
worker.RegisterOnTransport()
worker.RegisterWithTransport()
// Client side: create a Controller (registers handleResponse via OnMessage).
controller := NewController(tp.ClientNode, tp.ClientReg, tp.Client)
@ -40,23 +43,23 @@ func setupControllerPair(t *testing.T) (*Controller, *Worker, *testTransportPair
// makeWorkerServer spins up an independent server transport with a Worker
// registered, returning the server's NodeManager, address, and a cleanup func.
// Useful for multi-peer tests (AllStats, ConcurrentRequests).
// Useful for multi-peer tests (GetAllStats, ConcurrentRequests).
func makeWorkerServer(t *testing.T) (*NodeManager, string, *Transport) {
t.Helper()
nm := newTestNodeManager(t, "worker", RoleWorker)
reg := newTestPeerRegistry(t)
nm := testNode(t, "worker", RoleWorker)
reg := testRegistry(t)
cfg := DefaultTransportConfig()
srv := NewTransport(nm, reg, cfg)
mux := http.NewServeMux()
mux.HandleFunc(cfg.WebSocketPath, srv.handleWebSocketUpgrade)
mux.HandleFunc(cfg.WSPath, srv.handleWSUpgrade)
ts := httptest.NewServer(mux)
u, _ := url.Parse(ts.URL)
worker := NewWorker(nm, srv)
worker.RegisterOnTransport()
worker.RegisterWithTransport()
t.Cleanup(func() {
// Brief pause to let in-flight readLoop/Send operations finish before
@ -73,18 +76,18 @@ func makeWorkerServer(t *testing.T) (*NodeManager, string, *Transport) {
// --- Controller Tests ---
func TestController_RequestResponseCorrelation_Good(t *testing.T) {
func TestController_RequestResponseCorrelation(t *testing.T) {
controller, _, tp := setupControllerPair(t)
serverID := tp.ServerNode.GetIdentity().ID
// Send a ping request via the controller; the server-side worker
// replies with MessagePong, setting ReplyTo to the original message ID.
// replies with MsgPong, setting ReplyTo to the original message ID.
rtt, err := controller.PingPeer(serverID)
require.NoError(t, err, "PingPeer should succeed")
assert.Greater(t, rtt, 0.0, "RTT should be positive")
}
func TestController_RequestTimeout_Bad(t *testing.T) {
func TestController_RequestTimeout(t *testing.T) {
tp := setupTestTransportPair(t)
// Register a handler on the server that deliberately ignores all messages,
@ -101,7 +104,7 @@ func TestController_RequestTimeout_Bad(t *testing.T) {
clientID := tp.ClientNode.GetIdentity().ID
// Use sendRequest directly with a short deadline (PingPeer uses 5s internally).
msg, err := NewMessage(MessagePing, clientID, serverID, PingPayload{
msg, err := NewMessage(MsgPing, clientID, serverID, PingPayload{
SentAt: time.Now().UnixMilli(),
})
require.NoError(t, err)
@ -115,12 +118,12 @@ func TestController_RequestTimeout_Bad(t *testing.T) {
assert.Less(t, elapsed, 1*time.Second, "should return quickly after the deadline")
}
func TestController_AutoConnect_Good(t *testing.T) {
func TestController_AutoConnect(t *testing.T) {
tp := setupTestTransportPair(t)
// Register worker on the server side.
worker := NewWorker(tp.ServerNode, tp.Server)
worker.RegisterOnTransport()
worker.RegisterWithTransport()
// Create controller WITHOUT establishing a connection first.
controller := NewController(tp.ClientNode, tp.ClientReg, tp.Client)
@ -136,7 +139,7 @@ func TestController_AutoConnect_Good(t *testing.T) {
tp.ClientReg.AddPeer(peer)
// Confirm no connection exists yet.
assert.Equal(t, 0, tp.Client.ConnectedPeerCount(), "should have no connections initially")
assert.Equal(t, 0, tp.Client.ConnectedPeers(), "should have no connections initially")
// Send a request — controller should auto-connect via transport before sending.
rtt, err := controller.PingPeer(serverIdentity.ID)
@ -144,13 +147,13 @@ func TestController_AutoConnect_Good(t *testing.T) {
assert.Greater(t, rtt, 0.0, "RTT should be positive after auto-connect")
// Verify connection was established.
assert.Equal(t, 1, tp.Client.ConnectedPeerCount(), "should have 1 connection after auto-connect")
assert.Equal(t, 1, tp.Client.ConnectedPeers(), "should have 1 connection after auto-connect")
}
func TestController_AllStats_Good(t *testing.T) {
func TestController_GetAllStats(t *testing.T) {
// Controller node with connections to two independent worker servers.
controllerNM := newTestNodeManager(t, "controller", RoleController)
controllerReg := newTestPeerRegistry(t)
controllerNM := testNode(t, "controller", RoleController)
controllerReg := testRegistry(t)
controllerTransport := NewTransport(controllerNM, controllerReg, DefaultTransportConfig())
t.Cleanup(func() { controllerTransport.Stop() })
@ -178,7 +181,7 @@ func TestController_AllStats_Good(t *testing.T) {
controller := NewController(controllerNM, controllerReg, controllerTransport)
// AllStats fetches stats from all connected peers in parallel.
// GetAllStats fetches stats from all connected peers in parallel.
stats := controller.GetAllStats()
assert.Len(t, stats, numWorkers, "should get stats from all connected workers")
@ -192,14 +195,14 @@ func TestController_AllStats_Good(t *testing.T) {
}
}
func TestController_PingPeerRTT_Good(t *testing.T) {
func TestController_PingPeerRTT(t *testing.T) {
controller, _, tp := setupControllerPair(t)
serverID := tp.ServerNode.GetIdentity().ID
// Record initial peer metrics.
peerBefore := tp.ClientReg.GetPeer(serverID)
require.NotNil(t, peerBefore, "server peer should exist in the client registry")
initialPingMilliseconds := peerBefore.PingMilliseconds
initialPingMS := peerBefore.PingMS
// Send a ping.
rtt, err := controller.PingPeer(serverID)
@ -210,16 +213,16 @@ func TestController_PingPeerRTT_Good(t *testing.T) {
// Verify the peer registry was updated with the measured latency.
peerAfter := tp.ClientReg.GetPeer(serverID)
require.NotNil(t, peerAfter, "server peer should still exist after ping")
assert.NotEqual(t, initialPingMilliseconds, peerAfter.PingMilliseconds,
"PingMilliseconds should be updated after a successful ping")
assert.Greater(t, peerAfter.PingMilliseconds, 0.0, "PingMilliseconds should be positive")
assert.NotEqual(t, initialPingMS, peerAfter.PingMS,
"PingMS should be updated after a successful ping")
assert.Greater(t, peerAfter.PingMS, 0.0, "PingMS should be positive")
}
func TestController_ConcurrentRequests_Ugly(t *testing.T) {
func TestController_ConcurrentRequests(t *testing.T) {
// Multiple goroutines send pings to different peers simultaneously.
// Verify correct correlation — no cross-talk between responses.
controllerNM := newTestNodeManager(t, "controller", RoleController)
controllerReg := newTestPeerRegistry(t)
controllerNM := testNode(t, "controller", RoleController)
controllerReg := testRegistry(t)
controllerTransport := NewTransport(controllerNM, controllerReg, DefaultTransportConfig())
t.Cleanup(func() { controllerTransport.Stop() })
@ -269,7 +272,7 @@ func TestController_ConcurrentRequests_Ugly(t *testing.T) {
}
}
func TestController_DeadPeerCleanup_Good(t *testing.T) {
func TestController_DeadPeerCleanup(t *testing.T) {
tp := setupTestTransportPair(t)
// Server deliberately ignores all messages.
@ -283,7 +286,7 @@ func TestController_DeadPeerCleanup_Good(t *testing.T) {
clientID := tp.ClientNode.GetIdentity().ID
// Fire off a request that will time out.
msg, err := NewMessage(MessagePing, clientID, serverID, PingPayload{
msg, err := NewMessage(MsgPing, clientID, serverID, PingPayload{
SentAt: time.Now().UnixMilli(),
})
require.NoError(t, err)
@ -295,9 +298,9 @@ func TestController_DeadPeerCleanup_Good(t *testing.T) {
// The defer block inside sendRequest should have cleaned up the pending entry.
time.Sleep(50 * time.Millisecond)
controller.mutex.RLock()
pendingCount := len(controller.pendingRequests)
controller.mutex.RUnlock()
controller.mu.RLock()
pendingCount := len(controller.pending)
controller.mu.RUnlock()
assert.Equal(t, 0, pendingCount,
"pending map should be empty after timeout — no goroutine/memory leak")
@ -305,7 +308,7 @@ func TestController_DeadPeerCleanup_Good(t *testing.T) {
// --- Additional edge-case tests ---
func TestController_MultipleSequentialPings_Good(t *testing.T) {
func TestController_MultipleSequentialPings(t *testing.T) {
// Ensures sequential requests to the same peer are correctly correlated.
controller, _, tp := setupControllerPair(t)
serverID := tp.ServerNode.GetIdentity().ID
@ -317,7 +320,7 @@ func TestController_MultipleSequentialPings_Good(t *testing.T) {
}
}
func TestController_ConcurrentRequestsSamePeer_Ugly(t *testing.T) {
func TestController_ConcurrentRequestsSamePeer(t *testing.T) {
// Multiple goroutines sending requests to the SAME peer simultaneously.
// Tests concurrent pending-map insertions/deletions under contention.
controller, _, tp := setupControllerPair(t)
@ -341,12 +344,12 @@ func TestController_ConcurrentRequestsSamePeer_Ugly(t *testing.T) {
"all concurrent requests to the same peer should succeed")
}
func TestController_RemoteStats_Good(t *testing.T) {
func TestController_GetRemoteStats(t *testing.T) {
controller, _, tp := setupControllerPair(t)
serverID := tp.ServerNode.GetIdentity().ID
stats, err := controller.GetRemoteStats(serverID)
require.NoError(t, err, "RemoteStats should succeed")
require.NoError(t, err, "GetRemoteStats should succeed")
require.NotNil(t, stats)
assert.NotEmpty(t, stats.NodeID, "stats should contain the node ID")
@ -355,7 +358,7 @@ func TestController_RemoteStats_Good(t *testing.T) {
assert.GreaterOrEqual(t, stats.Uptime, int64(0), "uptime should be non-negative")
}
func TestController_ConnectToPeerUnknown_Bad(t *testing.T) {
func TestController_ConnectToPeerUnknown(t *testing.T) {
tp := setupTestTransportPair(t)
controller := NewController(tp.ClientNode, tp.ClientReg, tp.Client)
@ -364,17 +367,17 @@ func TestController_ConnectToPeerUnknown_Bad(t *testing.T) {
assert.Contains(t, err.Error(), "not found")
}
func TestController_DisconnectFromPeer_Good(t *testing.T) {
func TestController_DisconnectFromPeer(t *testing.T) {
controller, _, tp := setupControllerPair(t)
serverID := tp.ServerNode.GetIdentity().ID
assert.Equal(t, 1, tp.Client.ConnectedPeerCount(), "should have 1 connection")
assert.Equal(t, 1, tp.Client.ConnectedPeers(), "should have 1 connection")
err := controller.DisconnectFromPeer(serverID)
require.NoError(t, err, "DisconnectFromPeer should succeed")
}
func TestController_DisconnectFromPeerNotConnected_Bad(t *testing.T) {
func TestController_DisconnectFromPeerNotConnected(t *testing.T) {
tp := setupTestTransportPair(t)
controller := NewController(tp.ClientNode, tp.ClientReg, tp.Client)
@ -383,12 +386,12 @@ func TestController_DisconnectFromPeerNotConnected_Bad(t *testing.T) {
assert.Contains(t, err.Error(), "not connected")
}
func TestController_SendRequestPeerNotFound_Bad(t *testing.T) {
func TestController_SendRequestPeerNotFound(t *testing.T) {
tp := setupTestTransportPair(t)
controller := NewController(tp.ClientNode, tp.ClientReg, tp.Client)
clientID := tp.ClientNode.GetIdentity().ID
msg, err := NewMessage(MessagePing, clientID, "ghost-peer", PingPayload{
msg, err := NewMessage(MsgPing, clientID, "ghost-peer", PingPayload{
SentAt: time.Now().UnixMilli(),
})
require.NoError(t, err)
@ -399,7 +402,7 @@ func TestController_SendRequestPeerNotFound_Bad(t *testing.T) {
assert.Contains(t, err.Error(), "peer not found")
}
// --- Tests for StartRemoteMiner, StopRemoteMiner, RemoteLogs ---
// --- Tests for StartRemoteMiner, StopRemoteMiner, GetRemoteLogs ---
// setupControllerPairWithMiner creates a controller/worker pair where the worker
// has a fully configured MinerManager so that start/stop/logs handlers work.
@ -432,7 +435,7 @@ func setupControllerPairWithMiner(t *testing.T) (*Controller, *Worker, *testTran
},
}
worker.SetMinerManager(mm)
worker.RegisterOnTransport()
worker.RegisterWithTransport()
// Client side: create a Controller.
controller := NewController(tp.ClientNode, tp.ClientReg, tp.Client)
@ -444,7 +447,7 @@ func setupControllerPairWithMiner(t *testing.T) (*Controller, *Worker, *testTran
return controller, worker, tp
}
// mockMinerManagerFull implements MinerManager with functional start/stop/list/lookup.
// mockMinerManagerFull implements MinerManager with functional start/stop/list/get.
type mockMinerManagerFull struct {
mu sync.Mutex
miners map[string]*mockMinerFull
@ -473,7 +476,7 @@ func (m *mockMinerManagerFull) StopMiner(name string) error {
defer m.mu.Unlock()
if _, exists := m.miners[name]; !exists {
return core.E("mockMinerManagerFull.StopMiner", "miner "+name+" not found", nil)
return fmt.Errorf("miner %s not found", name)
}
delete(m.miners, name)
return nil
@ -496,15 +499,11 @@ func (m *mockMinerManagerFull) GetMiner(name string) (MinerInstance, error) {
miner, exists := m.miners[name]
if !exists {
return nil, core.E("mockMinerManagerFull.GetMiner", "miner "+name+" not found", nil)
return nil, fmt.Errorf("miner %s not found", name)
}
return miner, nil
}
func (m *mockMinerManagerFull) Miner(name string) (MinerInstance, error) {
return m.GetMiner(name)
}
// mockMinerFull implements MinerInstance with real data.
type mockMinerFull struct {
name string
@ -516,6 +515,40 @@ type mockMinerFull struct {
func (m *mockMinerFull) GetName() string { return m.name }
func (m *mockMinerFull) GetType() string { return m.minerType }
func (m *mockMinerFull) GetStats() (any, error) { return m.stats, nil }
func (m *mockMinerFull) GetConsoleHistorySince(lines int, since time.Time) []string {
if since.IsZero() {
if lines >= len(m.consoleHistory) {
return m.consoleHistory
}
return m.consoleHistory[:lines]
}
filtered := make([]string, 0, len(m.consoleHistory))
for _, line := range m.consoleHistory {
if lineAfter(line, since) {
filtered = append(filtered, line)
}
}
if lines >= len(filtered) {
return filtered
}
return filtered[:lines]
}
func lineAfter(line string, since time.Time) bool {
start := strings.IndexByte(line, '[')
end := strings.IndexByte(line, ']')
if start != 0 || end <= start+1 {
return true
}
ts, err := time.Parse("2006-01-02 15:04:05", line[start+1:end])
if err != nil {
return true
}
return ts.After(since) || ts.Equal(since)
}
func (m *mockMinerFull) GetConsoleHistory(lines int) []string {
if lines >= len(m.consoleHistory) {
return m.consoleHistory
@ -523,30 +556,25 @@ func (m *mockMinerFull) GetConsoleHistory(lines int) []string {
return m.consoleHistory[:lines]
}
func (m *mockMinerFull) Name() string { return m.GetName() }
func (m *mockMinerFull) Type() string { return m.GetType() }
func (m *mockMinerFull) Stats() (any, error) { return m.GetStats() }
func (m *mockMinerFull) ConsoleHistory(lines int) []string { return m.GetConsoleHistory(lines) }
func TestController_StartRemoteMiner_Good(t *testing.T) {
func TestController_StartRemoteMiner(t *testing.T) {
controller, _, tp := setupControllerPairWithMiner(t)
serverID := tp.ServerNode.GetIdentity().ID
configOverride := RawMessage(`{"pool":"pool.example.com:3333"}`)
configOverride := json.RawMessage(`{"pool":"pool.example.com:3333"}`)
err := controller.StartRemoteMiner(serverID, "xmrig", "profile-1", configOverride)
require.NoError(t, err, "StartRemoteMiner should succeed")
}
func TestController_StartRemoteMiner_WithConfig_Good(t *testing.T) {
func TestController_StartRemoteMiner_WithConfig(t *testing.T) {
controller, _, tp := setupControllerPairWithMiner(t)
serverID := tp.ServerNode.GetIdentity().ID
configOverride := RawMessage(`{"pool":"custom-pool:3333","threads":4}`)
configOverride := json.RawMessage(`{"pool":"custom-pool:3333","threads":4}`)
err := controller.StartRemoteMiner(serverID, "xmrig", "", configOverride)
require.NoError(t, err, "StartRemoteMiner with config override should succeed")
}
func TestController_StartRemoteMiner_EmptyType_Bad(t *testing.T) {
func TestController_StartRemoteMiner_EmptyType(t *testing.T) {
controller, _, tp := setupControllerPairWithMiner(t)
serverID := tp.ServerNode.GetIdentity().ID
@ -555,12 +583,14 @@ func TestController_StartRemoteMiner_EmptyType_Bad(t *testing.T) {
assert.Contains(t, err.Error(), "miner type is required")
}
func TestController_StartRemoteMiner_NoIdentity_Bad(t *testing.T) {
func TestController_StartRemoteMiner_NoIdentity(t *testing.T) {
tp := setupTestTransportPair(t)
// Create a node without identity
keyPath, configPath := testNodeManagerPaths(t.TempDir())
nmNoID, err := NewNodeManagerFromPaths(keyPath, configPath)
nmNoID, err := NewNodeManagerWithPaths(
filepath.Join(t.TempDir(), "priv.key"),
filepath.Join(t.TempDir(), "node.json"),
)
require.NoError(t, err)
controller := NewController(nmNoID, tp.ClientReg, tp.Client)
@ -570,7 +600,7 @@ func TestController_StartRemoteMiner_NoIdentity_Bad(t *testing.T) {
assert.Contains(t, err.Error(), "identity not initialized")
}
func TestController_StopRemoteMiner_Good(t *testing.T) {
func TestController_StopRemoteMiner(t *testing.T) {
controller, _, tp := setupControllerPairWithMiner(t)
serverID := tp.ServerNode.GetIdentity().ID
@ -578,7 +608,7 @@ func TestController_StopRemoteMiner_Good(t *testing.T) {
require.NoError(t, err, "StopRemoteMiner should succeed for existing miner")
}
func TestController_StopRemoteMiner_NotFound_Bad(t *testing.T) {
func TestController_StopRemoteMiner_NotFound(t *testing.T) {
controller, _, tp := setupControllerPairWithMiner(t)
serverID := tp.ServerNode.GetIdentity().ID
@ -586,10 +616,12 @@ func TestController_StopRemoteMiner_NotFound_Bad(t *testing.T) {
require.Error(t, err, "StopRemoteMiner should fail for non-existent miner")
}
func TestController_StopRemoteMiner_NoIdentity_Bad(t *testing.T) {
func TestController_StopRemoteMiner_NoIdentity(t *testing.T) {
tp := setupTestTransportPair(t)
keyPath, configPath := testNodeManagerPaths(t.TempDir())
nmNoID, err := NewNodeManagerFromPaths(keyPath, configPath)
nmNoID, err := NewNodeManagerWithPaths(
filepath.Join(t.TempDir(), "priv.key"),
filepath.Join(t.TempDir(), "node.json"),
)
require.NoError(t, err)
controller := NewController(nmNoID, tp.ClientReg, tp.Client)
@ -599,30 +631,46 @@ func TestController_StopRemoteMiner_NoIdentity_Bad(t *testing.T) {
assert.Contains(t, err.Error(), "identity not initialized")
}
func TestController_RemoteLogs_Good(t *testing.T) {
func TestController_GetRemoteLogs(t *testing.T) {
controller, _, tp := setupControllerPairWithMiner(t)
serverID := tp.ServerNode.GetIdentity().ID
lines, err := controller.GetRemoteLogs(serverID, "running-miner", 10)
require.NoError(t, err, "RemoteLogs should succeed")
require.NoError(t, err, "GetRemoteLogs should succeed")
require.NotNil(t, lines)
assert.Len(t, lines, 3, "should return all 3 console history lines")
assert.Contains(t, lines[0], "started")
}
func TestController_RemoteLogs_LimitedLines_Good(t *testing.T) {
func TestController_GetRemoteLogs_LimitedLines(t *testing.T) {
controller, _, tp := setupControllerPairWithMiner(t)
serverID := tp.ServerNode.GetIdentity().ID
lines, err := controller.GetRemoteLogs(serverID, "running-miner", 1)
require.NoError(t, err, "RemoteLogs with limited lines should succeed")
require.NoError(t, err, "GetRemoteLogs with limited lines should succeed")
assert.Len(t, lines, 1, "should return only 1 line")
}
func TestController_RemoteLogs_NoIdentity_Bad(t *testing.T) {
func TestController_GetRemoteLogsSince(t *testing.T) {
controller, _, tp := setupControllerPairWithMiner(t)
serverID := tp.ServerNode.GetIdentity().ID
since, err := time.Parse("2006-01-02 15:04:05", "2026-02-20 10:00:01")
require.NoError(t, err)
lines, err := controller.GetRemoteLogsSince(serverID, "running-miner", 10, since)
require.NoError(t, err, "GetRemoteLogsSince should succeed")
require.Len(t, lines, 2, "should return only log lines on or after the requested timestamp")
assert.Contains(t, lines[0], "connected to pool")
assert.Contains(t, lines[1], "new job received")
}
func TestController_GetRemoteLogs_NoIdentity(t *testing.T) {
tp := setupTestTransportPair(t)
keyPath, configPath := testNodeManagerPaths(t.TempDir())
nmNoID, err := NewNodeManagerFromPaths(keyPath, configPath)
nmNoID, err := NewNodeManagerWithPaths(
filepath.Join(t.TempDir(), "priv.key"),
filepath.Join(t.TempDir(), "node.json"),
)
require.NoError(t, err)
controller := NewController(nmNoID, tp.ClientReg, tp.Client)
@ -632,12 +680,12 @@ func TestController_RemoteLogs_NoIdentity_Bad(t *testing.T) {
assert.Contains(t, err.Error(), "identity not initialized")
}
func TestController_RemoteStats_WithMiners_Good(t *testing.T) {
func TestController_GetRemoteStats_WithMiners(t *testing.T) {
controller, _, tp := setupControllerPairWithMiner(t)
serverID := tp.ServerNode.GetIdentity().ID
stats, err := controller.GetRemoteStats(serverID)
require.NoError(t, err, "RemoteStats should succeed")
require.NoError(t, err, "GetRemoteStats should succeed")
require.NotNil(t, stats)
assert.NotEmpty(t, stats.NodeID)
// The worker has a miner manager with 1 running miner
@ -646,10 +694,12 @@ func TestController_RemoteStats_WithMiners_Good(t *testing.T) {
assert.Equal(t, 1234.5, stats.Miners[0].Hashrate)
}
func TestController_RemoteStats_NoIdentity_Bad(t *testing.T) {
func TestController_GetRemoteStats_NoIdentity(t *testing.T) {
tp := setupTestTransportPair(t)
keyPath, configPath := testNodeManagerPaths(t.TempDir())
nmNoID, err := NewNodeManagerFromPaths(keyPath, configPath)
nmNoID, err := NewNodeManagerWithPaths(
filepath.Join(t.TempDir(), "priv.key"),
filepath.Join(t.TempDir(), "node.json"),
)
require.NoError(t, err)
controller := NewController(nmNoID, tp.ClientReg, tp.Client)
@ -659,11 +709,11 @@ func TestController_RemoteStats_NoIdentity_Bad(t *testing.T) {
assert.Contains(t, err.Error(), "identity not initialized")
}
func TestController_ConnectToPeer_Success_Good(t *testing.T) {
func TestController_ConnectToPeer_Success(t *testing.T) {
tp := setupTestTransportPair(t)
worker := NewWorker(tp.ServerNode, tp.Server)
worker.RegisterOnTransport()
worker.RegisterWithTransport()
controller := NewController(tp.ClientNode, tp.ClientReg, tp.Client)
@ -680,25 +730,25 @@ func TestController_ConnectToPeer_Success_Good(t *testing.T) {
err := controller.ConnectToPeer(serverIdentity.ID)
require.NoError(t, err, "ConnectToPeer should succeed")
assert.Equal(t, 1, tp.Client.ConnectedPeerCount(), "should have 1 connection after ConnectToPeer")
assert.Equal(t, 1, tp.Client.ConnectedPeers(), "should have 1 connection after ConnectToPeer")
}
func TestController_HandleResponse_NonReply_Good(t *testing.T) {
func TestController_HandleResponse_NonReply(t *testing.T) {
tp := setupTestTransportPair(t)
controller := NewController(tp.ClientNode, tp.ClientReg, tp.Client)
// handleResponse should ignore messages without ReplyTo
msg, _ := NewMessage(MessagePing, "sender", "target", PingPayload{SentAt: 123})
msg, _ := NewMessage(MsgPing, "sender", "target", PingPayload{SentAt: 123})
controller.handleResponse(nil, msg)
// No pending entries should be affected
controller.mutex.RLock()
count := len(controller.pendingRequests)
controller.mutex.RUnlock()
controller.mu.RLock()
count := len(controller.pending)
controller.mu.RUnlock()
assert.Equal(t, 0, count)
}
func TestController_HandleResponse_FullChannel_Ugly(t *testing.T) {
func TestController_HandleResponse_FullChannel(t *testing.T) {
tp := setupTestTransportPair(t)
controller := NewController(tp.ClientNode, tp.ClientReg, tp.Client)
@ -706,26 +756,28 @@ func TestController_HandleResponse_FullChannel_Ugly(t *testing.T) {
ch := make(chan *Message, 1)
ch <- &Message{} // Fill the channel
controller.mutex.Lock()
controller.pendingRequests["test-id"] = ch
controller.mutex.Unlock()
controller.mu.Lock()
controller.pending["test-id"] = ch
controller.mu.Unlock()
// handleResponse with matching reply should not panic on full channel
msg, _ := NewMessage(MessagePong, "sender", "target", PongPayload{SentAt: 123})
msg, _ := NewMessage(MsgPong, "sender", "target", PongPayload{SentAt: 123})
msg.ReplyTo = "test-id"
controller.handleResponse(nil, msg)
// The pending entry should be removed despite channel being full
controller.mutex.RLock()
_, exists := controller.pendingRequests["test-id"]
controller.mutex.RUnlock()
controller.mu.RLock()
_, exists := controller.pending["test-id"]
controller.mu.RUnlock()
assert.False(t, exists, "pending entry should be removed after handling")
}
func TestController_PingPeer_NoIdentity_Bad(t *testing.T) {
func TestController_PingPeer_NoIdentity(t *testing.T) {
tp := setupTestTransportPair(t)
keyPath, configPath := testNodeManagerPaths(t.TempDir())
nmNoID, _ := NewNodeManagerFromPaths(keyPath, configPath)
nmNoID, _ := NewNodeManagerWithPaths(
filepath.Join(t.TempDir(), "priv.key"),
filepath.Join(t.TempDir(), "node.json"),
)
controller := NewController(nmNoID, tp.ClientReg, tp.Client)
_, err := controller.PingPeer("some-peer")


@ -1,39 +1,56 @@
package node
import (
"fmt"
"iter"
"sync"
core "dappco.re/go/core"
coreerr "dappco.re/go/core/log"
"dappco.re/go/core/p2p/logging"
"dappco.re/go/core/p2p/ueps"
)
// threshold := ThreatScoreThreshold
// ThreatScoreThreshold is the maximum allowable threat score. Packets exceeding
// this value are silently dropped by the circuit breaker and logged as threat
// events. The threshold sits at ~76% of the uint16 range (50,000 / 65,535),
// providing headroom for legitimate elevated-risk traffic whilst rejecting
// clearly hostile payloads.
const ThreatScoreThreshold uint16 = 50000
// intentID := IntentPauseExecution
// Well-known intent identifiers. These correspond to the semantic tokens
// carried in the UEPS IntentID header field (RFC-021).
const (
IntentHandshake byte = 0x01
IntentCompute byte = 0x20
IntentPauseExecution byte = 0x30
IntentCustom byte = 0xFF
IntentHandshake byte = 0x01 // Connection establishment / hello
IntentCompute byte = 0x20 // Compute job request
IntentRehab byte = 0x30 // Benevolent intervention (pause execution)
IntentCustom byte = 0xFF // Extended / application-level sub-protocols
)
// var handler IntentHandler = func(packet *ueps.ParsedPacket) error { return nil }
type IntentHandler func(packet *ueps.ParsedPacket) error
// IntentHandler processes a UEPS packet that has been routed by intent.
// Implementations receive the fully parsed and HMAC-verified packet.
type IntentHandler func(pkt *ueps.ParsedPacket) error
// dispatcher := NewDispatcher()
// dispatcher.RegisterHandler(IntentCompute, func(packet *ueps.ParsedPacket) error { return nil })
// err := dispatcher.Dispatch(packet)
// Dispatcher routes verified UEPS packets to registered intent handlers.
// It enforces a threat circuit breaker before routing: any packet whose
// ThreatScore exceeds ThreatScoreThreshold is dropped and logged.
//
// Design decisions:
// - Handlers are registered per IntentID (1:1 mapping).
// - Unknown intents are logged at WARN level and silently dropped (no error
// returned to the caller) to avoid back-pressure on the transport layer.
// - High-threat packets are dropped silently (logged at WARN) rather than
// returning an error, consistent with the "don't even parse the payload"
// philosophy from the original stub.
// - The dispatcher is safe for concurrent use; a RWMutex protects the
// handler map.
type Dispatcher struct {
handlers map[byte]IntentHandler
mu sync.RWMutex
log *logging.Logger
}
// dispatcher := NewDispatcher()
// NewDispatcher creates a Dispatcher with no registered handlers.
func NewDispatcher() *Dispatcher {
return &Dispatcher{
handlers: make(map[byte]IntentHandler),
@ -44,20 +61,19 @@ func NewDispatcher() *Dispatcher {
}
}
// dispatcher.RegisterHandler(IntentCompute, func(packet *ueps.ParsedPacket) error { return nil })
// RegisterHandler associates an IntentHandler with a specific IntentID.
// Calling RegisterHandler with an IntentID that already has a handler will
// replace the previous handler.
func (d *Dispatcher) RegisterHandler(intentID byte, handler IntentHandler) {
d.mu.Lock()
defer d.mu.Unlock()
d.handlers[intentID] = handler
d.log.Debug("handler registered", logging.Fields{
"intent_id": core.Sprintf("0x%02X", intentID),
"intent_id": fmt.Sprintf("0x%02X", intentID),
})
}
// for intentID, handler := range dispatcher.Handlers() {
// _ = intentID
// _ = handler
// }
// Handlers returns an iterator over all registered intent handlers.
func (d *Dispatcher) Handlers() iter.Seq2[byte, IntentHandler] {
return func(yield func(byte, IntentHandler) bool) {
d.mu.RLock()
@ -71,46 +87,59 @@ func (d *Dispatcher) Handlers() iter.Seq2[byte, IntentHandler] {
}
}
// err := dispatcher.Dispatch(packet)
func (d *Dispatcher) Dispatch(packet *ueps.ParsedPacket) error {
if packet == nil {
return ErrorNilPacket
// Dispatch routes a parsed UEPS packet through the threat circuit breaker
// and then to the appropriate intent handler.
//
// Behaviour:
// - Returns ErrThreatScoreExceeded if the packet's ThreatScore exceeds the
// threshold (packet is dropped and logged).
// - Returns ErrUnknownIntent if no handler is registered for the IntentID
// (packet is dropped and logged).
// - Returns nil on successful delivery to a handler, or any error the
// handler itself returns.
// - A nil packet returns ErrNilPacket immediately.
func (d *Dispatcher) Dispatch(pkt *ueps.ParsedPacket) error {
if pkt == nil {
return ErrNilPacket
}
// 1. Threat circuit breaker (L5 guard)
if packet.Header.ThreatScore > ThreatScoreThreshold {
if pkt.Header.ThreatScore > ThreatScoreThreshold {
d.log.Warn("packet dropped: threat score exceeds safety threshold", logging.Fields{
"threat_score": packet.Header.ThreatScore,
"threat_score": pkt.Header.ThreatScore,
"threshold": ThreatScoreThreshold,
"intent_id": core.Sprintf("0x%02X", packet.Header.IntentID),
"version": packet.Header.Version,
"intent_id": fmt.Sprintf("0x%02X", pkt.Header.IntentID),
"version": pkt.Header.Version,
})
return ErrorThreatScoreExceeded
return ErrThreatScoreExceeded
}
// 2. Intent routing (L9 semantic)
d.mu.RLock()
handler, exists := d.handlers[packet.Header.IntentID]
handler, exists := d.handlers[pkt.Header.IntentID]
d.mu.RUnlock()
if !exists {
d.log.Warn("packet dropped: unknown intent", logging.Fields{
"intent_id": core.Sprintf("0x%02X", packet.Header.IntentID),
"version": packet.Header.Version,
"intent_id": fmt.Sprintf("0x%02X", pkt.Header.IntentID),
"version": pkt.Header.Version,
})
return ErrorUnknownIntent
return ErrUnknownIntent
}
return handler(packet)
return handler(pkt)
}
// Sentinel errors returned by Dispatch.
var (
// err := ErrorThreatScoreExceeded
ErrorThreatScoreExceeded = core.E("Dispatcher.Dispatch", core.Sprintf("packet rejected: threat score exceeds safety threshold (%d)", ThreatScoreThreshold), nil)
// ErrThreatScoreExceeded is returned when a packet's ThreatScore exceeds
// the safety threshold.
ErrThreatScoreExceeded = coreerr.E("Dispatcher.Dispatch", fmt.Sprintf("packet rejected: threat score exceeds safety threshold (%d)", ThreatScoreThreshold), nil)
// err := ErrorUnknownIntent
ErrorUnknownIntent = core.E("Dispatcher.Dispatch", "packet dropped: unknown intent", nil)
// ErrUnknownIntent is returned when no handler is registered for the
// packet's IntentID.
ErrUnknownIntent = coreerr.E("Dispatcher.Dispatch", "packet dropped: unknown intent", nil)
// err := ErrorNilPacket
ErrorNilPacket = core.E("Dispatcher.Dispatch", "nil packet", nil)
// ErrNilPacket is returned when a nil packet is passed to Dispatch.
ErrNilPacket = coreerr.E("Dispatcher.Dispatch", "nil packet", nil)
)


@ -1,11 +1,11 @@
package node
import (
"fmt"
"sync"
"sync/atomic"
"testing"
core "dappco.re/go/core"
"dappco.re/go/core/p2p/ueps"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@ -28,7 +28,7 @@ func makePacket(intentID byte, threatScore uint16, payload []byte) *ueps.ParsedP
// --- Dispatcher Tests ---
func TestDispatcher_RegisterAndDispatch_Good(t *testing.T) {
func TestDispatcher_RegisterAndDispatch(t *testing.T) {
t.Run("handler receives the correct packet", func(t *testing.T) {
d := NewDispatcher()
var received *ueps.ParsedPacket
@ -49,7 +49,7 @@ func TestDispatcher_RegisterAndDispatch_Good(t *testing.T) {
t.Run("handler error propagates to caller", func(t *testing.T) {
d := NewDispatcher()
handlerErr := core.NewError("compute failed")
handlerErr := fmt.Errorf("compute failed")
d.RegisterHandler(IntentCompute, func(pkt *ueps.ParsedPacket) error {
return handlerErr
@ -62,7 +62,7 @@ func TestDispatcher_RegisterAndDispatch_Good(t *testing.T) {
})
}
func TestDispatcher_ThreatCircuitBreaker_Good(t *testing.T) {
func TestDispatcher_ThreatCircuitBreaker(t *testing.T) {
tests := []struct {
name string
threatScore uint16
@ -78,13 +78,13 @@ func TestDispatcher_ThreatCircuitBreaker_Good(t *testing.T) {
{
name: "score just above threshold is rejected",
threatScore: ThreatScoreThreshold + 1,
wantErr: ErrorThreatScoreExceeded,
wantErr: ErrThreatScoreExceeded,
dispatched: false,
},
{
name: "maximum uint16 score is rejected",
threatScore: 65535,
wantErr: ErrorThreatScoreExceeded,
wantErr: ErrThreatScoreExceeded,
dispatched: false,
},
{
@ -118,7 +118,7 @@ func TestDispatcher_ThreatCircuitBreaker_Good(t *testing.T) {
}
}
func TestDispatcher_UnknownIntentDropped_Bad(t *testing.T) {
func TestDispatcher_UnknownIntentDropped(t *testing.T) {
d := NewDispatcher()
// Register handlers for known intents only
@ -130,13 +130,13 @@ func TestDispatcher_UnknownIntentDropped_Bad(t *testing.T) {
pkt := makePacket(0x42, 0, []byte("unknown"))
err := d.Dispatch(pkt)
assert.ErrorIs(t, err, ErrorUnknownIntent)
assert.ErrorIs(t, err, ErrUnknownIntent)
}
func TestDispatcher_MultipleHandlersCorrectRouting_Good(t *testing.T) {
func TestDispatcher_MultipleHandlersCorrectRouting(t *testing.T) {
d := NewDispatcher()
var handshakeCalled, computeCalled, pauseExecutionCalled, customCalled bool
var handshakeCalled, computeCalled, rehabCalled, customCalled bool
d.RegisterHandler(IntentHandshake, func(pkt *ueps.ParsedPacket) error {
handshakeCalled = true
@ -146,8 +146,8 @@ func TestDispatcher_MultipleHandlersCorrectRouting_Good(t *testing.T) {
computeCalled = true
return nil
})
d.RegisterHandler(IntentPauseExecution, func(pkt *ueps.ParsedPacket) error {
pauseExecutionCalled = true
d.RegisterHandler(IntentRehab, func(pkt *ueps.ParsedPacket) error {
rehabCalled = true
return nil
})
d.RegisterHandler(IntentCustom, func(pkt *ueps.ParsedPacket) error {
@ -162,7 +162,7 @@ func TestDispatcher_MultipleHandlersCorrectRouting_Good(t *testing.T) {
}{
{"handshake routes correctly", IntentHandshake, &handshakeCalled},
{"compute routes correctly", IntentCompute, &computeCalled},
{"pause execution routes correctly", IntentPauseExecution, &pauseExecutionCalled},
{"rehab routes correctly", IntentRehab, &rehabCalled},
{"custom routes correctly", IntentCustom, &customCalled},
}
@ -171,7 +171,7 @@ func TestDispatcher_MultipleHandlersCorrectRouting_Good(t *testing.T) {
// Reset all flags
handshakeCalled = false
computeCalled = false
pauseExecutionCalled = false
rehabCalled = false
customCalled = false
pkt := makePacket(tt.intentID, 0, []byte("payload"))
@ -192,11 +192,11 @@ func TestDispatcher_MultipleHandlersCorrectRouting_Good(t *testing.T) {
}
}
func TestDispatcher_NilAndEmptyPayload_Ugly(t *testing.T) {
t.Run("nil packet returns ErrorNilPacket", func(t *testing.T) {
func TestDispatcher_NilAndEmptyPayload(t *testing.T) {
t.Run("nil packet returns ErrNilPacket", func(t *testing.T) {
d := NewDispatcher()
err := d.Dispatch(nil)
assert.ErrorIs(t, err, ErrorNilPacket)
assert.ErrorIs(t, err, ErrNilPacket)
})
t.Run("nil payload is delivered to handler", func(t *testing.T) {
@ -234,7 +234,7 @@ func TestDispatcher_NilAndEmptyPayload_Ugly(t *testing.T) {
})
}
func TestDispatcher_ConcurrentDispatchSafety_Ugly(t *testing.T) {
func TestDispatcher_ConcurrentDispatchSafety(t *testing.T) {
d := NewDispatcher()
var count atomic.Int64
@ -261,7 +261,7 @@ func TestDispatcher_ConcurrentDispatchSafety_Ugly(t *testing.T) {
assert.Equal(t, int64(goroutines), count.Load())
}
func TestDispatcher_ConcurrentRegisterAndDispatch_Ugly(t *testing.T) {
func TestDispatcher_ConcurrentRegisterAndDispatch(t *testing.T) {
d := NewDispatcher()
var count atomic.Int64
@ -301,7 +301,7 @@ func TestDispatcher_ConcurrentRegisterAndDispatch_Ugly(t *testing.T) {
assert.True(t, count.Load() >= 0)
}
func TestDispatcher_ReplaceHandler_Good(t *testing.T) {
func TestDispatcher_ReplaceHandler(t *testing.T) {
d := NewDispatcher()
var firstCalled, secondCalled bool
@ -325,22 +325,22 @@ func TestDispatcher_ReplaceHandler_Good(t *testing.T) {
assert.True(t, secondCalled, "replacement handler should be called")
}
func TestDispatcher_ThreatBlocksBeforeRouting_Good(t *testing.T) {
func TestDispatcher_ThreatBlocksBeforeRouting(t *testing.T) {
// Verify that the circuit breaker fires before intent routing,
// so even an unknown intent returns ErrorThreatScoreExceeded (not ErrorUnknownIntent).
// so even an unknown intent returns ErrThreatScoreExceeded (not ErrUnknownIntent).
d := NewDispatcher()
pkt := makePacket(0x42, ThreatScoreThreshold+1, []byte("hostile"))
err := d.Dispatch(pkt)
assert.ErrorIs(t, err, ErrorThreatScoreExceeded,
assert.ErrorIs(t, err, ErrThreatScoreExceeded,
"threat circuit breaker should fire before intent routing")
}
func TestDispatcher_IntentConstants_Good(t *testing.T) {
func TestDispatcher_IntentConstants(t *testing.T) {
// Verify the well-known intent IDs match the spec (RFC-021).
assert.Equal(t, byte(0x01), IntentHandshake)
assert.Equal(t, byte(0x20), IntentCompute)
assert.Equal(t, byte(0x30), IntentPauseExecution)
assert.Equal(t, byte(0x30), IntentRehab)
assert.Equal(t, byte(0xFF), IntentCustom)
}


@ -1,9 +1,14 @@
package node
import core "dappco.re/go/core"
import coreerr "dappco.re/go/core/log"
// Sentinel errors shared across the node package.
var (
ErrorIdentityNotInitialized = core.E("node", "node identity not initialized", nil)
// ErrIdentityNotInitialized is returned when a node operation requires
// a node identity but none has been generated or loaded.
ErrIdentityNotInitialized = coreerr.E("node", "node identity not initialized", nil)
ErrorMinerManagerNotConfigured = core.E("node", "miner manager not configured", nil)
// ErrMinerManagerNotConfigured is returned when a miner operation is
// attempted but no MinerManager has been set on the Worker.
ErrMinerManagerNotConfigured = coreerr.E("node", "miner manager not configured", nil)
)


@ -1,55 +0,0 @@
// SPDX-License-Identifier: EUPL-1.2
package node
import core "dappco.re/go/core"
// localFileSystem is the package-scoped filesystem rooted at `/` so node code
// can use Core file operations without os helpers.
var localFileSystem = (&core.Fs{}).New("/")
func filesystemEnsureDir(path string) error {
return filesystemResultError(localFileSystem.EnsureDir(path))
}
func filesystemWrite(path, content string) error {
return filesystemResultError(localFileSystem.Write(path, content))
}
func filesystemRead(path string) (string, error) {
result := localFileSystem.Read(path)
if !result.OK {
return "", filesystemResultError(result)
}
content, ok := result.Value.(string)
if !ok {
return "", core.E("node.filesystemRead", "filesystem read returned non-string content", nil)
}
return content, nil
}
func filesystemDelete(path string) error {
return filesystemResultError(localFileSystem.Delete(path))
}
func filesystemRename(oldPath, newPath string) error {
return filesystemResultError(localFileSystem.Rename(oldPath, newPath))
}
func filesystemExists(path string) bool {
return localFileSystem.Exists(path)
}
func filesystemResultError(result core.Result) error {
if result.OK {
return nil
}
if err, ok := result.Value.(error); ok && err != nil {
return err
}
return core.E("node.filesystem", "filesystem operation failed", nil)
}


@ -7,41 +7,46 @@ import (
"crypto/rand"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"os"
"path/filepath"
"sync"
"time"
core "dappco.re/go/core"
coreio "dappco.re/go/core/io"
coreerr "dappco.re/go/core/log"
"forge.lthn.ai/Snider/Borg/pkg/stmf"
"github.com/adrg/xdg"
)
// challenge := make([]byte, ChallengeSize)
// ChallengeSize is the size of the challenge in bytes
const ChallengeSize = 32
// challenge, err := GenerateChallenge()
// GenerateChallenge creates a random challenge for authentication.
func GenerateChallenge() ([]byte, error) {
challenge := make([]byte, ChallengeSize)
if _, err := rand.Read(challenge); err != nil {
return nil, core.E("GenerateChallenge", "failed to generate challenge", err)
return nil, coreerr.E("GenerateChallenge", "failed to generate challenge", err)
}
return challenge, nil
}
// signature := SignChallenge(challenge, sharedSecret)
// SignChallenge creates an HMAC signature of a challenge using a shared secret.
// The signature proves possession of the shared secret without revealing it.
func SignChallenge(challenge []byte, sharedSecret []byte) []byte {
mac := hmac.New(sha256.New, sharedSecret)
mac.Write(challenge)
return mac.Sum(nil)
}
// ok := VerifyChallenge(challenge, signature, sharedSecret)
// VerifyChallenge verifies that a challenge response was signed with the correct shared secret.
func VerifyChallenge(challenge, response, sharedSecret []byte) bool {
expected := SignChallenge(challenge, sharedSecret)
return hmac.Equal(response, expected)
}
// role := RoleWorker
// NodeRole defines the operational mode of a node.
type NodeRole string
const (
@ -53,7 +58,7 @@ const (
RoleDual NodeRole = "dual"
)
// identity := NodeIdentity{Name: "worker-1", Role: RoleWorker}
// NodeIdentity represents the public identity of a node.
type NodeIdentity struct {
ID string `json:"id"` // Derived from public key (first 16 bytes hex)
Name string `json:"name"` // Human-friendly name
@ -62,7 +67,7 @@ type NodeIdentity struct {
Role NodeRole `json:"role"`
}
// nodeManager, err := NewNodeManager()
// NodeManager handles node identity operations including key generation and storage.
type NodeManager struct {
identity *NodeIdentity
privateKey []byte // Never serialized to JSON
@ -72,50 +77,88 @@ type NodeManager struct {
mu sync.RWMutex
}
// nodeManager, err := NewNodeManager()
// NewNodeManager creates a new NodeManager, loading existing identity if available.
func NewNodeManager() (*NodeManager, error) {
keyPath, err := xdg.DataFile("lethean-desktop/node/private.key")
if err != nil {
return nil, core.E("NodeManager.New", "failed to get key path", err)
return nil, coreerr.E("NodeManager.New", "failed to get key path", err)
}
configPath, err := xdg.ConfigFile("lethean-desktop/node.json")
if err != nil {
return nil, core.E("NodeManager.New", "failed to get config path", err)
return nil, coreerr.E("NodeManager.New", "failed to get config path", err)
}
return NewNodeManagerFromPaths(keyPath, configPath)
return NewNodeManagerWithPaths(keyPath, configPath)
}
// nodeManager, err := NewNodeManagerFromPaths("/srv/p2p/private.key", "/srv/p2p/node.json")
// Missing files are treated as a fresh install; malformed or partial identity
// state returns an error so callers can handle it explicitly.
func NewNodeManagerFromPaths(keyPath, configPath string) (*NodeManager, error) {
// NewNodeManagerWithPaths creates a NodeManager with custom paths.
// This is primarily useful for testing to avoid xdg path caching issues.
func NewNodeManagerWithPaths(keyPath, configPath string) (*NodeManager, error) {
nm := &NodeManager{
keyPath: keyPath,
configPath: configPath,
}
// Missing files indicate a first run; anything else is a load failure.
if !filesystemExists(keyPath) && !filesystemExists(configPath) {
return nm, nil
}
// Try to load existing identity
if err := nm.loadIdentity(); err != nil {
return nil, err
// Identity doesn't exist yet, that's ok
return nm, nil
}
return nm, nil
}
// hasIdentity := nodeManager.HasIdentity()
// LoadOrCreateIdentity loads the node identity from the default XDG paths or
// generates a new dual-role identity when none exists yet.
func LoadOrCreateIdentity() (*NodeManager, error) {
keyPath, err := xdg.DataFile("lethean-desktop/node/private.key")
if err != nil {
return nil, coreerr.E("LoadOrCreateIdentity", "failed to get key path", err)
}
configPath, err := xdg.ConfigFile("lethean-desktop/node.json")
if err != nil {
return nil, coreerr.E("LoadOrCreateIdentity", "failed to get config path", err)
}
return LoadOrCreateIdentityWithPaths(keyPath, configPath)
}
// LoadOrCreateIdentityWithPaths loads an existing identity from the supplied
// paths or creates a new dual-role identity if no persisted identity exists.
// The generated identity name falls back to the host name, then a stable
// project-specific default if the host name cannot be determined.
func LoadOrCreateIdentityWithPaths(keyPath, configPath string) (*NodeManager, error) {
nm, err := NewNodeManagerWithPaths(keyPath, configPath)
if err != nil {
return nil, err
}
if nm.HasIdentity() {
return nm, nil
}
name, err := os.Hostname()
if err != nil || name == "" {
name = "lethean-node"
}
if err := nm.GenerateIdentity(name, RoleDual); err != nil {
return nil, coreerr.E("LoadOrCreateIdentityWithPaths", "failed to generate identity", err)
}
return nm, nil
}
// HasIdentity returns true if a node identity has been initialized.
func (n *NodeManager) HasIdentity() bool {
n.mu.RLock()
defer n.mu.RUnlock()
return n.identity != nil
}
// identity := nodeManager.GetIdentity()
// GetIdentity returns the node's public identity.
func (n *NodeManager) GetIdentity() *NodeIdentity {
n.mu.RLock()
defer n.mu.RUnlock()
@ -127,7 +170,7 @@ func (n *NodeManager) GetIdentity() *NodeIdentity {
return &identity
}
// err := nodeManager.GenerateIdentity("worker-1", RoleWorker)
// GenerateIdentity creates a new node identity with the given name and role.
func (n *NodeManager) GenerateIdentity(name string, role NodeRole) error {
n.mu.Lock()
defer n.mu.Unlock()
@ -135,7 +178,7 @@ func (n *NodeManager) GenerateIdentity(name string, role NodeRole) error {
// Generate X25519 keypair using STMF
keyPair, err := stmf.GenerateKeyPair()
if err != nil {
return core.E("NodeManager.GenerateIdentity", "failed to generate keypair", err)
return coreerr.E("NodeManager.GenerateIdentity", "failed to generate keypair", err)
}
// Derive node ID from public key (first 16 bytes as hex = 32 char ID)
@ -156,42 +199,43 @@ func (n *NodeManager) GenerateIdentity(name string, role NodeRole) error {
// Save private key
if err := n.savePrivateKey(); err != nil {
return core.E("NodeManager.GenerateIdentity", "failed to save private key", err)
return coreerr.E("NodeManager.GenerateIdentity", "failed to save private key", err)
}
// Save identity config
if err := n.saveIdentity(); err != nil {
return core.E("NodeManager.GenerateIdentity", "failed to save identity", err)
return coreerr.E("NodeManager.GenerateIdentity", "failed to save identity", err)
}
return nil
}
// sharedSecret, err := nodeManager.DeriveSharedSecret(peer.PublicKey)
// DeriveSharedSecret derives a shared secret with a peer using X25519 ECDH.
// The result is hashed with SHA-256 for use as a symmetric key.
func (n *NodeManager) DeriveSharedSecret(peerPubKeyBase64 string) ([]byte, error) {
n.mu.RLock()
defer n.mu.RUnlock()
if n.privateKey == nil {
return nil, ErrorIdentityNotInitialized
return nil, ErrIdentityNotInitialized
}
// Load peer's public key
peerPubKey, err := stmf.LoadPublicKeyBase64(peerPubKeyBase64)
if err != nil {
return nil, core.E("NodeManager.DeriveSharedSecret", "failed to load peer public key", err)
return nil, coreerr.E("NodeManager.DeriveSharedSecret", "failed to load peer public key", err)
}
// Load our private key
privateKey, err := ecdh.X25519().NewPrivateKey(n.privateKey)
if err != nil {
return nil, core.E("NodeManager.DeriveSharedSecret", "failed to load private key", err)
return nil, coreerr.E("NodeManager.DeriveSharedSecret", "failed to load private key", err)
}
// Derive shared secret using ECDH
sharedSecret, err := privateKey.ECDH(peerPubKey)
if err != nil {
return nil, core.E("NodeManager.DeriveSharedSecret", "failed to derive shared secret", err)
return nil, coreerr.E("NodeManager.DeriveSharedSecret", "failed to derive shared secret", err)
}
// Hash the shared secret using SHA-256 (same pattern as Borg/trix)
@ -199,59 +243,69 @@ func (n *NodeManager) DeriveSharedSecret(peerPubKeyBase64 string) ([]byte, error
return hash[:], nil
}
// savePrivateKey saves the private key to disk with restricted permissions.
func (n *NodeManager) savePrivateKey() error {
dir := core.PathDir(n.keyPath)
if err := filesystemEnsureDir(dir); err != nil {
return core.E("NodeManager.savePrivateKey", "failed to create key directory", err)
// Ensure directory exists
dir := filepath.Dir(n.keyPath)
if err := coreio.Local.EnsureDir(dir); err != nil {
return coreerr.E("NodeManager.savePrivateKey", "failed to create key directory", err)
}
if err := filesystemWrite(n.keyPath, string(n.privateKey)); err != nil {
return core.E("NodeManager.savePrivateKey", "failed to write private key", err)
// Write private key and then tighten permissions explicitly.
if err := coreio.Local.Write(n.keyPath, string(n.privateKey)); err != nil {
return coreerr.E("NodeManager.savePrivateKey", "failed to write private key", err)
}
if err := os.Chmod(n.keyPath, 0600); err != nil {
return coreerr.E("NodeManager.savePrivateKey", "failed to set private key permissions", err)
}
return nil
}
// saveIdentity saves the public identity to the config file.
func (n *NodeManager) saveIdentity() error {
dir := core.PathDir(n.configPath)
if err := filesystemEnsureDir(dir); err != nil {
return core.E("NodeManager.saveIdentity", "failed to create config directory", err)
// Ensure directory exists
dir := filepath.Dir(n.configPath)
if err := coreio.Local.EnsureDir(dir); err != nil {
return coreerr.E("NodeManager.saveIdentity", "failed to create config directory", err)
}
result := core.JSONMarshal(n.identity)
if !result.OK {
return core.E("NodeManager.saveIdentity", "failed to marshal identity", result.Value.(error))
data, err := json.MarshalIndent(n.identity, "", " ")
if err != nil {
return coreerr.E("NodeManager.saveIdentity", "failed to marshal identity", err)
}
data := result.Value.([]byte)
if err := filesystemWrite(n.configPath, string(data)); err != nil {
return core.E("NodeManager.saveIdentity", "failed to write identity", err)
if err := coreio.Local.Write(n.configPath, string(data)); err != nil {
return coreerr.E("NodeManager.saveIdentity", "failed to write identity", err)
}
return nil
}
// loadIdentity loads the node identity from disk.
func (n *NodeManager) loadIdentity() error {
content, err := filesystemRead(n.configPath)
// Load identity config
content, err := coreio.Local.Read(n.configPath)
if err != nil {
return core.E("NodeManager.loadIdentity", "failed to read identity", err)
return coreerr.E("NodeManager.loadIdentity", "failed to read identity", err)
}
var identity NodeIdentity
result := core.JSONUnmarshalString(content, &identity)
if !result.OK {
return core.E("NodeManager.loadIdentity", "failed to unmarshal identity", result.Value.(error))
if err := json.Unmarshal([]byte(content), &identity); err != nil {
return coreerr.E("NodeManager.loadIdentity", "failed to unmarshal identity", err)
}
keyContent, err := filesystemRead(n.keyPath)
// Load private key
keyContent, err := coreio.Local.Read(n.keyPath)
if err != nil {
return core.E("NodeManager.loadIdentity", "failed to read private key", err)
return coreerr.E("NodeManager.loadIdentity", "failed to read private key", err)
}
privateKey := []byte(keyContent)
// Reconstruct keypair from private key
keyPair, err := stmf.LoadKeyPair(privateKey)
if err != nil {
return core.E("NodeManager.loadIdentity", "failed to load keypair", err)
return coreerr.E("NodeManager.loadIdentity", "failed to load keypair", err)
}
n.identity = &identity
@ -261,22 +315,22 @@ func (n *NodeManager) loadIdentity() error {
return nil
}
// err := nodeManager.Delete()
// Delete removes the node identity and keys from disk.
func (n *NodeManager) Delete() error {
n.mu.Lock()
defer n.mu.Unlock()
// Remove private key (ignore if already absent)
if filesystemExists(n.keyPath) {
if err := filesystemDelete(n.keyPath); err != nil {
return core.E("NodeManager.Delete", "failed to remove private key", err)
if coreio.Local.Exists(n.keyPath) {
if err := coreio.Local.Delete(n.keyPath); err != nil {
return coreerr.E("NodeManager.Delete", "failed to remove private key", err)
}
}
// Remove identity config (ignore if already absent)
if filesystemExists(n.configPath) {
if err := filesystemDelete(n.configPath); err != nil {
return core.E("NodeManager.Delete", "failed to remove identity", err)
if coreio.Local.Exists(n.configPath) {
if err := coreio.Local.Delete(n.configPath); err != nil {
return coreerr.E("NodeManager.Delete", "failed to remove identity", err)
}
}


@ -1,22 +1,38 @@
package node
import (
"os"
"path/filepath"
"testing"
)
func newTestNodeManagerWithoutIdentity(t *testing.T) *NodeManager {
tmpDir := t.TempDir()
nm, err := NewNodeManagerFromPaths(testNodeManagerPaths(tmpDir))
// setupTestNodeManager creates a NodeManager with paths in a temp directory.
func setupTestNodeManager(t *testing.T) (*NodeManager, func()) {
tmpDir, err := os.MkdirTemp("", "node-identity-test")
if err != nil {
t.Fatalf("failed to create temp dir: %v", err)
}
keyPath := filepath.Join(tmpDir, "private.key")
configPath := filepath.Join(tmpDir, "node.json")
nm, err := NewNodeManagerWithPaths(keyPath, configPath)
if err != nil {
os.RemoveAll(tmpDir)
t.Fatalf("failed to create node manager: %v", err)
}
return nm
cleanup := func() {
os.RemoveAll(tmpDir)
}
return nm, cleanup
}
func TestIdentity_NodeIdentity_Good(t *testing.T) {
func TestNodeIdentity(t *testing.T) {
t.Run("NewNodeManager", func(t *testing.T) {
nm := newTestNodeManagerWithoutIdentity(t)
nm, cleanup := setupTestNodeManager(t)
defer cleanup()
if nm.HasIdentity() {
t.Error("new node manager should not have identity")
@ -24,7 +40,8 @@ func TestIdentity_NodeIdentity_Good(t *testing.T) {
})
t.Run("GenerateIdentity", func(t *testing.T) {
nm := newTestNodeManagerWithoutIdentity(t)
nm, cleanup := setupTestNodeManager(t)
defer cleanup()
err := nm.GenerateIdentity("test-node", RoleDual)
if err != nil {
@ -57,12 +74,37 @@ func TestIdentity_NodeIdentity_Good(t *testing.T) {
}
})
t.Run("PrivateKeyPermissions", func(t *testing.T) {
nm, cleanup := setupTestNodeManager(t)
defer cleanup()
err := nm.GenerateIdentity("permission-test", RoleDual)
if err != nil {
t.Fatalf("failed to generate identity: %v", err)
}
info, err := os.Stat(nm.keyPath)
if err != nil {
t.Fatalf("failed to stat private key: %v", err)
}
if got := info.Mode().Perm(); got != 0600 {
t.Fatalf("expected private key permissions 0600, got %04o", got)
}
})
t.Run("LoadExistingIdentity", func(t *testing.T) {
tmpDir := t.TempDir()
keyPath, configPath := testNodeManagerPaths(tmpDir)
tmpDir, err := os.MkdirTemp("", "node-load-test")
if err != nil {
t.Fatalf("failed to create temp dir: %v", err)
}
defer os.RemoveAll(tmpDir)
keyPath := filepath.Join(tmpDir, "private.key")
configPath := filepath.Join(tmpDir, "node.json")
// First, create an identity
nm1, err := NewNodeManagerFromPaths(keyPath, configPath)
nm1, err := NewNodeManagerWithPaths(keyPath, configPath)
if err != nil {
t.Fatalf("failed to create first node manager: %v", err)
}
@ -76,7 +118,7 @@ func TestIdentity_NodeIdentity_Good(t *testing.T) {
originalPubKey := nm1.GetIdentity().PublicKey
// Create a new manager - should load existing identity
nm2, err := NewNodeManagerFromPaths(keyPath, configPath)
nm2, err := NewNodeManagerWithPaths(keyPath, configPath)
if err != nil {
t.Fatalf("failed to create second node manager: %v", err)
}
@ -97,11 +139,16 @@ func TestIdentity_NodeIdentity_Good(t *testing.T) {
t.Run("DeriveSharedSecret", func(t *testing.T) {
// Create two node managers with separate temp directories
tmpDir1 := t.TempDir()
tmpDir2 := t.TempDir()
tmpDir1, _ := os.MkdirTemp("", "node1")
tmpDir2, _ := os.MkdirTemp("", "node2")
defer os.RemoveAll(tmpDir1)
defer os.RemoveAll(tmpDir2)
// Node 1
nm1, err := NewNodeManagerFromPaths(testNodeManagerPaths(tmpDir1))
nm1, err := NewNodeManagerWithPaths(
filepath.Join(tmpDir1, "private.key"),
filepath.Join(tmpDir1, "node.json"),
)
if err != nil {
t.Fatalf("failed to create node manager 1: %v", err)
}
@ -111,7 +158,10 @@ func TestIdentity_NodeIdentity_Good(t *testing.T) {
}
// Node 2
nm2, err := NewNodeManagerFromPaths(testNodeManagerPaths(tmpDir2))
nm2, err := NewNodeManagerWithPaths(
filepath.Join(tmpDir2, "private.key"),
filepath.Join(tmpDir2, "node.json"),
)
if err != nil {
t.Fatalf("failed to create node manager 2: %v", err)
}
@ -144,7 +194,8 @@ func TestIdentity_NodeIdentity_Good(t *testing.T) {
})
t.Run("DeleteIdentity", func(t *testing.T) {
nm := newTestNodeManagerWithoutIdentity(t)
nm, cleanup := setupTestNodeManager(t)
defer cleanup()
err := nm.GenerateIdentity("delete-me", RoleDual)
if err != nil {
@ -164,25 +215,50 @@ func TestIdentity_NodeIdentity_Good(t *testing.T) {
t.Error("should not have identity after delete")
}
})
t.Run("LoadOrCreateIdentityWithPaths", func(t *testing.T) {
tmpDir, err := os.MkdirTemp("", "node-load-or-create-test")
if err != nil {
t.Fatalf("failed to create temp dir: %v", err)
}
defer os.RemoveAll(tmpDir)
keyPath := filepath.Join(tmpDir, "private.key")
configPath := filepath.Join(tmpDir, "node.json")
nm, err := LoadOrCreateIdentityWithPaths(keyPath, configPath)
if err != nil {
t.Fatalf("failed to load or create identity: %v", err)
}
if !nm.HasIdentity() {
t.Fatal("expected identity to be initialised")
}
identity := nm.GetIdentity()
if identity == nil {
t.Fatal("identity should not be nil")
}
if identity.Name == "" {
t.Error("identity name should be populated")
}
if identity.Role != RoleDual {
t.Errorf("expected default role dual, got %s", identity.Role)
}
if _, err := os.Stat(keyPath); err != nil {
t.Fatalf("expected private key to be persisted: %v", err)
}
if _, err := os.Stat(configPath); err != nil {
t.Fatalf("expected identity config to be persisted: %v", err)
}
})
}
func TestIdentity_NodeManagerFromPaths_CorruptIdentity_Bad(t *testing.T) {
tmpDir := t.TempDir()
keyPath, configPath := testNodeManagerPaths(tmpDir)
testWriteFile(t, configPath, []byte(`{"id":"node-1","name":"broken","publicKey":"not-json"`), 0o600)
nm, err := NewNodeManagerFromPaths(keyPath, configPath)
if err == nil {
t.Fatal("expected error when loading a corrupted node identity")
}
if nm != nil {
t.Fatal("expected nil node manager when identity data is corrupted")
}
}
func TestIdentity_NodeRoles_Good(t *testing.T) {
func TestNodeRoles(t *testing.T) {
tests := []struct {
role NodeRole
expected string
@ -201,7 +277,7 @@ func TestIdentity_NodeRoles_Good(t *testing.T) {
}
}
func TestIdentity_ChallengeResponse_Good(t *testing.T) {
func TestChallengeResponse(t *testing.T) {
t.Run("GenerateChallenge", func(t *testing.T) {
challenge, err := GenerateChallenge()
if err != nil {
@ -299,13 +375,21 @@ func TestIdentity_ChallengeResponse_Good(t *testing.T) {
t.Run("IntegrationWithSharedSecret", func(t *testing.T) {
// Create two nodes and test end-to-end challenge-response
tmpDir1 := t.TempDir()
tmpDir2 := t.TempDir()
tmpDir1, _ := os.MkdirTemp("", "node-challenge-1")
tmpDir2, _ := os.MkdirTemp("", "node-challenge-2")
defer os.RemoveAll(tmpDir1)
defer os.RemoveAll(tmpDir2)
nm1, _ := NewNodeManagerFromPaths(testNodeManagerPaths(tmpDir1))
nm1, _ := NewNodeManagerWithPaths(
filepath.Join(tmpDir1, "private.key"),
filepath.Join(tmpDir1, "node.json"),
)
nm1.GenerateIdentity("challenger", RoleDual)
nm2, _ := NewNodeManagerFromPaths(testNodeManagerPaths(tmpDir2))
nm2, _ := NewNodeManagerWithPaths(
filepath.Join(tmpDir2, "private.key"),
filepath.Join(tmpDir2, "node.json"),
)
nm2.GenerateIdentity("responder", RoleDual)
// Challenger generates challenge
@ -328,8 +412,9 @@ func TestIdentity_ChallengeResponse_Good(t *testing.T) {
})
}
func TestIdentity_NodeManager_DeriveSharedSecret_NoIdentity_Bad(t *testing.T) {
nm := newTestNodeManagerWithoutIdentity(t)
func TestNodeManager_DeriveSharedSecret_NoIdentity(t *testing.T) {
nm, cleanup := setupTestNodeManager(t)
defer cleanup()
// No identity generated
_, err := nm.DeriveSharedSecret("some-key")
@ -338,8 +423,9 @@ func TestIdentity_NodeManager_DeriveSharedSecret_NoIdentity_Bad(t *testing.T) {
}
}
func TestIdentity_NodeManager_Identity_NilWhenNoIdentity_Bad(t *testing.T) {
nm := newTestNodeManagerWithoutIdentity(t)
func TestNodeManager_GetIdentity_NilWhenNoIdentity(t *testing.T) {
nm, cleanup := setupTestNodeManager(t)
defer cleanup()
identity := nm.GetIdentity()
if identity != nil {
@ -347,11 +433,11 @@ func TestIdentity_NodeManager_Identity_NilWhenNoIdentity_Bad(t *testing.T) {
}
}
func TestIdentity_NodeManager_Delete_NoFiles_Bad(t *testing.T) {
func TestNodeManager_Delete_NoFiles(t *testing.T) {
tmpDir := t.TempDir()
nm, err := NewNodeManagerFromPaths(
testJoinPath(tmpDir, "nonexistent.key"),
testJoinPath(tmpDir, "nonexistent.json"),
nm, err := NewNodeManagerWithPaths(
filepath.Join(tmpDir, "nonexistent.key"),
filepath.Join(tmpDir, "nonexistent.json"),
)
if err != nil {
t.Fatalf("failed to create node manager: %v", err)


@ -3,9 +3,11 @@ package node
import (
"bufio"
"bytes"
"encoding/json"
"net/http"
"net/http/httptest"
"net/url"
"path/filepath"
"sync"
"sync/atomic"
"testing"
@ -27,12 +29,12 @@ import (
// 5. Graceful shutdown with disconnect messages
// ============================================================================
func TestIntegration_FullNodeLifecycle_Good(t *testing.T) {
func TestIntegration_FullNodeLifecycle(t *testing.T) {
// ----------------------------------------------------------------
// Step 1: Identity creation
// ----------------------------------------------------------------
controllerNM := newTestNodeManager(t, "integration-controller", RoleController)
workerNM := newTestNodeManager(t, "integration-worker", RoleWorker)
controllerNM := testNode(t, "integration-controller", RoleController)
workerNM := testNode(t, "integration-worker", RoleWorker)
controllerIdentity := controllerNM.GetIdentity()
workerIdentity := workerNM.GetIdentity()
@ -48,8 +50,8 @@ func TestIntegration_FullNodeLifecycle_Good(t *testing.T) {
// ----------------------------------------------------------------
// Step 2: Set up transports, registries, worker, and controller
// ----------------------------------------------------------------
workerReg := newTestPeerRegistry(t)
controllerReg := newTestPeerRegistry(t)
workerReg := testRegistry(t)
controllerReg := testRegistry(t)
workerCfg := DefaultTransportConfig()
workerCfg.PingInterval = 2 * time.Second
@@ -78,11 +80,11 @@ func TestIntegration_FullNodeLifecycle_Good(t *testing.T) {
},
},
})
worker.RegisterOnTransport()
worker.RegisterWithTransport()
// Start the worker transport behind httptest.
mux := http.NewServeMux()
mux.HandleFunc(workerCfg.WebSocketPath, workerTransport.handleWebSocketUpgrade)
mux.HandleFunc(workerCfg.WSPath, workerTransport.handleWSUpgrade)
ts := httptest.NewServer(mux)
t.Cleanup(func() {
controllerTransport.Stop()
@@ -116,9 +118,9 @@ func TestIntegration_FullNodeLifecycle_Good(t *testing.T) {
// Allow server-side goroutines to register the connection.
time.Sleep(100 * time.Millisecond)
assert.Equal(t, 1, controllerTransport.ConnectedPeerCount(),
assert.Equal(t, 1, controllerTransport.ConnectedPeers(),
"controller should have 1 connected peer")
assert.Equal(t, 1, workerTransport.ConnectedPeerCount(),
assert.Equal(t, 1, workerTransport.ConnectedPeers(),
"worker should have 1 connected peer")
// Verify the peer's real identity is stored.
@@ -138,13 +140,13 @@ func TestIntegration_FullNodeLifecycle_Good(t *testing.T) {
// Verify registry metrics were updated.
peerAfterPing := controllerReg.GetPeer(serverPeerID)
require.NotNil(t, peerAfterPing)
assert.Greater(t, peerAfterPing.PingMilliseconds, 0.0, "PingMilliseconds should be updated")
assert.Greater(t, peerAfterPing.PingMS, 0.0, "PingMS should be updated")
// ----------------------------------------------------------------
// Step 5: Encrypted message exchange — RemoteStats
// Step 5: Encrypted message exchange — GetRemoteStats
// ----------------------------------------------------------------
stats, err := controller.GetRemoteStats(serverPeerID)
require.NoError(t, err, "RemoteStats should succeed")
require.NoError(t, err, "GetRemoteStats should succeed")
require.NotNil(t, stats)
assert.Equal(t, workerIdentity.ID, stats.NodeID)
assert.Equal(t, "integration-worker", stats.NodeName)
@@ -199,7 +201,7 @@ func TestIntegration_FullNodeLifecycle_Good(t *testing.T) {
parsed3, err := ueps.ReadAndVerify(bufio.NewReader(bytes.NewReader(wireData3)), sharedSecret)
require.NoError(t, err)
err = dispatcher.Dispatch(parsed3)
assert.ErrorIs(t, err, ErrorThreatScoreExceeded,
assert.ErrorIs(t, err, ErrThreatScoreExceeded,
"high-threat packet should be dropped by circuit breaker")
// Compute handler should NOT have been called again.
assert.Equal(t, int32(1), computeReceived.Load())
@@ -209,7 +211,7 @@ func TestIntegration_FullNodeLifecycle_Good(t *testing.T) {
// ----------------------------------------------------------------
disconnectReceived := make(chan *Message, 1)
workerTransport.OnMessage(func(conn *PeerConnection, msg *Message) {
if msg.Type == MessageDisconnect {
if msg.Type == MsgDisconnect {
disconnectReceived <- msg
}
})
@@ -219,7 +221,7 @@ func TestIntegration_FullNodeLifecycle_Good(t *testing.T) {
select {
case msg := <-disconnectReceived:
assert.Equal(t, MessageDisconnect, msg.Type)
assert.Equal(t, MsgDisconnect, msg.Type)
var payload DisconnectPayload
require.NoError(t, msg.ParsePayload(&payload))
assert.Equal(t, "integration test complete", payload.Reason)
@@ -232,15 +234,15 @@ func TestIntegration_FullNodeLifecycle_Good(t *testing.T) {
time.Sleep(200 * time.Millisecond)
// After graceful close, the controller should have 0 peers.
assert.Equal(t, 0, controllerTransport.ConnectedPeerCount(),
assert.Equal(t, 0, controllerTransport.ConnectedPeers(),
"controller should have 0 peers after graceful close")
}
// TestIntegration_SharedSecretAgreement verifies that two independently created
// nodes derive the same shared secret via ECDH.
func TestIntegration_SharedSecretAgreement_Good(t *testing.T) {
nodeA := newTestNodeManager(t, "secret-node-a", RoleDual)
nodeB := newTestNodeManager(t, "secret-node-b", RoleDual)
func TestIntegration_SharedSecretAgreement(t *testing.T) {
nodeA := testNode(t, "secret-node-a", RoleDual)
nodeB := testNode(t, "secret-node-b", RoleDual)
pubKeyA := nodeA.GetIdentity().PublicKey
pubKeyB := nodeB.GetIdentity().PublicKey
@@ -258,7 +260,7 @@ func TestIntegration_SharedSecretAgreement_Good(t *testing.T) {
// TestIntegration_TwoNodeBidirectionalMessages verifies that both nodes
// can send and receive encrypted messages after the handshake.
func TestIntegration_TwoNodeBidirectionalMessages_Good(t *testing.T) {
func TestIntegration_TwoNodeBidirectionalMessages(t *testing.T) {
controller, _, tp := setupControllerPair(t)
serverID := tp.ServerNode.GetIdentity().ID
@@ -283,9 +285,9 @@ func TestIntegration_TwoNodeBidirectionalMessages_Good(t *testing.T) {
// TestIntegration_MultiPeerTopology verifies that a controller can
// simultaneously communicate with multiple workers.
func TestIntegration_MultiPeerTopology_Good(t *testing.T) {
controllerNM := newTestNodeManager(t, "multi-controller", RoleController)
controllerReg := newTestPeerRegistry(t)
func TestIntegration_MultiPeerTopology(t *testing.T) {
controllerNM := testNode(t, "multi-controller", RoleController)
controllerReg := testRegistry(t)
controllerTransport := NewTransport(controllerNM, controllerReg, DefaultTransportConfig())
t.Cleanup(func() { controllerTransport.Stop() })
@@ -310,7 +312,7 @@ func TestIntegration_MultiPeerTopology_Good(t *testing.T) {
}
time.Sleep(100 * time.Millisecond)
assert.Equal(t, numWorkers, controllerTransport.ConnectedPeerCount(),
assert.Equal(t, numWorkers, controllerTransport.ConnectedPeers(),
"controller should be connected to all workers")
controller := NewController(controllerNM, controllerReg, controllerTransport)
@@ -341,12 +343,13 @@ func TestIntegration_MultiPeerTopology_Good(t *testing.T) {
// TestIntegration_IdentityPersistenceAndReload verifies that a node identity
// can be generated, persisted, and reloaded from disk.
func TestIntegration_IdentityPersistenceAndReload_Good(t *testing.T) {
func TestIntegration_IdentityPersistenceAndReload(t *testing.T) {
dir := t.TempDir()
keyPath, configPath := testNodeManagerPaths(dir)
keyPath := filepath.Join(dir, "private.key")
configPath := filepath.Join(dir, "node.json")
// Create and persist identity.
nm1, err := NewNodeManagerFromPaths(keyPath, configPath)
nm1, err := NewNodeManagerWithPaths(keyPath, configPath)
require.NoError(t, err)
require.NoError(t, nm1.GenerateIdentity("persistent-node", RoleDual))
@@ -354,7 +357,7 @@ func TestIntegration_IdentityPersistenceAndReload_Good(t *testing.T) {
require.NotNil(t, original)
// Reload from disk.
nm2, err := NewNodeManagerFromPaths(keyPath, configPath)
nm2, err := NewNodeManagerWithPaths(keyPath, configPath)
require.NoError(t, err)
require.True(t, nm2.HasIdentity(), "identity should be loaded from disk")
@@ -383,7 +386,10 @@ func TestIntegration_IdentityPersistenceAndReload_Good(t *testing.T) {
// stmfGenerateKeyPair is a helper that generates a keypair and returns
// the public key as base64 (for use in DeriveSharedSecret tests).
func stmfGenerateKeyPair(dir string) (string, error) {
nm, err := NewNodeManagerFromPaths(testNodeManagerPaths(dir))
nm, err := NewNodeManagerWithPaths(
filepath.Join(dir, "private.key"),
filepath.Join(dir, "node.json"),
)
if err != nil {
return "", err
}
@@ -393,11 +399,12 @@ func stmfGenerateKeyPair(dir string) (string, error) {
return nm.GetIdentity().PublicKey, nil
}
// TestIntegration_UEPSFullRoundTrip exercises a complete UEPS packet
// lifecycle: build, sign, transmit (simulated), read, verify, dispatch.
func TestIntegration_UEPSFullRoundTrip_Ugly(t *testing.T) {
nodeA := newTestNodeManager(t, "ueps-node-a", RoleController)
nodeB := newTestNodeManager(t, "ueps-node-b", RoleWorker)
func TestIntegration_UEPSFullRoundTrip(t *testing.T) {
nodeA := testNode(t, "ueps-node-a", RoleController)
nodeB := testNode(t, "ueps-node-b", RoleWorker)
bPubKey := nodeB.GetIdentity().PublicKey
sharedSecret, err := nodeA.DeriveSharedSecret(bPubKey)
@@ -446,9 +453,9 @@ func TestIntegration_UEPSFullRoundTrip_Ugly(t *testing.T) {
// TestIntegration_UEPSIntegrityFailure verifies that a tampered UEPS packet
// is rejected by HMAC verification.
func TestIntegration_UEPSIntegrityFailure_Bad(t *testing.T) {
nodeA := newTestNodeManager(t, "integrity-a", RoleController)
nodeB := newTestNodeManager(t, "integrity-b", RoleWorker)
func TestIntegration_UEPSIntegrityFailure(t *testing.T) {
nodeA := testNode(t, "integrity-a", RoleController)
nodeB := testNode(t, "integrity-b", RoleWorker)
bPubKey := nodeB.GetIdentity().PublicKey
sharedSecret, err := nodeA.DeriveSharedSecret(bPubKey)
@@ -477,15 +484,15 @@ func TestIntegration_UEPSIntegrityFailure_Bad(t *testing.T) {
// TestIntegration_AllowlistHandshakeRejection verifies that a peer not in the
// allowlist is rejected during the WebSocket handshake.
func TestIntegration_AllowlistHandshakeRejection_Bad(t *testing.T) {
workerNM := newTestNodeManager(t, "allowlist-worker", RoleWorker)
workerReg := newTestPeerRegistry(t)
func TestIntegration_AllowlistHandshakeRejection(t *testing.T) {
workerNM := testNode(t, "allowlist-worker", RoleWorker)
workerReg := testRegistry(t)
workerReg.SetAuthMode(PeerAuthAllowlist)
workerTransport := NewTransport(workerNM, workerReg, DefaultTransportConfig())
mux := http.NewServeMux()
mux.HandleFunc("/ws", workerTransport.handleWebSocketUpgrade)
mux.HandleFunc("/ws", workerTransport.handleWSUpgrade)
ts := httptest.NewServer(mux)
t.Cleanup(func() {
workerTransport.Stop()
@@ -494,8 +501,8 @@ func TestIntegration_AllowlistHandshakeRejection_Bad(t *testing.T) {
u, _ := url.Parse(ts.URL)
controllerNM := newTestNodeManager(t, "rejected-controller", RoleController)
controllerReg := newTestPeerRegistry(t)
controllerNM := testNode(t, "rejected-controller", RoleController)
controllerReg := testRegistry(t)
controllerTransport := NewTransport(controllerNM, controllerReg, DefaultTransportConfig())
t.Cleanup(func() { controllerTransport.Stop() })
@@ -514,22 +521,22 @@ func TestIntegration_AllowlistHandshakeRejection_Bad(t *testing.T) {
// TestIntegration_AllowlistHandshakeAccepted verifies that an allowlisted
// peer can connect successfully.
func TestIntegration_AllowlistHandshakeAccepted_Good(t *testing.T) {
workerNM := newTestNodeManager(t, "allowlist-worker-ok", RoleWorker)
workerReg := newTestPeerRegistry(t)
func TestIntegration_AllowlistHandshakeAccepted(t *testing.T) {
workerNM := testNode(t, "allowlist-worker-ok", RoleWorker)
workerReg := testRegistry(t)
workerReg.SetAuthMode(PeerAuthAllowlist)
controllerNM := newTestNodeManager(t, "allowed-controller", RoleController)
controllerReg := newTestPeerRegistry(t)
controllerNM := testNode(t, "allowed-controller", RoleController)
controllerReg := testRegistry(t)
workerReg.AllowPublicKey(controllerNM.GetIdentity().PublicKey)
workerTransport := NewTransport(workerNM, workerReg, DefaultTransportConfig())
worker := NewWorker(workerNM, workerTransport)
worker.RegisterOnTransport()
worker.RegisterWithTransport()
mux := http.NewServeMux()
mux.HandleFunc("/ws", workerTransport.handleWebSocketUpgrade)
mux.HandleFunc("/ws", workerTransport.handleWSUpgrade)
ts := httptest.NewServer(mux)
t.Cleanup(func() {
workerTransport.Stop()
@@ -556,7 +563,7 @@ func TestIntegration_AllowlistHandshakeAccepted_Good(t *testing.T) {
// TestIntegration_DispatcherWithRealUEPSPackets builds real UEPS packets
// from wire bytes and routes them through the dispatcher.
func TestIntegration_DispatcherWithRealUEPSPackets_Good(t *testing.T) {
func TestIntegration_DispatcherWithRealUEPSPackets(t *testing.T) {
sharedSecret := make([]byte, 32)
for i := range sharedSecret {
sharedSecret[i] = byte(i ^ 0x42)
@@ -572,7 +579,7 @@ func TestIntegration_DispatcherWithRealUEPSPackets_Good(t *testing.T) {
}{
{IntentHandshake, "handshake", "hello"},
{IntentCompute, "compute", `{"job":"123"}`},
{IntentPauseExecution, "pause-execution", "pause"},
{IntentRehab, "rehab", "pause"},
{IntentCustom, "custom", "app-specific-data"},
}
@@ -607,11 +614,11 @@ func TestIntegration_DispatcherWithRealUEPSPackets_Good(t *testing.T) {
// TestIntegration_MessageSerialiseDeserialise verifies that messages survive
// the full serialisation/encryption/decryption/deserialisation pipeline
// with all fields intact.
func TestIntegration_MessageSerialiseDeserialise_Good(t *testing.T) {
func TestIntegration_MessageSerialiseDeserialise(t *testing.T) {
tp := setupTestTransportPair(t)
pc := tp.connectClient(t)
original, err := NewMessage(MessageStats, tp.ClientNode.GetIdentity().ID, tp.ServerNode.GetIdentity().ID, StatsPayload{
original, err := NewMessage(MsgStats, tp.ClientNode.GetIdentity().ID, tp.ServerNode.GetIdentity().ID, StatsPayload{
NodeID: "test-node",
NodeName: "test-name",
Miners: []MinerStatsItem{
@@ -646,18 +653,18 @@ func TestIntegration_MessageSerialiseDeserialise_Good(t *testing.T) {
assert.Equal(t, original.ReplyTo, decrypted.ReplyTo)
var originalStats, decryptedStats StatsPayload
testJSONUnmarshal(t, original.Payload, &originalStats)
testJSONUnmarshal(t, decrypted.Payload, &decryptedStats)
require.NoError(t, json.Unmarshal(original.Payload, &originalStats))
require.NoError(t, json.Unmarshal(decrypted.Payload, &decryptedStats))
assert.Equal(t, originalStats, decryptedStats)
}
// TestIntegration_RemoteStats_EndToEnd tests the full stats retrieval flow
// TestIntegration_GetRemoteStats_EndToEnd tests the full stats retrieval flow
// across a real WebSocket connection.
func TestIntegration_RemoteStats_EndToEnd_Good(t *testing.T) {
func TestIntegration_GetRemoteStats_EndToEnd(t *testing.T) {
tp := setupTestTransportPair(t)
worker := NewWorker(tp.ServerNode, tp.Server)
worker.RegisterOnTransport()
worker.RegisterWithTransport()
controller := NewController(tp.ClientNode, tp.ClientReg, tp.Client)
@@ -667,7 +674,7 @@ func TestIntegration_RemoteStats_EndToEnd_Good(t *testing.T) {
serverID := tp.ServerNode.GetIdentity().ID
stats, err := controller.GetRemoteStats(serverID)
require.NoError(t, err, "RemoteStats should succeed end-to-end")
require.NoError(t, err, "GetRemoteStats should succeed end-to-end")
require.NotNil(t, stats)
assert.Equal(t, serverID, stats.NodeID)
assert.Equal(t, "server", stats.NodeName)


@@ -10,22 +10,24 @@ import (
"time"
)
// flags := FlagRequest | FlagResponse
// Levin protocol flags.
const (
FlagRequest uint32 = 0x00000001
FlagResponse uint32 = 0x00000002
)
// header.ProtocolVersion = LevinProtocolVersion
// LevinProtocolVersion is the protocol version field written into every header.
const LevinProtocolVersion uint32 = 1
// connection.ReadTimeout = DefaultReadTimeout
// Default timeout values for Connection read and write operations.
const (
DefaultReadTimeout = 120 * time.Second
DefaultWriteTimeout = 30 * time.Second
)
// connection := NewConnection(networkConnection)
// Connection wraps a net.Conn and provides framed Levin packet I/O.
// All writes are serialised by an internal mutex, making it safe to call
// WritePacket and WriteResponse concurrently from multiple goroutines.
type Connection struct {
// MaxPayloadSize is the upper bound accepted for incoming payloads.
// Defaults to the package-level MaxPayloadSize (100 MB).
@@ -37,64 +39,66 @@ type Connection struct {
// WriteTimeout is the deadline applied before each write call.
WriteTimeout time.Duration
networkConnection net.Conn
writeMutex sync.Mutex
conn net.Conn
writeMu sync.Mutex
}
// connection := NewConnection(networkConnection)
func NewConnection(connection net.Conn) *Connection {
// NewConnection creates a Connection that wraps conn with sensible defaults.
func NewConnection(conn net.Conn) *Connection {
return &Connection{
MaxPayloadSize: MaxPayloadSize,
ReadTimeout: DefaultReadTimeout,
WriteTimeout: DefaultWriteTimeout,
networkConnection: connection,
MaxPayloadSize: MaxPayloadSize,
ReadTimeout: DefaultReadTimeout,
WriteTimeout: DefaultWriteTimeout,
conn: conn,
}
}
// err := connection.WritePacket(CommandPing, payload, true)
func (connection *Connection) WritePacket(commandID uint32, payload []byte, expectResponse bool) error {
header := Header{
// WritePacket sends a Levin request or notification. It builds a 33-byte
// header, then writes header + payload atomically under the write mutex.
func (c *Connection) WritePacket(cmd uint32, payload []byte, expectResponse bool) error {
h := Header{
Signature: Signature,
PayloadSize: uint64(len(payload)),
ExpectResponse: expectResponse,
Command: commandID,
Command: cmd,
ReturnCode: ReturnOK,
Flags: FlagRequest,
ProtocolVersion: LevinProtocolVersion,
}
return connection.writeFrame(&header, payload)
return c.writeFrame(&h, payload)
}
// err := connection.WriteResponse(CommandPing, payload, ReturnOK)
func (connection *Connection) WriteResponse(commandID uint32, payload []byte, returnCode int32) error {
header := Header{
// WriteResponse sends a Levin response packet with the given return code.
func (c *Connection) WriteResponse(cmd uint32, payload []byte, returnCode int32) error {
h := Header{
Signature: Signature,
PayloadSize: uint64(len(payload)),
ExpectResponse: false,
Command: commandID,
Command: cmd,
ReturnCode: returnCode,
Flags: FlagResponse,
ProtocolVersion: LevinProtocolVersion,
}
return connection.writeFrame(&header, payload)
return c.writeFrame(&h, payload)
}
func (connection *Connection) writeFrame(header *Header, payload []byte) error {
headerBytes := EncodeHeader(header)
// writeFrame serialises header + payload and writes them atomically.
func (c *Connection) writeFrame(h *Header, payload []byte) error {
buf := EncodeHeader(h)
connection.writeMutex.Lock()
defer connection.writeMutex.Unlock()
c.writeMu.Lock()
defer c.writeMu.Unlock()
if err := connection.networkConnection.SetWriteDeadline(time.Now().Add(connection.WriteTimeout)); err != nil {
if err := c.conn.SetWriteDeadline(time.Now().Add(c.WriteTimeout)); err != nil {
return err
}
if _, err := connection.networkConnection.Write(headerBytes[:]); err != nil {
if _, err := c.conn.Write(buf[:]); err != nil {
return err
}
if len(payload) > 0 {
if _, err := connection.networkConnection.Write(payload); err != nil {
if _, err := c.conn.Write(payload); err != nil {
return err
}
}
@@ -102,46 +106,49 @@ func (connection *Connection) writeFrame(header *Header, payload []byte) error {
return nil
}
// header, payload, err := connection.ReadPacket()
func (connection *Connection) ReadPacket() (Header, []byte, error) {
if err := connection.networkConnection.SetReadDeadline(time.Now().Add(connection.ReadTimeout)); err != nil {
// ReadPacket reads exactly 33 header bytes, validates the signature,
// checks the payload size against MaxPayloadSize, then reads exactly
// PayloadSize bytes of payload data.
func (c *Connection) ReadPacket() (Header, []byte, error) {
if err := c.conn.SetReadDeadline(time.Now().Add(c.ReadTimeout)); err != nil {
return Header{}, nil, err
}
var headerBytes [HeaderSize]byte
if _, err := io.ReadFull(connection.networkConnection, headerBytes[:]); err != nil {
// Read header.
var hdrBuf [HeaderSize]byte
if _, err := io.ReadFull(c.conn, hdrBuf[:]); err != nil {
return Header{}, nil, err
}
header, err := DecodeHeader(headerBytes)
h, err := DecodeHeader(hdrBuf)
if err != nil {
return Header{}, nil, err
}
// Check against the connection-specific payload limit.
if header.PayloadSize > connection.MaxPayloadSize {
return Header{}, nil, ErrorPayloadTooBig
if h.PayloadSize > c.MaxPayloadSize {
return Header{}, nil, ErrPayloadTooBig
}
// Empty payload is valid — return nil data without allocation.
if header.PayloadSize == 0 {
return header, nil, nil
if h.PayloadSize == 0 {
return h, nil, nil
}
payload := make([]byte, header.PayloadSize)
if _, err := io.ReadFull(connection.networkConnection, payload); err != nil {
payload := make([]byte, h.PayloadSize)
if _, err := io.ReadFull(c.conn, payload); err != nil {
return Header{}, nil, err
}
return header, payload, nil
return h, payload, nil
}
// err := connection.Close()
func (connection *Connection) Close() error {
return connection.networkConnection.Close()
// Close closes the underlying network connection.
func (c *Connection) Close() error {
return c.conn.Close()
}
// addr := connection.RemoteAddr()
func (connection *Connection) RemoteAddr() string {
return connection.networkConnection.RemoteAddr().String()
// RemoteAddr returns the remote address of the underlying connection as a string.
func (c *Connection) RemoteAddr() string {
return c.conn.RemoteAddr().String()
}


@@ -12,7 +12,7 @@ import (
"github.com/stretchr/testify/require"
)
func TestConnection_RoundTrip_Ugly(t *testing.T) {
func TestConnection_RoundTrip(t *testing.T) {
a, b := net.Pipe()
defer a.Close()
defer b.Close()
@@ -41,7 +41,7 @@ func TestConnection_RoundTrip_Ugly(t *testing.T) {
assert.Equal(t, payload, data)
}
func TestConnection_EmptyPayload_Ugly(t *testing.T) {
func TestConnection_EmptyPayload(t *testing.T) {
a, b := net.Pipe()
defer a.Close()
defer b.Close()
@@ -64,7 +64,7 @@ func TestConnection_EmptyPayload_Ugly(t *testing.T) {
assert.Nil(t, data)
}
func TestConnection_Response_Good(t *testing.T) {
func TestConnection_Response(t *testing.T) {
a, b := net.Pipe()
defer a.Close()
defer b.Close()
@@ -73,7 +73,7 @@ func TestConnection_Response_Good(t *testing.T) {
receiver := NewConnection(b)
payload := []byte("response data")
retCode := ReturnErrorFormat
retCode := ReturnErrFormat
errCh := make(chan error, 1)
go func() {
@@ -91,7 +91,7 @@ func TestConnection_Response_Good(t *testing.T) {
assert.Equal(t, payload, data)
}
func TestConnection_PayloadTooBig_Bad(t *testing.T) {
func TestConnection_PayloadTooBig(t *testing.T) {
a, b := net.Pipe()
defer a.Close()
defer b.Close()
@@ -120,12 +120,12 @@ func TestConnection_PayloadTooBig_Bad(t *testing.T) {
_, _, err := receiver.ReadPacket()
require.Error(t, err)
assert.ErrorIs(t, err, ErrorPayloadTooBig)
assert.ErrorIs(t, err, ErrPayloadTooBig)
require.NoError(t, <-errCh)
}
func TestConnection_ReadTimeout_Bad(t *testing.T) {
func TestConnection_ReadTimeout(t *testing.T) {
a, b := net.Pipe()
defer a.Close()
defer b.Close()
@@ -143,7 +143,7 @@ func TestConnection_ReadTimeout_Bad(t *testing.T) {
assert.True(t, netErr.Timeout(), "expected timeout error")
}
func TestConnection_RemoteAddr_Good(t *testing.T) {
func TestConnection_RemoteAddr(t *testing.T) {
a, b := net.Pipe()
defer a.Close()
defer b.Close()
@@ -153,7 +153,7 @@ func TestConnection_RemoteAddr_Good(t *testing.T) {
assert.NotEmpty(t, addr)
}
func TestConnection_Close_Ugly(t *testing.T) {
func TestConnection_Close(t *testing.T) {
a, b := net.Pipe()
defer b.Close()


@@ -8,28 +8,28 @@ package levin
import (
"encoding/binary"
core "dappco.re/go/core"
coreerr "dappco.re/go/core/log"
)
// headerBytes := make([]byte, HeaderSize)
// HeaderSize is the exact byte length of a serialised Levin header.
const HeaderSize = 33
// header.Signature = Signature
// Signature is the magic value that opens every Levin packet.
const Signature uint64 = 0x0101010101012101
// header.PayloadSize <= MaxPayloadSize
// MaxPayloadSize is the upper bound we accept for a single payload (100 MB).
const MaxPayloadSize uint64 = 100 * 1024 * 1024
// Return-code constants carried in every Levin response.
const (
// returnCode := ReturnOK
ReturnOK int32 = 0
ReturnErrorConnection int32 = -1
ReturnErrorFormat int32 = -7
ReturnErrorSignature int32 = -13
ReturnOK int32 = 0
ReturnErrConnection int32 = -1
ReturnErrFormat int32 = -7
ReturnErrSignature int32 = -13
)
// Command IDs for the CryptoNote P2P layer.
const (
// commandID := CommandHandshake
CommandHandshake uint32 = 1001
CommandTimedSync uint32 = 1002
CommandPing uint32 = 1003
@@ -41,13 +41,13 @@ const (
CommandResponseChain uint32 = 2007
)
// Sentinel errors returned by DecodeHeader.
var (
// err := ErrorBadSignature
ErrorBadSignature = core.E("levin", "bad signature", nil)
ErrorPayloadTooBig = core.E("levin", "payload exceeds maximum size", nil)
ErrBadSignature = coreerr.E("levin", "bad signature", nil)
ErrPayloadTooBig = coreerr.E("levin", "payload exceeds maximum size", nil)
)
// header := Header{Command: CommandHandshake, ExpectResponse: true}
// Header is the 33-byte packed header that prefixes every Levin message.
type Header struct {
Signature uint64
PayloadSize uint64
@@ -58,38 +58,39 @@ type Header struct {
ProtocolVersion uint32
}
// encoded := EncodeHeader(header)
func EncodeHeader(header *Header) [HeaderSize]byte {
var headerBytes [HeaderSize]byte
binary.LittleEndian.PutUint64(headerBytes[0:8], header.Signature)
binary.LittleEndian.PutUint64(headerBytes[8:16], header.PayloadSize)
if header.ExpectResponse {
headerBytes[16] = 0x01
// EncodeHeader serialises h into a fixed-size 33-byte array (little-endian).
func EncodeHeader(h *Header) [HeaderSize]byte {
var buf [HeaderSize]byte
binary.LittleEndian.PutUint64(buf[0:8], h.Signature)
binary.LittleEndian.PutUint64(buf[8:16], h.PayloadSize)
if h.ExpectResponse {
buf[16] = 0x01
} else {
headerBytes[16] = 0x00
buf[16] = 0x00
}
binary.LittleEndian.PutUint32(headerBytes[17:21], header.Command)
binary.LittleEndian.PutUint32(headerBytes[21:25], uint32(header.ReturnCode))
binary.LittleEndian.PutUint32(headerBytes[25:29], header.Flags)
binary.LittleEndian.PutUint32(headerBytes[29:33], header.ProtocolVersion)
return headerBytes
binary.LittleEndian.PutUint32(buf[17:21], h.Command)
binary.LittleEndian.PutUint32(buf[21:25], uint32(h.ReturnCode))
binary.LittleEndian.PutUint32(buf[25:29], h.Flags)
binary.LittleEndian.PutUint32(buf[29:33], h.ProtocolVersion)
return buf
}
// header, err := DecodeHeader(headerBytes)
func DecodeHeader(headerBytes [HeaderSize]byte) (Header, error) {
var header Header
header.Signature = binary.LittleEndian.Uint64(headerBytes[0:8])
if header.Signature != Signature {
return Header{}, ErrorBadSignature
// DecodeHeader deserialises a 33-byte array into a Header, validating
// the magic signature.
func DecodeHeader(buf [HeaderSize]byte) (Header, error) {
var h Header
h.Signature = binary.LittleEndian.Uint64(buf[0:8])
if h.Signature != Signature {
return Header{}, ErrBadSignature
}
header.PayloadSize = binary.LittleEndian.Uint64(headerBytes[8:16])
if header.PayloadSize > MaxPayloadSize {
return Header{}, ErrorPayloadTooBig
h.PayloadSize = binary.LittleEndian.Uint64(buf[8:16])
if h.PayloadSize > MaxPayloadSize {
return Header{}, ErrPayloadTooBig
}
header.ExpectResponse = headerBytes[16] == 0x01
header.Command = binary.LittleEndian.Uint32(headerBytes[17:21])
header.ReturnCode = int32(binary.LittleEndian.Uint32(headerBytes[21:25]))
header.Flags = binary.LittleEndian.Uint32(headerBytes[25:29])
header.ProtocolVersion = binary.LittleEndian.Uint32(headerBytes[29:33])
return header, nil
h.ExpectResponse = buf[16] == 0x01
h.Command = binary.LittleEndian.Uint32(buf[17:21])
h.ReturnCode = int32(binary.LittleEndian.Uint32(buf[21:25]))
h.Flags = binary.LittleEndian.Uint32(buf[25:29])
h.ProtocolVersion = binary.LittleEndian.Uint32(buf[29:33])
return h, nil
}


@@ -11,11 +11,11 @@ import (
"github.com/stretchr/testify/require"
)
func TestHeader_SizeIs33_Good(t *testing.T) {
func TestHeaderSizeIs33(t *testing.T) {
assert.Equal(t, 33, HeaderSize)
}
func TestHeader_EncodeHeader_KnownValues_Good(t *testing.T) {
func TestEncodeHeader_KnownValues(t *testing.T) {
h := &Header{
Signature: Signature,
PayloadSize: 256,
@@ -56,7 +56,7 @@ func TestHeader_EncodeHeader_KnownValues_Good(t *testing.T) {
assert.Equal(t, uint32(0), pv)
}
func TestHeader_EncodeHeader_ExpectResponseFalse_Good(t *testing.T) {
func TestEncodeHeader_ExpectResponseFalse(t *testing.T) {
h := &Header{
Signature: Signature,
PayloadSize: 42,
@@ -68,26 +68,26 @@ func TestHeader_EncodeHeader_ExpectResponseFalse_Good(t *testing.T) {
assert.Equal(t, byte(0x00), buf[16])
}
func TestHeader_EncodeHeader_NegativeReturnCode_Good(t *testing.T) {
func TestEncodeHeader_NegativeReturnCode(t *testing.T) {
h := &Header{
Signature: Signature,
PayloadSize: 0,
ExpectResponse: false,
Command: CommandHandshake,
ReturnCode: ReturnErrorFormat,
ReturnCode: ReturnErrFormat,
}
buf := EncodeHeader(h)
rc := int32(binary.LittleEndian.Uint32(buf[21:25]))
assert.Equal(t, ReturnErrorFormat, rc)
assert.Equal(t, ReturnErrFormat, rc)
}
func TestHeader_DecodeHeader_RoundTrip_Ugly(t *testing.T) {
func TestDecodeHeader_RoundTrip(t *testing.T) {
original := &Header{
Signature: Signature,
PayloadSize: 1024,
ExpectResponse: true,
Command: CommandTimedSync,
ReturnCode: ReturnErrorConnection,
ReturnCode: ReturnErrConnection,
Flags: 0,
ProtocolVersion: 0,
}
@@ -105,7 +105,7 @@ func TestHeader_DecodeHeader_RoundTrip_Ugly(t *testing.T) {
assert.Equal(t, original.ProtocolVersion, decoded.ProtocolVersion)
}
func TestHeader_DecodeHeader_AllCommands_Good(t *testing.T) {
func TestDecodeHeader_AllCommands(t *testing.T) {
commands := []uint32{
CommandHandshake,
CommandTimedSync,
@@ -131,7 +131,7 @@ func TestHeader_DecodeHeader_AllCommands_Good(t *testing.T) {
}
}
func TestHeader_DecodeHeader_BadSignature_Bad(t *testing.T) {
func TestDecodeHeader_BadSignature(t *testing.T) {
h := &Header{
Signature: 0xDEADBEEF,
PayloadSize: 0,
@@ -140,10 +140,10 @@ func TestHeader_DecodeHeader_BadSignature_Bad(t *testing.T) {
buf := EncodeHeader(h)
_, err := DecodeHeader(buf)
require.Error(t, err)
assert.ErrorIs(t, err, ErrorBadSignature)
assert.ErrorIs(t, err, ErrBadSignature)
}
func TestHeader_DecodeHeader_PayloadTooBig_Bad(t *testing.T) {
func TestDecodeHeader_PayloadTooBig(t *testing.T) {
h := &Header{
Signature: Signature,
PayloadSize: MaxPayloadSize + 1,
@@ -152,10 +152,10 @@ func TestHeader_DecodeHeader_PayloadTooBig_Bad(t *testing.T) {
buf := EncodeHeader(h)
_, err := DecodeHeader(buf)
require.Error(t, err)
assert.ErrorIs(t, err, ErrorPayloadTooBig)
assert.ErrorIs(t, err, ErrPayloadTooBig)
}
func TestHeader_DecodeHeader_MaxPayloadExact_Ugly(t *testing.T) {
func TestDecodeHeader_MaxPayloadExact(t *testing.T) {
h := &Header{
Signature: Signature,
PayloadSize: MaxPayloadSize,


@@ -5,11 +5,12 @@ package levin
import (
"encoding/binary"
"fmt"
"maps"
"math"
"slices"
core "dappco.re/go/core"
coreerr "dappco.re/go/core/log"
)
// Portable storage signatures and version (9-byte header).
@@ -40,18 +41,20 @@ const (
// Sentinel errors for storage encoding and decoding.
var (
ErrorStorageBadSignature = core.E("levin.storage", "bad storage signature", nil)
ErrorStorageTruncated = core.E("levin.storage", "truncated storage data", nil)
ErrorStorageBadVersion = core.E("levin.storage", "unsupported storage version", nil)
ErrorStorageNameTooLong = core.E("levin.storage", "entry name exceeds 255 bytes", nil)
ErrorStorageTypeMismatch = core.E("levin.storage", "value type mismatch", nil)
ErrorStorageUnknownType = core.E("levin.storage", "unknown type tag", nil)
ErrStorageBadSignature = coreerr.E("levin.storage", "bad storage signature", nil)
ErrStorageTruncated = coreerr.E("levin.storage", "truncated storage data", nil)
ErrStorageBadVersion = coreerr.E("levin.storage", "unsupported storage version", nil)
ErrStorageNameTooLong = coreerr.E("levin.storage", "entry name exceeds 255 bytes", nil)
ErrStorageTypeMismatch = coreerr.E("levin.storage", "value type mismatch", nil)
ErrStorageUnknownType = coreerr.E("levin.storage", "unknown type tag", nil)
)
// section := Section{"id": StringValue([]byte("peer-1"))}
// Section is an ordered map of named values forming a portable storage section.
// Field iteration order is always alphabetical by key for deterministic encoding.
type Section map[string]Value
// value := StringValue([]byte("peer-1"))
// Value holds a typed portable storage value. Use the constructor functions
// (Uint64Val, StringVal, ObjectVal, etc.) to create instances.
type Value struct {
Type uint8
@@ -74,162 +77,162 @@ type Value struct {
// Scalar constructors
// ---------------------------------------------------------------------------
// value := Uint64Value(42)
func Uint64Value(value uint64) Value { return Value{Type: TypeUint64, uintVal: value} }
// Uint64Val creates a Value of TypeUint64.
func Uint64Val(v uint64) Value { return Value{Type: TypeUint64, uintVal: v} }
// value := Uint32Value(42)
func Uint32Value(value uint32) Value { return Value{Type: TypeUint32, uintVal: uint64(value)} }
// Uint32Val creates a Value of TypeUint32.
func Uint32Val(v uint32) Value { return Value{Type: TypeUint32, uintVal: uint64(v)} }
// value := Uint16Value(42)
func Uint16Value(value uint16) Value { return Value{Type: TypeUint16, uintVal: uint64(value)} }
// Uint16Val creates a Value of TypeUint16.
func Uint16Val(v uint16) Value { return Value{Type: TypeUint16, uintVal: uint64(v)} }
// value := Uint8Value(42)
func Uint8Value(value uint8) Value { return Value{Type: TypeUint8, uintVal: uint64(value)} }
// Uint8Val creates a Value of TypeUint8.
func Uint8Val(v uint8) Value { return Value{Type: TypeUint8, uintVal: uint64(v)} }
// value := Int64Value(42)
func Int64Value(value int64) Value { return Value{Type: TypeInt64, intVal: value} }
// Int64Val creates a Value of TypeInt64.
func Int64Val(v int64) Value { return Value{Type: TypeInt64, intVal: v} }
// value := Int32Value(42)
func Int32Value(value int32) Value { return Value{Type: TypeInt32, intVal: int64(value)} }
// Int32Val creates a Value of TypeInt32.
func Int32Val(v int32) Value { return Value{Type: TypeInt32, intVal: int64(v)} }
// value := Int16Value(42)
func Int16Value(value int16) Value { return Value{Type: TypeInt16, intVal: int64(value)} }
// Int16Val creates a Value of TypeInt16.
func Int16Val(v int16) Value { return Value{Type: TypeInt16, intVal: int64(v)} }
// value := Int8Value(42)
func Int8Value(value int8) Value { return Value{Type: TypeInt8, intVal: int64(value)} }
// Int8Val creates a Value of TypeInt8.
func Int8Val(v int8) Value { return Value{Type: TypeInt8, intVal: int64(v)} }
// value := BoolValue(true)
func BoolValue(value bool) Value { return Value{Type: TypeBool, boolVal: value} }
// BoolVal creates a Value of TypeBool.
func BoolVal(v bool) Value { return Value{Type: TypeBool, boolVal: v} }
// value := DoubleValue(3.14)
func DoubleValue(value float64) Value { return Value{Type: TypeDouble, floatVal: value} }
// DoubleVal creates a Value of TypeDouble.
func DoubleVal(v float64) Value { return Value{Type: TypeDouble, floatVal: v} }
// value := StringValue([]byte("hello"))
func StringValue(value []byte) Value { return Value{Type: TypeString, bytesVal: value} }
// StringVal creates a Value of TypeString. The slice is not copied.
func StringVal(v []byte) Value { return Value{Type: TypeString, bytesVal: v} }
// value := ObjectValue(Section{"id": StringValue([]byte("peer-1"))})
func ObjectValue(section Section) Value { return Value{Type: TypeObject, objectVal: section} }
// ObjectVal creates a Value of TypeObject wrapping a nested Section.
func ObjectVal(s Section) Value { return Value{Type: TypeObject, objectVal: s} }
// ---------------------------------------------------------------------------
// Array constructors
// ---------------------------------------------------------------------------
// value := Uint64ArrayValue([]uint64{1, 2, 3})
func Uint64ArrayValue(values []uint64) Value {
return Value{Type: ArrayFlag | TypeUint64, uint64Array: values}
// Uint64ArrayVal creates a typed array of uint64 values.
func Uint64ArrayVal(vs []uint64) Value {
return Value{Type: ArrayFlag | TypeUint64, uint64Array: vs}
}
// value := Uint32ArrayValue([]uint32{1, 2, 3})
func Uint32ArrayValue(values []uint32) Value {
return Value{Type: ArrayFlag | TypeUint32, uint32Array: values}
// Uint32ArrayVal creates a typed array of uint32 values.
func Uint32ArrayVal(vs []uint32) Value {
return Value{Type: ArrayFlag | TypeUint32, uint32Array: vs}
}
// value := StringArrayValue([][]byte{[]byte("a"), []byte("b")})
func StringArrayValue(values [][]byte) Value {
return Value{Type: ArrayFlag | TypeString, stringArray: values}
// StringArrayVal creates a typed array of byte-string values.
func StringArrayVal(vs [][]byte) Value {
return Value{Type: ArrayFlag | TypeString, stringArray: vs}
}
// value := ObjectArrayValue([]Section{{"id": StringValue([]byte("peer-1"))}})
func ObjectArrayValue(values []Section) Value {
return Value{Type: ArrayFlag | TypeObject, objectArray: values}
// ObjectArrayVal creates a typed array of Section values.
func ObjectArrayVal(vs []Section) Value {
return Value{Type: ArrayFlag | TypeObject, objectArray: vs}
}
// ---------------------------------------------------------------------------
// Scalar accessors
// ---------------------------------------------------------------------------
// value, err := Uint64Value(42).AsUint64()
// AsUint64 returns the uint64 value or an error on type mismatch.
func (v Value) AsUint64() (uint64, error) {
if v.Type != TypeUint64 {
return 0, ErrorStorageTypeMismatch
return 0, ErrStorageTypeMismatch
}
return v.uintVal, nil
}
// value, err := Uint32Value(42).AsUint32()
// AsUint32 returns the uint32 value or an error on type mismatch.
func (v Value) AsUint32() (uint32, error) {
if v.Type != TypeUint32 {
return 0, ErrorStorageTypeMismatch
return 0, ErrStorageTypeMismatch
}
return uint32(v.uintVal), nil
}
// value, err := Uint16Value(42).AsUint16()
// AsUint16 returns the uint16 value or an error on type mismatch.
func (v Value) AsUint16() (uint16, error) {
if v.Type != TypeUint16 {
return 0, ErrorStorageTypeMismatch
return 0, ErrStorageTypeMismatch
}
return uint16(v.uintVal), nil
}
// value, err := Uint8Value(42).AsUint8()
// AsUint8 returns the uint8 value or an error on type mismatch.
func (v Value) AsUint8() (uint8, error) {
if v.Type != TypeUint8 {
return 0, ErrorStorageTypeMismatch
return 0, ErrStorageTypeMismatch
}
return uint8(v.uintVal), nil
}
// value, err := Int64Value(42).AsInt64()
// AsInt64 returns the int64 value or an error on type mismatch.
func (v Value) AsInt64() (int64, error) {
if v.Type != TypeInt64 {
return 0, ErrorStorageTypeMismatch
return 0, ErrStorageTypeMismatch
}
return v.intVal, nil
}
// value, err := Int32Value(42).AsInt32()
// AsInt32 returns the int32 value or an error on type mismatch.
func (v Value) AsInt32() (int32, error) {
if v.Type != TypeInt32 {
return 0, ErrorStorageTypeMismatch
return 0, ErrStorageTypeMismatch
}
return int32(v.intVal), nil
}
// value, err := Int16Value(42).AsInt16()
// AsInt16 returns the int16 value or an error on type mismatch.
func (v Value) AsInt16() (int16, error) {
if v.Type != TypeInt16 {
return 0, ErrorStorageTypeMismatch
return 0, ErrStorageTypeMismatch
}
return int16(v.intVal), nil
}
// value, err := Int8Value(42).AsInt8()
// AsInt8 returns the int8 value or an error on type mismatch.
func (v Value) AsInt8() (int8, error) {
if v.Type != TypeInt8 {
return 0, ErrorStorageTypeMismatch
return 0, ErrStorageTypeMismatch
}
return int8(v.intVal), nil
}
// value, err := BoolValue(true).AsBool()
// AsBool returns the bool value or an error on type mismatch.
func (v Value) AsBool() (bool, error) {
if v.Type != TypeBool {
return false, ErrorStorageTypeMismatch
return false, ErrStorageTypeMismatch
}
return v.boolVal, nil
}
// value, err := DoubleValue(3.14).AsDouble()
// AsDouble returns the float64 value or an error on type mismatch.
func (v Value) AsDouble() (float64, error) {
if v.Type != TypeDouble {
return 0, ErrorStorageTypeMismatch
return 0, ErrStorageTypeMismatch
}
return v.floatVal, nil
}
// value, err := StringValue([]byte("hello")).AsString()
// AsString returns the byte-string value or an error on type mismatch.
func (v Value) AsString() ([]byte, error) {
if v.Type != TypeString {
return nil, ErrorStorageTypeMismatch
return nil, ErrStorageTypeMismatch
}
return v.bytesVal, nil
}
// section, err := ObjectValue(Section{"id": StringValue([]byte("peer-1"))}).AsSection()
// AsSection returns the nested Section or an error on type mismatch.
func (v Value) AsSection() (Section, error) {
if v.Type != TypeObject {
return nil, ErrorStorageTypeMismatch
return nil, ErrStorageTypeMismatch
}
return v.objectVal, nil
}
@@ -238,34 +241,34 @@ func (v Value) AsSection() (Section, error) {
// Array accessors
// ---------------------------------------------------------------------------
// values, err := Uint64ArrayValue([]uint64{1, 2, 3}).AsUint64Array()
// AsUint64Array returns the []uint64 array or an error on type mismatch.
func (v Value) AsUint64Array() ([]uint64, error) {
if v.Type != (ArrayFlag | TypeUint64) {
return nil, ErrorStorageTypeMismatch
return nil, ErrStorageTypeMismatch
}
return v.uint64Array, nil
}
// values, err := Uint32ArrayValue([]uint32{1, 2, 3}).AsUint32Array()
// AsUint32Array returns the []uint32 array or an error on type mismatch.
func (v Value) AsUint32Array() ([]uint32, error) {
if v.Type != (ArrayFlag | TypeUint32) {
return nil, ErrorStorageTypeMismatch
return nil, ErrStorageTypeMismatch
}
return v.uint32Array, nil
}
// values, err := StringArrayValue([][]byte{[]byte("a"), []byte("b")}).AsStringArray()
// AsStringArray returns the [][]byte array or an error on type mismatch.
func (v Value) AsStringArray() ([][]byte, error) {
if v.Type != (ArrayFlag | TypeString) {
return nil, ErrorStorageTypeMismatch
return nil, ErrStorageTypeMismatch
}
return v.stringArray, nil
}
// values, err := ObjectArrayValue([]Section{{"id": StringValue([]byte("peer-1"))}}).AsSectionArray()
// AsSectionArray returns the []Section array or an error on type mismatch.
func (v Value) AsSectionArray() ([]Section, error) {
if v.Type != (ArrayFlag | TypeObject) {
return nil, ErrorStorageTypeMismatch
return nil, ErrStorageTypeMismatch
}
return v.objectArray, nil
}
@@ -274,26 +277,28 @@ func (v Value) AsSectionArray() ([]Section, error) {
// Encoder
// ---------------------------------------------------------------------------
// data, err := EncodeStorage(section)
func EncodeStorage(section Section) ([]byte, error) {
buffer := make([]byte, 0, 256)
// EncodeStorage serialises a Section to the portable storage binary format,
// including the 9-byte header. Keys are sorted alphabetically to ensure
// deterministic output.
func EncodeStorage(s Section) ([]byte, error) {
buf := make([]byte, 0, 256)
// 9-byte storage header.
var hdr [StorageHeaderSize]byte
binary.LittleEndian.PutUint32(hdr[0:4], StorageSignatureA)
binary.LittleEndian.PutUint32(hdr[4:8], StorageSignatureB)
hdr[8] = StorageVersion
buffer = append(buffer, hdr[:]...)
buf = append(buf, hdr[:]...)
// Encode root section.
out, err := encodeSection(buffer, section)
out, err := encodeSection(buf, s)
if err != nil {
return nil, err
}
return out, nil
}
// encodeSection appends a section (entry count + entries) to buffer.
// encodeSection appends a section (entry count + entries) to buf.
func encodeSection(buf []byte, s Section) ([]byte, error) {
// Sort keys for deterministic output.
keys := slices.Sorted(maps.Keys(s))
@@ -306,7 +311,7 @@ func encodeSection(buf []byte, s Section) ([]byte, error) {
// Name: uint8 length + raw bytes.
if len(name) > 255 {
return nil, ErrorStorageNameTooLong
return nil, ErrStorageNameTooLong
}
buf = append(buf, byte(len(name)))
buf = append(buf, name...)
@@ -389,7 +394,7 @@ func encodeValue(buf []byte, v Value) ([]byte, error) {
return encodeSection(buf, v.objectVal)
default:
return nil, core.E("levin.encodeValue", core.Sprintf("unknown type tag: 0x%02x", v.Type), ErrorStorageUnknownType)
return nil, coreerr.E("levin.encodeValue", fmt.Sprintf("unknown type tag: 0x%02x", v.Type), ErrStorageUnknownType)
}
}
@@ -436,7 +441,7 @@ func encodeArray(buf []byte, v Value) ([]byte, error) {
return buf, nil
default:
return nil, core.E("levin.encodeArray", core.Sprintf("unknown type tag: array of 0x%02x", elemType), ErrorStorageUnknownType)
return nil, coreerr.E("levin.encodeArray", fmt.Sprintf("unknown type tag: array of 0x%02x", elemType), ErrStorageUnknownType)
}
}
@@ -444,10 +449,11 @@ func encodeArray(buf []byte, v Value) ([]byte, error) {
// Decoder
// ---------------------------------------------------------------------------
// section, err := DecodeStorage(data)
// DecodeStorage deserialises portable storage binary data (including the
// 9-byte header) into a Section.
func DecodeStorage(data []byte) (Section, error) {
if len(data) < StorageHeaderSize {
return nil, ErrorStorageTruncated
return nil, ErrStorageTruncated
}
sigA := binary.LittleEndian.Uint32(data[0:4])
@@ -455,22 +461,22 @@ func DecodeStorage(data []byte) (Section, error) {
ver := data[8]
if sigA != StorageSignatureA || sigB != StorageSignatureB {
return nil, ErrorStorageBadSignature
return nil, ErrStorageBadSignature
}
if ver != StorageVersion {
return nil, ErrorStorageBadVersion
return nil, ErrStorageBadVersion
}
s, _, err := decodeSection(data[StorageHeaderSize:])
return s, err
}
// decodeSection reads a section from buffer and returns the section plus
// decodeSection reads a section from buf and returns the section plus
// the number of bytes consumed.
func decodeSection(buf []byte) (Section, int, error) {
count, n, err := UnpackVarint(buf)
if err != nil {
return nil, 0, core.E("levin.decodeSection", "section entry count", err)
return nil, 0, coreerr.E("levin.decodeSection", "section entry count", err)
}
off := n
@@ -479,21 +485,21 @@ func decodeSection(buf []byte) (Section, int, error) {
for range count {
// Name length (1 byte).
if off >= len(buf) {
return nil, 0, ErrorStorageTruncated
return nil, 0, ErrStorageTruncated
}
nameLen := int(buf[off])
off++
// Name bytes.
if off+nameLen > len(buf) {
return nil, 0, ErrorStorageTruncated
return nil, 0, ErrStorageTruncated
}
name := string(buf[off : off+nameLen])
off += nameLen
// Type tag (1 byte).
if off >= len(buf) {
return nil, 0, ErrorStorageTruncated
return nil, 0, ErrStorageTruncated
}
tag := buf[off]
off++
@@ -501,7 +507,7 @@ func decodeSection(buf []byte) (Section, int, error) {
// Value.
val, consumed, err := decodeValue(buf[off:], tag)
if err != nil {
return nil, 0, core.E("levin.decodeSection", "field "+name, err)
return nil, 0, coreerr.E("levin.decodeSection", "field "+name, err)
}
off += consumed
@@ -511,7 +517,7 @@ func decodeSection(buf []byte) (Section, int, error) {
return s, off, nil
}
// decodeValue reads a value of the given type tag from buffer and returns
// decodeValue reads a value of the given type tag from buf and returns
// the value plus bytes consumed.
func decodeValue(buf []byte, tag uint8) (Value, int, error) {
// Array types.
@@ -522,68 +528,68 @@ func decodeValue(buf []byte, tag uint8) (Value, int, error) {
switch tag {
case TypeUint64:
if len(buf) < 8 {
return Value{}, 0, ErrorStorageTruncated
return Value{}, 0, ErrStorageTruncated
}
v := binary.LittleEndian.Uint64(buf[:8])
return Value{Type: TypeUint64, uintVal: v}, 8, nil
case TypeInt64:
if len(buf) < 8 {
return Value{}, 0, ErrorStorageTruncated
return Value{}, 0, ErrStorageTruncated
}
v := int64(binary.LittleEndian.Uint64(buf[:8]))
return Value{Type: TypeInt64, intVal: v}, 8, nil
case TypeDouble:
if len(buf) < 8 {
return Value{}, 0, ErrorStorageTruncated
return Value{}, 0, ErrStorageTruncated
}
bits := binary.LittleEndian.Uint64(buf[:8])
return Value{Type: TypeDouble, floatVal: math.Float64frombits(bits)}, 8, nil
case TypeUint32:
if len(buf) < 4 {
return Value{}, 0, ErrorStorageTruncated
return Value{}, 0, ErrStorageTruncated
}
v := binary.LittleEndian.Uint32(buf[:4])
return Value{Type: TypeUint32, uintVal: uint64(v)}, 4, nil
case TypeInt32:
if len(buf) < 4 {
return Value{}, 0, ErrorStorageTruncated
return Value{}, 0, ErrStorageTruncated
}
v := int32(binary.LittleEndian.Uint32(buf[:4]))
return Value{Type: TypeInt32, intVal: int64(v)}, 4, nil
case TypeUint16:
if len(buf) < 2 {
return Value{}, 0, ErrorStorageTruncated
return Value{}, 0, ErrStorageTruncated
}
v := binary.LittleEndian.Uint16(buf[:2])
return Value{Type: TypeUint16, uintVal: uint64(v)}, 2, nil
case TypeInt16:
if len(buf) < 2 {
return Value{}, 0, ErrorStorageTruncated
return Value{}, 0, ErrStorageTruncated
}
v := int16(binary.LittleEndian.Uint16(buf[:2]))
return Value{Type: TypeInt16, intVal: int64(v)}, 2, nil
case TypeUint8:
if len(buf) < 1 {
return Value{}, 0, ErrorStorageTruncated
return Value{}, 0, ErrStorageTruncated
}
return Value{Type: TypeUint8, uintVal: uint64(buf[0])}, 1, nil
case TypeInt8:
if len(buf) < 1 {
return Value{}, 0, ErrorStorageTruncated
return Value{}, 0, ErrStorageTruncated
}
return Value{Type: TypeInt8, intVal: int64(int8(buf[0]))}, 1, nil
case TypeBool:
if len(buf) < 1 {
return Value{}, 0, ErrorStorageTruncated
return Value{}, 0, ErrStorageTruncated
}
return Value{Type: TypeBool, boolVal: buf[0] != 0}, 1, nil
@@ -593,7 +599,7 @@ func decodeValue(buf []byte, tag uint8) (Value, int, error) {
return Value{}, 0, err
}
if uint64(len(buf)-n) < strLen {
return Value{}, 0, ErrorStorageTruncated
return Value{}, 0, ErrStorageTruncated
}
data := make([]byte, strLen)
copy(data, buf[n:n+int(strLen)])
@@ -607,11 +613,11 @@ func decodeValue(buf []byte, tag uint8) (Value, int, error) {
return Value{Type: TypeObject, objectVal: sec}, consumed, nil
default:
return Value{}, 0, core.E("levin.decodeValue", core.Sprintf("unknown type tag: 0x%02x", tag), ErrorStorageUnknownType)
return Value{}, 0, coreerr.E("levin.decodeValue", fmt.Sprintf("unknown type tag: 0x%02x", tag), ErrStorageUnknownType)
}
}
// decodeArray reads a typed array from buffer (tag has ArrayFlag set).
// decodeArray reads a typed array from buf (tag has ArrayFlag set).
func decodeArray(buf []byte, tag uint8) (Value, int, error) {
elemType := tag & ^ArrayFlag
@@ -626,7 +632,7 @@ func decodeArray(buf []byte, tag uint8) (Value, int, error) {
arr := make([]uint64, count)
for i := range count {
if off+8 > len(buf) {
return Value{}, 0, ErrorStorageTruncated
return Value{}, 0, ErrStorageTruncated
}
arr[i] = binary.LittleEndian.Uint64(buf[off : off+8])
off += 8
@@ -637,7 +643,7 @@ func decodeArray(buf []byte, tag uint8) (Value, int, error) {
arr := make([]uint32, count)
for i := range count {
if off+4 > len(buf) {
return Value{}, 0, ErrorStorageTruncated
return Value{}, 0, ErrStorageTruncated
}
arr[i] = binary.LittleEndian.Uint32(buf[off : off+4])
off += 4
@@ -653,7 +659,7 @@ func decodeArray(buf []byte, tag uint8) (Value, int, error) {
}
off += sn
if uint64(len(buf)-off) < strLen {
return Value{}, 0, ErrorStorageTruncated
return Value{}, 0, ErrStorageTruncated
}
data := make([]byte, strLen)
copy(data, buf[off:off+int(strLen)])
@@ -675,6 +681,6 @@ func decodeArray(buf []byte, tag uint8) (Value, int, error) {
return Value{Type: tag, objectArray: arr}, off, nil
default:
return Value{}, 0, core.E("levin.decodeArray", core.Sprintf("unknown type tag: array of 0x%02x", elemType), ErrorStorageUnknownType)
return Value{}, 0, coreerr.E("levin.decodeArray", fmt.Sprintf("unknown type tag: array of 0x%02x", elemType), ErrStorageUnknownType)
}
}


@@ -10,7 +10,7 @@ import (
"github.com/stretchr/testify/require"
)
func TestStorage_EncodeStorage_EmptySection_Ugly(t *testing.T) {
func TestEncodeStorage_EmptySection(t *testing.T) {
s := Section{}
data, err := EncodeStorage(s)
require.NoError(t, err)
@@ -35,19 +35,19 @@ func TestStorage_EncodeStorage_EmptySection_Ugly(t *testing.T) {
assert.Equal(t, byte(0x00), data[9])
}
func TestStorage_PrimitivesRoundTrip_Ugly(t *testing.T) {
func TestStorage_PrimitivesRoundTrip(t *testing.T) {
s := Section{
"u64": Uint64Value(0xDEADBEEFCAFEBABE),
"u32": Uint32Value(0xCAFEBABE),
"u16": Uint16Value(0xBEEF),
"u8": Uint8Value(42),
"i64": Int64Value(-9223372036854775808),
"i32": Int32Value(-2147483648),
"i16": Int16Value(-32768),
"i8": Int8Value(-128),
"flag": BoolValue(true),
"height": StringValue([]byte("hello world")),
"pi": DoubleValue(3.141592653589793),
"u64": Uint64Val(0xDEADBEEFCAFEBABE),
"u32": Uint32Val(0xCAFEBABE),
"u16": Uint16Val(0xBEEF),
"u8": Uint8Val(42),
"i64": Int64Val(-9223372036854775808),
"i32": Int32Val(-2147483648),
"i16": Int16Val(-32768),
"i8": Int8Val(-128),
"flag": BoolVal(true),
"height": StringVal([]byte("hello world")),
"pi": DoubleVal(3.141592653589793),
}
data, err := EncodeStorage(s)
@@ -106,14 +106,14 @@ func TestStorage_PrimitivesRoundTrip_Ugly(t *testing.T) {
assert.Equal(t, 3.141592653589793, pi)
}
func TestStorage_NestedObject_Good(t *testing.T) {
func TestStorage_NestedObject(t *testing.T) {
inner := Section{
"port": Uint16Value(18080),
"host": StringValue([]byte("127.0.0.1")),
"port": Uint16Val(18080),
"host": StringVal([]byte("127.0.0.1")),
}
outer := Section{
"node_data": ObjectValue(inner),
"version": Uint32Value(1),
"node_data": ObjectVal(inner),
"version": Uint32Val(1),
}
data, err := EncodeStorage(outer)
@@ -138,9 +138,9 @@ func TestStorage_NestedObject_Good(t *testing.T) {
assert.Equal(t, []byte("127.0.0.1"), host)
}
func TestStorage_Uint64Array_Good(t *testing.T) {
func TestStorage_Uint64Array(t *testing.T) {
s := Section{
"heights": Uint64ArrayValue([]uint64{10, 20, 30}),
"heights": Uint64ArrayVal([]uint64{10, 20, 30}),
}
data, err := EncodeStorage(s)
@@ -154,9 +154,9 @@ func TestStorage_Uint64Array_Good(t *testing.T) {
assert.Equal(t, []uint64{10, 20, 30}, arr)
}
func TestStorage_StringArray_Good(t *testing.T) {
func TestStorage_StringArray(t *testing.T) {
s := Section{
"peers": StringArrayValue([][]byte{[]byte("foo"), []byte("bar")}),
"peers": StringArrayVal([][]byte{[]byte("foo"), []byte("bar")}),
}
data, err := EncodeStorage(s)
@@ -172,13 +172,13 @@ func TestStorage_StringArray_Good(t *testing.T) {
assert.Equal(t, []byte("bar"), arr[1])
}
func TestStorage_ObjectArray_Good(t *testing.T) {
func TestStorage_ObjectArray(t *testing.T) {
sections := []Section{
{"id": Uint32Value(1), "name": StringValue([]byte("alice"))},
{"id": Uint32Value(2), "name": StringValue([]byte("bob"))},
{"id": Uint32Val(1), "name": StringVal([]byte("alice"))},
{"id": Uint32Val(2), "name": StringVal([]byte("bob"))},
}
s := Section{
"nodes": ObjectArrayValue(sections),
"nodes": ObjectArrayVal(sections),
}
data, err := EncodeStorage(s)
@@ -208,30 +208,30 @@ func TestStorage_ObjectArray_Good(t *testing.T) {
assert.Equal(t, []byte("bob"), name2)
}
func TestStorage_DecodeStorage_BadSignature_Bad(t *testing.T) {
func TestDecodeStorage_BadSignature(t *testing.T) {
// Corrupt the first 4 bytes.
data := []byte{0xFF, 0xFF, 0xFF, 0xFF, 0x01, 0x01, 0x02, 0x01, 0x01, 0x00}
_, err := DecodeStorage(data)
require.Error(t, err)
assert.ErrorIs(t, err, ErrorStorageBadSignature)
assert.ErrorIs(t, err, ErrStorageBadSignature)
}
func TestStorage_DecodeStorage_TooShort_Bad(t *testing.T) {
func TestDecodeStorage_TooShort(t *testing.T) {
_, err := DecodeStorage([]byte{0x01, 0x11})
require.Error(t, err)
assert.ErrorIs(t, err, ErrorStorageTruncated)
assert.ErrorIs(t, err, ErrStorageTruncated)
}
func TestStorage_ByteIdenticalReencode_Ugly(t *testing.T) {
func TestStorage_ByteIdenticalReencode(t *testing.T) {
s := Section{
"alpha": Uint64Value(999),
"bravo": StringValue([]byte("deterministic")),
"charlie": BoolValue(false),
"delta": ObjectValue(Section{
"x": Int32Value(-42),
"y": Int32Value(100),
"alpha": Uint64Val(999),
"bravo": StringVal([]byte("deterministic")),
"charlie": BoolVal(false),
"delta": ObjectVal(Section{
"x": Int32Val(-42),
"y": Int32Val(100),
}),
"echo": Uint64ArrayValue([]uint64{1, 2, 3}),
"echo": Uint64ArrayVal([]uint64{1, 2, 3}),
}
data1, err := EncodeStorage(s)
@@ -246,28 +246,28 @@ func TestStorage_ByteIdenticalReencode_Ugly(t *testing.T) {
assert.Equal(t, data1, data2, "re-encoded bytes must be identical")
}
func TestStorage_TypeMismatchErrors_Bad(t *testing.T) {
v := Uint64Value(42)
func TestStorage_TypeMismatchErrors(t *testing.T) {
v := Uint64Val(42)
_, err := v.AsUint32()
assert.ErrorIs(t, err, ErrorStorageTypeMismatch)
assert.ErrorIs(t, err, ErrStorageTypeMismatch)
_, err = v.AsString()
assert.ErrorIs(t, err, ErrorStorageTypeMismatch)
assert.ErrorIs(t, err, ErrStorageTypeMismatch)
_, err = v.AsBool()
assert.ErrorIs(t, err, ErrorStorageTypeMismatch)
assert.ErrorIs(t, err, ErrStorageTypeMismatch)
_, err = v.AsSection()
assert.ErrorIs(t, err, ErrorStorageTypeMismatch)
assert.ErrorIs(t, err, ErrStorageTypeMismatch)
_, err = v.AsUint64Array()
assert.ErrorIs(t, err, ErrorStorageTypeMismatch)
assert.ErrorIs(t, err, ErrStorageTypeMismatch)
}
func TestStorage_Uint32Array_Good(t *testing.T) {
func TestStorage_Uint32Array(t *testing.T) {
s := Section{
"ports": Uint32ArrayValue([]uint32{8080, 8443, 9090}),
"ports": Uint32ArrayVal([]uint32{8080, 8443, 9090}),
}
data, err := EncodeStorage(s)
@@ -281,19 +281,19 @@ func TestStorage_Uint32Array_Good(t *testing.T) {
assert.Equal(t, []uint32{8080, 8443, 9090}, arr)
}
func TestStorage_DecodeStorage_BadVersion_Bad(t *testing.T) {
func TestDecodeStorage_BadVersion(t *testing.T) {
// Valid signatures but version 2 instead of 1.
data := []byte{0x01, 0x11, 0x01, 0x01, 0x01, 0x01, 0x02, 0x01, 0x02, 0x00}
_, err := DecodeStorage(data)
require.Error(t, err)
assert.ErrorIs(t, err, ErrorStorageBadVersion)
assert.ErrorIs(t, err, ErrStorageBadVersion)
}
func TestStorage_EmptyArrays_Ugly(t *testing.T) {
func TestStorage_EmptyArrays(t *testing.T) {
s := Section{
"empty_u64": Uint64ArrayValue([]uint64{}),
"empty_str": StringArrayValue([][]byte{}),
"empty_obj": ObjectArrayValue([]Section{}),
"empty_u64": Uint64ArrayVal([]uint64{}),
"empty_str": StringArrayVal([][]byte{}),
"empty_obj": ObjectArrayVal([]Section{}),
}
data, err := EncodeStorage(s)
@@ -315,10 +315,10 @@ func TestStorage_EmptyArrays_Ugly(t *testing.T) {
assert.Empty(t, objarr)
}
func TestStorage_BoolFalseRoundTrip_Ugly(t *testing.T) {
func TestStorage_BoolFalseRoundTrip(t *testing.T) {
s := Section{
"off": BoolValue(false),
"on": BoolValue(true),
"off": BoolVal(false),
"on": BoolVal(true),
}
data, err := EncodeStorage(s)


@@ -6,86 +6,89 @@ package levin
import (
"encoding/binary"
core "dappco.re/go/core"
coreerr "dappco.re/go/core/log"
)
// Size-mark bits occupying the two lowest bits of the first byte.
const (
varintMask = 0x03
varintMark1 = 0x00 // 1 byte, max 63
varintMark2 = 0x01 // 2 bytes, max 16,383
varintMark4 = 0x02 // 4 bytes, max 1,073,741,823
varintMark8 = 0x03 // 8 bytes, max 4,611,686,018,427,387,903
varintMax1 = 63
varintMax2 = 16_383
varintMax4 = 1_073_741_823
varintMax8 = 4_611_686_018_427_387_903
varintMask = 0x03
varintMark1 = 0x00 // 1 byte, max 63
varintMark2 = 0x01 // 2 bytes, max 16,383
varintMark4 = 0x02 // 4 bytes, max 1,073,741,823
varintMark8 = 0x03 // 8 bytes, max 4,611,686,018,427,387,903
varintMax1 = 63
varintMax2 = 16_383
varintMax4 = 1_073_741_823
varintMax8 = 4_611_686_018_427_387_903
)
// ErrorVarintTruncated is returned when the buffer is too short.
var ErrorVarintTruncated = core.E("levin", "truncated varint", nil)
// ErrVarintTruncated is returned when the buffer is too short.
var ErrVarintTruncated = coreerr.E("levin", "truncated varint", nil)
// ErrorVarintOverflow is returned when the value is too large to encode.
var ErrorVarintOverflow = core.E("levin", "varint overflow", nil)
// ErrVarintOverflow is returned when the value is too large to encode.
var ErrVarintOverflow = coreerr.E("levin", "varint overflow", nil)
// encoded := PackVarint(42)
func PackVarint(value uint64) []byte {
// PackVarint encodes v using the epee portable-storage varint scheme.
// The low two bits of the first byte indicate the total encoded width;
// the remaining bits carry the value in little-endian order.
func PackVarint(v uint64) []byte {
switch {
case value <= varintMax1:
return []byte{byte((value << 2) | varintMark1)}
case value <= varintMax2:
raw := uint16((value << 2) | varintMark2)
buffer := make([]byte, 2)
binary.LittleEndian.PutUint16(buffer, raw)
return buffer
case value <= varintMax4:
raw := uint32((value << 2) | varintMark4)
buffer := make([]byte, 4)
binary.LittleEndian.PutUint32(buffer, raw)
return buffer
case v <= varintMax1:
return []byte{byte((v << 2) | varintMark1)}
case v <= varintMax2:
raw := uint16((v << 2) | varintMark2)
buf := make([]byte, 2)
binary.LittleEndian.PutUint16(buf, raw)
return buf
case v <= varintMax4:
raw := uint32((v << 2) | varintMark4)
buf := make([]byte, 4)
binary.LittleEndian.PutUint32(buf, raw)
return buf
default:
raw := (value << 2) | varintMark8
buffer := make([]byte, 8)
binary.LittleEndian.PutUint64(buffer, raw)
return buffer
raw := (v << 2) | varintMark8
buf := make([]byte, 8)
binary.LittleEndian.PutUint64(buf, raw)
return buf
}
}
// value, err := UnpackVarint(buffer)
func UnpackVarint(buffer []byte) (value uint64, bytesConsumed int, err error) {
if len(buffer) == 0 {
return 0, 0, ErrorVarintTruncated
// UnpackVarint decodes one epee portable-storage varint from buf.
// It returns the decoded value, the number of bytes consumed, and any error.
func UnpackVarint(buf []byte) (value uint64, bytesConsumed int, err error) {
if len(buf) == 0 {
return 0, 0, ErrVarintTruncated
}
mark := buffer[0] & varintMask
mark := buf[0] & varintMask
switch mark {
case varintMark1:
value = uint64(buffer[0]) >> 2
value = uint64(buf[0]) >> 2
return value, 1, nil
case varintMark2:
if len(buffer) < 2 {
return 0, 0, ErrorVarintTruncated
if len(buf) < 2 {
return 0, 0, ErrVarintTruncated
}
raw := binary.LittleEndian.Uint16(buffer[:2])
raw := binary.LittleEndian.Uint16(buf[:2])
value = uint64(raw) >> 2
return value, 2, nil
case varintMark4:
if len(buffer) < 4 {
return 0, 0, ErrorVarintTruncated
if len(buf) < 4 {
return 0, 0, ErrVarintTruncated
}
raw := binary.LittleEndian.Uint32(buffer[:4])
raw := binary.LittleEndian.Uint32(buf[:4])
value = uint64(raw) >> 2
return value, 4, nil
case varintMark8:
if len(buffer) < 8 {
return 0, 0, ErrorVarintTruncated
if len(buf) < 8 {
return 0, 0, ErrVarintTruncated
}
raw := binary.LittleEndian.Uint64(buffer[:8])
raw := binary.LittleEndian.Uint64(buf[:8])
value = raw >> 2
return value, 8, nil
default:
// Unreachable — mark is masked to 2 bits.
return 0, 0, ErrorVarintTruncated
return 0, 0, ErrVarintTruncated
}
}


@@ -10,41 +10,41 @@ import (
"github.com/stretchr/testify/require"
)
func TestVarint_PackVarint_Value5_Good(t *testing.T) {
func TestPackVarint_Value5(t *testing.T) {
// 5 << 2 | 0x00 = 20 = 0x14
got := PackVarint(5)
assert.Equal(t, []byte{0x14}, got)
}
func TestVarint_PackVarint_Value100_Good(t *testing.T) {
func TestPackVarint_Value100(t *testing.T) {
// 100 << 2 | 0x01 = 401 = 0x0191 → LE [0x91, 0x01]
got := PackVarint(100)
assert.Equal(t, []byte{0x91, 0x01}, got)
}
func TestVarint_PackVarint_Value65536_Good(t *testing.T) {
func TestPackVarint_Value65536(t *testing.T) {
// 65536 << 2 | 0x02 = 262146 = 0x00040002 → LE [0x02, 0x00, 0x04, 0x00]
got := PackVarint(65536)
assert.Equal(t, []byte{0x02, 0x00, 0x04, 0x00}, got)
}
func TestVarint_PackVarint_Value2Billion_Good(t *testing.T) {
func TestPackVarint_Value2Billion(t *testing.T) {
got := PackVarint(2_000_000_000)
require.Len(t, got, 8)
// Low 2 bits must be 0x03 (8-byte mark).
assert.Equal(t, byte(0x03), got[0]&0x03)
}
func TestVarint_PackVarint_Zero_Ugly(t *testing.T) {
func TestPackVarint_Zero(t *testing.T) {
got := PackVarint(0)
assert.Equal(t, []byte{0x00}, got)
}
func TestVarint_PackVarint_Boundaries_Good(t *testing.T) {
func TestPackVarint_Boundaries(t *testing.T) {
tests := []struct {
name string
value uint64
wantLen int
name string
value uint64
wantLen int
}{
{"1-byte max (63)", 63, 1},
{"2-byte min (64)", 64, 2},
@@ -63,7 +63,7 @@ func TestVarint_PackVarint_Boundaries_Good(t *testing.T) {
}
}
func TestVarint_RoundTrip_Ugly(t *testing.T) {
func TestVarint_RoundTrip(t *testing.T) {
values := []uint64{
0, 1, 63, 64, 100, 16_383, 16_384,
1_073_741_823, 1_073_741_824,
@@ -79,38 +79,38 @@ func TestVarint_RoundTrip_Ugly(t *testing.T) {
}
}
func TestVarint_UnpackVarint_EmptyInput_Ugly(t *testing.T) {
func TestUnpackVarint_EmptyInput(t *testing.T) {
_, _, err := UnpackVarint([]byte{})
require.Error(t, err)
assert.ErrorIs(t, err, ErrorVarintTruncated)
assert.ErrorIs(t, err, ErrVarintTruncated)
}
func TestVarint_UnpackVarint_Truncated2Byte_Bad(t *testing.T) {
func TestUnpackVarint_Truncated2Byte(t *testing.T) {
// Encode 64 (needs 2 bytes), then only pass 1 byte.
buf := PackVarint(64)
require.Len(t, buf, 2)
_, _, err := UnpackVarint(buf[:1])
require.Error(t, err)
assert.ErrorIs(t, err, ErrorVarintTruncated)
assert.ErrorIs(t, err, ErrVarintTruncated)
}
func TestVarint_UnpackVarint_Truncated4Byte_Bad(t *testing.T) {
func TestUnpackVarint_Truncated4Byte(t *testing.T) {
buf := PackVarint(16_384)
require.Len(t, buf, 4)
_, _, err := UnpackVarint(buf[:2])
require.Error(t, err)
assert.ErrorIs(t, err, ErrorVarintTruncated)
assert.ErrorIs(t, err, ErrVarintTruncated)
}
func TestVarint_UnpackVarint_Truncated8Byte_Bad(t *testing.T) {
func TestUnpackVarint_Truncated8Byte(t *testing.T) {
buf := PackVarint(1_073_741_824)
require.Len(t, buf, 8)
_, _, err := UnpackVarint(buf[:4])
require.Error(t, err)
assert.ErrorIs(t, err, ErrorVarintTruncated)
assert.ErrorIs(t, err, ErrVarintTruncated)
}
func TestVarint_UnpackVarint_ExtraBytes_Good(t *testing.T) {
func TestUnpackVarint_ExtraBytes(t *testing.T) {
// Ensure that extra trailing bytes are not consumed.
buf := append(PackVarint(42), 0xFF, 0xFF)
decoded, consumed, err := UnpackVarint(buf)
@ -119,7 +119,7 @@ func TestVarint_UnpackVarint_ExtraBytes_Good(t *testing.T) {
assert.Equal(t, 1, consumed)
}
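Taken together, these tests pin down the wire format: the low two bits of the first byte are a size mark (0 → 1 byte, 1 → 2, 2 → 4, 3 → 8), and the value is stored shifted left by two bits in little-endian order. A minimal codec satisfying the assertions above might look like the sketch below (names `PackVarint`, `UnpackVarint`, and `ErrVarintTruncated` follow the tests; the repository's actual implementation may differ in detail):

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// ErrVarintTruncated mirrors the sentinel the tests assert with errors.Is.
var ErrVarintTruncated = errors.New("varint: truncated input")

// PackVarint stores v shifted left two bits, with a size mark in the low
// two bits of the first byte: 0 = 1 byte, 1 = 2, 2 = 4, 3 = 8 (little-endian).
// Values above 2^62-1 do not fit and are out of scope for this sketch.
func PackVarint(v uint64) []byte {
	switch {
	case v <= 0x3F: // 6 usable bits
		return []byte{byte(v << 2)}
	case v <= 0x3FFF: // 14 bits
		buf := make([]byte, 2)
		binary.LittleEndian.PutUint16(buf, uint16(v)<<2|1)
		return buf
	case v <= 0x3FFFFFFF: // 30 bits
		buf := make([]byte, 4)
		binary.LittleEndian.PutUint32(buf, uint32(v)<<2|2)
		return buf
	default: // 62 bits
		buf := make([]byte, 8)
		binary.LittleEndian.PutUint64(buf, v<<2|3)
		return buf
	}
}

// UnpackVarint decodes one varint and reports how many bytes it consumed,
// so trailing bytes are left untouched (as the ExtraBytes test requires).
func UnpackVarint(b []byte) (uint64, int, error) {
	if len(b) == 0 {
		return 0, 0, ErrVarintTruncated
	}
	size := 1 << (b[0] & 0x03) // 1, 2, 4 or 8
	if len(b) < size {
		return 0, 0, ErrVarintTruncated
	}
	var raw uint64
	switch size {
	case 1:
		raw = uint64(b[0])
	case 2:
		raw = uint64(binary.LittleEndian.Uint16(b))
	case 4:
		raw = uint64(binary.LittleEndian.Uint32(b))
	default:
		raw = binary.LittleEndian.Uint64(b)
	}
	return raw >> 2, size, nil
}

func main() {
	fmt.Printf("% x\n", PackVarint(65536)) // 02 00 04 00
	v, n, err := UnpackVarint(append(PackVarint(42), 0xFF, 0xFF))
	fmt.Println(v, n, err) // 42 1 <nil>
}
```

The byte-count boundaries (63/64, 16383/16384, and so on) fall exactly where the table-driven boundary test expects, since each width leaves two bits for the size mark.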
func TestVarint_PackVarint_SizeMarkBits_Good(t *testing.T) {
func TestPackVarint_SizeMarkBits(t *testing.T) {
tests := []struct {
name string
value uint64


@ -1,106 +1,85 @@
package node
import (
"encoding/json"
"slices"
"time"
core "dappco.re/go/core"
"github.com/google/uuid"
)
// Protocol version constants
const (
// version := ProtocolVersion
// ProtocolVersion is the current protocol version
ProtocolVersion = "1.0"
// minimumVersion := MinProtocolVersion
// MinProtocolVersion is the minimum supported version
MinProtocolVersion = "1.0"
)
// versions := SupportedProtocolVersions
// SupportedProtocolVersions lists all protocol versions this node supports.
// Used for version negotiation during handshake.
var SupportedProtocolVersions = []string{"1.0"}
// payload := RawMessage(`{"pool":"pool.example.com:3333"}`)
type RawMessage []byte
// data, err := RawMessage(`{"ok":true}`).MarshalJSON()
func (m RawMessage) MarshalJSON() ([]byte, error) {
if m == nil {
return []byte("null"), nil
}
return m, nil
}
// var payload RawMessage
// _ = payload.UnmarshalJSON([]byte(`{"ok":true}`))
func (m *RawMessage) UnmarshalJSON(data []byte) error {
if m == nil {
return core.E("node.RawMessage.UnmarshalJSON", "raw message target is nil", nil)
}
*m = append((*m)[:0], data...)
return nil
}
// ok := IsProtocolVersionSupported("1.0")
// IsProtocolVersionSupported checks if a given version is supported.
func IsProtocolVersionSupported(version string) bool {
return slices.Contains(SupportedProtocolVersions, version)
}
// messageType := MessagePing
// MessageType defines the type of P2P message.
type MessageType string
const (
// Connection lifecycle
MessageHandshake MessageType = "handshake"
MessageHandshakeAck MessageType = "handshake_ack"
MessagePing MessageType = "ping"
MessagePong MessageType = "pong"
MessageDisconnect MessageType = "disconnect"
MsgHandshake MessageType = "handshake"
MsgHandshakeAck MessageType = "handshake_ack"
MsgPing MessageType = "ping"
MsgPong MessageType = "pong"
MsgDisconnect MessageType = "disconnect"
// Miner operations
MessageGetStats MessageType = "get_stats"
MessageStats MessageType = "stats"
MessageStartMiner MessageType = "start_miner"
MessageStopMiner MessageType = "stop_miner"
MessageMinerAck MessageType = "miner_ack"
MsgGetStats MessageType = "get_stats"
MsgStats MessageType = "stats"
MsgStartMiner MessageType = "start_miner"
MsgStopMiner MessageType = "stop_miner"
MsgMinerAck MessageType = "miner_ack"
// Deployment
MessageDeploy MessageType = "deploy"
MessageDeployAck MessageType = "deploy_ack"
MsgDeploy MessageType = "deploy"
MsgDeployAck MessageType = "deploy_ack"
// Logs
MessageGetLogs MessageType = "get_logs"
MessageLogs MessageType = "logs"
MsgGetLogs MessageType = "get_logs"
MsgLogs MessageType = "logs"
// Error response
MessageError MessageType = "error"
MsgError MessageType = "error"
)
// message, err := NewMessage(MessagePing, "controller", "worker", PingPayload{SentAt: time.Now().UnixMilli()})
// Message represents a P2P message between nodes.
type Message struct {
ID string `json:"id"` // UUID
Type MessageType `json:"type"`
From string `json:"from"` // Sender node ID
To string `json:"to"` // Recipient node ID (empty for broadcast)
Timestamp time.Time `json:"ts"`
Payload RawMessage `json:"payload"`
ReplyTo string `json:"replyTo,omitempty"` // ID of message being replied to
ID string `json:"id"` // UUID
Type MessageType `json:"type"`
From string `json:"from"` // Sender node ID
To string `json:"to"` // Recipient node ID (empty for broadcast)
Timestamp time.Time `json:"ts"`
Payload json.RawMessage `json:"payload"`
ReplyTo string `json:"replyTo,omitempty"` // ID of message being replied to
}
// message, err := NewMessage(MessagePing, "controller", "worker-1", PingPayload{SentAt: 42})
func NewMessage(messageType MessageType, from, to string, payload any) (*Message, error) {
var payloadBytes RawMessage
// NewMessage creates a new message with a generated ID and timestamp.
func NewMessage(msgType MessageType, from, to string, payload any) (*Message, error) {
var payloadBytes json.RawMessage
if payload != nil {
data, err := MarshalJSON(payload)
if err != nil {
return nil, err
}
payloadBytes = RawMessage(data)
payloadBytes = data
}
return &Message{
ID: uuid.New().String(),
Type: messageType,
Type: msgType,
From: from,
To: to,
Timestamp: time.Now(),
@ -108,9 +87,9 @@ func NewMessage(messageType MessageType, from, to string, payload any) (*Message
}, nil
}
// reply, err := message.Reply(MessagePong, PongPayload{SentAt: 42, ReceivedAt: 43})
func (m *Message) Reply(messageType MessageType, payload any) (*Message, error) {
reply, err := NewMessage(messageType, m.To, m.From, payload)
// Reply creates a reply message to this message.
func (m *Message) Reply(msgType MessageType, payload any) (*Message, error) {
reply, err := NewMessage(msgType, m.To, m.From, payload)
if err != nil {
return nil, err
}
@ -118,29 +97,24 @@ func (m *Message) Reply(messageType MessageType, payload any) (*Message, error)
return reply, nil
}
// var ping PingPayload
// err := message.ParsePayload(&ping)
func (m *Message) ParsePayload(target any) error {
// ParsePayload unmarshals the payload into the given struct.
func (m *Message) ParsePayload(v any) error {
if m.Payload == nil {
return nil
}
result := core.JSONUnmarshal(m.Payload, target)
if !result.OK {
return result.Value.(error)
}
return nil
return json.Unmarshal(m.Payload, v)
}
// --- Payload Types ---
// payload := HandshakePayload{Identity: NodeIdentity{Name: "worker-1"}, Version: ProtocolVersion}
// HandshakePayload is sent during connection establishment.
type HandshakePayload struct {
Identity NodeIdentity `json:"identity"`
Challenge []byte `json:"challenge,omitempty"` // Random bytes for auth
Version string `json:"version"` // Protocol version
}
// ack := HandshakeAckPayload{Accepted: true}
// HandshakeAckPayload is the response to a handshake.
type HandshakeAckPayload struct {
Identity NodeIdentity `json:"identity"`
ChallengeResponse []byte `json:"challengeResponse,omitempty"`
@ -148,37 +122,37 @@ type HandshakeAckPayload struct {
Reason string `json:"reason,omitempty"` // If not accepted
}
// payload := PingPayload{SentAt: 42}
// PingPayload for keepalive/latency measurement.
type PingPayload struct {
SentAt int64 `json:"sentAt"` // Unix timestamp in milliseconds
}
// payload := PongPayload{SentAt: 42, ReceivedAt: 43}
// PongPayload response to ping.
type PongPayload struct {
SentAt int64 `json:"sentAt"` // Echo of ping's sentAt
ReceivedAt int64 `json:"receivedAt"` // When ping was received
}
// payload := StartMinerPayload{MinerType: "xmrig"}
// StartMinerPayload requests starting a miner.
type StartMinerPayload struct {
MinerType string `json:"minerType"` // Required: miner type (e.g., "xmrig", "tt-miner")
ProfileID string `json:"profileId,omitempty"`
Config RawMessage `json:"config,omitempty"` // Override profile config
MinerType string `json:"minerType"` // Required: miner type (e.g., "xmrig", "tt-miner")
ProfileID string `json:"profileId,omitempty"`
Config json.RawMessage `json:"config,omitempty"` // Override profile config
}
// payload := StopMinerPayload{MinerName: "xmrig-0"}
// StopMinerPayload requests stopping a miner.
type StopMinerPayload struct {
MinerName string `json:"minerName"`
}
// ack := MinerAckPayload{Success: true, MinerName: "xmrig-0"}
// MinerAckPayload acknowledges a miner start/stop operation.
type MinerAckPayload struct {
Success bool `json:"success"`
MinerName string `json:"minerName,omitempty"`
Error string `json:"error,omitempty"`
}
// miner := MinerStatsItem{Name: "xmrig-0", Hashrate: 1200}
// MinerStatsItem represents stats for a single miner.
type MinerStatsItem struct {
Name string `json:"name"`
Type string `json:"type"`
@ -191,7 +165,7 @@ type MinerStatsItem struct {
CPUThreads int `json:"cpuThreads,omitempty"`
}
// stats := StatsPayload{NodeID: "worker-1"}
// StatsPayload contains miner statistics.
type StatsPayload struct {
NodeID string `json:"nodeId"`
NodeName string `json:"nodeName"`
@ -199,21 +173,21 @@ type StatsPayload struct {
Uptime int64 `json:"uptime"` // Node uptime in seconds
}
// payload := LogsRequestPayload{MinerName: "xmrig-0", Lines: 100}
type LogsRequestPayload struct {
// GetLogsPayload requests console logs from a miner.
type GetLogsPayload struct {
MinerName string `json:"minerName"`
Lines int `json:"lines"` // Number of lines to fetch
Since int64 `json:"since,omitempty"` // Unix timestamp, logs after this time
}
// payload := LogsPayload{MinerName: "xmrig-0", Lines: []string{"started"}}
// LogsPayload contains console log lines.
type LogsPayload struct {
MinerName string `json:"minerName"`
Lines []string `json:"lines"`
HasMore bool `json:"hasMore"` // More logs available
}
// payload := DeployPayload{Name: "xmrig", BundleType: string(BundleMiner)}
// DeployPayload contains a deployment bundle.
type DeployPayload struct {
BundleType string `json:"type"` // "profile" | "miner" | "full"
Data []byte `json:"data"` // STIM-encrypted bundle
@ -221,39 +195,39 @@ type DeployPayload struct {
Name string `json:"name"` // Profile or miner name
}
// ack := DeployAckPayload{Success: true, Name: "xmrig"}
// DeployAckPayload acknowledges a deployment.
type DeployAckPayload struct {
Success bool `json:"success"`
Name string `json:"name,omitempty"`
Error string `json:"error,omitempty"`
}
// payload := ErrorPayload{Code: ErrorCodeOperationFailed, Message: "start failed"}
// ErrorPayload contains error information.
type ErrorPayload struct {
Code int `json:"code"`
Message string `json:"message"`
Details string `json:"details,omitempty"`
}
// Common error codes
const (
ErrorCodeUnknown = 1000
ErrorCodeInvalidMessage = 1001
ErrorCodeUnauthorized = 1002
ErrorCodeNotFound = 1003
// code := ErrorCodeOperationFailed
ErrorCodeOperationFailed = 1004
ErrorCodeTimeout = 1005
ErrCodeUnknown = 1000
ErrCodeInvalidMessage = 1001
ErrCodeUnauthorized = 1002
ErrCodeNotFound = 1003
ErrCodeOperationFailed = 1004
ErrCodeTimeout = 1005
)
// errorMessage, err := NewErrorMessage("worker-1", "controller-1", ErrorCodeOperationFailed, "miner start failed", "req-1")
// NewErrorMessage creates an error response message.
func NewErrorMessage(from, to string, code int, message string, replyTo string) (*Message, error) {
errorMessage, err := NewMessage(MessageError, from, to, ErrorPayload{
msg, err := NewMessage(MsgError, from, to, ErrorPayload{
Code: code,
Message: message,
})
if err != nil {
return nil, err
}
errorMessage.ReplyTo = replyTo
return errorMessage, nil
msg.ReplyTo = replyTo
return msg, nil
}
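Together, `NewMessage`, `Reply`, and `ParsePayload` form a small request/reply protocol: a reply swaps `From`/`To` and links back to the request via `ReplyTo`, while payloads stay as raw JSON until the receiver decodes them into a concrete type. A condensed, self-contained sketch of that flow, trimmed to ping/pong (the stub ID counter stands in for `uuid.New()`; everything else follows the structs above):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type MessageType string

const (
	MsgPing MessageType = "ping"
	MsgPong MessageType = "pong"
)

type Message struct {
	ID        string          `json:"id"`
	Type      MessageType     `json:"type"`
	From      string          `json:"from"`
	To        string          `json:"to"`
	Timestamp time.Time       `json:"ts"`
	Payload   json.RawMessage `json:"payload"`
	ReplyTo   string          `json:"replyTo,omitempty"`
}

type PingPayload struct {
	SentAt int64 `json:"sentAt"`
}

type PongPayload struct {
	SentAt     int64 `json:"sentAt"`
	ReceivedAt int64 `json:"receivedAt"`
}

var nextID int // stub; the real code uses uuid.New().String()

func newMessage(t MessageType, from, to string, payload any) (*Message, error) {
	var raw json.RawMessage
	if payload != nil {
		data, err := json.Marshal(payload)
		if err != nil {
			return nil, err
		}
		raw = data
	}
	nextID++
	return &Message{
		ID: fmt.Sprintf("msg-%d", nextID), Type: t,
		From: from, To: to, Timestamp: time.Now(), Payload: raw,
	}, nil
}

// Reply swaps From/To and records which message is being answered.
func (m *Message) Reply(t MessageType, payload any) (*Message, error) {
	r, err := newMessage(t, m.To, m.From, payload)
	if err != nil {
		return nil, err
	}
	r.ReplyTo = m.ID
	return r, nil
}

// ParsePayload decodes the raw payload; a nil payload is legal and leaves
// the target zero-valued, as the NilPayload test exercises.
func (m *Message) ParsePayload(v any) error {
	if m.Payload == nil {
		return nil
	}
	return json.Unmarshal(m.Payload, v)
}

func main() {
	ping, _ := newMessage(MsgPing, "controller", "worker-1", PingPayload{SentAt: 100})
	pong, _ := ping.Reply(MsgPong, PongPayload{SentAt: 100, ReceivedAt: 105})

	var p PongPayload
	_ = pong.ParsePayload(&p)
	fmt.Println(pong.From, pong.To, pong.ReplyTo, p.ReceivedAt) // worker-1 controller msg-1 105
}
```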


@ -1,19 +1,20 @@
package node
import (
"encoding/json"
"testing"
"time"
)
func TestMessage_NewMessage_Good(t *testing.T) {
func TestNewMessage(t *testing.T) {
t.Run("BasicMessage", func(t *testing.T) {
msg, err := NewMessage(MessagePing, "sender-id", "receiver-id", nil)
msg, err := NewMessage(MsgPing, "sender-id", "receiver-id", nil)
if err != nil {
t.Fatalf("failed to create message: %v", err)
}
if msg.Type != MessagePing {
t.Errorf("expected type MessagePing, got %s", msg.Type)
if msg.Type != MsgPing {
t.Errorf("expected type MsgPing, got %s", msg.Type)
}
if msg.From != "sender-id" {
@ -38,7 +39,7 @@ func TestMessage_NewMessage_Good(t *testing.T) {
SentAt: time.Now().UnixMilli(),
}
msg, err := NewMessage(MessagePing, "sender", "receiver", payload)
msg, err := NewMessage(MsgPing, "sender", "receiver", payload)
if err != nil {
t.Fatalf("failed to create message: %v", err)
}
@ -59,10 +60,10 @@ func TestMessage_NewMessage_Good(t *testing.T) {
})
}
func TestMessage_Reply_Good(t *testing.T) {
original, _ := NewMessage(MessagePing, "sender", "receiver", PingPayload{SentAt: 12345})
func TestMessageReply(t *testing.T) {
original, _ := NewMessage(MsgPing, "sender", "receiver", PingPayload{SentAt: 12345})
reply, err := original.Reply(MessagePong, PongPayload{
reply, err := original.Reply(MsgPong, PongPayload{
SentAt: 12345,
ReceivedAt: 12350,
})
@ -83,19 +84,19 @@ func TestMessage_Reply_Good(t *testing.T) {
t.Error("reply To should be original From")
}
if reply.Type != MessagePong {
t.Errorf("expected type MessagePong, got %s", reply.Type)
if reply.Type != MsgPong {
t.Errorf("expected type MsgPong, got %s", reply.Type)
}
}
func TestMessage_ParsePayload_Good(t *testing.T) {
func TestParsePayload(t *testing.T) {
t.Run("ValidPayload", func(t *testing.T) {
payload := StartMinerPayload{
MinerType: "xmrig",
ProfileID: "test-profile",
}
msg, _ := NewMessage(MessageStartMiner, "ctrl", "worker", payload)
msg, _ := NewMessage(MsgStartMiner, "ctrl", "worker", payload)
var parsed StartMinerPayload
err := msg.ParsePayload(&parsed)
@ -109,7 +110,7 @@ func TestMessage_ParsePayload_Good(t *testing.T) {
})
t.Run("NilPayload", func(t *testing.T) {
msg, _ := NewMessage(MessageGetStats, "ctrl", "worker", nil)
msg, _ := NewMessage(MsgGetStats, "ctrl", "worker", nil)
var parsed StatsPayload
err := msg.ParsePayload(&parsed)
@ -137,7 +138,7 @@ func TestMessage_ParsePayload_Good(t *testing.T) {
Uptime: 86400,
}
msg, _ := NewMessage(MessageStats, "worker", "ctrl", stats)
msg, _ := NewMessage(MsgStats, "worker", "ctrl", stats)
var parsed StatsPayload
err := msg.ParsePayload(&parsed)
@ -159,14 +160,14 @@ func TestMessage_ParsePayload_Good(t *testing.T) {
})
}
func TestMessage_NewErrorMessage_Bad(t *testing.T) {
errMsg, err := NewErrorMessage("sender", "receiver", ErrorCodeOperationFailed, "something went wrong", "original-msg-id")
func TestNewErrorMessage(t *testing.T) {
errMsg, err := NewErrorMessage("sender", "receiver", ErrCodeOperationFailed, "something went wrong", "original-msg-id")
if err != nil {
t.Fatalf("failed to create error message: %v", err)
}
if errMsg.Type != MessageError {
t.Errorf("expected type MessageError, got %s", errMsg.Type)
if errMsg.Type != MsgError {
t.Errorf("expected type MsgError, got %s", errMsg.Type)
}
if errMsg.ReplyTo != "original-msg-id" {
@ -179,8 +180,8 @@ func TestMessage_NewErrorMessage_Bad(t *testing.T) {
t.Fatalf("failed to parse error payload: %v", err)
}
if errPayload.Code != ErrorCodeOperationFailed {
t.Errorf("expected code %d, got %d", ErrorCodeOperationFailed, errPayload.Code)
if errPayload.Code != ErrCodeOperationFailed {
t.Errorf("expected code %d, got %d", ErrCodeOperationFailed, errPayload.Code)
}
if errPayload.Message != "something went wrong" {
@ -188,18 +189,24 @@ func TestMessage_NewErrorMessage_Bad(t *testing.T) {
}
}
func TestMessage_Serialization_Good(t *testing.T) {
original, _ := NewMessage(MessageStartMiner, "ctrl", "worker", StartMinerPayload{
func TestMessageSerialization(t *testing.T) {
original, _ := NewMessage(MsgStartMiner, "ctrl", "worker", StartMinerPayload{
MinerType: "xmrig",
ProfileID: "my-profile",
})
// Serialize
data := testJSONMarshal(t, original)
data, err := json.Marshal(original)
if err != nil {
t.Fatalf("failed to serialize message: %v", err)
}
// Deserialize
var restored Message
testJSONUnmarshal(t, data, &restored)
err = json.Unmarshal(data, &restored)
if err != nil {
t.Fatalf("failed to deserialize message: %v", err)
}
if restored.ID != original.ID {
t.Error("ID mismatch after serialization")
@ -214,7 +221,8 @@ func TestMessage_Serialization_Good(t *testing.T) {
}
var payload StartMinerPayload
if err := restored.ParsePayload(&payload); err != nil {
err = restored.ParsePayload(&payload)
if err != nil {
t.Fatalf("failed to parse restored payload: %v", err)
}
@ -223,23 +231,23 @@ func TestMessage_Serialization_Good(t *testing.T) {
}
}
func TestMessage_Types_Good(t *testing.T) {
func TestMessageTypes(t *testing.T) {
types := []MessageType{
MessageHandshake,
MessageHandshakeAck,
MessagePing,
MessagePong,
MessageDisconnect,
MessageGetStats,
MessageStats,
MessageStartMiner,
MessageStopMiner,
MessageMinerAck,
MessageDeploy,
MessageDeployAck,
MessageGetLogs,
MessageLogs,
MessageError,
MsgHandshake,
MsgHandshakeAck,
MsgPing,
MsgPong,
MsgDisconnect,
MsgGetStats,
MsgStats,
MsgStartMiner,
MsgStopMiner,
MsgMinerAck,
MsgDeploy,
MsgDeployAck,
MsgGetLogs,
MsgLogs,
MsgError,
}
for _, msgType := range types {
@ -256,14 +264,14 @@ func TestMessage_Types_Good(t *testing.T) {
}
}
func TestMessage_ErrorCodes_Bad(t *testing.T) {
func TestErrorCodes(t *testing.T) {
codes := map[int]string{
ErrorCodeUnknown: "Unknown",
ErrorCodeInvalidMessage: "InvalidMessage",
ErrorCodeUnauthorized: "Unauthorized",
ErrorCodeNotFound: "NotFound",
ErrorCodeOperationFailed: "OperationFailed",
ErrorCodeTimeout: "Timeout",
ErrCodeUnknown: "Unknown",
ErrCodeInvalidMessage: "InvalidMessage",
ErrCodeUnauthorized: "Unauthorized",
ErrCodeNotFound: "NotFound",
ErrCodeOperationFailed: "OperationFailed",
ErrCodeTimeout: "Timeout",
}
for code, name := range codes {
@ -275,8 +283,8 @@ func TestMessage_ErrorCodes_Bad(t *testing.T) {
}
}
func TestMessage_NewMessage_NilPayload_Ugly(t *testing.T) {
msg, err := NewMessage(MessagePing, "from", "to", nil)
func TestNewMessage_NilPayload(t *testing.T) {
msg, err := NewMessage(MsgPing, "from", "to", nil)
if err != nil {
t.Fatalf("NewMessage with nil payload should succeed: %v", err)
}
@ -285,7 +293,7 @@ func TestMessage_NewMessage_NilPayload_Ugly(t *testing.T) {
}
}
func TestMessage_ParsePayload_Nil_Ugly(t *testing.T) {
func TestMessage_ParsePayload_Nil(t *testing.T) {
msg := &Message{Payload: nil}
var target PingPayload
err := msg.ParsePayload(&target)
@ -294,25 +302,22 @@ func TestMessage_ParsePayload_Nil_Ugly(t *testing.T) {
}
}
func TestMessage_NewErrorMessage_Success_Bad(t *testing.T) {
msg, err := NewErrorMessage("from", "to", ErrorCodeOperationFailed, "something went wrong", "reply-123")
func TestNewErrorMessage_Success(t *testing.T) {
msg, err := NewErrorMessage("from", "to", ErrCodeOperationFailed, "something went wrong", "reply-123")
if err != nil {
t.Fatalf("NewErrorMessage failed: %v", err)
}
if msg.Type != MessageError {
t.Errorf("expected type %s, got %s", MessageError, msg.Type)
if msg.Type != MsgError {
t.Errorf("expected type %s, got %s", MsgError, msg.Type)
}
if msg.ReplyTo != "reply-123" {
t.Errorf("expected ReplyTo 'reply-123', got '%s'", msg.ReplyTo)
}
var payload ErrorPayload
err = msg.ParsePayload(&payload)
if err != nil {
t.Fatalf("ParsePayload failed: %v", err)
}
if payload.Code != ErrorCodeOperationFailed {
t.Errorf("expected code %d, got %d", ErrorCodeOperationFailed, payload.Code)
msg.ParsePayload(&payload)
if payload.Code != ErrCodeOperationFailed {
t.Errorf("expected code %d, got %d", ErrCodeOperationFailed, payload.Code)
}
if payload.Message != "something went wrong" {
t.Errorf("expected message 'something went wrong', got '%s'", payload.Message)


@ -1,28 +1,24 @@
package node
import (
"encoding/json"
"iter"
"maps"
"path/filepath"
"regexp"
"slices"
"sync"
"time"
core "dappco.re/go/core"
coreio "dappco.re/go/core/io"
coreerr "dappco.re/go/core/log"
"dappco.re/go/core/p2p/logging"
poindexter "forge.lthn.ai/Snider/Poindexter"
"github.com/adrg/xdg"
)
// peer := &Peer{
// ID: "worker-1",
// Name: "Worker 1",
// Address: "127.0.0.1:9101",
// PingMilliseconds: 42.5,
// GeographicKilometres: 100,
// Score: 80,
// }
// Peer represents a known remote node.
type Peer struct {
ID string `json:"id"`
Name string `json:"name"`
@ -33,36 +29,37 @@ type Peer struct {
LastSeen time.Time `json:"lastSeen"`
// Poindexter metrics (updated dynamically)
PingMilliseconds float64 `json:"pingMs"` // Latency in milliseconds
Hops int `json:"hops"` // Network hop count
GeographicKilometres float64 `json:"geoKm"` // Geographic distance in kilometres
Score float64 `json:"score"` // Reliability score 0-100
PingMS float64 `json:"pingMs"` // Latency in milliseconds
Hops int `json:"hops"` // Network hop count
GeoKM float64 `json:"geoKm"` // Geographic distance in kilometres
Score float64 `json:"score"` // Reliability score 0-100
// Connection state (not persisted)
Connected bool `json:"-"`
}
const peerRegistrySaveDebounceInterval = 5 * time.Second
// saveDebounceInterval is the minimum time between disk writes.
const saveDebounceInterval = 5 * time.Second
// mode := PeerAuthAllowlist
// PeerAuthMode controls how unknown peers are handled
type PeerAuthMode int
const (
// PeerAuthOpen allows any peer to connect.
// PeerAuthOpen allows any peer to connect (original behavior)
PeerAuthOpen PeerAuthMode = iota
// PeerAuthAllowlist only allows pre-registered peers or those with allowed public keys
PeerAuthAllowlist
)
// Peer name validation constants
// Peer name validation constants.
const (
PeerNameMinLength = 1
PeerNameMaxLength = 64
)
// peerNamePattern validates peer names: alphanumeric, hyphens, underscores, and spaces.
var peerNamePattern = regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9\-_ ]{0,62}[a-zA-Z0-9]$|^[a-zA-Z0-9]$`)
// peerNameRegex validates peer names: alphanumeric, hyphens, underscores, and spaces
var peerNameRegex = regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9\-_ ]{0,62}[a-zA-Z0-9]$|^[a-zA-Z0-9]$`)
// safeKeyPrefix returns a truncated key for logging, handling short keys safely
func safeKeyPrefix(key string) string {
if len(key) >= 16 {
return key[:16] + "..."
@ -73,23 +70,24 @@ func safeKeyPrefix(key string) string {
return key
}
// validatePeerName checks if a peer name is valid.
// Empty names are permitted. Non-empty names must be 1-64 characters,
// start and end with alphanumeric, and contain only alphanumeric,
// hyphens, underscores, and spaces.
func validatePeerName(name string) error {
if name == "" {
return nil // Empty names are allowed (optional field)
}
if len(name) < PeerNameMinLength {
return core.E("validatePeerName", "peer name too short", nil)
return nil
}
if len(name) > PeerNameMaxLength {
return core.E("validatePeerName", "peer name too long", nil)
return coreerr.E("validatePeerName", "peer name too long", nil)
}
if !peerNamePattern.MatchString(name) {
return core.E("validatePeerName", "peer name contains invalid characters (use alphanumeric, hyphens, underscores, spaces)", nil)
if !peerNameRegex.MatchString(name) {
return coreerr.E("validatePeerName", "peer name contains invalid characters (use alphanumeric, hyphens, underscores, spaces)", nil)
}
return nil
}
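The alternation in the pattern does the length work: the first branch matches names of 2-64 characters that begin and end with an alphanumeric (1 + up to 62 + 1), the second branch admits single-character names, and empty names short-circuit before the regexp runs. A quick check against the same pattern:

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as the registry: alphanumeric at both ends, with hyphens,
// underscores, and spaces allowed in between; 1-64 characters total.
var peerNameRegex = regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9\-_ ]{0,62}[a-zA-Z0-9]$|^[a-zA-Z0-9]$`)

func main() {
	for _, name := range []string{"worker-1", "a", "Worker 1", "-leading", "trailing ", "bad!name"} {
		fmt.Printf("%-12q %v\n", name, peerNameRegex.MatchString(name))
	}
}
```

The last three inputs fail because a leading hyphen, a trailing space, and a `!` all violate the edge or character-class constraints, which matches the "contains invalid characters" error path above.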
// peerRegistry, err := NewPeerRegistry()
// PeerRegistry manages known peers with KD-tree based selection.
type PeerRegistry struct {
peers map[string]*Peer
kdTree *poindexter.KDTree[string] // KD-tree with peer ID as payload
@ -100,58 +98,64 @@ type PeerRegistry struct {
authMode PeerAuthMode // How to handle unknown peers
allowedPublicKeys map[string]bool // Allowlist of public keys (when authMode is Allowlist)
allowedPublicKeyMu sync.RWMutex // Protects allowedPublicKeys
allowlistPath string // Sidecar file for persisted allowlist keys
// Debounce disk writes
hasPendingChanges bool // Whether there are unsaved changes
pendingSaveTimer *time.Timer // Timer for debounced save
saveMutex sync.Mutex // Protects pending save state
dirty bool // Whether there are unsaved changes
saveTimer *time.Timer // Timer for debounced save
saveMu sync.Mutex // Protects dirty and saveTimer
stopChan chan struct{} // Signal to stop background save
saveStopOnce sync.Once // Ensure stopChan is closed only once
}
// Dimension weights for peer selection.
// Lower ping, hops, and geographic distance are better; higher score is better.
// Dimension weights for peer selection
// Lower ping, hops, geo are better; higher score is better
var (
pingWeight = 1.0
hopsWeight = 0.7
geographicWeight = 0.2
scoreWeight = 1.2
pingWeight = 1.0
hopsWeight = 0.7
geoWeight = 0.2
scoreWeight = 1.2
)
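How the four weights combine is up to the KD-tree implementation; as an illustration only (an assumption, not Poindexter's actual metric), one plausible reading is a weighted Euclidean distance in which score is inverted so that a more reliable peer sorts closer:

```go
package main

import (
	"fmt"
	"math"
)

const (
	pingWeight  = 1.0
	hopsWeight  = 0.7
	geoWeight   = 0.2
	scoreWeight = 1.2
)

// distance is a hypothetical combination of the four dimensions: lower
// ping/hops/geo pull a peer closer, and so does a higher score (hence
// the 100-score term, since Score runs 0-100).
func distance(pingMS float64, hops int, geoKM, score float64) float64 {
	return math.Sqrt(
		pingWeight*pingMS*pingMS +
			hopsWeight*float64(hops*hops) +
			geoWeight*geoKM*geoKM +
			scoreWeight*(100-score)*(100-score))
}

func main() {
	near := distance(10, 2, 50, 90)  // fast, close, reliable
	far := distance(200, 8, 900, 40) // slow, distant, flaky
	fmt.Println(near < far)          // true
}
```

Whatever the real metric, the relative weights say the same thing: latency and score dominate selection, hop count matters somewhat, and raw geographic distance is a weak tiebreaker.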
// peerRegistry, err := NewPeerRegistry()
// NewPeerRegistry creates a new PeerRegistry, loading existing peers if available.
func NewPeerRegistry() (*PeerRegistry, error) {
peersPath, err := xdg.ConfigFile("lethean-desktop/peers.json")
if err != nil {
return nil, core.E("PeerRegistry.New", "failed to get peers path", err)
return nil, coreerr.E("PeerRegistry.New", "failed to get peers path", err)
}
return NewPeerRegistryFromPath(peersPath)
return NewPeerRegistryWithPath(peersPath)
}
// peerRegistry, err := NewPeerRegistryFromPath("/srv/p2p/peers.json")
// Missing files are treated as an empty registry; malformed registry files
// return an error so callers can repair the persisted state.
func NewPeerRegistryFromPath(peersPath string) (*PeerRegistry, error) {
// NewPeerRegistryWithPath creates a new PeerRegistry with a custom path.
// This is primarily useful for testing to avoid xdg path caching issues.
func NewPeerRegistryWithPath(peersPath string) (*PeerRegistry, error) {
pr := &PeerRegistry{
peers: make(map[string]*Peer),
path: peersPath,
authMode: PeerAuthOpen, // Default to open.
allowlistPath: peersPath + ".allowlist.json",
stopChan: make(chan struct{}),
authMode: PeerAuthOpen, // Default to open for backward compatibility
allowedPublicKeys: make(map[string]bool),
}
// Missing files indicate a first run; any existing file must parse cleanly.
if !filesystemExists(peersPath) {
// Try to load existing peers
if err := pr.load(); err != nil {
// No existing peers, that's ok
pr.rebuildKDTree()
return pr, nil
}
if err := pr.load(); err != nil {
return nil, err
// Load any persisted allowlist entries. This is best effort so that a
// missing or corrupt sidecar does not block peer registry startup.
if err := pr.loadAllowedPublicKeys(); err != nil {
logging.Warn("failed to load peer allowlist", logging.Fields{"error": err})
}
pr.rebuildKDTree()
return pr, nil
}
// registry.SetAuthMode(PeerAuthAllowlist)
// SetAuthMode sets the authentication mode for peer connections.
func (r *PeerRegistry) SetAuthMode(mode PeerAuthMode) {
r.allowedPublicKeyMu.Lock()
defer r.allowedPublicKeyMu.Unlock()
@ -159,40 +163,48 @@ func (r *PeerRegistry) SetAuthMode(mode PeerAuthMode) {
logging.Info("peer auth mode changed", logging.Fields{"mode": mode})
}
// mode := registry.GetAuthMode()
// GetAuthMode returns the current authentication mode.
func (r *PeerRegistry) GetAuthMode() PeerAuthMode {
r.allowedPublicKeyMu.RLock()
defer r.allowedPublicKeyMu.RUnlock()
return r.authMode
}
// registry.AllowPublicKey(peer.PublicKey)
// AllowPublicKey adds a public key to the allowlist.
func (r *PeerRegistry) AllowPublicKey(publicKey string) {
r.allowedPublicKeyMu.Lock()
defer r.allowedPublicKeyMu.Unlock()
r.allowedPublicKeys[publicKey] = true
r.allowedPublicKeyMu.Unlock()
logging.Debug("public key added to allowlist", logging.Fields{"key": safeKeyPrefix(publicKey)})
if err := r.saveAllowedPublicKeys(); err != nil {
logging.Warn("failed to persist peer allowlist", logging.Fields{"error": err})
}
}
// registry.RevokePublicKey(peer.PublicKey)
// RevokePublicKey removes a public key from the allowlist.
func (r *PeerRegistry) RevokePublicKey(publicKey string) {
r.allowedPublicKeyMu.Lock()
defer r.allowedPublicKeyMu.Unlock()
delete(r.allowedPublicKeys, publicKey)
r.allowedPublicKeyMu.Unlock()
logging.Debug("public key removed from allowlist", logging.Fields{"key": safeKeyPrefix(publicKey)})
if err := r.saveAllowedPublicKeys(); err != nil {
logging.Warn("failed to persist peer allowlist", logging.Fields{"error": err})
}
}
// allowed := registry.IsPublicKeyAllowed(peer.PublicKey)
// IsPublicKeyAllowed checks if a public key is in the allowlist.
func (r *PeerRegistry) IsPublicKeyAllowed(publicKey string) bool {
r.allowedPublicKeyMu.RLock()
defer r.allowedPublicKeyMu.RUnlock()
return r.allowedPublicKeys[publicKey]
}
// Returns true when AuthMode is Open (all allowed), or when Allowlist mode is active
// and the peer is pre-registered or its public key is in the allowlist.
//
// allowed := registry.IsPeerAllowed(peer.ID, peer.PublicKey)
// IsPeerAllowed checks if a peer is allowed to connect based on auth mode.
// Returns true if:
// - AuthMode is Open (allow all)
// - AuthMode is Allowlist AND (peer is pre-registered OR public key is allowlisted)
func (r *PeerRegistry) IsPeerAllowed(peerID string, publicKey string) bool {
r.allowedPublicKeyMu.RLock()
authMode := r.authMode
@ -217,14 +229,12 @@ func (r *PeerRegistry) IsPeerAllowed(peerID string, publicKey string) bool {
return keyAllowed
}
// keys := registry.ListAllowedPublicKeys()
// ListAllowedPublicKeys returns all allowlisted public keys.
func (r *PeerRegistry) ListAllowedPublicKeys() []string {
return slices.Collect(r.AllowedPublicKeys())
}
// for key := range registry.AllowedPublicKeys() {
// log.Printf("allowed: %s", key[:16])
// }
// AllowedPublicKeys returns an iterator over all allowlisted public keys.
func (r *PeerRegistry) AllowedPublicKeys() iter.Seq[string] {
return func(yield func(string) bool) {
r.allowedPublicKeyMu.RLock()
@ -238,20 +248,15 @@ func (r *PeerRegistry) AllowedPublicKeys() iter.Seq[string] {
}
}
// err := registry.AddPeer(&Peer{ID: "worker-1", Address: "10.0.0.1:9091", Role: RoleWorker})
// AddPeer adds a new peer to the registry.
// Note: Persistence is debounced (writes batched every 5s). Call Close() to ensure
// all changes are flushed to disk before shutdown.
func (r *PeerRegistry) AddPeer(peer *Peer) error {
if peer == nil {
return core.E("PeerRegistry.AddPeer", "peer is nil", nil)
}
peerCopy := *peer
peer = &peerCopy
r.mu.Lock()
if peer.ID == "" {
r.mu.Unlock()
return core.E("PeerRegistry.AddPeer", "peer ID is required", nil)
return coreerr.E("PeerRegistry.AddPeer", "peer ID is required", nil)
}
// Validate peer name (P2P-LOW-3)
@ -262,7 +267,7 @@ func (r *PeerRegistry) AddPeer(peer *Peer) error {
if _, exists := r.peers[peer.ID]; exists {
r.mu.Unlock()
return core.E("PeerRegistry.AddPeer", "peer "+peer.ID+" already exists", nil)
return coreerr.E("PeerRegistry.AddPeer", "peer "+peer.ID+" already exists", nil)
}
// Set defaults
@ -270,67 +275,51 @@ func (r *PeerRegistry) AddPeer(peer *Peer) error {
peer.AddedAt = time.Now()
}
if peer.Score == 0 {
peer.Score = ScoreDefault
peer.Score = 50 // Default neutral score
}
r.peers[peer.ID] = peer
r.rebuildKDTree()
r.mu.Unlock()
r.scheduleSave()
return nil
return r.save()
}
// Persistence is debounced. Call Close() to flush before shutdown.
//
// err := registry.UpdatePeer(&Peer{ID: "worker-1", Score: 90})
// UpdatePeer updates an existing peer's information.
// Note: Persistence is debounced. Call Close() to flush before shutdown.
func (r *PeerRegistry) UpdatePeer(peer *Peer) error {
if peer == nil {
return core.E("PeerRegistry.UpdatePeer", "peer is nil", nil)
}
if peer.ID == "" {
return core.E("PeerRegistry.UpdatePeer", "peer ID is required", nil)
}
peerCopy := *peer
peer = &peerCopy
r.mu.Lock()
if _, exists := r.peers[peer.ID]; !exists {
r.mu.Unlock()
return core.E("PeerRegistry.UpdatePeer", "peer "+peer.ID+" not found", nil)
return coreerr.E("PeerRegistry.UpdatePeer", "peer "+peer.ID+" not found", nil)
}
r.peers[peer.ID] = peer
r.rebuildKDTree()
r.mu.Unlock()
r.scheduleSave()
return nil
return r.save()
}
// Persistence is debounced. Call Close() to flush before shutdown.
//
// err := registry.RemovePeer("worker-1")
// RemovePeer removes a peer from the registry.
// Note: Persistence is debounced. Call Close() to flush before shutdown.
func (r *PeerRegistry) RemovePeer(id string) error {
r.mu.Lock()
if _, exists := r.peers[id]; !exists {
r.mu.Unlock()
return core.E("PeerRegistry.RemovePeer", "peer "+id+" not found", nil)
return coreerr.E("PeerRegistry.RemovePeer", "peer "+id+" not found", nil)
}
delete(r.peers, id)
r.rebuildKDTree()
r.mu.Unlock()
r.scheduleSave()
return nil
return r.save()
}
// peer := registry.GetPeer("worker-1")
// GetPeer returns a peer by ID.
func (r *PeerRegistry) GetPeer(id string) *Peer {
r.mu.RLock()
defer r.mu.RUnlock()
@@ -340,20 +329,18 @@ func (r *PeerRegistry) GetPeer(id string) *Peer {
return nil
}
// Return a copy
peerCopy := *peer
return &peerCopy
}
// peers := registry.ListPeers()
// ListPeers returns all registered peers.
func (r *PeerRegistry) ListPeers() []*Peer {
return slices.Collect(r.Peers())
}
// Peers returns an iterator over all registered peers.
// Each peer is a copy to prevent mutation.
//
// for peer := range registry.Peers() {
// _ = peer
// }
func (r *PeerRegistry) Peers() iter.Seq[*Peer] {
return func(yield func(*Peer) bool) {
r.mu.RLock()
@@ -368,30 +355,29 @@ func (r *PeerRegistry) Peers() iter.Seq[*Peer] {
}
}
// registry.UpdateMetrics("worker-1", 42.5, 100, 3)
// UpdateMetrics updates a peer's performance metrics.
// Note: Persistence is debounced. Call Close() to flush before shutdown.
func (r *PeerRegistry) UpdateMetrics(id string, pingMilliseconds, geographicKilometres float64, hopCount int) error {
func (r *PeerRegistry) UpdateMetrics(id string, pingMS, geoKM float64, hops int) error {
r.mu.Lock()
peer, exists := r.peers[id]
if !exists {
r.mu.Unlock()
return core.E("PeerRegistry.UpdateMetrics", "peer "+id+" not found", nil)
return coreerr.E("PeerRegistry.UpdateMetrics", "peer "+id+" not found", nil)
}
peer.PingMilliseconds = pingMilliseconds
peer.GeographicKilometres = geographicKilometres
peer.Hops = hopCount
peer.PingMS = pingMS
peer.GeoKM = geoKM
peer.Hops = hops
peer.LastSeen = time.Now()
r.rebuildKDTree()
r.mu.Unlock()
r.scheduleSave()
return nil
return r.save()
}
// registry.UpdateScore("worker-1", 90)
// UpdateScore updates a peer's reliability score.
// Note: Persistence is debounced. Call Close() to flush before shutdown.
func (r *PeerRegistry) UpdateScore(id string, score float64) error {
r.mu.Lock()
@@ -399,7 +385,7 @@ func (r *PeerRegistry) UpdateScore(id string, score float64) error {
peer, exists := r.peers[id]
if !exists {
r.mu.Unlock()
return core.E("PeerRegistry.UpdateScore", "peer "+id+" not found", nil)
return coreerr.E("PeerRegistry.UpdateScore", "peer "+id+" not found", nil)
}
// Clamp score to 0-100
@@ -409,11 +395,10 @@ func (r *PeerRegistry) UpdateScore(id string, score float64) error {
r.rebuildKDTree()
r.mu.Unlock()
r.scheduleSave()
return nil
return r.save()
}
// registry.SetConnected("worker-1", true)
// SetConnected updates a peer's connection state.
func (r *PeerRegistry) SetConnected(id string, connected bool) {
r.mu.Lock()
defer r.mu.Unlock()
@@ -436,7 +421,7 @@ const (
ScoreDefault = 50.0 // Default score for new peers
)
// registry.RecordSuccess("worker-1")
// RecordSuccess records a successful interaction with a peer, improving their score.
func (r *PeerRegistry) RecordSuccess(id string) {
r.mu.Lock()
peer, exists := r.peers[id]
@@ -448,10 +433,10 @@ func (r *PeerRegistry) RecordSuccess(id string) {
peer.Score = min(peer.Score+ScoreSuccessIncrement, ScoreMaximum)
peer.LastSeen = time.Now()
r.mu.Unlock()
r.scheduleSave()
r.save()
}
// registry.RecordFailure("worker-1")
// RecordFailure records a failed interaction with a peer, reducing their score.
func (r *PeerRegistry) RecordFailure(id string) {
r.mu.Lock()
peer, exists := r.peers[id]
@@ -463,7 +448,7 @@ func (r *PeerRegistry) RecordFailure(id string) {
peer.Score = max(peer.Score-ScoreFailureDecrement, ScoreMinimum)
newScore := peer.Score
r.mu.Unlock()
r.scheduleSave()
r.save()
logging.Debug("peer score decreased", logging.Fields{
"peer_id": id,
@@ -472,7 +457,7 @@ func (r *PeerRegistry) RecordFailure(id string) {
})
}
// registry.RecordTimeout("worker-1")
// RecordTimeout records a timeout when communicating with a peer.
func (r *PeerRegistry) RecordTimeout(id string) {
r.mu.Lock()
peer, exists := r.peers[id]
@@ -484,7 +469,7 @@ func (r *PeerRegistry) RecordTimeout(id string) {
peer.Score = max(peer.Score-ScoreTimeoutDecrement, ScoreMinimum)
newScore := peer.Score
r.mu.Unlock()
r.scheduleSave()
r.save()
logging.Debug("peer score decreased", logging.Fields{
"peer_id": id,
@@ -493,13 +478,14 @@ func (r *PeerRegistry) RecordTimeout(id string) {
})
}
// peers := registry.GetPeersByScore()
// GetPeersByScore returns peers sorted by score (highest first).
func (r *PeerRegistry) GetPeersByScore() []*Peer {
r.mu.RLock()
defer r.mu.RUnlock()
peers := slices.Collect(maps.Values(r.peers))
// Sort by score descending
slices.SortFunc(peers, func(a, b *Peer) int {
if b.Score > a.Score {
return 1
@@ -510,18 +496,10 @@ func (r *PeerRegistry) GetPeersByScore() []*Peer {
return 0
})
peerCopies := make([]*Peer, 0, len(peers))
for _, peer := range peers {
peerCopy := *peer
peerCopies = append(peerCopies, &peerCopy)
}
return peerCopies
return peers
}
// for peer := range registry.PeersByScore() {
// log.Printf("peer %s score=%.0f", peer.ID, peer.Score)
// }
// PeersByScore returns an iterator over peers sorted by score (highest first).
func (r *PeerRegistry) PeersByScore() iter.Seq[*Peer] {
return func(yield func(*Peer) bool) {
peers := r.GetPeersByScore()
@@ -533,9 +511,8 @@ func (r *PeerRegistry) PeersByScore() iter.Seq[*Peer] {
}
}
// Uses Poindexter KD-tree to find the peer closest to ideal metrics (low ping, low hops, high score).
//
// peer := registry.SelectOptimalPeer()
// SelectOptimalPeer returns the best peer based on multi-factor optimization.
// Uses Poindexter KD-tree to find the peer closest to ideal metrics.
func (r *PeerRegistry) SelectOptimalPeer() *Peer {
r.mu.RLock()
defer r.mu.RUnlock()
@@ -544,7 +521,7 @@ func (r *PeerRegistry) SelectOptimalPeer() *Peer {
return nil
}
// Target: ideal peer (0 ping, 0 hops, 0 geographic distance, 100 score)
// Target: ideal peer (0 ping, 0 hops, 0 geo, 100 score)
// Score is inverted (100 - score) so lower is better in the tree
target := []float64{0, 0, 0, 0}
@@ -562,7 +539,7 @@ func (r *PeerRegistry) SelectOptimalPeer() *Peer {
return &peerCopy
}
// peers := registry.SelectNearestPeers(3)
// SelectNearestPeers returns the n best peers based on multi-factor optimization.
func (r *PeerRegistry) SelectNearestPeers(n int) []*Peer {
r.mu.RLock()
defer r.mu.RUnlock()
@@ -587,16 +564,13 @@ func (r *PeerRegistry) SelectNearestPeers(n int) []*Peer {
return peers
}
// connectedPeers := registry.GetConnectedPeers()
// GetConnectedPeers returns all currently connected peers.
func (r *PeerRegistry) GetConnectedPeers() []*Peer {
return slices.Collect(r.ConnectedPeers())
}
// ConnectedPeers returns an iterator over all currently connected peers.
// Each peer is a copy to prevent mutation.
//
// for peer := range registry.ConnectedPeers() {
// _ = peer
// }
func (r *PeerRegistry) ConnectedPeers() iter.Seq[*Peer] {
return func(yield func(*Peer) bool) {
r.mu.RLock()
@@ -613,13 +587,15 @@ func (r *PeerRegistry) ConnectedPeers() iter.Seq[*Peer] {
}
}
// n := registry.Count()
// Count returns the number of registered peers.
func (r *PeerRegistry) Count() int {
r.mu.RLock()
defer r.mu.RUnlock()
return len(r.peers)
}
// rebuildKDTree rebuilds the KD-tree from current peers.
// Must be called with lock held.
func (r *PeerRegistry) rebuildKDTree() {
if len(r.peers) == 0 {
r.kdTree = nil
@@ -628,14 +604,14 @@ func (r *PeerRegistry) rebuildKDTree() {
points := make([]poindexter.KDPoint[string], 0, len(r.peers))
for _, peer := range r.peers {
// Build a 4D point with weighted, normalised values.
// Build 4D point with weighted, normalized values
// Invert score so that higher score = lower value (better)
point := poindexter.KDPoint[string]{
ID: peer.ID,
Coords: []float64{
peer.PingMilliseconds * pingWeight,
peer.PingMS * pingWeight,
float64(peer.Hops) * hopsWeight,
peer.GeographicKilometres * geographicWeight,
peer.GeoKM * geoWeight,
(100 - peer.Score) * scoreWeight, // Invert score
},
Value: peer.ID,
@@ -654,26 +630,26 @@ func (r *PeerRegistry) rebuildKDTree() {
}
// scheduleSave schedules a debounced save operation.
// Multiple calls within peerRegistrySaveDebounceInterval will be coalesced into a single save.
// Call it after releasing r.mu so peer state and save state do not interleave.
// Multiple calls within saveDebounceInterval will be coalesced into a single save.
// Must NOT be called with r.mu held.
func (r *PeerRegistry) scheduleSave() {
r.saveMutex.Lock()
defer r.saveMutex.Unlock()
r.saveMu.Lock()
defer r.saveMu.Unlock()
r.hasPendingChanges = true
r.dirty = true
// If timer already running, let it handle the save
if r.pendingSaveTimer != nil {
if r.saveTimer != nil {
return
}
// Start a new timer
r.pendingSaveTimer = time.AfterFunc(peerRegistrySaveDebounceInterval, func() {
r.saveMutex.Lock()
r.pendingSaveTimer = nil
shouldSave := r.hasPendingChanges
r.hasPendingChanges = false
r.saveMutex.Unlock()
r.saveTimer = time.AfterFunc(saveDebounceInterval, func() {
r.saveMu.Lock()
r.saveTimer = nil
shouldSave := r.dirty
r.dirty = false
r.saveMu.Unlock()
if shouldSave {
r.mu.RLock()
@@ -691,45 +667,48 @@ func (r *PeerRegistry) scheduleSave() {
// Must be called with r.mu held (at least RLock).
func (r *PeerRegistry) saveNow() error {
// Ensure directory exists
dir := core.PathDir(r.path)
if err := filesystemEnsureDir(dir); err != nil {
return core.E("PeerRegistry.saveNow", "failed to create peers directory", err)
dir := filepath.Dir(r.path)
if err := coreio.Local.EnsureDir(dir); err != nil {
return coreerr.E("PeerRegistry.saveNow", "failed to create peers directory", err)
}
// Convert to slice for JSON
peers := slices.Collect(maps.Values(r.peers))
result := core.JSONMarshal(peers)
if !result.OK {
return core.E("PeerRegistry.saveNow", "failed to marshal peers", result.Value.(error))
data, err := json.MarshalIndent(peers, "", " ")
if err != nil {
return coreerr.E("PeerRegistry.saveNow", "failed to marshal peers", err)
}
data := result.Value.([]byte)
// Use atomic write pattern: write to temp file, then rename
tmpPath := r.path + ".tmp"
if err := filesystemWrite(tmpPath, string(data)); err != nil {
return core.E("PeerRegistry.saveNow", "failed to write peers temp file", err)
if err := coreio.Local.Write(tmpPath, string(data)); err != nil {
return coreerr.E("PeerRegistry.saveNow", "failed to write peers temp file", err)
}
if err := filesystemRename(tmpPath, r.path); err != nil {
filesystemDelete(tmpPath) // Clean up temp file
return core.E("PeerRegistry.saveNow", "failed to rename peers file", err)
if err := coreio.Local.Rename(tmpPath, r.path); err != nil {
coreio.Local.Delete(tmpPath) // Clean up temp file
return coreerr.E("PeerRegistry.saveNow", "failed to rename peers file", err)
}
return nil
}
// registry.Close()
// Close flushes any pending changes and releases resources.
func (r *PeerRegistry) Close() error {
// Cancel any pending timer and save immediately if changes are queued.
r.saveMutex.Lock()
if r.pendingSaveTimer != nil {
r.pendingSaveTimer.Stop()
r.pendingSaveTimer = nil
r.saveStopOnce.Do(func() {
close(r.stopChan)
})
// Cancel pending timer and save immediately if dirty
r.saveMu.Lock()
if r.saveTimer != nil {
r.saveTimer.Stop()
r.saveTimer = nil
}
shouldSave := r.hasPendingChanges
r.hasPendingChanges = false
r.saveMutex.Unlock()
shouldSave := r.dirty
r.dirty = false
r.saveMu.Unlock()
if shouldSave {
r.mu.RLock()
@@ -741,16 +720,90 @@ func (r *PeerRegistry) Close() error {
return nil
}
func (r *PeerRegistry) load() error {
content, err := filesystemRead(r.path)
// saveAllowedPublicKeys persists the allowlist to disk immediately.
// It keeps the allowlist in a separate sidecar file so peer persistence remains
// backwards compatible with the existing peers.json array format.
func (r *PeerRegistry) saveAllowedPublicKeys() error {
r.allowedPublicKeyMu.RLock()
keys := make([]string, 0, len(r.allowedPublicKeys))
for key := range r.allowedPublicKeys {
keys = append(keys, key)
}
r.allowedPublicKeyMu.RUnlock()
slices.Sort(keys)
dir := filepath.Dir(r.allowlistPath)
if err := coreio.Local.EnsureDir(dir); err != nil {
return coreerr.E("PeerRegistry.saveAllowedPublicKeys", "failed to create allowlist directory", err)
}
data, err := json.MarshalIndent(keys, "", " ")
if err != nil {
return core.E("PeerRegistry.load", "failed to read peers", err)
return coreerr.E("PeerRegistry.saveAllowedPublicKeys", "failed to marshal allowlist", err)
}
tmpPath := r.allowlistPath + ".tmp"
if err := coreio.Local.Write(tmpPath, string(data)); err != nil {
return coreerr.E("PeerRegistry.saveAllowedPublicKeys", "failed to write allowlist temp file", err)
}
if err := coreio.Local.Rename(tmpPath, r.allowlistPath); err != nil {
coreio.Local.Delete(tmpPath)
return coreerr.E("PeerRegistry.saveAllowedPublicKeys", "failed to rename allowlist file", err)
}
return nil
}
// loadAllowedPublicKeys loads the allowlist from disk.
func (r *PeerRegistry) loadAllowedPublicKeys() error {
if !coreio.Local.Exists(r.allowlistPath) {
return nil
}
content, err := coreio.Local.Read(r.allowlistPath)
if err != nil {
return coreerr.E("PeerRegistry.loadAllowedPublicKeys", "failed to read allowlist", err)
}
var keys []string
if err := json.Unmarshal([]byte(content), &keys); err != nil {
return coreerr.E("PeerRegistry.loadAllowedPublicKeys", "failed to unmarshal allowlist", err)
}
r.allowedPublicKeyMu.Lock()
defer r.allowedPublicKeyMu.Unlock()
r.allowedPublicKeys = make(map[string]bool, len(keys))
for _, key := range keys {
if key == "" {
continue
}
r.allowedPublicKeys[key] = true
}
return nil
}
// save is a helper that schedules a debounced save.
// Kept for backward compatibility but now debounces writes.
// Must NOT be called with r.mu held.
func (r *PeerRegistry) save() error {
r.scheduleSave()
return nil // Errors will be logged asynchronously
}
// load reads peers from disk.
func (r *PeerRegistry) load() error {
content, err := coreio.Local.Read(r.path)
if err != nil {
return coreerr.E("PeerRegistry.load", "failed to read peers", err)
}
var peers []*Peer
result := core.JSONUnmarshalString(content, &peers)
if !result.OK {
return core.E("PeerRegistry.load", "failed to unmarshal peers", result.Value.(error))
if err := json.Unmarshal([]byte(content), &peers); err != nil {
return coreerr.E("PeerRegistry.load", "failed to unmarshal peers", err)
}
r.peers = make(map[string]*Peer)
@@ -760,3 +813,5 @@ func (r *PeerRegistry) load() error {
return nil
}
// Example usage inside a connection handler


@@ -1,24 +1,35 @@
package node
import (
"os"
"path/filepath"
"slices"
"testing"
"time"
)
func setupTestPeerRegistry(t *testing.T) (*PeerRegistry, func()) {
tmpDir := t.TempDir()
peersPath := testJoinPath(tmpDir, "peers.json")
pr, err := NewPeerRegistryFromPath(peersPath)
tmpDir, err := os.MkdirTemp("", "peer-registry-test")
if err != nil {
t.Fatalf("failed to create temp dir: %v", err)
}
peersPath := filepath.Join(tmpDir, "peers.json")
pr, err := NewPeerRegistryWithPath(peersPath)
if err != nil {
os.RemoveAll(tmpDir)
t.Fatalf("failed to create peer registry: %v", err)
}
return pr, func() {}
cleanup := func() {
os.RemoveAll(tmpDir)
}
return pr, cleanup
}
func TestPeer_Registry_NewPeerRegistry_Good(t *testing.T) {
func TestPeerRegistry_NewPeerRegistry(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -27,23 +38,7 @@ func TestPeer_Registry_NewPeerRegistry_Good(t *testing.T) {
}
}
func TestPeer_Registry_NewPeerRegistryFromPath_CorruptFile_Bad(t *testing.T) {
tmpDir := t.TempDir()
peersPath := testJoinPath(tmpDir, "peers.json")
testWriteFile(t, peersPath, []byte(`{"id":"peer-1"`), 0o600)
pr, err := NewPeerRegistryFromPath(peersPath)
if err == nil {
t.Fatal("expected error when loading a corrupted peer registry")
}
if pr != nil {
t.Fatal("expected nil peer registry when persisted data is corrupted")
}
}
func TestPeer_Registry_AddPeer_Good(t *testing.T) {
func TestPeerRegistry_AddPeer(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -65,15 +60,6 @@ func TestPeer_Registry_AddPeer_Good(t *testing.T) {
t.Errorf("expected 1 peer, got %d", pr.Count())
}
peer.Name = "Mutated after add"
stored := pr.GetPeer("test-peer-1")
if stored == nil {
t.Fatal("expected peer to exist after add")
}
if stored.Name != "Test Peer" {
t.Errorf("expected stored peer to remain unchanged, got %q", stored.Name)
}
// Try to add duplicate
err = pr.AddPeer(peer)
if err == nil {
@@ -81,7 +67,7 @@ func TestPeer_Registry_AddPeer_Good(t *testing.T) {
}
}
func TestPeer_Registry_Peer_Good(t *testing.T) {
func TestPeerRegistry_GetPeer(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -111,7 +97,7 @@ func TestPeer_Registry_Peer_Good(t *testing.T) {
}
}
func TestPeer_Registry_ListPeers_Good(t *testing.T) {
func TestPeerRegistry_ListPeers(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -131,7 +117,7 @@ func TestPeer_Registry_ListPeers_Good(t *testing.T) {
}
}
func TestPeer_Registry_RemovePeer_Good(t *testing.T) {
func TestPeerRegistry_RemovePeer(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -164,7 +150,7 @@ func TestPeer_Registry_RemovePeer_Good(t *testing.T) {
}
}
func TestPeer_Registry_UpdateMetrics_Good(t *testing.T) {
func TestPeerRegistry_UpdateMetrics(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -186,18 +172,18 @@ func TestPeer_Registry_UpdateMetrics_Good(t *testing.T) {
if updated == nil {
t.Fatal("expected peer to exist")
}
if updated.PingMilliseconds != 50.5 {
t.Errorf("expected ping 50.5, got %f", updated.PingMilliseconds)
if updated.PingMS != 50.5 {
t.Errorf("expected ping 50.5, got %f", updated.PingMS)
}
if updated.GeographicKilometres != 100.2 {
t.Errorf("expected geographic distance 100.2, got %f", updated.GeographicKilometres)
if updated.GeoKM != 100.2 {
t.Errorf("expected geo 100.2, got %f", updated.GeoKM)
}
if updated.Hops != 3 {
t.Errorf("expected hops 3, got %d", updated.Hops)
}
}
func TestPeer_Registry_UpdateScore_Good(t *testing.T) {
func TestPeerRegistry_UpdateScore(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -251,7 +237,7 @@ func TestPeer_Registry_UpdateScore_Good(t *testing.T) {
}
}
func TestPeer_Registry_MarkConnected_Good(t *testing.T) {
func TestPeerRegistry_SetConnected(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -286,7 +272,7 @@ func TestPeer_Registry_MarkConnected_Good(t *testing.T) {
}
}
func TestPeer_Registry_ConnectedPeerList_Good(t *testing.T) {
func TestPeerRegistry_GetConnectedPeers(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -309,15 +295,15 @@ func TestPeer_Registry_ConnectedPeerList_Good(t *testing.T) {
}
}
func TestPeer_Registry_SelectOptimalPeer_Good(t *testing.T) {
func TestPeerRegistry_SelectOptimalPeer(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
// Add peers with different metrics
peers := []*Peer{
{ID: "opt-1", Name: "Slow Peer", PingMilliseconds: 200, Hops: 5, GeographicKilometres: 1000, Score: 50},
{ID: "opt-2", Name: "Fast Peer", PingMilliseconds: 10, Hops: 1, GeographicKilometres: 50, Score: 90},
{ID: "opt-3", Name: "Medium Peer", PingMilliseconds: 50, Hops: 2, GeographicKilometres: 200, Score: 70},
{ID: "opt-1", Name: "Slow Peer", PingMS: 200, Hops: 5, GeoKM: 1000, Score: 50},
{ID: "opt-2", Name: "Fast Peer", PingMS: 10, Hops: 1, GeoKM: 50, Score: 90},
{ID: "opt-3", Name: "Medium Peer", PingMS: 50, Hops: 2, GeoKM: 200, Score: 70},
}
for _, p := range peers {
@@ -335,15 +321,15 @@ func TestPeer_Registry_SelectOptimalPeer_Good(t *testing.T) {
}
}
func TestPeer_Registry_SelectNearestPeers_Good(t *testing.T) {
func TestPeerRegistry_SelectNearestPeers(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
peers := []*Peer{
{ID: "near-1", Name: "Peer 1", PingMilliseconds: 100, Score: 50},
{ID: "near-2", Name: "Peer 2", PingMilliseconds: 10, Score: 90},
{ID: "near-3", Name: "Peer 3", PingMilliseconds: 50, Score: 70},
{ID: "near-4", Name: "Peer 4", PingMilliseconds: 200, Score: 30},
{ID: "near-1", Name: "Peer 1", PingMS: 100, Score: 50},
{ID: "near-2", Name: "Peer 2", PingMS: 10, Score: 90},
{ID: "near-3", Name: "Peer 3", PingMS: 50, Score: 70},
{ID: "near-4", Name: "Peer 4", PingMS: 200, Score: 30},
}
for _, p := range peers {
@@ -356,12 +342,14 @@ func TestPeer_Registry_SelectNearestPeers_Good(t *testing.T) {
}
}
func TestPeer_Registry_Persistence_Good(t *testing.T) {
tmpDir := t.TempDir()
peersPath := testJoinPath(tmpDir, "peers.json")
func TestPeerRegistry_Persistence(t *testing.T) {
tmpDir, _ := os.MkdirTemp("", "persist-test")
defer os.RemoveAll(tmpDir)
peersPath := filepath.Join(tmpDir, "peers.json")
// Create and save
pr1, err := NewPeerRegistryFromPath(peersPath)
pr1, err := NewPeerRegistryWithPath(peersPath)
if err != nil {
t.Fatalf("failed to create first registry: %v", err)
}
@@ -382,7 +370,7 @@ func TestPeer_Registry_Persistence_Good(t *testing.T) {
}
// Load in new registry from same path
pr2, err := NewPeerRegistryFromPath(peersPath)
pr2, err := NewPeerRegistryWithPath(peersPath)
if err != nil {
t.Fatalf("failed to create second registry: %v", err)
}
@@ -401,9 +389,42 @@ func TestPeer_Registry_Persistence_Good(t *testing.T) {
}
}
func TestPeerRegistry_AllowlistPersistence(t *testing.T) {
tmpDir, _ := os.MkdirTemp("", "allowlist-persist-test")
defer os.RemoveAll(tmpDir)
peersPath := filepath.Join(tmpDir, "peers.json")
pr1, err := NewPeerRegistryWithPath(peersPath)
if err != nil {
t.Fatalf("failed to create first registry: %v", err)
}
key := "allowlist-key-1234567890"
pr1.AllowPublicKey(key)
if err := pr1.Close(); err != nil {
t.Fatalf("failed to close first registry: %v", err)
}
pr2, err := NewPeerRegistryWithPath(peersPath)
if err != nil {
t.Fatalf("failed to create second registry: %v", err)
}
if !pr2.IsPublicKeyAllowed(key) {
t.Fatal("expected allowlisted key to survive reload")
}
keys := pr2.ListAllowedPublicKeys()
if !slices.Contains(keys, key) {
t.Fatalf("expected allowlisted key to be listed after reload, got %v", keys)
}
}
// --- Security Feature Tests ---
func TestPeer_Registry_AuthMode_Good(t *testing.T) {
func TestPeerRegistry_AuthMode(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -425,7 +446,7 @@ func TestPeer_Registry_AuthMode_Good(t *testing.T) {
}
}
func TestPeer_Registry_PublicKeyAllowlist_Good(t *testing.T) {
func TestPeerRegistry_PublicKeyAllowlist(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -462,7 +483,7 @@ func TestPeer_Registry_PublicKeyAllowlist_Good(t *testing.T) {
}
}
func TestPeer_Registry_IsPeerAllowed_OpenMode_Good(t *testing.T) {
func TestPeerRegistry_IsPeerAllowed_OpenMode(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -478,7 +499,7 @@ func TestPeer_Registry_IsPeerAllowed_OpenMode_Good(t *testing.T) {
}
}
func TestPeer_Registry_IsPeerAllowed_AllowlistMode_Good(t *testing.T) {
func TestPeerRegistry_IsPeerAllowed_AllowlistMode(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -513,7 +534,7 @@ func TestPeer_Registry_IsPeerAllowed_AllowlistMode_Good(t *testing.T) {
}
}
func TestPeer_Registry_PeerNameValidation_Good(t *testing.T) {
func TestPeerRegistry_PeerNameValidation(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -557,7 +578,7 @@ func TestPeer_Registry_PeerNameValidation_Good(t *testing.T) {
}
}
func TestPeer_Registry_ScoreRecording_Good(t *testing.T) {
func TestPeerRegistry_ScoreRecording(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -613,7 +634,7 @@ func TestPeer_Registry_ScoreRecording_Good(t *testing.T) {
}
}
func TestPeer_Registry_PeersSortedByScore_Good(t *testing.T) {
func TestPeerRegistry_GetPeersByScore(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -643,39 +664,11 @@ func TestPeer_Registry_PeersSortedByScore_Good(t *testing.T) {
if sorted[2].ID != "low-score" {
t.Errorf("third peer should be low-score, got %s", sorted[2].ID)
}
sorted[0].Name = "Mutated"
restored := pr.GetPeer("high-score")
if restored == nil {
t.Fatal("expected high-score peer to still exist")
}
if restored.Name != "High" {
t.Errorf("expected registry peer to remain unchanged, got %q", restored.Name)
}
}
func TestPeer_Registry_NilPeerInputs_Bad(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
t.Run("AddPeer", func(t *testing.T) {
err := pr.AddPeer(nil)
if err == nil {
t.Fatal("expected error when adding nil peer")
}
})
t.Run("UpdatePeer", func(t *testing.T) {
err := pr.UpdatePeer(nil)
if err == nil {
t.Fatal("expected error when updating nil peer")
}
})
}
// --- Additional coverage tests for peer.go ---
func TestPeer_SafeKeyPrefix_Good(t *testing.T) {
func TestSafeKeyPrefix(t *testing.T) {
tests := []struct {
name string
key string
@@ -698,7 +691,7 @@ func TestPeer_SafeKeyPrefix_Good(t *testing.T) {
}
}
func TestPeer_ValidatePeerName_Good(t *testing.T) {
func TestValidatePeerName(t *testing.T) {
tests := []struct {
name string
peerName string
@@ -731,7 +724,7 @@ func TestPeer_ValidatePeerName_Good(t *testing.T) {
}
}
func TestPeer_Registry_AddPeer_EmptyID_Bad(t *testing.T) {
func TestPeerRegistry_AddPeer_EmptyID(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -742,7 +735,7 @@ func TestPeer_Registry_AddPeer_EmptyID_Bad(t *testing.T) {
}
}
func TestPeer_Registry_UpdatePeer_Good(t *testing.T) {
func TestPeerRegistry_UpdatePeer(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -773,22 +766,9 @@ func TestPeer_Registry_UpdatePeer_Good(t *testing.T) {
if updated.Score != 80 {
t.Errorf("expected score 80, got %f", updated.Score)
}
peer.Name = "Mutated after update"
peer.Score = 12
stored := pr.GetPeer("update-test")
if stored == nil {
t.Fatal("expected peer to exist after update mutation")
}
if stored.Name != "Updated" {
t.Errorf("expected stored peer name to remain Updated, got %q", stored.Name)
}
if stored.Score != 80 {
t.Errorf("expected stored peer score to remain 80, got %f", stored.Score)
}
}
func TestPeer_Registry_UpdateMetrics_NotFound_Bad(t *testing.T) {
func TestPeerRegistry_UpdateMetrics_NotFound(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -798,7 +778,7 @@ func TestPeer_Registry_UpdateMetrics_NotFound_Bad(t *testing.T) {
}
}
func TestPeer_Registry_UpdateScore_NotFound_Bad(t *testing.T) {
func TestPeerRegistry_UpdateScore_NotFound(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -808,7 +788,7 @@ func TestPeer_Registry_UpdateScore_NotFound_Bad(t *testing.T) {
}
}
func TestPeer_Registry_RecordSuccess_NotFound_Bad(t *testing.T) {
func TestPeerRegistry_RecordSuccess_NotFound(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -816,21 +796,21 @@ func TestPeer_Registry_RecordSuccess_NotFound_Bad(t *testing.T) {
pr.RecordSuccess("ghost-peer")
}
func TestPeer_Registry_RecordFailure_NotFound_Bad(t *testing.T) {
func TestPeerRegistry_RecordFailure_NotFound(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
pr.RecordFailure("ghost-peer")
}
func TestPeer_Registry_RecordTimeout_NotFound_Bad(t *testing.T) {
func TestPeerRegistry_RecordTimeout_NotFound(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
pr.RecordTimeout("ghost-peer")
}
func TestPeer_Registry_SelectOptimalPeer_EmptyRegistry_Ugly(t *testing.T) {
func TestPeerRegistry_SelectOptimalPeer_EmptyRegistry(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -840,7 +820,7 @@ func TestPeer_Registry_SelectOptimalPeer_EmptyRegistry_Ugly(t *testing.T) {
}
}
func TestPeer_Registry_SelectNearestPeers_EmptyRegistry_Ugly(t *testing.T) {
func TestPeerRegistry_SelectNearestPeers_EmptyRegistry(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -850,7 +830,7 @@ func TestPeer_Registry_SelectNearestPeers_EmptyRegistry_Ugly(t *testing.T) {
}
}
func TestPeer_Registry_MarkConnected_NonExistent_Bad(t *testing.T) {
func TestPeerRegistry_SetConnected_NonExistent(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -858,7 +838,7 @@ func TestPeer_Registry_MarkConnected_NonExistent_Bad(t *testing.T) {
pr.SetConnected("ghost-peer", true)
}
func TestPeer_Registry_Close_NoDirtyData_Ugly(t *testing.T) {
func TestPeerRegistry_Close_NoDirtyData(t *testing.T) {
pr, cleanup := setupTestPeerRegistry(t)
defer cleanup()
@@ -869,10 +849,12 @@ func TestPeer_Registry_Close_NoDirtyData_Ugly(t *testing.T) {
}
}
func TestPeer_Registry_Close_WithDirtyData_Ugly(t *testing.T) {
tmpDir := t.TempDir()
peersPath := testJoinPath(tmpDir, "peers.json")
pr, err := NewPeerRegistryFromPath(peersPath)
func TestPeerRegistry_Close_WithDirtyData(t *testing.T) {
tmpDir, _ := os.MkdirTemp("", "close-dirty-test")
defer os.RemoveAll(tmpDir)
peersPath := filepath.Join(tmpDir, "peers.json")
pr, err := NewPeerRegistryWithPath(peersPath)
if err != nil {
t.Fatalf("failed to create registry: %v", err)
}
@@ -887,7 +869,7 @@ func TestPeer_Registry_Close_WithDirtyData_Ugly(t *testing.T) {
}
// Verify data was saved
pr2, err := NewPeerRegistryFromPath(peersPath)
pr2, err := NewPeerRegistryWithPath(peersPath)
if err != nil {
t.Fatalf("failed to reload: %v", err)
}
@@ -896,10 +878,12 @@ func TestPeer_Registry_Close_WithDirtyData_Ugly(t *testing.T) {
}
}
func TestPeer_Registry_ScheduleSave_Debounce_Ugly(t *testing.T) {
tmpDir := t.TempDir()
peersPath := testJoinPath(tmpDir, "peers.json")
pr, err := NewPeerRegistryFromPath(peersPath)
func TestPeerRegistry_ScheduleSave_Debounce(t *testing.T) {
tmpDir, _ := os.MkdirTemp("", "debounce-test")
defer os.RemoveAll(tmpDir)
peersPath := filepath.Join(tmpDir, "peers.json")
pr, err := NewPeerRegistryWithPath(peersPath)
if err != nil {
t.Fatalf("failed to create registry: %v", err)
}
@@ -916,10 +900,12 @@ func TestPeer_Registry_ScheduleSave_Debounce_Ugly(t *testing.T) {
}
}
func TestPeer_Registry_SaveNow_Good(t *testing.T) {
tmpDir := t.TempDir()
peersPath := testJoinPath(tmpDir, "subdir", "peers.json")
pr, err := NewPeerRegistryFromPath(peersPath)
func TestPeerRegistry_SaveNow(t *testing.T) {
tmpDir, _ := os.MkdirTemp("", "savenow-test")
defer os.RemoveAll(tmpDir)
peersPath := filepath.Join(tmpDir, "subdir", "peers.json")
pr, err := NewPeerRegistryWithPath(peersPath)
if err != nil {
t.Fatalf("failed to create registry: %v", err)
}
@@ -935,19 +921,21 @@ func TestPeer_Registry_SaveNow_Good(t *testing.T) {
}
// Verify the file was written
if !filesystemExists(peersPath) {
if _, err := os.Stat(peersPath); os.IsNotExist(err) {
t.Error("peers.json should exist after saveNow")
}
}
func TestPeer_Registry_ScheduleSave_TimerFires_Ugly(t *testing.T) {
func TestPeerRegistry_ScheduleSave_TimerFires(t *testing.T) {
if testing.Short() {
t.Skip("skipping debounce timer test in short mode")
}
tmpDir := t.TempDir()
peersPath := testJoinPath(tmpDir, "peers.json")
pr, err := NewPeerRegistryFromPath(peersPath)
tmpDir, _ := os.MkdirTemp("", "timer-fire-test")
defer os.RemoveAll(tmpDir)
peersPath := filepath.Join(tmpDir, "peers.json")
pr, err := NewPeerRegistryWithPath(peersPath)
if err != nil {
t.Fatalf("failed to create registry: %v", err)
}
@@ -958,12 +946,12 @@ func TestPeer_Registry_ScheduleSave_TimerFires_Ugly(t *testing.T) {
time.Sleep(6 * time.Second)
// The file should have been saved by the timer
if !filesystemExists(peersPath) {
if _, err := os.Stat(peersPath); os.IsNotExist(err) {
t.Error("peers.json should exist after debounce timer fires")
}
// Reload and verify
pr2, err := NewPeerRegistryFromPath(peersPath)
pr2, err := NewPeerRegistryWithPath(peersPath)
if err != nil {
t.Fatalf("failed to reload: %v", err)
}


@@ -1,46 +1,53 @@
package node
import (
core "dappco.re/go/core"
"fmt"
coreerr "dappco.re/go/core/log"
)
// err := &ProtocolError{Code: ErrorCodeOperationFailed, Message: "start failed"}
// ProtocolError represents an error from the remote peer.
type ProtocolError struct {
Code int
Message string
}
func (e *ProtocolError) Error() string {
return core.Sprintf("remote error (%d): %s", e.Code, e.Message)
return fmt.Sprintf("remote error (%d): %s", e.Code, e.Message)
}
// handler := &ResponseHandler{}
// ResponseHandler provides helpers for handling protocol responses.
type ResponseHandler struct{}
// err := handler.ValidateResponse(resp, MessageStats)
// ValidateResponse checks if the response is valid and returns a parsed error if it's an error response.
// It checks:
// 1. If response is nil (returns error)
// 2. If response is an error message (returns ProtocolError)
// 3. If response type matches expected (returns error if not)
func (h *ResponseHandler) ValidateResponse(resp *Message, expectedType MessageType) error {
if resp == nil {
return core.E("ResponseHandler.ValidateResponse", "nil response", nil)
return coreerr.E("ResponseHandler.ValidateResponse", "nil response", nil)
}
// Check for error response
if resp.Type == MessageError {
if resp.Type == MsgError {
var errPayload ErrorPayload
if err := resp.ParsePayload(&errPayload); err != nil {
return &ProtocolError{Code: ErrorCodeUnknown, Message: "unable to parse error response"}
return &ProtocolError{Code: ErrCodeUnknown, Message: "unable to parse error response"}
}
return &ProtocolError{Code: errPayload.Code, Message: errPayload.Message}
}
// Check expected type
if resp.Type != expectedType {
return core.E("ResponseHandler.ValidateResponse", "unexpected response type: expected "+string(expectedType)+", got "+string(resp.Type), nil)
return coreerr.E("ResponseHandler.ValidateResponse", "unexpected response type: expected "+string(expectedType)+", got "+string(resp.Type), nil)
}
return nil
}
// err := handler.ParseResponse(resp, MessageStats, &stats)
// ParseResponse validates the response and parses the payload into the target.
// This combines ValidateResponse and ParsePayload into a single call.
func (h *ResponseHandler) ParseResponse(resp *Message, expectedType MessageType, target any) error {
if err := h.ValidateResponse(resp, expectedType); err != nil {
return err
@ -48,33 +55,33 @@ func (h *ResponseHandler) ParseResponse(resp *Message, expectedType MessageType,
if target != nil {
if err := resp.ParsePayload(target); err != nil {
return core.E("ResponseHandler.ParseResponse", "failed to parse "+string(expectedType)+" payload", err)
return coreerr.E("ResponseHandler.ParseResponse", "failed to parse "+string(expectedType)+" payload", err)
}
}
return nil
}
// handler := DefaultResponseHandler
// DefaultResponseHandler is the default response handler instance.
var DefaultResponseHandler = &ResponseHandler{}
// err := ValidateResponse(message, MessageStats)
// ValidateResponse is a convenience function using the default handler.
func ValidateResponse(resp *Message, expectedType MessageType) error {
return DefaultResponseHandler.ValidateResponse(resp, expectedType)
}
// err := ParseResponse(message, MessageStats, &stats)
// ParseResponse is a convenience function using the default handler.
func ParseResponse(resp *Message, expectedType MessageType, target any) error {
return DefaultResponseHandler.ParseResponse(resp, expectedType, target)
}
// ok := IsProtocolError(err)
// IsProtocolError returns true if the error is a ProtocolError.
func IsProtocolError(err error) bool {
_, ok := err.(*ProtocolError)
return ok
}
// code := GetProtocolErrorCode(err)
// GetProtocolErrorCode returns the error code if err is a ProtocolError, otherwise returns 0.
func GetProtocolErrorCode(err error) int {
if pe, ok := err.(*ProtocolError); ok {
return pe.Code


@@ -1,24 +1,23 @@
package node
import (
"fmt"
"testing"
core "dappco.re/go/core"
)
func TestProtocol_ResponseHandler_ValidateResponse_Good(t *testing.T) {
func TestResponseHandler_ValidateResponse(t *testing.T) {
handler := &ResponseHandler{}
t.Run("NilResponse", func(t *testing.T) {
err := handler.ValidateResponse(nil, MessageStats)
err := handler.ValidateResponse(nil, MsgStats)
if err == nil {
t.Error("Expected error for nil response")
}
})
t.Run("ErrorResponse", func(t *testing.T) {
errMsg, _ := NewErrorMessage("sender", "receiver", ErrorCodeOperationFailed, "operation failed", "")
err := handler.ValidateResponse(errMsg, MessageStats)
errMsg, _ := NewErrorMessage("sender", "receiver", ErrCodeOperationFailed, "operation failed", "")
err := handler.ValidateResponse(errMsg, MsgStats)
if err == nil {
t.Fatal("Expected error for error response")
}
@ -27,14 +26,14 @@ func TestProtocol_ResponseHandler_ValidateResponse_Good(t *testing.T) {
t.Errorf("Expected ProtocolError, got %T", err)
}
if GetProtocolErrorCode(err) != ErrorCodeOperationFailed {
t.Errorf("Expected code %d, got %d", ErrorCodeOperationFailed, GetProtocolErrorCode(err))
if GetProtocolErrorCode(err) != ErrCodeOperationFailed {
t.Errorf("Expected code %d, got %d", ErrCodeOperationFailed, GetProtocolErrorCode(err))
}
})
t.Run("WrongType", func(t *testing.T) {
msg, _ := NewMessage(MessagePong, "sender", "receiver", nil)
err := handler.ValidateResponse(msg, MessageStats)
msg, _ := NewMessage(MsgPong, "sender", "receiver", nil)
err := handler.ValidateResponse(msg, MsgStats)
if err == nil {
t.Error("Expected error for wrong type")
}
@ -44,15 +43,15 @@ func TestProtocol_ResponseHandler_ValidateResponse_Good(t *testing.T) {
})
t.Run("ValidResponse", func(t *testing.T) {
msg, _ := NewMessage(MessageStats, "sender", "receiver", StatsPayload{NodeID: "test"})
err := handler.ValidateResponse(msg, MessageStats)
msg, _ := NewMessage(MsgStats, "sender", "receiver", StatsPayload{NodeID: "test"})
err := handler.ValidateResponse(msg, MsgStats)
if err != nil {
t.Errorf("Unexpected error: %v", err)
}
})
}
func TestProtocol_ResponseHandler_ParseResponse_Good(t *testing.T) {
func TestResponseHandler_ParseResponse(t *testing.T) {
handler := &ResponseHandler{}
t.Run("ParseStats", func(t *testing.T) {
@ -61,10 +60,10 @@ func TestProtocol_ResponseHandler_ParseResponse_Good(t *testing.T) {
NodeName: "Test Node",
Uptime: 3600,
}
msg, _ := NewMessage(MessageStats, "sender", "receiver", payload)
msg, _ := NewMessage(MsgStats, "sender", "receiver", payload)
var parsed StatsPayload
err := handler.ParseResponse(msg, MessageStats, &parsed)
err := handler.ParseResponse(msg, MsgStats, &parsed)
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
@ -82,10 +81,10 @@ func TestProtocol_ResponseHandler_ParseResponse_Good(t *testing.T) {
Success: true,
MinerName: "xmrig-1",
}
msg, _ := NewMessage(MessageMinerAck, "sender", "receiver", payload)
msg, _ := NewMessage(MsgMinerAck, "sender", "receiver", payload)
var parsed MinerAckPayload
err := handler.ParseResponse(msg, MessageMinerAck, &parsed)
err := handler.ParseResponse(msg, MsgMinerAck, &parsed)
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
@ -99,10 +98,10 @@ func TestProtocol_ResponseHandler_ParseResponse_Good(t *testing.T) {
})
t.Run("ErrorResponse", func(t *testing.T) {
errMsg, _ := NewErrorMessage("sender", "receiver", ErrorCodeNotFound, "not found", "")
errMsg, _ := NewErrorMessage("sender", "receiver", ErrCodeNotFound, "not found", "")
var parsed StatsPayload
err := handler.ParseResponse(errMsg, MessageStats, &parsed)
err := handler.ParseResponse(errMsg, MsgStats, &parsed)
if err == nil {
t.Error("Expected error for error response")
}
@ -112,15 +111,15 @@ func TestProtocol_ResponseHandler_ParseResponse_Good(t *testing.T) {
})
t.Run("NilTarget", func(t *testing.T) {
msg, _ := NewMessage(MessagePong, "sender", "receiver", nil)
err := handler.ParseResponse(msg, MessagePong, nil)
msg, _ := NewMessage(MsgPong, "sender", "receiver", nil)
err := handler.ParseResponse(msg, MsgPong, nil)
if err != nil {
t.Errorf("Unexpected error with nil target: %v", err)
}
})
}
func TestProtocol_Error_Bad(t *testing.T) {
func TestProtocolError(t *testing.T) {
err := &ProtocolError{Code: 1001, Message: "test error"}
if err.Error() != "remote error (1001): test error" {
@ -136,17 +135,17 @@ func TestProtocol_Error_Bad(t *testing.T) {
}
}
func TestProtocol_ConvenienceFunctions_Good(t *testing.T) {
msg, _ := NewMessage(MessageStats, "sender", "receiver", StatsPayload{NodeID: "test"})
func TestConvenienceFunctions(t *testing.T) {
msg, _ := NewMessage(MsgStats, "sender", "receiver", StatsPayload{NodeID: "test"})
// Test ValidateResponse
if err := ValidateResponse(msg, MessageStats); err != nil {
if err := ValidateResponse(msg, MsgStats); err != nil {
t.Errorf("ValidateResponse failed: %v", err)
}
// Test ParseResponse
var parsed StatsPayload
if err := ParseResponse(msg, MessageStats, &parsed); err != nil {
if err := ParseResponse(msg, MsgStats, &parsed); err != nil {
t.Errorf("ParseResponse failed: %v", err)
}
if parsed.NodeID != "test" {
@ -154,8 +153,8 @@ func TestProtocol_ConvenienceFunctions_Good(t *testing.T) {
}
}
func TestProtocol_ProtocolErrorCode_NonProtocolError_Bad(t *testing.T) {
err := core.NewError("regular error")
func TestGetProtocolErrorCode_NonProtocolError(t *testing.T) {
err := fmt.Errorf("regular error")
if GetProtocolErrorCode(err) != 0 {
t.Error("Expected 0 for non-ProtocolError")
}

File diff suppressed because it is too large


@@ -1,25 +1,30 @@
package node
import (
"encoding/json"
"net/http"
"net/http/httptest"
"net/url"
"path/filepath"
"strings"
"sync"
"sync/atomic"
"testing"
"time"
core "dappco.re/go/core"
"github.com/gorilla/websocket"
)
// --- Test Helpers ---
func newTestNodeManager(t *testing.T, name string, role NodeRole) *NodeManager {
// testNode creates a NodeManager with a generated identity in a temp directory.
func testNode(t *testing.T, name string, role NodeRole) *NodeManager {
t.Helper()
dir := t.TempDir()
nm, err := NewNodeManagerFromPaths(testNodeManagerPaths(dir))
nm, err := NewNodeManagerWithPaths(
filepath.Join(dir, "private.key"),
filepath.Join(dir, "node.json"),
)
if err != nil {
t.Fatalf("create node manager %q: %v", name, err)
}
@ -29,10 +34,11 @@ func newTestNodeManager(t *testing.T, name string, role NodeRole) *NodeManager {
return nm
}
func newTestPeerRegistry(t *testing.T) *PeerRegistry {
// testRegistry creates a PeerRegistry with open auth in a temp directory.
func testRegistry(t *testing.T) *PeerRegistry {
t.Helper()
dir := t.TempDir()
reg, err := NewPeerRegistryFromPath(testJoinPath(dir, "peers.json"))
reg, err := NewPeerRegistryWithPath(filepath.Join(dir, "peers.json"))
if err != nil {
t.Fatalf("create registry: %v", err)
}
@ -62,17 +68,17 @@ func setupTestTransportPair(t *testing.T) *testTransportPair {
func setupTestTransportPairWithConfig(t *testing.T, serverCfg, clientCfg TransportConfig) *testTransportPair {
t.Helper()
serverNM := newTestNodeManager(t, "server", RoleWorker)
clientNM := newTestNodeManager(t, "client", RoleController)
serverReg := newTestPeerRegistry(t)
clientReg := newTestPeerRegistry(t)
serverNM := testNode(t, "server", RoleWorker)
clientNM := testNode(t, "client", RoleController)
serverReg := testRegistry(t)
clientReg := testRegistry(t)
serverTransport := NewTransport(serverNM, serverReg, serverCfg)
clientTransport := NewTransport(clientNM, clientReg, clientCfg)
// Use httptest.Server with the transport's WebSocket handler
mux := http.NewServeMux()
mux.HandleFunc(serverCfg.WebSocketPath, serverTransport.handleWebSocketUpgrade)
mux.HandleFunc(serverCfg.WSPath, serverTransport.handleWSUpgrade)
ts := httptest.NewServer(mux)
u, _ := url.Parse(ts.URL)
@ -118,7 +124,7 @@ func (tp *testTransportPair) connectClient(t *testing.T) *PeerConnection {
// --- Unit Tests for Sub-Components ---
func TestTransport_MessageDeduplicator_Good(t *testing.T) {
func TestMessageDeduplicator(t *testing.T) {
t.Run("MarkAndCheck", func(t *testing.T) {
d := NewMessageDeduplicator(5 * time.Minute)
@ -153,14 +159,14 @@ func TestTransport_MessageDeduplicator_Good(t *testing.T) {
}
})
t.Run("ExpiredEntriesDoNotLinger", func(t *testing.T) {
d := NewMessageDeduplicator(50 * time.Millisecond)
d.Mark("msg-1")
t.Run("ExpiredEntriesAreNotDuplicates", func(t *testing.T) {
d := NewMessageDeduplicator(25 * time.Millisecond)
d.Mark("msg-expired")
time.Sleep(75 * time.Millisecond)
time.Sleep(40 * time.Millisecond)
if d.IsDuplicate("msg-1") {
t.Error("should not be duplicate after TTL even before cleanup runs")
if d.IsDuplicate("msg-expired") {
t.Error("expired message should not remain a duplicate")
}
})
@ -180,7 +186,7 @@ func TestTransport_MessageDeduplicator_Good(t *testing.T) {
})
}
func TestTransport_PeerRateLimiter_Good(t *testing.T) {
func TestPeerRateLimiter(t *testing.T) {
t.Run("AllowUpToBurst", func(t *testing.T) {
rl := NewPeerRateLimiter(10, 5)
@ -218,7 +224,7 @@ func TestTransport_PeerRateLimiter_Good(t *testing.T) {
// --- Transport Integration Tests ---
func TestTransport_FullHandshake_Good(t *testing.T) {
func TestTransport_FullHandshake(t *testing.T) {
tp := setupTestTransportPair(t)
pc := tp.connectClient(t)
@ -230,11 +236,11 @@ func TestTransport_FullHandshake_Good(t *testing.T) {
// Allow server goroutines to register the connection
time.Sleep(50 * time.Millisecond)
if tp.Server.ConnectedPeerCount() != 1 {
t.Errorf("server connected peers: got %d, want 1", tp.Server.ConnectedPeerCount())
if tp.Server.ConnectedPeers() != 1 {
t.Errorf("server connected peers: got %d, want 1", tp.Server.ConnectedPeers())
}
if tp.Client.ConnectedPeerCount() != 1 {
t.Errorf("client connected peers: got %d, want 1", tp.Client.ConnectedPeerCount())
if tp.Client.ConnectedPeers() != 1 {
t.Errorf("client connected peers: got %d, want 1", tp.Client.ConnectedPeers())
}
// Verify peer identity was exchanged correctly
@ -248,72 +254,7 @@ func TestTransport_FullHandshake_Good(t *testing.T) {
}
}
func TestTransport_ConnectSendsAgentUserAgent_Good(t *testing.T) {
serverNM := newTestNodeManager(t, "ua-server", RoleWorker)
clientNM := newTestNodeManager(t, "ua-client", RoleController)
serverReg := newTestPeerRegistry(t)
clientReg := newTestPeerRegistry(t)
serverCfg := DefaultTransportConfig()
clientCfg := DefaultTransportConfig()
serverTransport := NewTransport(serverNM, serverReg, serverCfg)
clientTransport := NewTransport(clientNM, clientReg, clientCfg)
var capturedUserAgent atomic.Value
mux := http.NewServeMux()
mux.HandleFunc(serverCfg.WebSocketPath, func(w http.ResponseWriter, r *http.Request) {
capturedUserAgent.Store(r.Header.Get("User-Agent"))
serverTransport.handleWebSocketUpgrade(w, r)
})
ts := httptest.NewServer(mux)
t.Cleanup(func() {
clientTransport.Stop()
serverTransport.Stop()
ts.Close()
})
u, _ := url.Parse(ts.URL)
serverAddr := u.Host
peer := &Peer{
ID: serverNM.GetIdentity().ID,
Name: "server",
Address: serverAddr,
Role: RoleWorker,
}
clientReg.AddPeer(peer)
pc, err := clientTransport.Connect(peer)
if err != nil {
t.Fatalf("client connect failed: %v", err)
}
ua, ok := capturedUserAgent.Load().(string)
if !ok || ua == "" {
t.Fatal("expected user-agent to be captured during websocket upgrade")
}
if !strings.HasPrefix(ua, agentUserAgentPrefix) {
t.Fatalf("user-agent prefix: got %q, want prefix %q", ua, agentUserAgentPrefix)
}
if !strings.Contains(ua, "id="+clientNM.GetIdentity().ID) {
t.Fatalf("user-agent should include client identity, got %q", ua)
}
if pc.UserAgent != ua {
t.Fatalf("client connection user-agent: got %q, want %q", pc.UserAgent, ua)
}
serverConn := serverTransport.GetConnection(clientNM.GetIdentity().ID)
if serverConn == nil {
t.Fatal("server should retain the accepted connection")
}
if serverConn.UserAgent != ua {
t.Fatalf("server connection user-agent: got %q, want %q", serverConn.UserAgent, ua)
}
}
func TestTransport_HandshakeRejectWrongVersion_Bad(t *testing.T) {
func TestTransport_HandshakeRejectWrongVersion(t *testing.T) {
tp := setupTestTransportPair(t)
// Dial raw WebSocket and send handshake with unsupported version
@ -329,7 +270,7 @@ func TestTransport_HandshakeRejectWrongVersion_Bad(t *testing.T) {
Identity: *clientIdentity,
Version: "99.99", // Unsupported
}
msg, _ := NewMessage(MessageHandshake, clientIdentity.ID, "", payload)
msg, _ := NewMessage(MsgHandshake, clientIdentity.ID, "", payload)
data, _ := MarshalJSON(msg)
if err := conn.WriteMessage(websocket.TextMessage, data); err != nil {
@ -342,7 +283,9 @@ func TestTransport_HandshakeRejectWrongVersion_Bad(t *testing.T) {
}
var resp Message
testJSONUnmarshal(t, respData, &resp)
if err := json.Unmarshal(respData, &resp); err != nil {
t.Fatalf("unmarshal response: %v", err)
}
var ack HandshakeAckPayload
resp.ParsePayload(&ack)
@ -350,12 +293,12 @@ func TestTransport_HandshakeRejectWrongVersion_Bad(t *testing.T) {
if ack.Accepted {
t.Error("should reject incompatible protocol version")
}
if !core.Contains(ack.Reason, "incompatible protocol version") {
if !strings.Contains(ack.Reason, "incompatible protocol version") {
t.Errorf("expected version rejection reason, got: %s", ack.Reason)
}
}
func TestTransport_HandshakeRejectAllowlist_Bad(t *testing.T) {
func TestTransport_HandshakeRejectAllowlist(t *testing.T) {
tp := setupTestTransportPair(t)
// Switch server to allowlist mode WITHOUT adding client's key
@ -373,12 +316,12 @@ func TestTransport_HandshakeRejectAllowlist_Bad(t *testing.T) {
if err == nil {
t.Fatal("should reject peer not in allowlist")
}
if !core.Contains(err.Error(), "rejected") {
if !strings.Contains(err.Error(), "rejected") {
t.Errorf("expected rejection error, got: %v", err)
}
}
func TestTransport_EncryptedMessageRoundTrip_Ugly(t *testing.T) {
func TestTransport_EncryptedMessageRoundTrip(t *testing.T) {
tp := setupTestTransportPair(t)
received := make(chan *Message, 1)
@ -391,7 +334,7 @@ func TestTransport_EncryptedMessageRoundTrip_Ugly(t *testing.T) {
// Send an encrypted message from client to server
clientID := tp.ClientNode.GetIdentity().ID
serverID := tp.ServerNode.GetIdentity().ID
sentMsg, _ := NewMessage(MessagePing, clientID, serverID, PingPayload{
sentMsg, _ := NewMessage(MsgPing, clientID, serverID, PingPayload{
SentAt: time.Now().UnixMilli(),
})
@ -401,8 +344,8 @@ func TestTransport_EncryptedMessageRoundTrip_Ugly(t *testing.T) {
select {
case msg := <-received:
if msg.Type != MessagePing {
t.Errorf("type: got %s, want %s", msg.Type, MessagePing)
if msg.Type != MsgPing {
t.Errorf("type: got %s, want %s", msg.Type, MsgPing)
}
if msg.ID != sentMsg.ID {
t.Error("message ID mismatch after encrypt/decrypt round-trip")
@ -421,7 +364,7 @@ func TestTransport_EncryptedMessageRoundTrip_Ugly(t *testing.T) {
}
}
func TestTransport_MessageDedup_Good(t *testing.T) {
func TestTransport_MessageDedup(t *testing.T) {
tp := setupTestTransportPair(t)
var count atomic.Int32
@ -433,7 +376,7 @@ func TestTransport_MessageDedup_Good(t *testing.T) {
clientID := tp.ClientNode.GetIdentity().ID
serverID := tp.ServerNode.GetIdentity().ID
msg, _ := NewMessage(MessagePing, clientID, serverID, PingPayload{SentAt: time.Now().UnixMilli()})
msg, _ := NewMessage(MsgPing, clientID, serverID, PingPayload{SentAt: time.Now().UnixMilli()})
// Send the same message twice
if err := pc.Send(msg); err != nil {
@ -451,7 +394,7 @@ func TestTransport_MessageDedup_Good(t *testing.T) {
}
}
func TestTransport_RateLimiting_Good(t *testing.T) {
func TestTransport_RateLimiting(t *testing.T) {
tp := setupTestTransportPair(t)
var count atomic.Int32
@ -466,7 +409,7 @@ func TestTransport_RateLimiting_Good(t *testing.T) {
// Send 150 messages rapidly (rate limiter burst = 100)
for range 150 {
msg, _ := NewMessage(MessagePing, clientID, serverID, PingPayload{SentAt: time.Now().UnixMilli()})
msg, _ := NewMessage(MsgPing, clientID, serverID, PingPayload{SentAt: time.Now().UnixMilli()})
pc.Send(msg)
}
@ -483,17 +426,17 @@ func TestTransport_RateLimiting_Good(t *testing.T) {
}
}
func TestTransport_MaxConnectionsEnforcement_Good(t *testing.T) {
// Server with MaxConnections=1
serverNM := newTestNodeManager(t, "maxconns-server", RoleWorker)
serverReg := newTestPeerRegistry(t)
func TestTransport_MaxConnsEnforcement(t *testing.T) {
// Server with MaxConns=1
serverNM := testNode(t, "maxconns-server", RoleWorker)
serverReg := testRegistry(t)
serverCfg := DefaultTransportConfig()
serverCfg.MaxConnections = 1
serverCfg.MaxConns = 1
serverTransport := NewTransport(serverNM, serverReg, serverCfg)
mux := http.NewServeMux()
mux.HandleFunc(serverCfg.WebSocketPath, serverTransport.handleWebSocketUpgrade)
mux.HandleFunc(serverCfg.WSPath, serverTransport.handleWSUpgrade)
ts := httptest.NewServer(mux)
t.Cleanup(func() {
serverTransport.Stop()
@ -504,8 +447,8 @@ func TestTransport_MaxConnectionsEnforcement_Good(t *testing.T) {
serverAddr := u.Host
// First client connects successfully
client1NM := newTestNodeManager(t, "client1", RoleController)
client1Reg := newTestPeerRegistry(t)
client1NM := testNode(t, "client1", RoleController)
client1Reg := testRegistry(t)
client1Transport := NewTransport(client1NM, client1Reg, DefaultTransportConfig())
t.Cleanup(func() { client1Transport.Stop() })
@ -520,9 +463,9 @@ func TestTransport_MaxConnectionsEnforcement_Good(t *testing.T) {
// Allow server to register the connection
time.Sleep(50 * time.Millisecond)
// Second client should be rejected (MaxConnections=1 reached)
client2NM := newTestNodeManager(t, "client2", RoleController)
client2Reg := newTestPeerRegistry(t)
// Second client should be rejected (MaxConns=1 reached)
client2NM := testNode(t, "client2", RoleController)
client2Reg := testRegistry(t)
client2Transport := NewTransport(client2NM, client2Reg, DefaultTransportConfig())
t.Cleanup(func() { client2Transport.Stop() })
@ -531,11 +474,11 @@ func TestTransport_MaxConnectionsEnforcement_Good(t *testing.T) {
_, err = client2Transport.Connect(peer2)
if err == nil {
t.Fatal("second connection should be rejected when MaxConnections=1")
t.Fatal("second connection should be rejected when MaxConns=1")
}
}
func TestTransport_KeepaliveTimeout_Bad(t *testing.T) {
func TestTransport_KeepaliveTimeout(t *testing.T) {
// Use short keepalive settings so the test is fast
serverCfg := DefaultTransportConfig()
serverCfg.PingInterval = 100 * time.Millisecond
@ -550,8 +493,8 @@ func TestTransport_KeepaliveTimeout_Bad(t *testing.T) {
// Verify connection is established
time.Sleep(50 * time.Millisecond)
if tp.Server.ConnectedPeerCount() != 1 {
t.Fatalf("server should have 1 peer initially, got %d", tp.Server.ConnectedPeerCount())
if tp.Server.ConnectedPeers() != 1 {
t.Fatalf("server should have 1 peer initially, got %d", tp.Server.ConnectedPeers())
}
// Close the underlying WebSocket on the client side to simulate network failure.
@ -562,16 +505,16 @@ func TestTransport_KeepaliveTimeout_Bad(t *testing.T) {
if clientConn == nil {
t.Fatal("client should have connection to server")
}
clientConn.WebSocketConnection.Close()
clientConn.Conn.Close()
// Wait for server to detect and clean up
deadline := time.After(2 * time.Second)
for {
select {
case <-deadline:
t.Fatalf("server did not clean up connection: still has %d peers", tp.Server.ConnectedPeerCount())
t.Fatalf("server did not clean up connection: still has %d peers", tp.Server.ConnectedPeers())
default:
if tp.Server.ConnectedPeerCount() == 0 {
if tp.Server.ConnectedPeers() == 0 {
// Verify registry updated
peer := tp.ServerReg.GetPeer(clientID)
if peer != nil && peer.Connected {
@ -584,7 +527,7 @@ func TestTransport_KeepaliveTimeout_Bad(t *testing.T) {
}
}
func TestTransport_GracefulClose_Ugly(t *testing.T) {
func TestTransport_GracefulClose(t *testing.T) {
tp := setupTestTransportPair(t)
received := make(chan *Message, 10)
@ -597,13 +540,13 @@ func TestTransport_GracefulClose_Ugly(t *testing.T) {
// Allow connection to fully establish
time.Sleep(50 * time.Millisecond)
// Graceful close should send a MessageDisconnect before closing
// Graceful close should send a MsgDisconnect before closing
pc.GracefulClose("test shutdown", DisconnectNormal)
// Check if disconnect message was received
select {
case msg := <-received:
if msg.Type != MessageDisconnect {
if msg.Type != MsgDisconnect {
t.Errorf("expected disconnect message, got %s", msg.Type)
}
var payload DisconnectPayload
@ -619,38 +562,7 @@ func TestTransport_GracefulClose_Ugly(t *testing.T) {
}
}
func TestTransport_PeerConnectionClose_ReleasesState_Good(t *testing.T) {
tp := setupTestTransportPair(t)
pc := tp.connectClient(t)
clientID := tp.ClientNode.GetIdentity().ID
if tp.Client.GetConnection(tp.ServerNode.GetIdentity().ID) == nil {
t.Fatal("client should have an active connection before close")
}
if err := pc.Close(); err != nil {
t.Fatalf("close peer connection: %v", err)
}
deadline := time.After(2 * time.Second)
for {
select {
case <-deadline:
t.Fatal("connection state was not released after close")
default:
if tp.Client.GetConnection(tp.ServerNode.GetIdentity().ID) == nil && tp.Client.ConnectedPeerCount() == 0 {
peer := tp.ClientReg.GetPeer(clientID)
if peer != nil && peer.Connected {
t.Fatal("registry should show peer as disconnected after close")
}
return
}
time.Sleep(20 * time.Millisecond)
}
}
}
func TestTransport_ConcurrentSends_Ugly(t *testing.T) {
func TestTransport_ConcurrentSends(t *testing.T) {
tp := setupTestTransportPair(t)
var count atomic.Int32
@ -671,7 +583,7 @@ func TestTransport_ConcurrentSends_Ugly(t *testing.T) {
for range goroutines {
wg.Go(func() {
for range msgsPerGoroutine {
msg, _ := NewMessage(MessagePing, clientID, serverID, PingPayload{SentAt: time.Now().UnixMilli()})
msg, _ := NewMessage(MsgPing, clientID, serverID, PingPayload{SentAt: time.Now().UnixMilli()})
pc.Send(msg)
}
})
@ -690,10 +602,10 @@ func TestTransport_ConcurrentSends_Ugly(t *testing.T) {
// --- Additional coverage tests ---
func TestTransport_Broadcast_Good(t *testing.T) {
func TestTransport_Broadcast(t *testing.T) {
// Set up a controller with two worker peers connected.
controllerNM := newTestNodeManager(t, "broadcast-controller", RoleController)
controllerReg := newTestPeerRegistry(t)
controllerNM := testNode(t, "broadcast-controller", RoleController)
controllerReg := testRegistry(t)
controllerTransport := NewTransport(controllerNM, controllerReg, DefaultTransportConfig())
t.Cleanup(func() { controllerTransport.Stop() })
@ -728,7 +640,7 @@ func TestTransport_Broadcast_Good(t *testing.T) {
// Broadcast a message from the controller
controllerID := controllerNM.GetIdentity().ID
msg, _ := NewMessage(MessagePing, controllerID, "", PingPayload{
msg, _ := NewMessage(MsgPing, controllerID, "", PingPayload{
SentAt: time.Now().UnixMilli(),
})
@ -747,7 +659,7 @@ func TestTransport_Broadcast_Good(t *testing.T) {
}
}
func TestTransport_BroadcastExcludesSender_Good(t *testing.T) {
func TestTransport_BroadcastExcludesSender(t *testing.T) {
// Verify that Broadcast excludes the sender.
tp := setupTestTransportPair(t)
@ -764,7 +676,7 @@ func TestTransport_BroadcastExcludesSender_Good(t *testing.T) {
// connection peer ID check, not the server's own ID. Let's verify sender exclusion
// by broadcasting from the server with its own ID.
serverID := tp.ServerNode.GetIdentity().ID
msg, _ := NewMessage(MessagePing, serverID, "", PingPayload{SentAt: time.Now().UnixMilli()})
msg, _ := NewMessage(MsgPing, serverID, "", PingPayload{SentAt: time.Now().UnixMilli()})
// This broadcasts from server to all connected peers (the client).
// The server itself won't receive it back because it's not connected to itself.
@ -774,9 +686,9 @@ func TestTransport_BroadcastExcludesSender_Good(t *testing.T) {
}
}
func TestTransport_NewTransport_DefaultMaxMessageSize_Good(t *testing.T) {
nm := newTestNodeManager(t, "defaults", RoleWorker)
reg := newTestPeerRegistry(t)
func TestTransport_NewTransport_DefaultMaxMessageSize(t *testing.T) {
nm := testNode(t, "defaults", RoleWorker)
reg := testRegistry(t)
cfg := TransportConfig{
MaxMessageSize: 0, // should use default
}
@ -788,29 +700,29 @@ func TestTransport_NewTransport_DefaultMaxMessageSize_Good(t *testing.T) {
if tr.config.MaxMessageSize != 0 {
t.Errorf("config should preserve 0 value, got %d", tr.config.MaxMessageSize)
}
// The actual default is applied at usage time (readLoop, handleWebSocketUpgrade)
// The actual default is applied at usage time (readLoop, handleWSUpgrade)
}
func TestTransport_ConnectedPeerCount_Good(t *testing.T) {
func TestTransport_ConnectedPeers(t *testing.T) {
tp := setupTestTransportPair(t)
if tp.Server.ConnectedPeerCount() != 0 {
t.Errorf("expected 0 connected peers initially, got %d", tp.Server.ConnectedPeerCount())
if tp.Server.ConnectedPeers() != 0 {
t.Errorf("expected 0 connected peers initially, got %d", tp.Server.ConnectedPeers())
}
tp.connectClient(t)
time.Sleep(50 * time.Millisecond)
if tp.Server.ConnectedPeerCount() != 1 {
t.Errorf("expected 1 connected peer after connect, got %d", tp.Server.ConnectedPeerCount())
if tp.Server.ConnectedPeers() != 1 {
t.Errorf("expected 1 connected peer after connect, got %d", tp.Server.ConnectedPeers())
}
}
func TestTransport_StartAndStop_Good(t *testing.T) {
nm := newTestNodeManager(t, "start-test", RoleWorker)
reg := newTestPeerRegistry(t)
func TestTransport_StartAndStop(t *testing.T) {
nm := testNode(t, "start-test", RoleWorker)
reg := testRegistry(t)
cfg := DefaultTransportConfig()
cfg.ListenAddress = ":0" // Let OS pick a free port
cfg.ListenAddr = ":0" // Let OS pick a free port
tr := NewTransport(nm, reg, cfg)
@ -828,9 +740,9 @@ func TestTransport_StartAndStop_Good(t *testing.T) {
}
}
func TestTransport_CheckOrigin_Good(t *testing.T) {
nm := newTestNodeManager(t, "origin-test", RoleWorker)
reg := newTestPeerRegistry(t)
func TestTransport_CheckOrigin(t *testing.T) {
nm := testNode(t, "origin-test", RoleWorker)
reg := testRegistry(t)
cfg := DefaultTransportConfig()
tr := NewTransport(nm, reg, cfg)


@@ -2,15 +2,18 @@ package node
import (
"encoding/base64"
"encoding/json"
"path/filepath"
"time"
core "dappco.re/go/core"
coreerr "dappco.re/go/core/log"
"dappco.re/go/core/p2p/logging"
"github.com/adrg/xdg"
)
// var minerManager MinerManager
// MinerManager interface for the mining package integration.
// This allows the node package to interact with mining.Manager without import cycles.
type MinerManager interface {
StartMiner(minerType string, config any) (MinerInstance, error)
StopMiner(name string) error
@ -18,68 +21,68 @@ type MinerManager interface {
GetMiner(name string) (MinerInstance, error)
}
// var miner MinerInstance
// MinerInstance represents a running miner for stats collection.
type MinerInstance interface {
GetName() string
GetType() string
GetStats() (any, error)
GetConsoleHistory(lines int) []string
GetConsoleHistorySince(lines int, since time.Time) []string
}
// var profileManager ProfileManager
// ProfileManager interface for profile operations.
type ProfileManager interface {
GetProfile(id string) (any, error)
SaveProfile(profile any) error
}
// worker := NewWorker(nodeManager, transport)
// Worker handles incoming messages on a worker node.
type Worker struct {
nodeManager *NodeManager
transport *Transport
minerManager MinerManager
profileManager ProfileManager
startedAt time.Time
DeploymentDirectory string // worker.DeploymentDirectory = "/srv/p2p/deployments"
node *NodeManager
transport *Transport
minerManager MinerManager
profileManager ProfileManager
startTime time.Time
DataDir string // Base directory for deployments (defaults to xdg.DataHome)
}
// worker := NewWorker(nodeManager, transport)
func NewWorker(nodeManager *NodeManager, transport *Transport) *Worker {
// NewWorker creates a new Worker instance.
func NewWorker(node *NodeManager, transport *Transport) *Worker {
return &Worker{
nodeManager: nodeManager,
transport: transport,
startedAt: time.Now(),
DeploymentDirectory: xdg.DataHome,
node: node,
transport: transport,
startTime: time.Now(),
DataDir: xdg.DataHome,
}
}
// worker.SetMinerManager(minerManager)
// SetMinerManager sets the miner manager for handling miner operations.
func (w *Worker) SetMinerManager(manager MinerManager) {
w.minerManager = manager
}
// worker.SetProfileManager(profileManager)
// SetProfileManager sets the profile manager for handling profile operations.
func (w *Worker) SetProfileManager(manager ProfileManager) {
w.profileManager = manager
}
// worker.HandleMessage(peerConnection, message)
func (w *Worker) HandleMessage(peerConnection *PeerConnection, message *Message) {
// HandleMessage processes incoming messages and returns a response.
func (w *Worker) HandleMessage(conn *PeerConnection, msg *Message) {
var response *Message
var err error
-switch message.Type {
-case MessagePing:
-response, err = w.handlePing(message)
-case MessageGetStats:
-response, err = w.handleStats(message)
-case MessageStartMiner:
-response, err = w.handleStartMiner(message)
-case MessageStopMiner:
-response, err = w.handleStopMiner(message)
-case MessageGetLogs:
-response, err = w.handleLogs(message)
-case MessageDeploy:
-response, err = w.handleDeploy(peerConnection, message)
+switch msg.Type {
+case MsgPing:
+response, err = w.handlePing(msg)
+case MsgGetStats:
+response, err = w.handleGetStats(msg)
+case MsgStartMiner:
+response, err = w.handleStartMiner(msg)
+case MsgStopMiner:
+response, err = w.handleStopMiner(msg)
+case MsgGetLogs:
+response, err = w.handleGetLogs(msg)
+case MsgDeploy:
+response, err = w.handleDeploy(conn, msg)
default:
// Unknown message type - ignore or send error
return
@@ -87,23 +90,23 @@ func (w *Worker) HandleMessage(peerConnection *PeerConnection, message *Message)
if err != nil {
// Send error response
-identity := w.nodeManager.GetIdentity()
+identity := w.node.GetIdentity()
if identity != nil {
errMsg, _ := NewErrorMessage(
identity.ID,
-message.From,
-ErrorCodeOperationFailed,
+msg.From,
+ErrCodeOperationFailed,
err.Error(),
-message.ID,
+msg.ID,
)
-peerConnection.Send(errMsg)
+conn.Send(errMsg)
}
return
}
if response != nil {
-logging.Debug("sending response", logging.Fields{"type": response.Type, "to": message.From})
-if err := peerConnection.Send(response); err != nil {
+logging.Debug("sending response", logging.Fields{"type": response.Type, "to": msg.From})
+if err := conn.Send(response); err != nil {
logging.Error("failed to send response", logging.Fields{"error": err})
} else {
logging.Debug("response sent successfully")
@@ -111,10 +114,11 @@ func (w *Worker) HandleMessage(peerConnection *PeerConnection, message *Message)
}
}
-func (w *Worker) handlePing(message *Message) (*Message, error) {
+// handlePing responds to ping requests.
+func (w *Worker) handlePing(msg *Message) (*Message, error) {
var ping PingPayload
-if err := message.ParsePayload(&ping); err != nil {
-return nil, core.E("Worker.handlePing", "invalid ping payload", err)
+if err := msg.ParsePayload(&ping); err != nil {
+return nil, coreerr.E("Worker.handlePing", "invalid ping payload", err)
}
pong := PongPayload{
@@ -122,20 +126,21 @@ func (w *Worker) handlePing(message *Message) (*Message, error) {
ReceivedAt: time.Now().UnixMilli(),
}
-return message.Reply(MessagePong, pong)
+return msg.Reply(MsgPong, pong)
}
-func (w *Worker) handleStats(message *Message) (*Message, error) {
-identity := w.nodeManager.GetIdentity()
+// handleGetStats responds with current miner statistics.
+func (w *Worker) handleGetStats(msg *Message) (*Message, error) {
+identity := w.node.GetIdentity()
if identity == nil {
-return nil, ErrorIdentityNotInitialized
+return nil, ErrIdentityNotInitialized
}
stats := StatsPayload{
NodeID: identity.ID,
NodeName: identity.Name,
Miners: []MinerStatsItem{},
-Uptime: int64(time.Since(w.startedAt).Seconds()),
+Uptime: int64(time.Since(w.startTime).Seconds()),
}
if w.minerManager != nil {
@@ -146,20 +151,24 @@ func (w *Worker) handleStats(message *Message) (*Message, error) {
continue
}
-// Convert to MinerStatsItem - this is a simplified conversion
-// The actual implementation would need to match the mining package's stats structure
item := convertMinerStats(miner, minerStats)
stats.Miners = append(stats.Miners, item)
}
}
-return message.Reply(MessageStats, stats)
+return msg.Reply(MsgStats, stats)
}
// convertMinerStats converts miner stats to the protocol format.
func convertMinerStats(miner MinerInstance, rawStats any) MinerStatsItem {
item := MinerStatsItem{
Name: miner.GetName(),
Type: miner.GetType(),
}
// Try to extract common fields from the stats
if statsMap, ok := rawStats.(map[string]any); ok {
if hashrate, ok := statsMap["hashrate"].(float64); ok {
item.Hashrate = hashrate
@@ -184,57 +193,62 @@ func convertMinerStats(miner MinerInstance, rawStats any) MinerStatsItem {
return item
}
-func (w *Worker) handleStartMiner(message *Message) (*Message, error) {
+// handleStartMiner starts a miner with the given profile.
+func (w *Worker) handleStartMiner(msg *Message) (*Message, error) {
if w.minerManager == nil {
-return nil, ErrorMinerManagerNotConfigured
+return nil, ErrMinerManagerNotConfigured
}
var payload StartMinerPayload
-if err := message.ParsePayload(&payload); err != nil {
-return nil, core.E("Worker.handleStartMiner", "invalid start miner payload", err)
+if err := msg.ParsePayload(&payload); err != nil {
+return nil, coreerr.E("Worker.handleStartMiner", "invalid start miner payload", err)
}
// Validate miner type is provided
if payload.MinerType == "" {
-return nil, core.E("Worker.handleStartMiner", "miner type is required", nil)
+return nil, coreerr.E("Worker.handleStartMiner", "miner type is required", nil)
}
// Get the config from the profile or use the override
var config any
if payload.Config != nil {
config = payload.Config
} else if w.profileManager != nil {
profile, err := w.profileManager.GetProfile(payload.ProfileID)
if err != nil {
-return nil, core.E("Worker.handleStartMiner", "profile not found: "+payload.ProfileID, nil)
+return nil, coreerr.E("Worker.handleStartMiner", "profile not found: "+payload.ProfileID, nil)
}
config = profile
} else {
-return nil, core.E("Worker.handleStartMiner", "no config provided and no profile manager configured", nil)
+return nil, coreerr.E("Worker.handleStartMiner", "no config provided and no profile manager configured", nil)
}
// Start the miner
miner, err := w.minerManager.StartMiner(payload.MinerType, config)
if err != nil {
ack := MinerAckPayload{
Success: false,
Error: err.Error(),
}
-return message.Reply(MessageMinerAck, ack)
+return msg.Reply(MsgMinerAck, ack)
}
ack := MinerAckPayload{
Success: true,
MinerName: miner.GetName(),
}
-return message.Reply(MessageMinerAck, ack)
+return msg.Reply(MsgMinerAck, ack)
}
-func (w *Worker) handleStopMiner(message *Message) (*Message, error) {
+// handleStopMiner stops a running miner.
+func (w *Worker) handleStopMiner(msg *Message) (*Message, error) {
if w.minerManager == nil {
-return nil, ErrorMinerManagerNotConfigured
+return nil, ErrMinerManagerNotConfigured
}
var payload StopMinerPayload
-if err := message.ParsePayload(&payload); err != nil {
-return nil, core.E("Worker.handleStopMiner", "invalid stop miner payload", err)
+if err := msg.ParsePayload(&payload); err != nil {
+return nil, coreerr.E("Worker.handleStopMiner", "invalid stop miner payload", err)
}
err := w.minerManager.StopMiner(payload.MinerName)
@@ -246,19 +260,21 @@ func (w *Worker) handleStopMiner(message *Message) (*Message, error) {
ack.Error = err.Error()
}
-return message.Reply(MessageMinerAck, ack)
+return msg.Reply(MsgMinerAck, ack)
}
-func (w *Worker) handleLogs(message *Message) (*Message, error) {
+// handleGetLogs returns console logs from a miner.
+func (w *Worker) handleGetLogs(msg *Message) (*Message, error) {
if w.minerManager == nil {
-return nil, ErrorMinerManagerNotConfigured
+return nil, ErrMinerManagerNotConfigured
}
-var payload LogsRequestPayload
-if err := message.ParsePayload(&payload); err != nil {
-return nil, core.E("Worker.handleLogs", "invalid logs payload", err)
+var payload GetLogsPayload
+if err := msg.ParsePayload(&payload); err != nil {
+return nil, coreerr.E("Worker.handleGetLogs", "invalid get logs payload", err)
}
// Validate and limit the Lines parameter to prevent resource exhaustion
const maxLogLines = 10000
if payload.Lines <= 0 || payload.Lines > maxLogLines {
payload.Lines = maxLogLines
@@ -266,10 +282,15 @@ func (w *Worker) handleLogs(message *Message) (*Message, error) {
miner, err := w.minerManager.GetMiner(payload.MinerName)
if err != nil {
-return nil, core.E("Worker.handleLogs", "miner not found: "+payload.MinerName, nil)
+return nil, coreerr.E("Worker.handleGetLogs", "miner not found: "+payload.MinerName, nil)
}
-lines := miner.GetConsoleHistory(payload.Lines)
+var since time.Time
+if payload.Since > 0 {
+since = time.UnixMilli(payload.Since)
+}
+lines := miner.GetConsoleHistorySince(payload.Lines, since)
logs := LogsPayload{
MinerName: payload.MinerName,
@@ -277,13 +298,14 @@ func (w *Worker) handleLogs(message *Message) (*Message, error) {
HasMore: len(lines) >= payload.Lines,
}
-return message.Reply(MessageLogs, logs)
+return msg.Reply(MsgLogs, logs)
}
-func (w *Worker) handleDeploy(peerConnection *PeerConnection, message *Message) (*Message, error) {
+// handleDeploy handles deployment of profiles or miner bundles.
+func (w *Worker) handleDeploy(conn *PeerConnection, msg *Message) (*Message, error) {
var payload DeployPayload
-if err := message.ParsePayload(&payload); err != nil {
-return nil, core.E("Worker.handleDeploy", "invalid deploy payload", err)
+if err := msg.ParsePayload(&payload); err != nil {
+return nil, coreerr.E("Worker.handleDeploy", "invalid deploy payload", err)
}
// Reconstruct Bundle object from payload
@@ -296,24 +318,26 @@ func (w *Worker) handleDeploy(peerConnection *PeerConnection, message *Message)
// Use shared secret as password (base64 encoded)
password := ""
-if peerConnection != nil && len(peerConnection.SharedSecret) > 0 {
-password = base64.StdEncoding.EncodeToString(peerConnection.SharedSecret)
+if conn != nil && len(conn.SharedSecret) > 0 {
+password = base64.StdEncoding.EncodeToString(conn.SharedSecret)
}
switch bundle.Type {
case BundleProfile:
if w.profileManager == nil {
-return nil, core.E("Worker.handleDeploy", "profile manager not configured", nil)
+return nil, coreerr.E("Worker.handleDeploy", "profile manager not configured", nil)
}
// Decrypt and extract profile data
profileData, err := ExtractProfileBundle(bundle, password)
if err != nil {
-return nil, core.E("Worker.handleDeploy", "failed to extract profile bundle", err)
+return nil, coreerr.E("Worker.handleDeploy", "failed to extract profile bundle", err)
}
// Unmarshal into interface{} to pass to ProfileManager
var profile any
-if result := core.JSONUnmarshal(profileData, &profile); !result.OK {
-return nil, core.E("Worker.handleDeploy", "invalid profile data JSON", result.Value.(error))
+if err := json.Unmarshal(profileData, &profile); err != nil {
+return nil, coreerr.E("Worker.handleDeploy", "invalid profile data JSON", err)
}
if err := w.profileManager.SaveProfile(profile); err != nil {
@@ -322,18 +346,20 @@ func (w *Worker) handleDeploy(peerConnection *PeerConnection, message *Message)
Name: payload.Name,
Error: err.Error(),
}
-return message.Reply(MessageDeployAck, ack)
+return msg.Reply(MsgDeployAck, ack)
}
ack := DeployAckPayload{
Success: true,
Name: payload.Name,
}
-return message.Reply(MessageDeployAck, ack)
+return msg.Reply(MsgDeployAck, ack)
case BundleMiner, BundleFull:
-minersDir := core.JoinPath(w.deploymentDirectory(), "lethean-desktop", "miners")
-installDir := core.JoinPath(minersDir, payload.Name)
+// Determine installation directory
+// We use w.DataDir/lethean-desktop/miners/<bundle_name>
+minersDir := filepath.Join(w.DataDir, "lethean-desktop", "miners")
+installDir := filepath.Join(minersDir, payload.Name)
logging.Info("deploying miner bundle", logging.Fields{
"name": payload.Name,
@@ -341,15 +367,17 @@ func (w *Worker) handleDeploy(peerConnection *PeerConnection, message *Message)
"type": payload.BundleType,
})
// Extract miner bundle
minerPath, profileData, err := ExtractMinerBundle(bundle, password, installDir)
if err != nil {
-return nil, core.E("Worker.handleDeploy", "failed to extract miner bundle", err)
+return nil, coreerr.E("Worker.handleDeploy", "failed to extract miner bundle", err)
}
// If the bundle contained a profile config, save it
if len(profileData) > 0 && w.profileManager != nil {
var profile any
-if result := core.JSONUnmarshal(profileData, &profile); !result.OK {
-logging.Warn("failed to parse profile from miner bundle", logging.Fields{"error": result.Value.(error)})
+if err := json.Unmarshal(profileData, &profile); err != nil {
+logging.Warn("failed to parse profile from miner bundle", logging.Fields{"error": err})
} else {
if err := w.profileManager.SaveProfile(profile); err != nil {
logging.Warn("failed to save profile from miner bundle", logging.Fields{"error": err})
@@ -357,30 +385,26 @@ func (w *Worker) handleDeploy(peerConnection *PeerConnection, message *Message)
}
}
// Success response
ack := DeployAckPayload{
Success: true,
Name: payload.Name,
}
// Log the installation
logging.Info("miner bundle installed successfully", logging.Fields{
"name": payload.Name,
"miner_path": minerPath,
})
-return message.Reply(MessageDeployAck, ack)
+return msg.Reply(MsgDeployAck, ack)
default:
-return nil, core.E("Worker.handleDeploy", "unknown bundle type: "+payload.BundleType, nil)
+return nil, coreerr.E("Worker.handleDeploy", "unknown bundle type: "+payload.BundleType, nil)
}
}
-func (w *Worker) RegisterOnTransport() {
+// RegisterWithTransport registers the worker's message handler with the transport.
+func (w *Worker) RegisterWithTransport() {
w.transport.OnMessage(w.HandleMessage)
}
-func (w *Worker) deploymentDirectory() string {
-if w.DeploymentDirectory != "" {
-return w.DeploymentDirectory
-}
-return xdg.DataHome
-}

File diff suppressed because it is too large.


@@ -1,80 +0,0 @@
# logging
**Import:** `dappco.re/go/core/p2p/logging`
**Files:** 1
## Types
### `Level`
`type Level int`
Log severity used by `Logger`. `String` renders the level name in upper case, and `ParseLevel` accepts `debug`, `info`, `warn` or `warning`, and `error`.
### `Config`
```go
type Config struct {
Output io.Writer
Level Level
Component string
}
```
Configuration passed to `New`.
- `Output`: destination for log lines. `New` falls back to stderr when this is `nil`.
- `Level`: minimum severity that will be emitted.
- `Component`: optional component label added to each line.
### `Fields`
`type Fields map[string]any`
Structured key/value fields passed to logging calls. When multiple `Fields` values are supplied, they are merged from left to right, so later maps override earlier keys.
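The left-to-right merge can be sketched as a plain map fold. This is a self-contained illustration of the documented semantics; `mergeFields` is a hypothetical name, not an exported function of this package.

```go
package main

import "fmt"

// Fields mirrors the documented type: structured key/value log fields.
type Fields map[string]any

// mergeFields is a hypothetical helper showing the documented semantics:
// maps are merged left to right, so later maps override earlier keys.
func mergeFields(fieldMaps ...Fields) Fields {
	merged := Fields{}
	for _, m := range fieldMaps {
		for k, v := range m {
			merged[k] = v
		}
	}
	return merged
}

func main() {
	out := mergeFields(
		Fields{"peer": "a1", "attempt": 1},
		Fields{"attempt": 2}, // later map wins for "attempt"
	)
	fmt.Println(out["peer"], out["attempt"]) // a1 2
}
```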
### `Logger`
`type Logger struct { /* unexported fields */ }`
Structured logger with configurable output, severity filtering, and component scoping. Log writes are serialised by a mutex and are formatted as timestamped single-line records.
## Functions
### Top-level
| Name | Signature | Description |
| --- | --- | --- |
| `DefaultConfig` | `func DefaultConfig() Config` | Returns the default configuration: stderr output, `LevelInfo`, and no component label. |
| `New` | `func New(config Config) *Logger` | Creates a `Logger` from `config`, substituting the default stderr writer when `config.Output` is `nil`. |
| `SetGlobal` | `func SetGlobal(l *Logger)` | Replaces the package-level global logger instance. |
| `GetGlobal` | `func GetGlobal() *Logger` | Returns the current package-level global logger. |
| `SetGlobalLevel` | `func SetGlobalLevel(level Level)` | Updates the minimum severity on the current global logger. |
| `Debug` | `func Debug(msg string, fields ...Fields)` | Logs a debug message through the global logger. |
| `Info` | `func Info(msg string, fields ...Fields)` | Logs an informational message through the global logger. |
| `Warn` | `func Warn(msg string, fields ...Fields)` | Logs a warning message through the global logger. |
| `Error` | `func Error(msg string, fields ...Fields)` | Logs an error message through the global logger. |
| `Debugf` | `func Debugf(format string, args ...any)` | Formats and logs a debug message through the global logger. |
| `Infof` | `func Infof(format string, args ...any)` | Formats and logs an informational message through the global logger. |
| `Warnf` | `func Warnf(format string, args ...any)` | Formats and logs a warning message through the global logger. |
| `Errorf` | `func Errorf(format string, args ...any)` | Formats and logs an error message through the global logger. |
| `ParseLevel` | `func ParseLevel(s string) (Level, error)` | Parses a text level into `Level`. Unknown strings return `LevelInfo` plus an error. |
### `Level` methods
| Name | Signature | Description |
| --- | --- | --- |
| `String` | `func (l Level) String() string` | Returns `DEBUG`, `INFO`, `WARN`, `ERROR`, or `UNKNOWN` for out-of-range values. |
### `*Logger` methods
| Name | Signature | Description |
| --- | --- | --- |
| `ComponentLogger` | `func (l *Logger) ComponentLogger(component string) *Logger` | Returns a new logger scoped to `component`. |
| `SetLevel` | `func (l *Logger) SetLevel(level Level)` | Sets the minimum severity that the logger will emit. |
| `GetLevel` | `func (l *Logger) GetLevel() Level` | Returns the current minimum severity. |
| `Debug` | `func (l *Logger) Debug(msg string, fields ...Fields)` | Logs `msg` at debug level after merging any supplied field maps. |
| `Info` | `func (l *Logger) Info(msg string, fields ...Fields)` | Logs `msg` at info level after merging any supplied field maps. |
| `Warn` | `func (l *Logger) Warn(msg string, fields ...Fields)` | Logs `msg` at warning level after merging any supplied field maps. |
| `Error` | `func (l *Logger) Error(msg string, fields ...Fields)` | Logs `msg` at error level after merging any supplied field maps. |
| `Debugf` | `func (l *Logger) Debugf(format string, args ...any)` | Formats and logs a debug message. |
| `Infof` | `func (l *Logger) Infof(format string, args ...any)` | Formats and logs an informational message. |
| `Warnf` | `func (l *Logger) Warnf(format string, args ...any)` | Formats and logs a warning message. |
| `Errorf` | `func (l *Logger) Errorf(format string, args ...any)` | Formats and logs an error message. |

View file

@ -1,117 +0,0 @@
# levin
**Import:** `dappco.re/go/core/p2p/node/levin`
**Files:** 4
## Types
### `Connection`
```go
type Connection struct {
MaxPayloadSize uint64
ReadTimeout time.Duration
WriteTimeout time.Duration
}
```
Wrapper around `net.Conn` that reads and writes framed Levin packets.
- `MaxPayloadSize`: per-connection payload ceiling enforced by `ReadPacket`. `NewConnection` starts with the package `MaxPayloadSize` default.
- `ReadTimeout`: deadline applied before each `ReadPacket` call. `NewConnection` sets this to `DefaultReadTimeout`.
- `WriteTimeout`: deadline applied before each write. `NewConnection` sets this to `DefaultWriteTimeout`.
### `Header`
```go
type Header struct {
Signature uint64
PayloadSize uint64
ExpectResponse bool
Command uint32
ReturnCode int32
Flags uint32
ProtocolVersion uint32
}
```
Packed 33-byte Levin frame header. `EncodeHeader` writes these fields little-endian, and `DecodeHeader` validates the `Signature` and package-level `MaxPayloadSize`.
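A packed little-endian layout of 8+8+1+4+4+4+4 bytes matches the documented 33-byte size, and can be sketched as below. The `LevinSignature` value and exact field offsets are assumptions for illustration (the classic Levin magic), not constants confirmed by this package.

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

const (
	HeaderSize     = 33                 // documented fixed frame size
	LevinSignature = 0x0101010101012101 // assumed magic; not confirmed by this package
)

type Header struct {
	Signature       uint64
	PayloadSize     uint64
	ExpectResponse  bool
	Command         uint32
	ReturnCode      int32
	Flags           uint32
	ProtocolVersion uint32
}

// EncodeHeader packs the fields little-endian: 8+8+1+4+4+4+4 = 33 bytes.
func EncodeHeader(h *Header) [HeaderSize]byte {
	var buf [HeaderSize]byte
	binary.LittleEndian.PutUint64(buf[0:8], h.Signature)
	binary.LittleEndian.PutUint64(buf[8:16], h.PayloadSize)
	if h.ExpectResponse {
		buf[16] = 1
	}
	binary.LittleEndian.PutUint32(buf[17:21], h.Command)
	binary.LittleEndian.PutUint32(buf[21:25], uint32(h.ReturnCode))
	binary.LittleEndian.PutUint32(buf[25:29], h.Flags)
	binary.LittleEndian.PutUint32(buf[29:33], h.ProtocolVersion)
	return buf
}

// DecodeHeader reverses EncodeHeader and rejects a bad magic signature.
func DecodeHeader(buf [HeaderSize]byte) (Header, error) {
	h := Header{
		Signature:       binary.LittleEndian.Uint64(buf[0:8]),
		PayloadSize:     binary.LittleEndian.Uint64(buf[8:16]),
		ExpectResponse:  buf[16] != 0,
		Command:         binary.LittleEndian.Uint32(buf[17:21]),
		ReturnCode:      int32(binary.LittleEndian.Uint32(buf[21:25])),
		Flags:           binary.LittleEndian.Uint32(buf[25:29]),
		ProtocolVersion: binary.LittleEndian.Uint32(buf[29:33]),
	}
	if h.Signature != LevinSignature {
		return Header{}, errors.New("levin: bad signature")
	}
	return h, nil
}

func main() {
	in := Header{Signature: LevinSignature, PayloadSize: 42, Command: 1001, ProtocolVersion: 1}
	out, err := DecodeHeader(EncodeHeader(&in))
	fmt.Println(err, out.PayloadSize, out.Command) // <nil> 42 1001
}
```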
### `Section`
`type Section map[string]Value`
Portable-storage object used by the Levin encoder and decoder. `EncodeStorage` sorts keys alphabetically for deterministic output.
### `Value`
```go
type Value struct {
Type uint8
}
```
Tagged portable-storage value. The exported `Type` field identifies which internal scalar or array slot is populated; constructors such as `Uint64Value`, `StringValue`, and `ObjectArrayValue` create correctly-typed instances.
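The constructor-plus-accessor pattern can be sketched with two of the supported types. This is a hypothetical, trimmed-down illustration of the shape; the tag values and internal slots are invented, not this package's layout.

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative tag values only; not the package's wire constants.
const (
	TypeUint64 uint8 = 5
	TypeString uint8 = 10
)

var ErrStorageTypeMismatch = errors.New("storage type mismatch")

// Value is a tagged union: Type selects which internal slot is populated.
type Value struct {
	Type uint8
	u64  uint64
	str  []byte
}

func Uint64Value(v uint64) Value { return Value{Type: TypeUint64, u64: v} }
func StringValue(v []byte) Value { return Value{Type: TypeString, str: v} } // stored without copying

// AsUint64 returns the scalar or ErrStorageTypeMismatch, as documented.
func (v Value) AsUint64() (uint64, error) {
	if v.Type != TypeUint64 {
		return 0, ErrStorageTypeMismatch
	}
	return v.u64, nil
}

func (v Value) AsString() ([]byte, error) {
	if v.Type != TypeString {
		return nil, ErrStorageTypeMismatch
	}
	return v.str, nil
}

func main() {
	n, err := Uint64Value(42).AsUint64()
	fmt.Println(n, err) // 42 <nil>
	_, err = StringValue([]byte("hi")).AsUint64()
	fmt.Println(err) // storage type mismatch
}
```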
## Functions
### Top-level framing and storage functions
| Name | Signature | Description |
| --- | --- | --- |
| `NewConnection` | `func NewConnection(conn net.Conn) *Connection` | Wraps `conn` with Levin defaults: 100 MB payload limit, 120 s read timeout, and 30 s write timeout. |
| `EncodeHeader` | `func EncodeHeader(h *Header) [HeaderSize]byte` | Serialises `h` into the fixed 33-byte Levin header format. |
| `DecodeHeader` | `func DecodeHeader(buf [HeaderSize]byte) (Header, error)` | Parses a 33-byte header, rejecting bad magic signatures and payload sizes above the package-level limit. |
| `PackVarint` | `func PackVarint(v uint64) []byte` | Encodes `v` using the epee portable-storage varint scheme where the low two bits of the first byte encode the width. |
| `UnpackVarint` | `func UnpackVarint(buf []byte) (value uint64, bytesConsumed int, err error)` | Decodes one portable-storage varint and returns the value, consumed width, and any truncation or overflow error. |
| `EncodeStorage` | `func EncodeStorage(s Section) ([]byte, error)` | Serialises a `Section` into portable-storage binary form, including the 9-byte storage header. |
| `DecodeStorage` | `func DecodeStorage(data []byte) (Section, error)` | Deserialises portable-storage binary data, validates the storage signatures and version, and reconstructs a `Section`. |
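The varint scheme behind `PackVarint`/`UnpackVarint` can be sketched as follows. This is an assumed reimplementation of the epee encoding (width marker in the low two bits, value shifted left by two, little-endian; values above 2^62-1 are not representable), not the package's own code.

```go
package main

import (
	"errors"
	"fmt"
)

// PackVarint encodes v with the width marker in the low two bits of the
// first byte: markers 0, 1, 2, 3 select 1, 2, 4, or 8 encoded bytes.
func PackVarint(v uint64) []byte {
	switch {
	case v <= 0x3F: // fits 6 bits
		return []byte{byte(v << 2)}
	case v <= 0x3FFF: // fits 14 bits
		x := uint16(v)<<2 | 1
		return []byte{byte(x), byte(x >> 8)}
	case v <= 0x3FFFFFFF: // fits 30 bits
		x := uint32(v)<<2 | 2
		return []byte{byte(x), byte(x >> 8), byte(x >> 16), byte(x >> 24)}
	default: // fits 62 bits; larger values cannot be represented
		x := v<<2 | 3
		out := make([]byte, 8)
		for i := range out {
			out[i] = byte(x >> (8 * i))
		}
		return out
	}
}

// UnpackVarint decodes one varint, returning the value, the number of
// bytes consumed, and an error on an empty or truncated buffer.
func UnpackVarint(buf []byte) (uint64, int, error) {
	if len(buf) == 0 {
		return 0, 0, errors.New("varint: empty buffer")
	}
	width := 1 << (buf[0] & 3)
	if len(buf) < width {
		return 0, 0, errors.New("varint: truncated buffer")
	}
	var x uint64
	for i := 0; i < width; i++ {
		x |= uint64(buf[i]) << (8 * i)
	}
	return x >> 2, width, nil
}

func main() {
	buf := PackVarint(300)
	v, n, _ := UnpackVarint(buf)
	fmt.Println(len(buf), v, n) // 2 300 2
}
```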
### `Value` constructors
| Name | Signature | Description |
| --- | --- | --- |
| `Uint64Value` | `func Uint64Value(v uint64) Value` | Creates a scalar `Value` with `TypeUint64`. |
| `Uint32Value` | `func Uint32Value(v uint32) Value` | Creates a scalar `Value` with `TypeUint32`. |
| `Uint16Value` | `func Uint16Value(v uint16) Value` | Creates a scalar `Value` with `TypeUint16`. |
| `Uint8Value` | `func Uint8Value(v uint8) Value` | Creates a scalar `Value` with `TypeUint8`. |
| `Int64Value` | `func Int64Value(v int64) Value` | Creates a scalar `Value` with `TypeInt64`. |
| `Int32Value` | `func Int32Value(v int32) Value` | Creates a scalar `Value` with `TypeInt32`. |
| `Int16Value` | `func Int16Value(v int16) Value` | Creates a scalar `Value` with `TypeInt16`. |
| `Int8Value` | `func Int8Value(v int8) Value` | Creates a scalar `Value` with `TypeInt8`. |
| `BoolValue` | `func BoolValue(v bool) Value` | Creates a scalar `Value` with `TypeBool`. |
| `DoubleValue` | `func DoubleValue(v float64) Value` | Creates a scalar `Value` with `TypeDouble`. |
| `StringValue` | `func StringValue(v []byte) Value` | Creates a scalar `Value` with `TypeString`. The byte slice is stored without copying. |
| `ObjectValue` | `func ObjectValue(s Section) Value` | Creates a scalar `Value` with `TypeObject` that wraps a nested `Section`. |
| `Uint64ArrayValue` | `func Uint64ArrayValue(vs []uint64) Value` | Creates an array `Value` tagged as `ArrayFlag \| TypeUint64`. |
| `Uint32ArrayValue` | `func Uint32ArrayValue(vs []uint32) Value` | Creates an array `Value` tagged as `ArrayFlag \| TypeUint32`. |
| `StringArrayValue` | `func StringArrayValue(vs [][]byte) Value` | Creates an array `Value` tagged as `ArrayFlag \| TypeString`. |
| `ObjectArrayValue` | `func ObjectArrayValue(vs []Section) Value` | Creates an array `Value` tagged as `ArrayFlag \| TypeObject`. |
### `*Connection` methods
| Name | Signature | Description |
| --- | --- | --- |
| `WritePacket` | `func (c *Connection) WritePacket(cmd uint32, payload []byte, expectResponse bool) error` | Sends a Levin request or notification with `FlagRequest`, `ReturnOK`, and the current protocol version. Header and payload writes are serialised by an internal mutex. |
| `WriteResponse` | `func (c *Connection) WriteResponse(cmd uint32, payload []byte, returnCode int32) error` | Sends a Levin response with `FlagResponse` and the supplied return code. |
| `ReadPacket` | `func (c *Connection) ReadPacket() (Header, []byte, error)` | Applies the read deadline, reads exactly one header and payload, validates the frame, and enforces the connection-specific `MaxPayloadSize`. Empty payloads are returned as `nil` without allocation. |
| `Close` | `func (c *Connection) Close() error` | Closes the wrapped network connection. |
| `RemoteAddr` | `func (c *Connection) RemoteAddr() string` | Returns the wrapped connection's remote address string. |
### `Value` methods
| Name | Signature | Description |
| --- | --- | --- |
| `AsUint64` | `func (v Value) AsUint64() (uint64, error)` | Returns the scalar `uint64` value or `ErrStorageTypeMismatch`. |
| `AsUint32` | `func (v Value) AsUint32() (uint32, error)` | Returns the scalar `uint32` value or `ErrStorageTypeMismatch`. |
| `AsUint16` | `func (v Value) AsUint16() (uint16, error)` | Returns the scalar `uint16` value or `ErrStorageTypeMismatch`. |
| `AsUint8` | `func (v Value) AsUint8() (uint8, error)` | Returns the scalar `uint8` value or `ErrStorageTypeMismatch`. |
| `AsInt64` | `func (v Value) AsInt64() (int64, error)` | Returns the scalar `int64` value or `ErrStorageTypeMismatch`. |
| `AsInt32` | `func (v Value) AsInt32() (int32, error)` | Returns the scalar `int32` value or `ErrStorageTypeMismatch`. |
| `AsInt16` | `func (v Value) AsInt16() (int16, error)` | Returns the scalar `int16` value or `ErrStorageTypeMismatch`. |
| `AsInt8` | `func (v Value) AsInt8() (int8, error)` | Returns the scalar `int8` value or `ErrStorageTypeMismatch`. |
| `AsBool` | `func (v Value) AsBool() (bool, error)` | Returns the scalar `bool` value or `ErrStorageTypeMismatch`. |
| `AsDouble` | `func (v Value) AsDouble() (float64, error)` | Returns the scalar `float64` value or `ErrStorageTypeMismatch`. |
| `AsString` | `func (v Value) AsString() ([]byte, error)` | Returns the scalar byte-string or `ErrStorageTypeMismatch`. |
| `AsSection` | `func (v Value) AsSection() (Section, error)` | Returns the nested `Section` or `ErrStorageTypeMismatch`. |
| `AsUint64Array` | `func (v Value) AsUint64Array() ([]uint64, error)` | Returns the `[]uint64` array or `ErrStorageTypeMismatch`. |
| `AsUint32Array` | `func (v Value) AsUint32Array() ([]uint32, error)` | Returns the `[]uint32` array or `ErrStorageTypeMismatch`. |
| `AsStringArray` | `func (v Value) AsStringArray() ([][]byte, error)` | Returns the `[][]byte` array or `ErrStorageTypeMismatch`. |
| `AsSectionArray` | `func (v Value) AsSectionArray() ([]Section, error)` | Returns the `[]Section` array or `ErrStorageTypeMismatch`. |


@@ -1,237 +0,0 @@
# node
**Import:** `dappco.re/go/core/p2p/node`
**Files:** 12
## Types
### Core types
| Type | Definition | Description |
| --- | --- | --- |
| `BundleType` | `type BundleType string` | Deployment bundle kind used by `Bundle` and `BundleManifest`. |
| `Bundle` | `struct{ Type BundleType; Name string; Data []byte; Checksum string }` | Transferable deployment bundle. `Data` contains STIM-encrypted bytes or raw JSON, and `Checksum` is the SHA-256 hex digest of `Data`. |
| `BundleManifest` | `struct{ Type BundleType; Name string; Version string; MinerType string; ProfileIDs []string; CreatedAt string }` | Metadata describing the logical contents of a bundle payload. |
| `Controller` | `struct{ /* unexported fields */ }` | High-level controller client for remote peer operations. It keeps a pending-response map keyed by request ID and registers its internal response handler with the transport in `NewController`. |
| `Dispatcher` | `struct{ /* unexported fields */ }` | Concurrent-safe UEPS router. It applies the threat-score circuit breaker before dispatching to a handler map keyed by `IntentID`. |
| `IntentHandler` | `type IntentHandler func(pkt *ueps.ParsedPacket) error` | Callback signature used by `Dispatcher` for verified UEPS packets. |
| `Message` | `struct{ ID string; Type MessageType; From string; To string; Timestamp time.Time; Payload RawMessage; ReplyTo string }` | Generic P2P message envelope. `Payload` stores raw JSON, and `ReplyTo` links responses back to the originating request. |
| `MessageDeduplicator` | `struct{ /* unexported fields */ }` | TTL cache of recently seen message IDs used to suppress duplicates. |
| `MessageHandler` | `type MessageHandler func(conn *PeerConnection, msg *Message)` | Callback signature for decrypted inbound transport messages. |
| `MessageType` | `type MessageType string` | String message discriminator stored in `Message.Type`. |
| `NodeIdentity` | `struct{ ID string; Name string; PublicKey string; CreatedAt time.Time; Role NodeRole }` | Public node identity. `ID` is derived from the first 16 bytes of the SHA-256 hash of the public key. |
| `NodeManager` | `struct{ /* unexported fields */ }` | Identity and key manager that loads, generates, persists, and deletes X25519 node credentials. |
| `NodeRole` | `type NodeRole string` | Operational mode string for controller, worker, or dual-role nodes. |
| `Peer` | `struct{ ID string; Name string; PublicKey string; Address string; Role NodeRole; AddedAt time.Time; LastSeen time.Time; PingMS float64; Hops int; GeoKM float64; Score float64; Connected bool }` | Registry record for a remote node, including addressing, role, scoring metrics, and transient connection state. |
| `PeerAuthMode` | `type PeerAuthMode int` | Peer admission policy used by `PeerRegistry` when unknown peers attempt to connect. |
| `PeerConnection` | `struct{ Peer *Peer; WebSocketConnection *websocket.Conn; SharedSecret []byte; LastActivity time.Time }` | Active WebSocket session to a peer, including the negotiated shared secret and transport-owned write/close coordination. |
| `PeerRateLimiter` | `struct{ /* unexported fields */ }` | Per-peer token bucket limiter used by the transport hot path. |
| `PeerRegistry` | `struct{ /* unexported fields */ }` | Concurrent peer store with KD-tree selection, allowlist state, and debounced persistence to disk. |
| `ProtocolError` | `struct{ Code int; Message string }` | Structured remote error returned by protocol response helpers when a peer replies with `MsgError`. |
| `RawMessage` | `type RawMessage []byte` | Raw JSON payload bytes preserved without eager decoding. |
| `ResponseHandler` | `struct{}` | Helper for validating message envelopes and decoding typed responses. |
| `Transport` | `struct{ /* unexported fields */ }` | WebSocket transport that manages listeners, connections, encryption, deduplication, and shutdown coordination. |
| `TransportConfig` | `struct{ ListenAddress string; ListenAddr string; WebSocketPath string; TLSCertPath string; TLSKeyPath string; MaxConnections int; MaxMessageSize int64; PingInterval time.Duration; PongTimeout time.Duration }` | Listener, TLS, sizing, and keepalive settings for `Transport`. |
| `Worker` | `struct{ DataDir string /* plus unexported fields */ }` | Inbound command handler for worker nodes. It tracks uptime, optional miner/profile integrations, and the base directory used for deployments. |
### Payload and integration types
| Type | Definition | Description |
| --- | --- | --- |
| `DeployAckPayload` | `struct{ Success bool; Name string; Error string }` | Deployment acknowledgement with success state, optional deployed name, and optional error text. |
| `DeployPayload` | `struct{ BundleType string; Data []byte; Checksum string; Name string }` | Deployment request carrying STIM-encrypted bundle bytes (or other bundle data), checksum, and logical name. |
| `DisconnectPayload` | `struct{ Reason string; Code int }` | Disconnect notice with human-readable reason and optional disconnect code. |
| `ErrorPayload` | `struct{ Code int; Message string; Details string }` | Payload used by `MsgError` responses. |
| `LogsRequestPayload` | `struct{ MinerName string; Lines int; Since int64 }` | Request for miner console output, optionally bounded by line count and a Unix timestamp. |
| `HandshakeAckPayload` | `struct{ Identity NodeIdentity; ChallengeResponse []byte; Accepted bool; Reason string }` | Handshake reply containing the responder identity, optional challenge response, acceptance flag, and optional rejection reason. |
| `HandshakePayload` | `struct{ Identity NodeIdentity; Challenge []byte; Version string }` | Handshake request containing node identity, optional authentication challenge, and protocol version. |
| `LogsPayload` | `struct{ MinerName string; Lines []string; HasMore bool }` | Returned miner log lines plus an indicator that more lines are available. |
| `MinerAckPayload` | `struct{ Success bool; MinerName string; Error string }` | Acknowledgement for remote miner start and stop operations. |
| `MinerInstance` | `interface{ GetName() string; GetType() string; GetStats() (any, error); GetConsoleHistory(lines int) []string }` | Minimal runtime miner contract used by the worker to collect stats and logs without importing the mining package. |
| `MinerManager` | `interface{ StartMiner(minerType string, config any) (MinerInstance, error); StopMiner(name string) error; ListMiners() []MinerInstance; GetMiner(name string) (MinerInstance, error) }` | Worker-facing miner control contract. |
| `MinerStatsItem` | `struct{ Name string; Type string; Hashrate float64; Shares int; Rejected int; Uptime int; Pool string; Algorithm string; CPUThreads int }` | Protocol-facing summary of one miner's runtime statistics. |
| `PingPayload` | `struct{ SentAt int64 }` | Ping payload carrying the sender's millisecond timestamp. |
| `PongPayload` | `struct{ SentAt int64; ReceivedAt int64 }` | Ping response carrying the echoed send time and the receiver's millisecond timestamp. |
| `ProfileManager` | `interface{ GetProfile(id string) (any, error); SaveProfile(profile any) error }` | Worker-facing profile storage contract. |
| `StartMinerPayload` | `struct{ MinerType string; ProfileID string; Config RawMessage }` | Request to start a miner with an optional profile ID and raw JSON config override. |
| `StatsPayload` | `struct{ NodeID string; NodeName string; Miners []MinerStatsItem; Uptime int64 }` | Node-wide stats response with node identity fields, miner summaries, and uptime in seconds. |
| `StopMinerPayload` | `struct{ MinerName string }` | Request to stop a miner by name. |
## Functions
### Bundle, protocol, and utility functions
| Name | Signature | Description |
| --- | --- | --- |
| `CreateProfileBundle` | `func CreateProfileBundle(profileJSON []byte, name string, password string) (*Bundle, error)` | Builds a TIM containing `profileJSON`, encrypts it to STIM with `password`, and returns a `BundleProfile` bundle with a SHA-256 checksum. |
| `CreateProfileBundleUnencrypted` | `func CreateProfileBundleUnencrypted(profileJSON []byte, name string) (*Bundle, error)` | Returns a `BundleProfile` bundle whose `Data` is the raw JSON payload and whose checksum is computed over that JSON. |
| `CreateMinerBundle` | `func CreateMinerBundle(minerPath string, profileJSON []byte, name string, password string) (*Bundle, error)` | Reads a miner binary, tars it, loads it into a TIM, optionally attaches `profileJSON`, encrypts the result to STIM, and returns a `BundleMiner` bundle. |
| `ExtractProfileBundle` | `func ExtractProfileBundle(bundle *Bundle, password string) ([]byte, error)` | Verifies `bundle.Checksum`, returns raw JSON directly when `bundle.Data` already looks like JSON, otherwise decrypts STIM and returns the embedded config bytes. |
| `ExtractMinerBundle` | `func ExtractMinerBundle(bundle *Bundle, password string, destDir string) (string, []byte, error)` | Verifies checksum, decrypts STIM, extracts the root filesystem tarball into `destDir`, and returns the first executable path plus the embedded config bytes. |
| `VerifyBundle` | `func VerifyBundle(bundle *Bundle) bool` | Returns whether `bundle.Checksum` matches the SHA-256 checksum of `bundle.Data`. |
| `StreamBundle` | `func StreamBundle(bundle *Bundle, w io.Writer) error` | JSON-encodes `bundle` and writes it to `w`. |
| `ReadBundle` | `func ReadBundle(r io.Reader) (*Bundle, error)` | Reads all bytes from `r`, JSON-decodes them into a `Bundle`, and returns the result. |
| `GenerateChallenge` | `func GenerateChallenge() ([]byte, error)` | Returns a new 32-byte random authentication challenge. |
| `SignChallenge` | `func SignChallenge(challenge []byte, sharedSecret []byte) []byte` | Computes the HMAC-SHA256 signature of `challenge` using `sharedSecret`. |
| `VerifyChallenge` | `func VerifyChallenge(challenge, response, sharedSecret []byte) bool` | Recomputes the expected challenge signature and compares it to `response` with `hmac.Equal`. |
| `IsProtocolVersionSupported` | `func IsProtocolVersionSupported(version string) bool` | Returns whether `version` is present in `SupportedProtocolVersions`. |
| `MarshalJSON` | `func MarshalJSON(v any) ([]byte, error)` | Encodes `v` with the core JSON helper, restores the package's historical no-EscapeHTML behaviour, and returns a caller-owned copy of the bytes. |
| `NewMessage` | `func NewMessage(msgType MessageType, from, to string, payload any) (*Message, error)` | Creates a message with a generated UUID, current timestamp, and JSON-encoded payload. A `nil` payload leaves `Payload` empty. |
| `NewErrorMessage` | `func NewErrorMessage(from, to string, code int, message string, replyTo string) (*Message, error)` | Creates a `MsgError` response containing an `ErrorPayload` and sets `ReplyTo` to the supplied request ID. |
| `ValidateResponse` | `func ValidateResponse(resp *Message, expectedType MessageType) error` | Convenience wrapper that delegates to `DefaultResponseHandler.ValidateResponse`. |
| `ParseResponse` | `func ParseResponse(resp *Message, expectedType MessageType, target any) error` | Convenience wrapper that delegates to `DefaultResponseHandler.ParseResponse`. |
| `IsProtocolError` | `func IsProtocolError(err error) bool` | Returns whether `err` is a `*ProtocolError`. |
| `GetProtocolErrorCode` | `func GetProtocolErrorCode(err error) int` | Returns `err.(*ProtocolError).Code` when `err` is a `*ProtocolError`, otherwise `0`. |
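The challenge helpers above follow a standard HMAC-SHA256 request/response pattern. A minimal self-contained sketch of that pattern (illustrative lower-case names, not the package's actual code):

```go
package main

import (
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// generateChallenge returns 32 random bytes, mirroring GenerateChallenge.
func generateChallenge() ([]byte, error) {
	c := make([]byte, 32)
	if _, err := rand.Read(c); err != nil {
		return nil, err
	}
	return c, nil
}

// signChallenge computes HMAC-SHA256 over the challenge using sharedSecret.
func signChallenge(challenge, sharedSecret []byte) []byte {
	mac := hmac.New(sha256.New, sharedSecret)
	mac.Write(challenge)
	return mac.Sum(nil)
}

// verifyChallenge recomputes the expected signature and compares it
// in constant time, as the VerifyChallenge description above states.
func verifyChallenge(challenge, response, sharedSecret []byte) bool {
	return hmac.Equal(signChallenge(challenge, sharedSecret), response)
}

func main() {
	secret := []byte("shared-secret")
	challenge, err := generateChallenge()
	if err != nil {
		panic(err)
	}
	resp := signChallenge(challenge, secret)
	fmt.Println(verifyChallenge(challenge, resp, secret))          // true
	fmt.Println(verifyChallenge(challenge, resp, []byte("other"))) // false
}
```

`hmac.Equal` is a constant-time comparison, which is why the docs call it out explicitly rather than a plain byte comparison.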
### Constructors
| Name | Signature | Description |
| --- | --- | --- |
| `DefaultTransportConfig` | `func DefaultTransportConfig() TransportConfig` | Returns the transport defaults: `ListenAddress=:9091`, `ListenAddr=:9091`, `WebSocketPath=/ws`, `MaxConnections=100`, `MaxMessageSize=1<<20`, `PingInterval=30s`, and `PongTimeout=10s`. |
| `NewController` | `func NewController(node *NodeManager, peers *PeerRegistry, transport *Transport) *Controller` | Creates a controller, initialises its pending-response map, and installs its response handler on `transport`. |
| `NewDispatcher` | `func NewDispatcher() *Dispatcher` | Creates an empty dispatcher with a debug-level component logger named `dispatcher`. |
| `NewMessageDeduplicator` | `func NewMessageDeduplicator(ttl time.Duration) *MessageDeduplicator` | Creates a deduplicator that retains message IDs for the supplied TTL. |
| `NewNodeManager` | `func NewNodeManager() (*NodeManager, error)` | Resolves XDG key and config paths, then loads an existing identity if present. |
| `NewNodeManagerFromPaths` | `func NewNodeManagerFromPaths(keyPath, configPath string) (*NodeManager, error)` | Creates a node manager from explicit key and config paths. |
| `NewPeerRateLimiter` | `func NewPeerRateLimiter(maxTokens, refillRate int) *PeerRateLimiter` | Creates a token bucket seeded with `maxTokens` and refilled at `refillRate` tokens per second. |
| `NewPeerRegistry` | `func NewPeerRegistry() (*PeerRegistry, error)` | Resolves the XDG peers path, loads any persisted peers, and builds the selection KD-tree. |
| `NewPeerRegistryFromPath` | `func NewPeerRegistryFromPath(peersPath string) (*PeerRegistry, error)` | Creates a peer registry bound to `peersPath` with open authentication mode and an empty public-key allowlist. |
| `NewTransport` | `func NewTransport(node *NodeManager, registry *PeerRegistry, config TransportConfig) *Transport` | Creates a transport with lifecycle context, a 5-minute message deduplicator, and a WebSocket upgrader that only accepts local origins. |
| `NewWorker` | `func NewWorker(node *NodeManager, transport *Transport) *Worker` | Creates a worker, records its start time for uptime reporting, and defaults `DataDir` to `xdg.DataHome`. |
### `RawMessage` methods
| Name | Signature | Description |
| --- | --- | --- |
| `MarshalJSON` | `func (m RawMessage) MarshalJSON() ([]byte, error)` | Emits raw payload bytes unchanged, or `null` when the receiver is `nil`. |
| `UnmarshalJSON` | `func (m *RawMessage) UnmarshalJSON(data []byte) error` | Copies `data` into the receiver without decoding it. Passing a `nil` receiver returns an error. |
### `*Message` methods
| Name | Signature | Description |
| --- | --- | --- |
| `Reply` | `func (m *Message) Reply(msgType MessageType, payload any) (*Message, error)` | Creates a reply message that swaps `From` and `To` and sets `ReplyTo` to `m.ID`. |
| `ParsePayload` | `func (m *Message) ParsePayload(v any) error` | JSON-decodes `Payload` into `v`. A `nil` payload is treated as a no-op. |
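`Reply`'s From/To swap and `ReplyTo` linkage can be illustrated with a trimmed-down envelope (all names below are illustrative stand-ins for the real `Message`):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// message is a cut-down stand-in for the package's Message type.
type message struct {
	ID      string
	Type    string
	From    string
	To      string
	ReplyTo string
	Payload json.RawMessage
}

// reply mirrors (*Message).Reply: swap From and To, link via ReplyTo.
func (m *message) reply(msgType string, payload any) (*message, error) {
	data, err := json.Marshal(payload)
	if err != nil {
		return nil, err
	}
	return &message{
		ID:      "generated-id", // the real implementation generates a fresh UUID
		Type:    msgType,
		From:    m.To,
		To:      m.From,
		ReplyTo: m.ID,
		Payload: data,
	}, nil
}

func main() {
	req := &message{ID: "req-1", Type: "ping", From: "node-a", To: "node-b"}
	resp, err := req.reply("pong", map[string]int64{"SentAt": 1})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.From, resp.To, resp.ReplyTo) // node-b node-a req-1
}
```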
### `*NodeManager` methods
| Name | Signature | Description |
| --- | --- | --- |
| `HasIdentity` | `func (n *NodeManager) HasIdentity() bool` | Returns whether an identity is currently loaded in memory. |
| `GetIdentity` | `func (n *NodeManager) GetIdentity() *NodeIdentity` | Returns a copy of the loaded public identity, or `nil` when no identity is initialised. |
| `GenerateIdentity` | `func (n *NodeManager) GenerateIdentity(name string, role NodeRole) error` | Generates a new X25519 keypair, derives the node ID from the public key hash, stores the public identity, and persists both key and config to disk. |
| `DeriveSharedSecret` | `func (n *NodeManager) DeriveSharedSecret(peerPubKeyBase64 string) ([]byte, error)` | Decodes the peer public key, performs X25519 ECDH with the node private key, hashes the result with SHA-256, and returns the symmetric key material. |
| `Delete` | `func (n *NodeManager) Delete() error` | Removes persisted key/config files when they exist and clears the in-memory identity and key state. |
### `*Controller` methods
| Name | Signature | Description |
| --- | --- | --- |
| `GetRemoteStats` | `func (c *Controller) GetRemoteStats(peerID string) (*StatsPayload, error)` | Sends `MsgGetStats` to `peerID`, waits for a response, and decodes the resulting `MsgStats` payload. |
| `StartRemoteMiner` | `func (c *Controller) StartRemoteMiner(peerID, minerType, profileID string, configOverride RawMessage) error` | Validates `minerType`, sends `MsgStartMiner`, waits for `MsgMinerAck`, and returns an error when the remote ack reports failure. |
| `StopRemoteMiner` | `func (c *Controller) StopRemoteMiner(peerID, minerName string) error` | Sends `MsgStopMiner`, waits for `MsgMinerAck`, and returns an error when the remote ack reports failure. |
| `GetRemoteLogs` | `func (c *Controller) GetRemoteLogs(peerID, minerName string, lines int) ([]string, error)` | Requests `MsgLogs` from a remote miner and returns the decoded log lines. |
| `GetAllStats` | `func (c *Controller) GetAllStats() map[string]*StatsPayload` | Requests stats from every currently connected peer and returns the successful responses keyed by peer ID. |
| `PingPeer` | `func (c *Controller) PingPeer(peerID string) (float64, error)` | Sends a ping, measures round-trip time in milliseconds, and updates the peer registry metrics for that peer. |
| `ConnectToPeer` | `func (c *Controller) ConnectToPeer(peerID string) error` | Looks up `peerID` in the registry and establishes a transport connection. |
| `DisconnectFromPeer` | `func (c *Controller) DisconnectFromPeer(peerID string) error` | Gracefully closes an active transport connection for `peerID`. |
### `*Dispatcher` methods
| Name | Signature | Description |
| --- | --- | --- |
| `RegisterHandler` | `func (d *Dispatcher) RegisterHandler(intentID byte, handler IntentHandler)` | Associates `handler` with `intentID`, replacing any existing handler for that intent. |
| `Handlers` | `func (d *Dispatcher) Handlers() iter.Seq2[byte, IntentHandler]` | Returns an iterator over the currently registered intent handlers. |
| `Dispatch` | `func (d *Dispatcher) Dispatch(pkt *ueps.ParsedPacket) error` | Rejects `nil` packets, drops packets whose `ThreatScore` exceeds `ThreatScoreThreshold`, rejects unknown intents, and otherwise invokes the matching handler. |
### `*MessageDeduplicator` methods
| Name | Signature | Description |
| --- | --- | --- |
| `IsDuplicate` | `func (d *MessageDeduplicator) IsDuplicate(msgID string) bool` | Returns whether `msgID` is still present in the deduplicator's TTL window. |
| `Mark` | `func (d *MessageDeduplicator) Mark(msgID string)` | Records `msgID` with the current time. |
| `Cleanup` | `func (d *MessageDeduplicator) Cleanup()` | Removes expired message IDs whose age exceeds the configured TTL. |
### `*PeerRateLimiter` methods
| Name | Signature | Description |
| --- | --- | --- |
| `Allow` | `func (r *PeerRateLimiter) Allow() bool` | Refills tokens according to elapsed whole seconds and returns whether one token could be consumed for the current message. |
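The whole-second refill behaviour described for `Allow` can be sketched as a plain token bucket (an illustrative stand-in, not the package's code):

```go
package main

import (
	"fmt"
	"time"
)

// rateLimiter is a sketch of a token bucket refilled per elapsed whole second.
type rateLimiter struct {
	tokens     int
	maxTokens  int
	refillRate int // tokens added per second
	lastRefill time.Time
}

func newRateLimiter(maxTokens, refillRate int) *rateLimiter {
	return &rateLimiter{
		tokens:     maxTokens,
		maxTokens:  maxTokens,
		refillRate: refillRate,
		lastRefill: time.Now(),
	}
}

// allow refills by elapsed whole seconds, capped at maxTokens,
// then tries to consume one token for the current message.
func (r *rateLimiter) allow() bool {
	elapsed := int(time.Since(r.lastRefill).Seconds())
	if elapsed > 0 {
		r.tokens += elapsed * r.refillRate
		if r.tokens > r.maxTokens {
			r.tokens = r.maxTokens
		}
		r.lastRefill = r.lastRefill.Add(time.Duration(elapsed) * time.Second)
	}
	if r.tokens > 0 {
		r.tokens--
		return true
	}
	return false
}

func main() {
	rl := newRateLimiter(2, 1)
	fmt.Println(rl.allow(), rl.allow(), rl.allow()) // true true false
}
```

Advancing `lastRefill` by whole seconds (rather than resetting it to `time.Now()`) avoids losing fractional-second credit across calls; whether the real implementation does this is an assumption.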
### `*PeerRegistry` methods
| Name | Signature | Description |
| --- | --- | --- |
| `SetAuthMode` | `func (r *PeerRegistry) SetAuthMode(mode PeerAuthMode)` | Replaces the current peer admission mode. |
| `GetAuthMode` | `func (r *PeerRegistry) GetAuthMode() PeerAuthMode` | Returns the current peer admission mode. |
| `AllowPublicKey` | `func (r *PeerRegistry) AllowPublicKey(publicKey string)` | Adds `publicKey` to the explicit allowlist. |
| `RevokePublicKey` | `func (r *PeerRegistry) RevokePublicKey(publicKey string)` | Removes `publicKey` from the explicit allowlist. |
| `IsPublicKeyAllowed` | `func (r *PeerRegistry) IsPublicKeyAllowed(publicKey string) bool` | Returns whether `publicKey` is currently allowlisted. |
| `IsPeerAllowed` | `func (r *PeerRegistry) IsPeerAllowed(peerID string, publicKey string) bool` | Returns `true` in open mode, or in allowlist mode when the peer is already registered or the supplied public key is allowlisted. |
| `ListAllowedPublicKeys` | `func (r *PeerRegistry) ListAllowedPublicKeys() []string` | Returns a slice snapshot of allowlisted public keys. |
| `AllowedPublicKeys` | `func (r *PeerRegistry) AllowedPublicKeys() iter.Seq[string]` | Returns an iterator over allowlisted public keys. |
| `AddPeer` | `func (r *PeerRegistry) AddPeer(peer *Peer) error` | Validates the peer, sets `AddedAt` when zero, defaults `Score` to `50`, adds it to the registry, rebuilds the KD-tree, and schedules a debounced save. |
| `UpdatePeer` | `func (r *PeerRegistry) UpdatePeer(peer *Peer) error` | Replaces an existing peer entry, rebuilds the KD-tree, and schedules a debounced save. |
| `RemovePeer` | `func (r *PeerRegistry) RemovePeer(id string) error` | Deletes an existing peer, rebuilds the KD-tree, and schedules a debounced save. |
| `GetPeer` | `func (r *PeerRegistry) GetPeer(id string) *Peer` | Returns a copy of the peer identified by `id`, or `nil` when absent. |
| `ListPeers` | `func (r *PeerRegistry) ListPeers() []*Peer` | Returns a slice of peer copies. |
| `Peers` | `func (r *PeerRegistry) Peers() iter.Seq[*Peer]` | Returns an iterator over peer copies so callers cannot mutate registry state directly. |
| `UpdateMetrics` | `func (r *PeerRegistry) UpdateMetrics(id string, pingMS, geoKM float64, hops int) error` | Updates latency, distance, hop count, and `LastSeen`, rebuilds the KD-tree, and schedules a debounced save. |
| `UpdateScore` | `func (r *PeerRegistry) UpdateScore(id string, score float64) error` | Clamps `score` into `[0,100]`, updates the peer, rebuilds the KD-tree, and schedules a debounced save. |
| `SetConnected` | `func (r *PeerRegistry) SetConnected(id string, connected bool)` | Updates the connection flag for a peer and refreshes `LastSeen` when marking the peer connected. |
| `RecordSuccess` | `func (r *PeerRegistry) RecordSuccess(id string)` | Increases the peer score by `ScoreSuccessIncrement` up to `ScoreMaximum`, updates `LastSeen`, and schedules a save. |
| `RecordFailure` | `func (r *PeerRegistry) RecordFailure(id string)` | Decreases the peer score by `ScoreFailureDecrement` down to `ScoreMinimum` and schedules a save. |
| `RecordTimeout` | `func (r *PeerRegistry) RecordTimeout(id string)` | Decreases the peer score by `ScoreTimeoutDecrement` down to `ScoreMinimum` and schedules a save. |
| `GetPeersByScore` | `func (r *PeerRegistry) GetPeersByScore() []*Peer` | Returns peers sorted by descending score. |
| `PeersByScore` | `func (r *PeerRegistry) PeersByScore() iter.Seq[*Peer]` | Returns an iterator over peers sorted by descending score. |
| `SelectOptimalPeer` | `func (r *PeerRegistry) SelectOptimalPeer() *Peer` | Uses the KD-tree to find the peer closest to the ideal metrics vector and returns a copy of that peer. |
| `SelectNearestPeers` | `func (r *PeerRegistry) SelectNearestPeers(n int) []*Peer` | Returns copies of the `n` nearest peers from the KD-tree according to the weighted metrics. |
| `GetConnectedPeers` | `func (r *PeerRegistry) GetConnectedPeers() []*Peer` | Returns a slice of copies for peers whose `Connected` flag is true. |
| `ConnectedPeers` | `func (r *PeerRegistry) ConnectedPeers() iter.Seq[*Peer]` | Returns an iterator over connected peer copies. |
| `Count` | `func (r *PeerRegistry) Count() int` | Returns the number of registered peers. |
| `Close` | `func (r *PeerRegistry) Close() error` | Stops any pending save timer and immediately flushes dirty peer data to disk when needed. |
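`UpdateScore` and the `Record*` methods all follow the same clamp-to-range pattern, using the `[0,100]` range that `UpdateScore` documents. A standalone sketch (the real `ScoreMinimum`/`ScoreMaximum` constants and increment values live in the package):

```go
package main

import "fmt"

// Bounds taken from the UpdateScore description above.
const (
	scoreMinimum = 0.0
	scoreMaximum = 100.0
)

// adjustScore applies a delta and clamps the result into
// [scoreMinimum, scoreMaximum], the pattern RecordSuccess,
// RecordFailure, and RecordTimeout follow.
func adjustScore(score, delta float64) float64 {
	score += delta
	if score > scoreMaximum {
		return scoreMaximum
	}
	if score < scoreMinimum {
		return scoreMinimum
	}
	return score
}

func main() {
	fmt.Println(adjustScore(98, 5))   // 100 (clamped at the maximum)
	fmt.Println(adjustScore(50, -10)) // 40
	fmt.Println(adjustScore(3, -10))  // 0 (clamped at the minimum)
}
```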
### `*ResponseHandler` methods
| Name | Signature | Description |
| --- | --- | --- |
| `ValidateResponse` | `func (h *ResponseHandler) ValidateResponse(resp *Message, expectedType MessageType) error` | Rejects `nil` responses, unwraps `MsgError` into a `ProtocolError`, and checks that `resp.Type` matches `expectedType`. |
| `ParseResponse` | `func (h *ResponseHandler) ParseResponse(resp *Message, expectedType MessageType, target any) error` | Runs `ValidateResponse` and then decodes the payload into `target` when `target` is not `nil`. |
### `*Transport` methods
| Name | Signature | Description |
| --- | --- | --- |
| `Start` | `func (t *Transport) Start() error` | Starts the WebSocket listener and begins accepting inbound peer connections. |
| `Stop` | `func (t *Transport) Stop() error` | Cancels transport context, closes active connections, and shuts down the listener. |
| `OnMessage` | `func (t *Transport) OnMessage(handler MessageHandler)` | Installs the inbound message callback used after decryption. It must be set before `Start` to avoid races. |
| `Connect` | `func (t *Transport) Connect(peer *Peer) (*PeerConnection, error)` | Dials `peer`, performs the handshake, derives the shared secret, and returns the active peer connection. |
| `Send` | `func (t *Transport) Send(peerID string, msg *Message) error` | Looks up the active connection for `peerID` and sends `msg` over it. |
| `Connections` | `func (t *Transport) Connections() iter.Seq[*PeerConnection]` | Returns an iterator over active peer connections. |
| `Broadcast` | `func (t *Transport) Broadcast(msg *Message) error` | Sends `msg` to every connected peer except the sender identified by `msg.From`. |
| `GetConnection` | `func (t *Transport) GetConnection(peerID string) *PeerConnection` | Returns the active connection for `peerID`, or `nil` when not connected. |
| `ConnectedPeerCount` | `func (t *Transport) ConnectedPeerCount() int` | Returns the number of active peer connections. |
### `*PeerConnection` methods
| Name | Signature | Description |
| --- | --- | --- |
| `Send` | `func (pc *PeerConnection) Send(msg *Message) error` | Encrypts and writes a message over the WebSocket connection. |
| `Close` | `func (pc *PeerConnection) Close() error` | Closes the underlying connection once and releases transport state for that peer. |
| `GracefulClose` | `func (pc *PeerConnection) GracefulClose(reason string, code int) error` | Sends a `MsgDisconnect` notification before closing the connection. |
### `*Worker` methods
| Name | Signature | Description |
| --- | --- | --- |
| `SetMinerManager` | `func (w *Worker) SetMinerManager(manager MinerManager)` | Installs the miner manager used for start, stop, stats, and log requests. |
| `SetProfileManager` | `func (w *Worker) SetProfileManager(manager ProfileManager)` | Installs the profile manager used during deployment handling. |
| `HandleMessage` | `func (w *Worker) HandleMessage(conn *PeerConnection, msg *Message)` | Dispatches supported message types, sends normal replies on success, and emits `MsgError` responses when a handled command fails. |
| `RegisterOnTransport` | `func (w *Worker) RegisterOnTransport()` | Registers `HandleMessage` as the transport's inbound message callback. |
### `*ProtocolError` methods
| Name | Signature | Description |
| --- | --- | --- |
| `Error` | `func (e *ProtocolError) Error() string` | Formats the remote error as `remote error (<code>): <message>`. |


@@ -1,67 +0,0 @@
# ueps
**Import:** `dappco.re/go/core/p2p/ueps`
**Files:** 2
## Types
### `UEPSHeader`
```go
type UEPSHeader struct {
Version uint8
CurrentLayer uint8
TargetLayer uint8
IntentID uint8
ThreatScore uint16
}
```
Routing and integrity metadata carried in UEPS frames.
- `Version`: protocol version byte. `NewBuilder` initialises this to `0x09`.
- `CurrentLayer`: source layer byte. `NewBuilder` initialises this to `5`.
- `TargetLayer`: destination layer byte. `NewBuilder` initialises this to `5`.
- `IntentID`: semantic intent token.
- `ThreatScore`: unsigned 16-bit risk score.
### `PacketBuilder`
```go
type PacketBuilder struct {
Header UEPSHeader
Payload []byte
}
```
Mutable packet assembly state used to produce a signed UEPS frame.
- `Header`: TLV metadata written before the payload.
- `Payload`: raw payload bytes appended as the terminal TLV.
### `ParsedPacket`
```go
type ParsedPacket struct {
Header UEPSHeader
Payload []byte
}
```
Verified packet returned by `ReadAndVerify`.
- `Header`: decoded UEPS header values reconstructed from the stream.
- `Payload`: payload bytes from the `TagPayload` TLV.
## Functions
### Top-level
| Name | Signature | Description |
| --- | --- | --- |
| `NewBuilder` | `func NewBuilder(intentID uint8, payload []byte) *PacketBuilder` | Creates a packet builder with default header values (`Version=0x09`, `CurrentLayer=5`, `TargetLayer=5`, `ThreatScore=0`) and the supplied intent and payload. |
| `ReadAndVerify` | `func ReadAndVerify(r *bufio.Reader, sharedSecret []byte) (*ParsedPacket, error)` | Reads TLVs from `r` until `TagPayload`, reconstructs the signed header bytes, and verifies the HMAC-SHA256 over headers plus payload using `sharedSecret`. Missing signatures, truncated data, and HMAC mismatches return errors. |
### `*PacketBuilder` methods
| Name | Signature | Description |
| --- | --- | --- |
| `MarshalAndSign` | `func (p *PacketBuilder) MarshalAndSign(sharedSecret []byte) ([]byte, error)` | Serialises header TLVs `0x01` through `0x05`, signs those bytes plus `Payload` with HMAC-SHA256, appends the `TagHMAC` TLV, then writes the terminal `TagPayload` TLV. All TLV lengths are encoded as 2-byte big-endian unsigned integers. |


@@ -7,105 +7,120 @@ import (
"encoding/binary"
"io"
core "dappco.re/go/core"
coreerr "dappco.re/go/core/log"
)
// TLV Types
const (
TagVersion = 0x01
TagCurrentLayer = 0x02
TagTargetLayer = 0x03
TagIntent = 0x04
TagThreatScore = 0x05
TagHMAC = 0x06
TagPayload = 0xFF
TagVersion = 0x01
TagCurrentLay = 0x02
TagTargetLay = 0x03
TagIntent = 0x04
TagThreatScore = 0x05
TagHMAC = 0x06 // The Signature
TagPayload = 0xFF // The Data
)
// header := UEPSHeader{Version: 0x09, CurrentLayer: 5, TargetLayer: 5, IntentID: 0x01}
// UEPSHeader represents the conscious routing metadata
type UEPSHeader struct {
Version uint8
Version uint8 // Default 0x09
CurrentLayer uint8
TargetLayer uint8
IntentID uint8
ThreatScore uint16
IntentID uint8 // Semantic Token
ThreatScore uint16 // 0-65535
}
// builder := NewBuilder(0x20, []byte("hello"))
// PacketBuilder helps construct a signed UEPS frame
type PacketBuilder struct {
Header UEPSHeader
Payload []byte
}
// builder := NewBuilder(0x20, []byte("hello"))
// NewBuilder creates a packet context for a specific intent
func NewBuilder(intentID uint8, payload []byte) *PacketBuilder {
return &PacketBuilder{
Header: UEPSHeader{
Version: 0x09,
CurrentLayer: 5,
TargetLayer: 5,
Version: 0x09, // IPv9
CurrentLayer: 5, // Application
TargetLayer: 5, // Application
IntentID: intentID,
ThreatScore: 0,
ThreatScore: 0, // Assumed innocent until proven guilty
},
Payload: payload,
}
}
// frame, err := builder.MarshalAndSign(sharedSecret)
// MarshalAndSign generates the final byte stream using the shared secret
func (p *PacketBuilder) MarshalAndSign(sharedSecret []byte) ([]byte, error) {
buffer := new(bytes.Buffer)
buf := new(bytes.Buffer)
if err := writeTLV(buffer, TagVersion, []byte{p.Header.Version}); err != nil {
// 1. Write Standard Header Tags (0x01 - 0x05)
// We write these first because they are part of what we sign.
if err := writeTLV(buf, TagVersion, []byte{p.Header.Version}); err != nil {
return nil, err
}
if err := writeTLV(buffer, TagCurrentLayer, []byte{p.Header.CurrentLayer}); err != nil {
if err := writeTLV(buf, TagCurrentLay, []byte{p.Header.CurrentLayer}); err != nil {
return nil, err
}
if err := writeTLV(buffer, TagTargetLayer, []byte{p.Header.TargetLayer}); err != nil {
if err := writeTLV(buf, TagTargetLay, []byte{p.Header.TargetLayer}); err != nil {
return nil, err
}
if err := writeTLV(buffer, TagIntent, []byte{p.Header.IntentID}); err != nil {
return nil, err
}
threatScoreBytes := make([]byte, 2)
binary.BigEndian.PutUint16(threatScoreBytes, p.Header.ThreatScore)
if err := writeTLV(buffer, TagThreatScore, threatScoreBytes); err != nil {
if err := writeTLV(buf, TagIntent, []byte{p.Header.IntentID}); err != nil {
return nil, err
}
// Threat Score is uint16, needs binary packing
tsBuf := make([]byte, 2)
binary.BigEndian.PutUint16(tsBuf, p.Header.ThreatScore)
if err := writeTLV(buf, TagThreatScore, tsBuf); err != nil {
return nil, err
}
// 2. Calculate HMAC
// The signature covers: Existing Header TLVs + The Payload
// It does NOT cover the HMAC TLV tag itself (obviously)
mac := hmac.New(sha256.New, sharedSecret)
mac.Write(buffer.Bytes())
mac.Write(p.Payload)
mac.Write(buf.Bytes()) // The headers so far
mac.Write(p.Payload) // The data
signature := mac.Sum(nil)
if err := writeTLV(buffer, TagHMAC, signature); err != nil {
// 3. Write HMAC TLV (0x06)
// Length is 32 bytes for SHA256
if err := writeTLV(buf, TagHMAC, signature); err != nil {
return nil, err
}
if err := writeTLV(buffer, TagPayload, p.Payload); err != nil {
// 4. Write Payload TLV (0xFF)
// Fixed: Now uses writeTLV which provides a 2-byte length prefix.
// This prevents the io.ReadAll DoS and allows multiple packets in a stream.
if err := writeTLV(buf, TagPayload, p.Payload); err != nil {
return nil, err
}
return buffer.Bytes(), nil
return buf.Bytes(), nil
}
// writeTLV(&buffer, TagPayload, []byte("hello"))
func writeTLV(writer io.Writer, tag uint8, value []byte) error {
// Helper to write a simple TLV.
// Now uses 2-byte big-endian length (uint16) to support up to 64KB payloads.
func writeTLV(w io.Writer, tag uint8, value []byte) error {
// Check length constraint (2 byte length = max 65535 bytes)
if len(value) > 65535 {
return core.E("ueps.writeTLV", "TLV value too large for 2-byte length header", nil)
return coreerr.E("ueps.writeTLV", "TLV value too large for 2-byte length header", nil)
}
if _, err := writer.Write([]byte{tag}); err != nil {
if _, err := w.Write([]byte{tag}); err != nil {
return err
}
lenBuf := make([]byte, 2)
binary.BigEndian.PutUint16(lenBuf, uint16(len(value)))
if _, err := writer.Write(lenBuf); err != nil {
if _, err := w.Write(lenBuf); err != nil {
return err
}
if _, err := writer.Write(value); err != nil {
if _, err := w.Write(value); err != nil {
return err
}
return nil
}


@@ -6,10 +6,10 @@ import (
"crypto/hmac"
"crypto/sha256"
"encoding/binary"
"errors"
"io"
"testing"
core "dappco.re/go/core"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -22,7 +22,7 @@ type failWriter struct {
func (f *failWriter) Write(p []byte) (int, error) {
if f.remaining <= 0 {
return 0, core.NewError("write failed")
return 0, errors.New("write failed")
}
f.remaining--
return len(p), nil
@@ -30,7 +30,7 @@ func (f *failWriter) Write(p []byte) (int, error) {
// TestWriteTLV_TagWriteFails verifies writeTLV returns an error
// when the very first Write (the tag byte) fails.
func TestPacketCoverage_WriteTLV_TagWriteFails_Bad(t *testing.T) {
func TestWriteTLV_TagWriteFails(t *testing.T) {
w := &failWriter{remaining: 0}
err := writeTLV(w, TagVersion, []byte{0x09})
@@ -40,7 +40,7 @@ func TestPacketCoverage_WriteTLV_TagWriteFails_Bad(t *testing.T) {
// TestWriteTLV_LengthWriteFails verifies writeTLV returns an error
// when the second Write (the length byte) fails.
func TestPacketCoverage_WriteTLV_LengthWriteFails_Bad(t *testing.T) {
func TestWriteTLV_LengthWriteFails(t *testing.T) {
w := &failWriter{remaining: 1}
err := writeTLV(w, TagVersion, []byte{0x09})
@@ -50,7 +50,7 @@ func TestPacketCoverage_WriteTLV_LengthWriteFails_Bad(t *testing.T) {
// TestWriteTLV_ValueWriteFails verifies writeTLV returns an error
// when the third Write (the value bytes) fails.
func TestPacketCoverage_WriteTLV_ValueWriteFails_Bad(t *testing.T) {
func TestWriteTLV_ValueWriteFails(t *testing.T) {
w := &failWriter{remaining: 2}
err := writeTLV(w, TagVersion, []byte{0x09})
@@ -81,7 +81,7 @@ func (r *errorAfterNReader) Read(p []byte) (int, error) {
// TestReadAndVerify_PayloadReadError exercises the error branch at
// reader.go:51-53 where io.ReadAll fails after the 0xFF tag byte
// has been successfully read.
func TestPacketCoverage_ReadAndVerify_PayloadReadError_Bad(t *testing.T) {
func TestReadAndVerify_PayloadReadError(t *testing.T) {
// Build a valid packet so we have genuine TLV headers + HMAC.
payload := []byte("coverage test")
builder := NewBuilder(0x20, payload)
@@ -104,7 +104,7 @@ func TestPacketCoverage_ReadAndVerify_PayloadReadError_Bad(t *testing.T) {
prefix := frame[:payloadTagIdx+1]
r := &errorAfterNReader{
data: prefix,
err: core.NewError("connection reset"),
err: errors.New("connection reset"),
}
_, err = ReadAndVerify(bufio.NewReader(r), testSecret)
@@ -115,7 +115,7 @@ func TestPacketCoverage_ReadAndVerify_PayloadReadError_Bad(t *testing.T) {
// TestReadAndVerify_PayloadReadError_EOF ensures that a truncated payload
// (missing bytes after TagPayload) is handled as an I/O error (UnexpectedEOF)
// because ReadAndVerify now uses io.ReadFull with the expected length prefix.
func TestPacketCoverage_ReadAndVerify_PayloadReadError_EOF_Bad(t *testing.T) {
func TestReadAndVerify_PayloadReadError_EOF(t *testing.T) {
payload := []byte("eof test")
builder := NewBuilder(0x20, payload)
frame, err := builder.MarshalAndSign(testSecret)
@@ -141,7 +141,7 @@ func TestPacketCoverage_ReadAndVerify_PayloadReadError_EOF_Bad(t *testing.T) {
// TestWriteTLV_AllWritesSucceed confirms the happy path still works
// after exercising all error branches — a simple sanity check using
// failWriter with enough remaining writes.
func TestPacketCoverage_WriteTLV_AllWritesSucceed_Good(t *testing.T) {
func TestWriteTLV_AllWritesSucceed(t *testing.T) {
var buf bytes.Buffer
err := writeTLV(&buf, TagVersion, []byte{0x09})
require.NoError(t, err)
@@ -149,9 +149,10 @@ func TestPacketCoverage_WriteTLV_AllWritesSucceed_Good(t *testing.T) {
assert.Equal(t, []byte{TagVersion, 0x00, 0x01, 0x09}, buf.Bytes())
}
// TestWriteTLV_FailWriterTable runs the three failure scenarios in
// a table-driven fashion for completeness.
func TestPacketCoverage_WriteTLV_FailWriterTable_Bad(t *testing.T) {
func TestWriteTLV_FailWriterTable(t *testing.T) {
tests := []struct {
name string
remaining int
@@ -176,14 +177,14 @@ func TestPacketCoverage_WriteTLV_FailWriterTable_Bad(t *testing.T) {
// HMAC computation independently of the builder. This also serves as
// a cross-check that our errorAfterNReader is not accidentally
// corrupting the prefix bytes.
func TestPacketCoverage_ReadAndVerify_ManualPacket_PayloadReadError_Bad(t *testing.T) {
func TestReadAndVerify_ManualPacket_PayloadReadError(t *testing.T) {
payload := []byte("manual test")
// Build header TLVs
var hdr bytes.Buffer
require.NoError(t, writeTLV(&hdr, TagVersion, []byte{0x09}))
require.NoError(t, writeTLV(&hdr, TagCurrentLayer, []byte{5}))
require.NoError(t, writeTLV(&hdr, TagTargetLayer, []byte{5}))
require.NoError(t, writeTLV(&hdr, TagCurrentLay, []byte{5}))
require.NoError(t, writeTLV(&hdr, TagTargetLay, []byte{5}))
require.NoError(t, writeTLV(&hdr, TagIntent, []byte{0x20}))
tsBuf := make([]byte, 2)
binary.BigEndian.PutUint16(tsBuf, 0)
@@ -211,32 +212,3 @@ func TestPacketCoverage_ReadAndVerify_ManualPacket_PayloadReadError_Bad(t *testi
require.Error(t, err)
assert.Equal(t, io.ErrUnexpectedEOF, err)
}
// TestReadAndVerify_MalformedHeaderTLV_Bad verifies malformed header values
// return an error instead of panicking during TLV reconstruction.
func TestPacketCoverage_ReadAndVerify_MalformedHeaderTLV_Bad(t *testing.T) {
tests := []struct {
name string
frame []byte
wantErr string
}{
{
name: "ZeroLengthVersion",
frame: []byte{TagVersion, 0x00, 0x00},
wantErr: "malformed version TLV",
},
{
name: "ShortThreatScore",
frame: []byte{TagThreatScore, 0x00, 0x01, 0xFF},
wantErr: "malformed threat score TLV",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
_, err := ReadAndVerify(bufio.NewReader(bytes.NewReader(tc.frame)), testSecret)
require.Error(t, err)
assert.Contains(t, err.Error(), tc.wantErr)
})
}
}
@@ -7,15 +7,14 @@ import (
"crypto/sha256"
"encoding/binary"
"io"
"strings"
"testing"
core "dappco.re/go/core"
)
// testSecret is a deterministic shared secret for reproducible tests.
var testSecret = []byte("test-shared-secret-32-bytes!!!!!")
func TestPacket_Builder_RoundTrip_Ugly(t *testing.T) {
func TestPacketBuilder_RoundTrip(t *testing.T) {
tests := []struct {
name string
intentID uint8
@@ -85,7 +84,7 @@ func TestPacket_Builder_RoundTrip_Ugly(t *testing.T) {
}
}
func TestPacket_HMACVerification_TamperedPayload_Bad(t *testing.T) {
func TestHMACVerification_TamperedPayload(t *testing.T) {
builder := NewBuilder(0x20, []byte("original payload"))
frame, err := builder.MarshalAndSign(testSecret)
if err != nil {
@@ -101,12 +100,12 @@ func TestPacket_HMACVerification_TamperedPayload_Bad(t *testing.T) {
if err == nil {
t.Fatal("Expected HMAC mismatch error for tampered payload")
}
if !core.Contains(err.Error(), "integrity violation") {
if !strings.Contains(err.Error(), "integrity violation") {
t.Errorf("Expected integrity violation error, got: %v", err)
}
}
func TestPacket_HMACVerification_TamperedHeader_Bad(t *testing.T) {
func TestHMACVerification_TamperedHeader(t *testing.T) {
builder := NewBuilder(0x20, []byte("test payload"))
frame, err := builder.MarshalAndSign(testSecret)
if err != nil {
@@ -123,12 +122,12 @@ func TestPacket_HMACVerification_TamperedHeader_Bad(t *testing.T) {
if err == nil {
t.Fatal("Expected HMAC mismatch error for tampered header")
}
if !core.Contains(err.Error(), "integrity violation") {
if !strings.Contains(err.Error(), "integrity violation") {
t.Errorf("Expected integrity violation error, got: %v", err)
}
}
func TestPacket_HMACVerification_WrongSharedSecret_Bad(t *testing.T) {
func TestHMACVerification_WrongSharedSecret(t *testing.T) {
builder := NewBuilder(0x20, []byte("secret data"))
frame, err := builder.MarshalAndSign([]byte("key-A-used-for-signing!!!!!!!!!!"))
if err != nil {
@@ -139,12 +138,12 @@ func TestPacket_HMACVerification_WrongSharedSecret_Bad(t *testing.T) {
if err == nil {
t.Fatal("Expected HMAC mismatch error for wrong shared secret")
}
if !core.Contains(err.Error(), "integrity violation") {
if !strings.Contains(err.Error(), "integrity violation") {
t.Errorf("Expected integrity violation error, got: %v", err)
}
}
func TestPacket_EmptyPayload_Ugly(t *testing.T) {
func TestEmptyPayload(t *testing.T) {
tests := []struct {
name string
payload []byte
@@ -176,7 +175,7 @@ func TestPacket_EmptyPayload_Ugly(t *testing.T) {
}
}
func TestPacket_MaxThreatScoreBoundary_Ugly(t *testing.T) {
func TestMaxThreatScoreBoundary(t *testing.T) {
builder := NewBuilder(0x20, []byte("threat boundary"))
builder.Header.ThreatScore = 65535 // uint16 max
@@ -195,14 +194,14 @@ func TestPacket_MaxThreatScoreBoundary_Ugly(t *testing.T) {
}
}
func TestPacket_MissingHMACTag_Bad(t *testing.T) {
func TestMissingHMACTag(t *testing.T) {
// Craft a packet manually: header TLVs + payload tag, but no HMAC (0x06)
var buf bytes.Buffer
// Write header TLVs
writeTLV(&buf, TagVersion, []byte{0x09})
writeTLV(&buf, TagCurrentLayer, []byte{5})
writeTLV(&buf, TagTargetLayer, []byte{5})
writeTLV(&buf, TagCurrentLay, []byte{5})
writeTLV(&buf, TagTargetLay, []byte{5})
writeTLV(&buf, TagIntent, []byte{0x20})
tsBuf := make([]byte, 2)
binary.BigEndian.PutUint16(tsBuf, 0)
@@ -215,24 +214,24 @@ func TestPacket_MissingHMACTag_Bad(t *testing.T) {
if err == nil {
t.Fatal("Expected 'missing HMAC' error")
}
if !core.Contains(err.Error(), "missing HMAC") {
if !strings.Contains(err.Error(), "missing HMAC") {
t.Errorf("Expected 'missing HMAC' error, got: %v", err)
}
}
func TestPacket_WriteTLV_ValueTooLarge_Bad(t *testing.T) {
func TestWriteTLV_ValueTooLarge(t *testing.T) {
var buf bytes.Buffer
oversized := make([]byte, 65536) // 1 byte over the 65535 limit
err := writeTLV(&buf, TagVersion, oversized)
if err == nil {
t.Fatal("Expected error for TLV value > 65535 bytes")
}
if !core.Contains(err.Error(), "TLV value too large") {
if !strings.Contains(err.Error(), "TLV value too large") {
t.Errorf("Expected 'TLV value too large' error, got: %v", err)
}
}
func TestPacket_TruncatedPacket_Bad(t *testing.T) {
func TestTruncatedPacket(t *testing.T) {
builder := NewBuilder(0x20, []byte("full payload"))
frame, err := builder.MarshalAndSign(testSecret)
if err != nil {
@@ -257,7 +256,7 @@ func TestPacket_TruncatedPacket_Bad(t *testing.T) {
{
name: "CutMidHMAC",
cutAt: 20, // Somewhere inside the header TLVs or HMAC
wantErr: "", // Any io error
wantErr: "", // Any io error
},
}
@@ -268,14 +267,14 @@ func TestPacket_TruncatedPacket_Bad(t *testing.T) {
if err == nil {
t.Fatal("Expected error for truncated packet")
}
if tc.wantErr != "" && !core.Contains(err.Error(), tc.wantErr) {
if tc.wantErr != "" && !strings.Contains(err.Error(), tc.wantErr) {
t.Errorf("Expected error containing %q, got: %v", tc.wantErr, err)
}
})
}
}
func TestPacket_UnknownTLVTag_Bad(t *testing.T) {
func TestUnknownTLVTag(t *testing.T) {
// Build a valid packet, then inject an unknown tag before the HMAC.
// The unknown tag must be included in signedData for HMAC to pass.
payload := []byte("tagged payload")
@@ -285,8 +284,8 @@ func TestPacket_UnknownTLVTag_Bad(t *testing.T) {
// Standard header TLVs
writeTLV(&headerBuf, TagVersion, []byte{0x09})
writeTLV(&headerBuf, TagCurrentLayer, []byte{5})
writeTLV(&headerBuf, TagTargetLayer, []byte{5})
writeTLV(&headerBuf, TagCurrentLay, []byte{5})
writeTLV(&headerBuf, TagTargetLay, []byte{5})
writeTLV(&headerBuf, TagIntent, []byte{0x20})
tsBuf := make([]byte, 2)
binary.BigEndian.PutUint16(tsBuf, 0)
@@ -325,7 +324,7 @@ func TestPacket_UnknownTLVTag_Bad(t *testing.T) {
}
}
func TestPacket_NewBuilder_Defaults_Good(t *testing.T) {
func TestNewBuilder_Defaults(t *testing.T) {
builder := NewBuilder(0x20, []byte("data"))
if builder.Header.Version != 0x09 {
@@ -345,7 +344,7 @@ func TestPacket_NewBuilder_Defaults_Good(t *testing.T) {
}
}
func TestPacket_ThreatScoreBoundaries_Good(t *testing.T) {
func TestThreatScoreBoundaries(t *testing.T) {
tests := []struct {
name string
score uint16
@@ -379,7 +378,7 @@ func TestPacket_ThreatScoreBoundaries_Good(t *testing.T) {
}
}
func TestPacket_WriteTLV_BoundaryLengths_Ugly(t *testing.T) {
func TestWriteTLV_BoundaryLengths(t *testing.T) {
tests := []struct {
name string
length int
@@ -408,8 +407,9 @@ func TestPacket_WriteTLV_BoundaryLengths_Ugly(t *testing.T) {
}
}
// TestReadAndVerify_EmptyReader verifies behaviour on completely empty input.
func TestPacket_ReadAndVerify_EmptyReader_Ugly(t *testing.T) {
func TestReadAndVerify_EmptyReader(t *testing.T) {
_, err := ReadAndVerify(bufio.NewReader(bytes.NewReader(nil)), testSecret)
if err == nil {
t.Fatal("Expected error for empty reader")


@@ -8,86 +8,83 @@ import (
"encoding/binary"
"io"
core "dappco.re/go/core"
coreerr "dappco.re/go/core/log"
)
// packet := &ParsedPacket{Header: UEPSHeader{IntentID: 0x01}}
// ParsedPacket holds the verified data
type ParsedPacket struct {
Header UEPSHeader
Payload []byte
}
// packet, err := ReadAndVerify(bufio.NewReader(bytes.NewReader(frame)), sharedSecret)
// ReadAndVerify reads a UEPS frame from the stream and validates the HMAC.
// It consumes the stream up to the end of the packet.
func ReadAndVerify(r *bufio.Reader, sharedSecret []byte) (*ParsedPacket, error) {
// Buffer to reconstruct the data for HMAC verification
var signedData bytes.Buffer
header := UEPSHeader{}
var signature []byte
var payload []byte
// Loop through TLVs
for {
// 1. Read Tag
tag, err := r.ReadByte()
if err != nil {
return nil, err
}
// 2. Read Length (2-byte big-endian uint16)
lenBuf := make([]byte, 2)
if _, err := io.ReadFull(r, lenBuf); err != nil {
return nil, err
}
length := int(binary.BigEndian.Uint16(lenBuf))
// 3. Read Value
value := make([]byte, length)
if _, err := io.ReadFull(r, value); err != nil {
return nil, err
}
// 4. Handle Tag
switch tag {
case TagVersion:
if len(value) != 1 {
return nil, core.E("ueps.ReadAndVerify", "malformed version TLV", nil)
}
header.Version = value[0]
signedData.WriteByte(tag)
signedData.Write(lenBuf)
signedData.Write(value)
case TagCurrentLayer:
if len(value) != 1 {
return nil, core.E("ueps.ReadAndVerify", "malformed current layer TLV", nil)
}
case TagCurrentLay:
header.CurrentLayer = value[0]
signedData.WriteByte(tag)
signedData.Write(lenBuf)
signedData.Write(value)
case TagTargetLayer:
if len(value) != 1 {
return nil, core.E("ueps.ReadAndVerify", "malformed target layer TLV", nil)
}
case TagTargetLay:
header.TargetLayer = value[0]
signedData.WriteByte(tag)
signedData.Write(lenBuf)
signedData.Write(value)
case TagIntent:
if len(value) != 1 {
return nil, core.E("ueps.ReadAndVerify", "malformed intent TLV", nil)
}
header.IntentID = value[0]
signedData.WriteByte(tag)
signedData.Write(lenBuf)
signedData.Write(value)
case TagThreatScore:
if len(value) != 2 {
return nil, core.E("ueps.ReadAndVerify", "malformed threat score TLV", nil)
}
header.ThreatScore = binary.BigEndian.Uint16(value)
signedData.WriteByte(tag)
signedData.Write(lenBuf)
signedData.Write(value)
case TagHMAC:
signature = value
// HMAC tag itself is not part of the signed data
case TagPayload:
payload = value
// Exit loop after payload (last tag in UEPS frame)
// Note: The HMAC covers the Payload but NOT the TagPayload/Length bytes
// to match the PacketBuilder.MarshalAndSign logic.
goto verify
default:
// Unknown tag (future proofing), verify it but ignore semantics
signedData.WriteByte(tag)
signedData.Write(lenBuf)
signedData.Write(value)
@@ -96,16 +93,18 @@ func ReadAndVerify(r *bufio.Reader, sharedSecret []byte) (*ParsedPacket, error)
verify:
if len(signature) == 0 {
return nil, core.E("ueps.ReadAndVerify", "UEPS packet missing HMAC signature", nil)
return nil, coreerr.E("ueps.ReadAndVerify", "UEPS packet missing HMAC signature", nil)
}
// 5. Verify HMAC
// Reconstruct: Headers (signedData) + Payload
mac := hmac.New(sha256.New, sharedSecret)
mac.Write(signedData.Bytes())
mac.Write(payload)
expectedMAC := mac.Sum(nil)
if !hmac.Equal(signature, expectedMAC) {
return nil, core.E("ueps.ReadAndVerify", "integrity violation: HMAC mismatch (ThreatScore +100)", nil)
return nil, coreerr.E("ueps.ReadAndVerify", "integrity violation: HMAC mismatch (ThreatScore +100)", nil)
}
return &ParsedPacket{
@@ -113,3 +112,4 @@ verify:
Payload: payload,
}, nil
}