Compare commits

25 commits

| Author | SHA1 | Date |
|---|---|---|
| | 520d0f5728 | |
| | c823c46bb2 | |
| | 56bd30d3d2 | |
| | 3eeaf90d38 | |
| | d5a962996b | |
| | 572970d255 | |
| | ee623a7343 | |
| | 8d1caa3a59 | |
| | 727b5fdb8d | |
| | 6fd3fe1cd2 | |
| | 23994a66ac | |
| | eaa919af89 | |
| | 3f1f9a7d60 | |
| | b334cb4909 | |
| | 36f0582bfc | |
| | 3ea407c115 | |
| | bc47006152 | |
| | 66bc0b862f | |
| | c2d2d5d126 | |
| | 6da95f3ed4 | |
| | 39049106c1 | |
| | e24df2c9fa | |
| | 2561c6615d | |
| | fe04cf93aa | |
| | 9ad643df90 | |
38 changed files with 2010 additions and 340 deletions
.core/build.yaml (new file, 24 lines)

@@ -0,0 +1,24 @@

```yaml
version: 1

project:
  name: go-p2p
  description: Peer-to-peer networking
  binary: ""

build:
  cgo: false
  flags:
    - -trimpath
  ldflags:
    - -s
    - -w

targets:
  - os: linux
    arch: amd64
  - os: linux
    arch: arm64
  - os: darwin
    arch: arm64
  - os: windows
    arch: amd64
```
.core/release.yaml (new file, 20 lines)

@@ -0,0 +1,20 @@

```yaml
version: 1

project:
  name: go-p2p
  repository: core/go-p2p

publishers: []

changelog:
  include:
    - feat
    - fix
    - perf
    - refactor
  exclude:
    - chore
    - docs
    - style
    - test
    - ci
```
.gitignore (new file, vendored, 4 lines)

@@ -0,0 +1,4 @@

```gitignore
.idea/
.vscode/
*.log
.core/
```
CLAUDE.md (67 lines changed)

@@ -1,22 +1,54 @@

# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project

`go-p2p` is the P2P networking layer for the Lethean network. Module path: `forge.lthn.ai/core/go-p2p`

## Prerequisites

Private dependencies (`Borg`, `Poindexter`, `Enchantrix`) are hosted on `forge.lthn.ai`. Required env:

```bash
GOPRIVATE=forge.lthn.ai
```

An SSH key must be configured for `git@forge.lthn.ai:2223`. Push to the `forge` remote only.

## Commands

```bash
go test ./...                     # Run all tests
go test -run TestName ./...       # Single test
go test -race ./...               # Race detector (required before any PR)
go test -short ./...              # Skip integration tests (they bind real TCP ports)
go test -cover ./node             # Coverage for a specific package
go test -bench . -benchmem ./...  # Benchmarks with allocation tracking
go vet ./...                      # Static analysis
golangci-lint run ./...           # Linting
```

## Architecture

Three packages plus one subpackage:

```
node/        — P2P mesh: identity, transport, peers, protocol, workers, controller, dispatcher, bundles
node/levin/  — CryptoNote Levin binary protocol (standalone, no parent imports)
ueps/        — UEPS wire protocol (RFC-021): TLV packet builder and stream reader (stdlib only)
logging/     — Structured levelled logger with component scoping (stdlib only)
```

### Data flow

1. **Identity** (`identity.go`) — Ed25519 keypair via Borg STMF. Shared secrets derived via X25519 ECDH + SHA-256.
2. **Transport** (`transport.go`) — WebSocket server/client (gorilla/websocket). Handshake exchanges `NodeIdentity` + HMAC-SHA256 challenge-response. Post-handshake messages are Borg SMSG-encrypted. Includes deduplication (5-min TTL), rate limiting (token bucket: 100 burst/50 per sec), and MaxConns enforcement.
3. **Dispatcher** (`dispatcher.go`) — Routes verified UEPS packets to intent handlers. Threat circuit breaker drops packets with `ThreatScore > 50,000` before routing.
4. **Controller** (`controller.go`) — Issues requests to remote peers using a pending-map pattern (`map[string]chan *Message`). Auto-connects to peers on demand.
5. **Worker** (`worker.go`) — Handles incoming commands via `MinerManager` and `ProfileManager` interfaces.
6. **Peer Registry** (`peer.go`) — KD-tree peer selection across 4 dimensions (latency, hops, geography, reliability). Persistence uses atomic rename with 5-second debounced writes.
7. **Levin** (`node/levin/`) — CryptoNote binary protocol: header parsing, portable storage decode, varint encoding. Completely standalone subpackage.

### Key interfaces

```go
// MinerManager — decoupled miner control (worker.go)
```

@@ -33,14 +65,34 @@ type ProfileManager interface {

```go
}
```

### Dependency codenames

- **Borg** — STMF crypto (key generation), SMSG (symmetric encryption), TIM (deployment bundle encryption)
- **Poindexter** — KD-tree for peer selection
- **Enchantrix** — Secure environment (indirect, via Borg)

## Coding Standards

- UK English (colour, organisation, centre, behaviour, recognise)
- All parameters and return types explicitly annotated
- Tests use `testify` assert/require; table-driven subtests with `t.Run()`
- Test name suffixes: `_Good` (happy path), `_Bad` (expected errors), `_Ugly` (panic/edge cases)
- Licence: EUPL-1.2 — new files need `// SPDX-License-Identifier: EUPL-1.2`
- Security-first: do not weaken HMAC, challenge-response, Zip Slip defence, or rate limiting
- Use the `logging` package only — no `fmt.Println` or `log.Printf` in library code
- Error handling: use `coreerr.E()` from `go-log` — never `fmt.Errorf` or `errors.New` in library code
- File I/O: use `coreio.Local` from `go-io` — never `os.ReadFile`/`os.WriteFile` in library code (exception: `os.OpenFile` for streaming writes where `coreio` lacks support)
- Hot-path debug logging uses a sampling pattern: `if counter.Add(1)%interval == 0`
### Transport test helper

Tests needing live WebSocket endpoints use the reusable helper:

```go
tp := setupTestTransportPair(t)  // creates two transports on ephemeral ports
pc := tp.connectClient(t)       // performs real handshake, returns *PeerConnection
// tp.Server, tp.Client, tp.ServerNode, tp.ClientNode, tp.ServerReg, tp.ClientReg
```

Cleanup is automatic via `t.Cleanup`.

## Commit Format

@@ -50,6 +102,9 @@ type(scope): description

```
Co-Authored-By: Virgil <virgil@lethean.io>
```

Types: `feat`, `fix`, `test`, `refactor`, `docs`, `chore`, `perf`, `ci`
Scopes: `node`, `ueps`, `logging`, `transport`, `peer`, `dispatcher`, `identity`, `bundle`, `controller`, `levin`

## Documentation

- `docs/architecture.md` — full package and component reference
@@ -98,7 +98,7 @@ The `Transport` manages a WebSocket server (gorilla/websocket) and outbound conn

| Timeout | −3.0 (floored at 0) |
| Default (new peer) | 50.0 |

**Peer name validation**: Empty names are permitted. Non-empty names must be 1–64 characters, start and end with an alphanumeric character, and contain only alphanumeric, hyphen, underscore, or space characters.
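A routine matching these rules can be sketched as follows. This is an illustration of the stated constraints built from a regular expression of my own construction, not the library's actual implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// peerNameRe encodes the documented rules: 1–64 characters, alphanumeric at
// both ends, with hyphen, underscore, and space also permitted in the middle.
var peerNameRe = regexp.MustCompile(`^[a-zA-Z0-9]([a-zA-Z0-9 _-]{0,62}[a-zA-Z0-9])?$`)

// validPeerName accepts the empty name (permitted) or any name matching the
// documented pattern.
func validPeerName(name string) bool {
	return name == "" || peerNameRe.MatchString(name)
}

func main() {
	for _, n := range []string{"", "eu-controller-01", "-bad", "ok"} {
		fmt.Printf("%q -> %v\n", n, validPeerName(n))
	}
}
```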
### message.go — Protocol Messages
docs/discovery.md (new file, 209 lines)

@@ -0,0 +1,209 @@

---
title: Peer Discovery
description: KD-tree peer selection across four weighted dimensions, score tracking, and allowlist authentication.
---

# Peer Discovery

The `PeerRegistry` manages known peers with intelligent selection via a 4-dimensional KD-tree (powered by Poindexter). Peers are scored on network metrics and reliability, persisted with debounced writes, and gated by configurable authentication modes.

## Peer Structure

```go
type Peer struct {
    ID        string    `json:"id"`        // Node ID (derived from public key)
    Name      string    `json:"name"`      // Human-readable name
    PublicKey string    `json:"publicKey"` // X25519 public key (base64)
    Address   string    `json:"address"`   // host:port for WebSocket connection
    Role      NodeRole  `json:"role"`      // controller, worker, or dual
    AddedAt   time.Time `json:"addedAt"`
    LastSeen  time.Time `json:"lastSeen"`

    // Poindexter metrics (updated dynamically)
    PingMS float64 `json:"pingMs"` // Latency in milliseconds
    Hops   int     `json:"hops"`   // Network hop count
    GeoKM  float64 `json:"geoKm"`  // Geographic distance in kilometres
    Score  float64 `json:"score"`  // Reliability score 0–100

    Connected bool `json:"-"` // Not persisted
}
```

## Authentication Modes

```go
const (
    PeerAuthOpen      PeerAuthMode = iota // Accept any peer that authenticates
    PeerAuthAllowlist                     // Only accept pre-approved peers
)
```

| Mode | Behaviour |
|------|-----------|
| `PeerAuthOpen` | Any node that completes the challenge-response handshake is accepted |
| `PeerAuthAllowlist` | Only nodes whose public keys appear on the allowlist, or that are already registered, are accepted |

### Allowlist Management

```go
registry.SetAuthMode(node.PeerAuthAllowlist)

// Add/revoke public keys
registry.AllowPublicKey(peerPublicKeyBase64)
registry.RevokePublicKey(peerPublicKeyBase64)

// Check a single key
ok := registry.IsPublicKeyAllowed(peerPublicKeyBase64)

// List all allowed keys
keys := registry.ListAllowedPublicKeys()

// Iterate
for key := range registry.AllowedPublicKeys() {
    // ...
}
```

The `IsPeerAllowed(peerID, publicKey)` method is called during the transport handshake. It returns `true` if:

- the auth mode is `PeerAuthOpen`, **or**
- the peer ID is already registered in the registry, **or**
- the public key is on the allowlist.

### Peer Name Validation

Peer names are validated on `AddPeer()`:

- 1–64 characters
- Must start and end with an alphanumeric character
- May contain alphanumerics, hyphens, underscores, and spaces
- Empty names are permitted (the field is optional)

## KD-Tree Peer Selection

The registry maintains a 4-dimensional KD-tree for optimal peer selection. Each peer is represented as a weighted point:

| Dimension | Source | Weight | Direction |
|-----------|--------|--------|-----------|
| Latency | `PingMS` | 1.0 | Lower is better |
| Hops | `Hops` | 0.7 | Lower is better |
| Geographic distance | `GeoKM` | 0.2 | Lower is better |
| Reliability | `100 - Score` | 1.2 | Inverted so lower is better |

The score dimension is inverted so that the "ideal peer" target point `[0, 0, 0, 0]` represents zero latency, zero hops, zero distance, and maximum reliability (score 100).
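The weighted point construction can be illustrated with a small self-contained sketch. The weights come from the table above; the function names are hypothetical and this is not Poindexter's API:

```go
package main

import (
	"fmt"
	"math"
)

// peerPoint maps a peer's metrics into the weighted 4D space described above.
// The score axis is inverted (100 - Score) so the ideal peer sits at the origin.
func peerPoint(pingMS float64, hops int, geoKM, score float64) [4]float64 {
	return [4]float64{
		1.0 * pingMS,
		0.7 * float64(hops),
		0.2 * geoKM,
		1.2 * (100 - score),
	}
}

// distToIdeal is the Euclidean distance from the ideal point [0, 0, 0, 0].
func distToIdeal(p [4]float64) float64 {
	sum := 0.0
	for _, v := range p {
		sum += v * v
	}
	return math.Sqrt(sum)
}

func main() {
	fast := peerPoint(10, 1, 50, 90)  // low latency, reliable
	slow := peerPoint(200, 5, 50, 40) // high latency, unreliable
	fmt.Println(distToIdeal(fast) < distToIdeal(slow)) // true
}
```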
### Selecting Peers

```go
// Best single peer (nearest to ideal in 4D space)
best := registry.SelectOptimalPeer()

// Top N peers
top3 := registry.SelectNearestPeers(3)

// All peers sorted by score (highest first)
ranked := registry.GetPeersByScore()

// Iterator over peers by score
for peer := range registry.PeersByScore() {
    // ...
}
```

The KD-tree uses Euclidean distance (configured via `poindexter.WithMetric(poindexter.EuclideanDistance{})`). It is rebuilt whenever peers are added, removed, or their metrics change.

## Score Tracking

Peer reliability is tracked with a score between 0 and 100 (default 50 for new peers).

```go
const (
    ScoreSuccessIncrement = 1.0 // +1 per successful interaction
    ScoreFailureDecrement = 5.0 // -5 per failure
    ScoreTimeoutDecrement = 3.0 // -3 per timeout
)
```

```go
registry.RecordSuccess(peerID) // Score += 1, capped at 100
registry.RecordFailure(peerID) // Score -= 5, floored at 0
registry.RecordTimeout(peerID) // Score -= 3, floored at 0

// Direct score update (clamped to 0–100)
registry.UpdateScore(peerID, 75.0)
```

The asymmetric adjustments (+1 success vs −5 failure) mean a peer must sustain consistent reliability to maintain a high score: a single failure takes five successful interactions to recover.
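The clamping arithmetic described above can be sketched as below. This illustrates the documented cap and floor behaviour, not the registry's actual code:

```go
package main

import "fmt"

// applyDelta adjusts a reliability score and clamps the result to [0, 100],
// matching the documented cap and floor behaviour.
func applyDelta(score, delta float64) float64 {
	score += delta
	if score > 100 {
		return 100
	}
	if score < 0 {
		return 0
	}
	return score
}

func main() {
	s := 50.0             // default score for a new peer
	s = applyDelta(s, -5) // one failure: 45
	for i := 0; i < 5; i++ {
		s = applyDelta(s, 1) // five successes to recover
	}
	fmt.Println(s) // 50
}
```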
## Metric Updates

```go
registry.UpdateMetrics(peerID, pingMS, geoKM, hops)
```

This also updates `LastSeen` and triggers a KD-tree rebuild.

## Registry Operations

```go
// Create
registry, err := node.NewPeerRegistry()             // XDG paths
registry, err := node.NewPeerRegistryWithPath(path) // Custom path (testing)

// CRUD
err := registry.AddPeer(peer)
err := registry.UpdatePeer(peer)
err := registry.RemovePeer(peerID)
peer := registry.GetPeer(peerID) // Returns a copy

// Lists and iterators
peers := registry.ListPeers()
count := registry.Count()

for peer := range registry.Peers() {
    // Each peer is a copy to prevent mutation
}

// Connection state
registry.SetConnected(peerID, true)
connected := registry.GetConnectedPeers()

for peer := range registry.ConnectedPeers() {
    // ...
}
```

## Persistence

Peers are persisted to `~/.config/lethean-desktop/peers.json` as a JSON array.

### Debounced Writes

To avoid excessive disk I/O, saves are debounced with a 5-second coalesce interval: multiple mutations within that window produce a single disk write. The write uses an atomic rename pattern (write to a `.tmp` file, then `os.Rename`) to prevent corruption on crash.
```go
// Flush pending changes on shutdown
err := registry.Close()
```

Always call `Close()` before shutdown to ensure unsaved changes are flushed.

## Peer Lifecycle

```
Discovery          Authentication        Active              Stale
    |                    |                  |                   |
    |   handshake   +---------+    +--------+------+        +------+
    +-------------->|  Auth   |--->| Score  | Ping |------->| Evict|
                    |  Check  |    | Update | Loop |        |  or  |
                    +---------+    +--------+------+        | Retry|
                         |                                  +------+
                    [rejected]
```

1. **Discovery** — New peer connects or is discovered via mesh communication.
2. **Authentication** — Challenge-response handshake (see [identity.md](identity.md)). Allowlist checked if in allowlist mode.
3. **Active** — Metrics updated via ping, score adjusted on success/failure, eligible for task assignment.
4. **Stale** — No response after the keepalive timeout; the connection is removed and `Connected` set to `false`.

## Thread Safety

`PeerRegistry` is safe for concurrent use. A `sync.RWMutex` protects the peer map and KD-tree; allowlist operations use a separate `sync.RWMutex`. All public methods that return peers return copies to prevent external mutation.
docs/identity.md (new file, 144 lines)

@@ -0,0 +1,144 @@

---
title: Node Identity
description: X25519 keypair generation, node ID derivation, and HMAC-SHA256 challenge-response authentication.
---

# Node Identity

Every node in the mesh has a unique identity derived from an X25519 keypair. The node ID is cryptographically bound to the public key, and authentication uses HMAC-SHA256 challenge-response over a shared secret derived via ECDH.

## NodeIdentity

The public identity struct carried in handshake messages and stored on disk:

```go
type NodeIdentity struct {
    ID        string    `json:"id"`        // 32-char hex, derived from public key
    Name      string    `json:"name"`      // Human-friendly name
    PublicKey string    `json:"publicKey"` // X25519 base64
    CreatedAt time.Time `json:"createdAt"`
    Role      NodeRole  `json:"role"` // controller, worker, or dual
}
```

The `ID` is computed as the first 16 bytes of `SHA-256(publicKey)`, hex-encoded to produce a 32-character string.
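The derivation can be reproduced with the standard library. A sketch of the stated rule (first 16 bytes of the SHA-256 digest, hex-encoded), not the project's actual code:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// nodeID derives the 32-character hex ID from a public key:
// hex(SHA-256(publicKey)[:16]).
func nodeID(publicKey []byte) string {
	sum := sha256.Sum256(publicKey)
	return hex.EncodeToString(sum[:16])
}

func main() {
	id := nodeID([]byte("example-public-key")) // placeholder key bytes
	fmt.Println(id, len(id))                   // prints the ID and 32
}
```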
## Key Storage

| Item | Path | Permissions |
|------|------|-------------|
| Private key | `~/.local/share/lethean-desktop/node/private.key` | `0600` |
| Identity config | `~/.config/lethean-desktop/node.json` | `0644` |

Paths follow XDG base directories via `github.com/adrg/xdg`. The private key is never serialised to JSON or transmitted over the network.

## NodeManager

`NodeManager` handles the identity lifecycle — generation, persistence, loading, and deletion. It also derives shared secrets for peer authentication.

### Creating an Identity

```go
nm, err := node.NewNodeManager()
if err != nil {
    log.Fatal(err)
}

// Generate a new identity (persists key and config to disk)
err = nm.GenerateIdentity("eu-controller-01", node.RoleController)
```

Internally this calls `stmf.GenerateKeyPair()` from the Borg library to produce the X25519 keypair.

### Custom Paths (Testing)

```go
nm, err := node.NewNodeManagerWithPaths(
    "/tmp/test/private.key",
    "/tmp/test/node.json",
)
```

### Checking and Retrieving Identity

```go
if nm.HasIdentity() {
    identity := nm.GetIdentity() // Returns a copy
    fmt.Println(identity.ID, identity.Name)
}
```

`GetIdentity()` returns a copy of the identity struct to prevent mutation of internal state.

### Deriving Shared Secrets

```go
sharedSecret, err := nm.DeriveSharedSecret(peerPublicKeyBase64)
```

This performs X25519 ECDH with the peer's public key and hashes the result with SHA-256, producing a 32-byte symmetric key. Both sides derive the same shared secret independently (no secret is transmitted).
### Deleting an Identity

```go
err := nm.Delete() // Removes key and config from disk, clears in-memory state
```

## Challenge-Response Authentication

After the ECDH key exchange, nodes prove identity possession through HMAC-SHA256 challenge-response. The shared secret is never transmitted.

### Functions

```go
// Generate a 32-byte cryptographically random challenge
challenge, err := node.GenerateChallenge()

// Sign a challenge with the shared secret (HMAC-SHA256)
response := node.SignChallenge(challenge, sharedSecret)

// Verify a challenge response (constant-time comparison via hmac.Equal)
ok := node.VerifyChallenge(challenge, response, sharedSecret)
```
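These three functions map directly onto standard-library primitives. A self-contained sketch of the mechanism, not the package's actual implementation:

```go
package main

import (
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// generateChallenge returns 32 cryptographically random bytes.
func generateChallenge() ([]byte, error) {
	c := make([]byte, 32)
	_, err := rand.Read(c)
	return c, err
}

// signChallenge computes HMAC-SHA256 over the challenge with the shared secret.
func signChallenge(challenge, sharedSecret []byte) []byte {
	mac := hmac.New(sha256.New, sharedSecret)
	mac.Write(challenge)
	return mac.Sum(nil)
}

// verifyChallenge recomputes the MAC and compares in constant time.
func verifyChallenge(challenge, response, sharedSecret []byte) bool {
	return hmac.Equal(response, signChallenge(challenge, sharedSecret))
}

func main() {
	secret := []byte("derived-shared-secret") // placeholder; real secret comes from ECDH
	challenge, _ := generateChallenge()
	response := signChallenge(challenge, secret)
	fmt.Println(verifyChallenge(challenge, response, secret))          // true
	fmt.Println(verifyChallenge(challenge, response, []byte("wrong"))) // false
}
```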
### Authentication Flow

```
Node A (initiator)                       Node B (responder)
        |                                        |
        |--- handshake (identity + challenge) -->|
        |                                        |
        |   [B derives shared secret via ECDH]   |
        |   [B checks protocol version]          |
        |   [B checks allowlist]                 |
        |   [B signs challenge with HMAC]        |
        |                                        |
        |<-- handshake_ack (identity + sig) -----|
        |                                        |
        |   [A derives shared secret via ECDH]   |
        |   [A verifies challenge response]      |
        |                                        |
        |  ------ encrypted channel open ------  |
```

The handshake and handshake_ack messages are sent unencrypted (they carry the public keys needed to derive the shared secret). All subsequent messages are SMSG-encrypted.

## Node Roles

```go
const (
    RoleController NodeRole = "controller" // Manages the mesh, distributes tasks
    RoleWorker     NodeRole = "worker"     // Receives and executes workloads
    RoleDual       NodeRole = "dual"       // Both controller and worker
)
```

| Role | Behaviour |
|------|-----------|
| `controller` | Sends `get_stats`, `start_miner`, `stop_miner`, `get_logs`, `deploy` messages |
| `worker` | Handles incoming commands, runs compute tasks, reports stats |
| `dual` | Participates as both controller and worker |

## Thread Safety

`NodeManager` is safe for concurrent use. A `sync.RWMutex` protects all internal state. `GetIdentity()` returns a copy rather than a pointer to the internal struct.
docs/index.md (new file, 94 lines)

@@ -0,0 +1,94 @@

---
title: go-p2p Overview
description: P2P mesh networking layer for the Lethean network.
---

# go-p2p

P2P networking layer for the Lethean network: an encrypted WebSocket mesh with the UEPS wire protocol.

**Module:** `forge.lthn.ai/core/go-p2p`
**Go:** 1.26
**Licence:** EUPL-1.2

## Package Structure

```
go-p2p/
├── node/        P2P mesh: identity, transport, peers, protocol, controller, dispatcher
│   └── levin/   Levin binary protocol (header, storage, varint, connection)
├── ueps/        UEPS wire protocol (RFC-021): TLV packet builder and stream reader
└── logging/     Structured levelled logger with component scoping
```

## What Each Piece Does

| Component | File(s) | Purpose |
|-----------|---------|---------|
| [Identity](identity.md) | `identity.go` | X25519 keypair, node ID derivation, HMAC-SHA256 challenge-response |
| [Transport](transport.md) | `transport.go` | Encrypted WebSocket connections, SMSG encryption, rate limiting |
| [Discovery](discovery.md) | `peer.go` | Peer registry, KD-tree selection, score tracking, allowlist auth |
| [UEPS](ueps.md) | `ueps/packet.go`, `ueps/reader.go` | TLV wire protocol with HMAC integrity (RFC-021) |
| [Routing](routing.md) | `dispatcher.go` | Intent-based packet routing with threat circuit breaker |
| [TIM Bundles](tim.md) | `bundle.go` | Encrypted deployment bundles, tar extraction with Zip Slip defence |
| Messages | `message.go` | Message envelope, payload types, protocol version negotiation |
| Protocol | `protocol.go` | Response validation, structured error handling |
| Controller | `controller.go` | Request-response correlation, remote peer operations |
| Worker | `worker.go` | Incoming message dispatch, miner/profile management interfaces |
| Buffer Pool | `bufpool.go` | `sync.Pool`-backed JSON encoding for hot paths |

## Dependencies

| Module | Purpose |
|--------|---------|
| `forge.lthn.ai/Snider/Borg` | STMF crypto (keypairs), SMSG encryption, TIM bundle format |
| `forge.lthn.ai/Snider/Poindexter` | KD-tree peer scoring and nearest-neighbour selection |
| `github.com/gorilla/websocket` | WebSocket transport |
| `github.com/google/uuid` | Message and peer ID generation |
| `github.com/adrg/xdg` | XDG base directory paths for key and config storage |

## Message Protocol

Every message is a JSON-encoded `Message` struct transported over WebSocket. After the handshake, all messages are SMSG-encrypted using the X25519-derived shared secret.

```go
type Message struct {
    ID        string          `json:"id"`   // UUID v4
    Type      MessageType     `json:"type"` // Determines payload interpretation
    From      string          `json:"from"` // Sender node ID
    To        string          `json:"to"`   // Recipient node ID
    Timestamp time.Time       `json:"ts"`
    Payload   json.RawMessage `json:"payload"`           // Type-specific JSON
    ReplyTo   string          `json:"replyTo,omitempty"` // For request-response correlation
}
```
### Message Types

| Category | Types |
|----------|-------|
| Connection | `handshake`, `handshake_ack`, `ping`, `pong`, `disconnect` |
| Operations | `get_stats`, `stats`, `start_miner`, `stop_miner`, `miner_ack` |
| Deployment | `deploy`, `deploy_ack` |
| Logs | `get_logs`, `logs` |
| Error | `error` (codes 1000–1005) |

## Node Roles

```go
const (
    RoleController NodeRole = "controller" // Orchestrates work distribution
    RoleWorker     NodeRole = "worker"     // Executes compute tasks
    RoleDual       NodeRole = "dual"       // Both controller and worker
)
```

## Architecture Layers

The stack has two distinct protocol layers:

1. **UEPS (low-level)** — Binary TLV wire protocol with HMAC-SHA256 integrity, intent routing, and threat scoring. Operates beneath the mesh layer. See [ueps.md](ueps.md).

2. **Node mesh (high-level)** — JSON-over-WebSocket with SMSG encryption. Handles identity, peer management, controller/worker operations, and deployment bundles.

The dispatcher bridges the two layers, routing verified UEPS packets to registered intent handlers whilst enforcing the threat circuit breaker.
docs/routing.md (new file, 157 lines)

@@ -0,0 +1,157 @@

---
title: Intent Routing
description: UEPS intent-based packet routing with threat circuit breaker.
---

# Intent Routing

The `Dispatcher` routes verified UEPS packets to registered intent handlers. Before routing, it enforces a threat circuit breaker that silently drops packets with elevated threat scores.

**File:** `node/dispatcher.go`

## Dispatcher

```go
dispatcher := node.NewDispatcher()
```

The dispatcher is safe for concurrent use — a `sync.RWMutex` protects the handler map.

## Registering Handlers

Handlers are registered per intent ID (one handler per intent). Registering a new handler for an existing intent replaces the previous one.

```go
dispatcher.RegisterHandler(node.IntentHandshake, func(pkt *ueps.ParsedPacket) error {
    // Handle handshake packets
    return nil
})

dispatcher.RegisterHandler(node.IntentCompute, func(pkt *ueps.ParsedPacket) error {
    // Handle compute job requests
    return nil
})
```

### Handler Signature

```go
type IntentHandler func(pkt *ueps.ParsedPacket) error
```

Handlers receive the fully parsed and HMAC-verified packet. The payload bytes are available as `pkt.Payload` and the routing metadata as `pkt.Header`.

### Iterating Handlers

```go
for intentID := range dispatcher.Handlers() {
    fmt.Printf("0x%02X registered\n", intentID)
}
```

## Dispatching Packets

```go
err := dispatcher.Dispatch(packet)
```

### Dispatch Flow

1. **Nil check** — returns `ErrNilPacket` immediately.
2. **Threat circuit breaker** — if `ThreatScore > 50,000`, the packet is dropped and `ErrThreatScoreExceeded` is returned. A warning is logged.
3. **Intent lookup** — finds the handler registered for `pkt.Header.IntentID`. If none exists, the packet is dropped and `ErrUnknownIntent` is returned.
4. **Handler invocation** — calls the handler and returns its result.
### Threat Circuit Breaker

```go
const ThreatScoreThreshold uint16 = 50000
```

The threshold sits at approximately 76% of the uint16 range (50,000 / 65,535). This leaves headroom for legitimate elevated-risk traffic whilst rejecting clearly hostile payloads.

Dropped packets are logged at WARN level with the threat score, threshold, intent ID, and protocol version.

### Design Rationale

- **High-threat packets are dropped silently** (from the sender's perspective) rather than returning an error, consistent with the "don't even parse the payload" philosophy.
- **Unknown intents are dropped**, not forwarded, to avoid back-pressure on the transport layer. They are logged at WARN level for debugging.
- **Handler errors propagate** to the caller, allowing upstream code to record failures.

## Intent Constants

```go
const (
    IntentHandshake byte = 0x01 // Connection establishment / hello
    IntentCompute   byte = 0x20 // Compute job request
    IntentRehab     byte = 0x30 // Benevolent intervention (pause execution)
    IntentCustom    byte = 0xFF // Extended / application-level sub-protocols
)
```

| Intent | ID | Purpose |
|--------|----|---------|
| Handshake | `0x01` | Connection establishment and hello exchange |
| Compute | `0x20` | Compute job requests (mining, inference, etc.) |
| Rehab | `0x30` | Benevolent intervention — pause execution of a potentially harmful workload |
| Custom | `0xFF` | Application-level sub-protocols carried in the payload |

## Sentinel Errors

```go
var (
    ErrThreatScoreExceeded = fmt.Errorf(
        "packet rejected: threat score exceeds safety threshold (%d)",
        ThreatScoreThreshold,
    )
    ErrUnknownIntent = errors.New("packet dropped: unknown intent")
    ErrNilPacket     = errors.New("dispatch: nil packet")
)
```

## Integration with Transport

The dispatcher operates above the UEPS reader. A typical integration:

```go
// Parse and verify the UEPS frame
packet, err := ueps.ReadAndVerify(reader, sharedSecret)
if err != nil {
    // HMAC mismatch or malformed packet
    return err
}

// Route through the dispatcher
if err := dispatcher.Dispatch(packet); err != nil {
    if errors.Is(err, node.ErrThreatScoreExceeded) {
        // Packet was too threatening — already logged
        return nil
    }
    if errors.Is(err, node.ErrUnknownIntent) {
        // No handler for this intent — already logged
        return nil
    }
    // Handler returned an error
    return err
}
```

## Full Example

```go
// Set up a dispatcher with handlers
dispatcher := node.NewDispatcher()

dispatcher.RegisterHandler(node.IntentCompute, func(pkt *ueps.ParsedPacket) error {
    fmt.Printf("Compute request: %s\n", string(pkt.Payload))
    return nil
})

// Build a packet
builder := ueps.NewBuilder(node.IntentCompute, []byte(`{"job":"hashrate"}`))
frame, _ := builder.MarshalAndSign(sharedSecret)

// Parse and dispatch
packet, _ := ueps.ReadAndVerify(bufio.NewReader(bytes.NewReader(frame)), sharedSecret)
err := dispatcher.Dispatch(packet) // Calls the compute handler
```
185
docs/tim.md
Normal file
|
|
@ -0,0 +1,185 @@
|
|||
---
|
||||
title: TIM Deployment Bundles
|
||||
description: Encrypted deployment bundles using TIM/STIM format with Zip Slip defences.
|
||||
---
|
||||
|
||||
# TIM Deployment Bundles
|
||||
|
||||
The bundle system handles encrypted deployment packages for peer-to-peer transfer. Bundles use the TIM (Terminal Isolation Matrix) format from the Borg library for encryption and the tar format for file archives. Extraction includes multiple layers of path traversal defence.
|
||||
|
||||
**File:** `node/bundle.go`
|
||||
|
||||
## Bundle Types
|
||||
|
||||
```go
|
||||
const (
|
||||
BundleProfile BundleType = "profile" // JSON configuration only
|
||||
BundleMiner BundleType = "miner" // Miner binary + optional config
|
||||
BundleFull BundleType = "full" // Everything (miner + profiles + config)
|
||||
)
|
||||
```
|
||||
|
||||
## Bundle Structure
|
||||
|
||||
```go
|
||||
type Bundle struct {
|
||||
Type BundleType `json:"type"`
|
||||
Name string `json:"name"`
|
||||
Data []byte `json:"data"` // Encrypted STIM data or raw JSON
|
||||
Checksum string `json:"checksum"` // SHA-256 of Data
|
||||
}
|
||||
```
|
||||
|
||||
Every bundle carries a SHA-256 checksum of its `Data` field. This is verified before extraction to detect corruption or tampering in transit.
|
||||
|
||||
## Creating Bundles
|
||||
|
||||
### Profile Bundle (Encrypted)
|
||||
|
||||
```go
|
||||
profileJSON := []byte(`{"algorithm":"randomx","threads":4}`)
|
||||
bundle, err := node.CreateProfileBundle(profileJSON, "gpu-profile", password)
|
||||
```
|
||||
|
||||
Internally:
|
||||
1. Creates a TIM container with the profile as its config.
|
||||
2. Encrypts to STIM format using the password.
|
||||
3. Computes SHA-256 checksum.
|
||||
|
||||
### Profile Bundle (Unencrypted)
|
||||
|
||||
For testing or trusted networks:
|
||||
|
||||
```go
|
||||
bundle, err := node.CreateProfileBundleUnencrypted(profileJSON, "test-profile")
|
||||
```
|
||||
|
||||
### Miner Bundle
|
||||
|
||||
```go
|
||||
bundle, err := node.CreateMinerBundle(
|
||||
"/path/to/xmrig", // Miner binary path
|
||||
profileJSON, // Optional profile config (may be nil)
|
||||
"xmrig-linux-amd64", // Bundle name
|
||||
password, // Encryption password
|
||||
)
|
||||
```
|
||||
|
||||
Internally:
|
||||
1. Reads the miner binary from disk.
|
||||
2. Creates a tar archive containing the binary.
|
||||
3. Converts the tar to a Borg DataNode, then to a TIM container.
|
||||
4. Attaches the profile config if provided.
|
||||
5. Encrypts to STIM format.
|
||||
|
||||
## Extracting Bundles
|
||||
|
||||
### Profile Bundle
|
||||
|
||||
```go
|
||||
profileJSON, err := node.ExtractProfileBundle(bundle, password)
|
||||
```
|
||||
|
||||
Detects whether the data is plain JSON or encrypted STIM and handles both cases.
|
||||
|
||||
### Miner Bundle
|
||||
|
||||
```go
|
||||
minerPath, profileJSON, err := node.ExtractMinerBundle(bundle, password, "/opt/miners/")
|
||||
```
|
||||
|
||||
Returns the path to the first executable found in the archive and the profile config.
|
||||
|
||||
## Verification
|
||||
|
||||
```go
|
||||
ok := node.VerifyBundle(bundle)
|
||||
```
|
||||
|
||||
Recomputes the SHA-256 checksum and compares against the stored value.
|
||||
|
||||
## Streaming
|
||||
|
||||
For large bundles transferred over the network:
|
||||
|
||||
```go
|
||||
// Write
|
||||
err := node.StreamBundle(bundle, writer)
|
||||
|
||||
// Read
|
||||
bundle, err := node.ReadBundle(reader)
|
||||
```
|
||||
|
||||
Both use JSON encoding/decoding via `json.Encoder`/`json.Decoder`.
|
||||
|
||||
## Security: Zip Slip Defence
|
||||
|
||||
Tar extraction (`extractTarball`) applies multiple layers of path traversal protection:
|
||||
|
||||
1. **Path cleaning** -- `filepath.Clean(hdr.Name)` normalises the entry name.
|
||||
2. **Absolute path rejection** -- entries with absolute paths are rejected.
|
||||
3. **Parent directory traversal** -- entries starting with `../` or equal to `..` are rejected.
|
||||
4. **Destination containment** -- the resolved full path must have `destDir` as a prefix. This catches edge cases the previous checks might miss.
|
||||
5. **Symlink rejection** -- both symbolic links (`TypeSymlink`) and hard links (`TypeLink`) are silently skipped, preventing symlink-based escapes.
|
||||
6. **File size limit** -- each file is capped at 100MB via `io.LimitReader` to prevent decompression bombs. Files exceeding the limit are deleted after detection.
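Layers 1--4 above can be sketched as a single containment helper (names are illustrative, not the actual `extractTarball` code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// safeJoin mirrors the layered checks described above: clean the entry
// name, reject absolute paths and ".." traversal, then confirm the
// resolved path stays inside destDir.
func safeJoin(destDir, name string) (string, error) {
	clean := filepath.Clean(name)
	if filepath.IsAbs(clean) {
		return "", fmt.Errorf("invalid tar entry: absolute path %q", name)
	}
	if clean == ".." || strings.HasPrefix(clean, ".."+string(os.PathSeparator)) {
		return "", fmt.Errorf("invalid tar entry: path traversal attempt")
	}
	full := filepath.Join(destDir, clean)
	if !strings.HasPrefix(full, filepath.Clean(destDir)+string(os.PathSeparator)) {
		return "", fmt.Errorf("invalid tar entry: escapes destination")
	}
	return full, nil
}

func main() {
	if _, err := safeJoin("/opt/miners", "../../../etc/passwd"); err != nil {
		fmt.Println("rejected:", err)
	}
	p, _ := safeJoin("/opt/miners", "bin/xmrig")
	fmt.Println("accepted:", p)
}
```

The final `HasPrefix` check is the belt-and-braces layer: even if an entry slips past the earlier name-based checks, the resolved path must still sit under `destDir`.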
|
||||
|
||||
### Example of a Rejected Entry
|
||||
|
||||
```
|
||||
Entry: "../../../etc/passwd"
|
||||
-> filepath.Clean -> "../../../etc/passwd"
|
||||
-> strings.HasPrefix(path, "../") -> true
|
||||
-> REJECTED: "invalid tar entry: path traversal attempt"
|
||||
```
|
||||
|
||||
## Deployment Message Flow
|
||||
|
||||
The bundle system integrates with the P2P message protocol:
|
||||
|
||||
```
|
||||
Controller Worker
|
||||
| |
|
||||
| [CreateProfileBundle / CreateMinerBundle]
|
||||
| |
|
||||
|--- deploy (DeployPayload) --------->|
|
||||
| |
|
||||
| [VerifyBundle] |
|
||||
| [ExtractProfileBundle / ExtractMinerBundle]
|
||||
| |
|
||||
|<-- deploy_ack (success/error) ------|
|
||||
```
|
||||
|
||||
### DeployPayload
|
||||
|
||||
```go
|
||||
type DeployPayload struct {
|
||||
BundleType string `json:"type"` // "profile", "miner", or "full"
|
||||
Data []byte `json:"data"` // STIM-encrypted bundle
|
||||
Checksum string `json:"checksum"` // SHA-256 of Data
|
||||
Name string `json:"name"` // Profile or miner name
|
||||
}
|
||||
```
|
||||
|
||||
## BundleManifest
|
||||
|
||||
Describes the contents of a bundle for catalogue purposes:
|
||||
|
||||
```go
|
||||
type BundleManifest struct {
|
||||
Type BundleType `json:"type"`
|
||||
Name string `json:"name"`
|
||||
Version string `json:"version,omitempty"`
|
||||
MinerType string `json:"minerType,omitempty"`
|
||||
ProfileIDs []string `json:"profileIds,omitempty"`
|
||||
CreatedAt string `json:"createdAt"`
|
||||
}
|
||||
```
|
||||
|
||||
## Dependencies
|
||||
|
||||
| Library | Usage |
|
||||
|---------|-------|
|
||||
| `forge.lthn.ai/Snider/Borg/pkg/tim` | TIM container creation, STIM encryption/decryption |
|
||||
| `forge.lthn.ai/Snider/Borg/pkg/datanode` | DataNode from tar archive (for miner bundles) |
|
||||
| `archive/tar` | Tar creation and extraction |
|
||||
| `crypto/sha256` | Bundle checksum computation |
|
||||
188
docs/transport.md
Normal file
|
|
@ -0,0 +1,188 @@
|
|||
---
|
||||
title: Encrypted WebSocket Transport
|
||||
description: SMSG-encrypted WebSocket connections with HMAC handshake, rate limiting, and message deduplication.
|
||||
---
|
||||
|
||||
# Encrypted WebSocket Transport
|
||||
|
||||
The `Transport` manages encrypted WebSocket connections between nodes. After an HMAC-SHA256 challenge-response handshake, all messages are encrypted using SMSG (from the Borg library) with the X25519-derived shared secret.
|
||||
|
||||
## Configuration
|
||||
|
||||
```go
|
||||
type TransportConfig struct {
|
||||
ListenAddr string // ":9091" default
|
||||
WSPath string // "/ws" -- WebSocket endpoint path
|
||||
TLSCertPath string // Optional TLS for wss://
|
||||
TLSKeyPath string
|
||||
MaxConns int // Maximum concurrent connections (default 100)
|
||||
MaxMessageSize int64 // Maximum message size in bytes (default 1MB)
|
||||
PingInterval time.Duration // Keepalive interval (default 30s)
|
||||
PongTimeout time.Duration // Pong wait timeout (default 10s)
|
||||
}
|
||||
```
|
||||
|
||||
Sensible defaults via `DefaultTransportConfig()`:
|
||||
|
||||
```go
|
||||
cfg := node.DefaultTransportConfig()
|
||||
// ListenAddr: ":9091", WSPath: "/ws", MaxConns: 100
|
||||
// MaxMessageSize: 1MB, PingInterval: 30s, PongTimeout: 10s
|
||||
```
|
||||
|
||||
## Creating and Starting
|
||||
|
||||
```go
|
||||
transport := node.NewTransport(nodeManager, peerRegistry, cfg)
|
||||
|
||||
// Set message handler before Start() to avoid races
|
||||
transport.OnMessage(func(conn *node.PeerConnection, msg *node.Message) {
|
||||
// Handle incoming messages
|
||||
})
|
||||
|
||||
err := transport.Start()
|
||||
```
|
||||
|
||||
## TLS Hardening
|
||||
|
||||
When `TLSCertPath` and `TLSKeyPath` are set, the transport uses TLS with hardened settings:
|
||||
|
||||
- Minimum TLS 1.2
|
||||
- Curve preferences: X25519, P-256
|
||||
- AEAD cipher suites only (GCM and ChaCha20-Poly1305)
|
||||
|
||||
## Connection Handshake
|
||||
|
||||
The handshake sequence establishes identity and derives the encryption key:
|
||||
|
||||
1. **Initiator** sends `MsgHandshake` containing its `NodeIdentity`, a 32-byte random challenge, and the protocol version (`"1.0"`).
|
||||
2. **Responder** derives the shared secret via X25519 ECDH, checks the protocol version is supported, verifies the peer against the allowlist (if `PeerAuthAllowlist` mode), signs the challenge with HMAC-SHA256, and sends `MsgHandshakeAck` with its own identity and the challenge response.
|
||||
3. **Initiator** derives the shared secret, verifies the HMAC response, and stores the shared secret.
|
||||
4. All subsequent messages are SMSG-encrypted.
|
||||
|
||||
Both the handshake and acknowledgement are sent unencrypted -- they carry the public keys needed to derive the shared secret. A 10-second timeout prevents slow or malicious peers from blocking the handshake.
|
||||
|
||||
### Rejection
|
||||
|
||||
The responder rejects connections with a `HandshakeAckPayload` where `Accepted: false` and a `Reason` string for:
|
||||
|
||||
- Incompatible protocol version
|
||||
- Peer not on the allowlist (when in allowlist mode)
|
||||
|
||||
## Message Encryption
|
||||
|
||||
After handshake, messages are encrypted with SMSG using the shared secret:
|
||||
|
||||
```
|
||||
Send path: Message -> JSON (pooled buffer) -> SMSG encrypt -> WebSocket binary frame
|
||||
Recv path: WebSocket frame -> SMSG decrypt -> JSON unmarshal -> Message
|
||||
```
|
||||
|
||||
The shared secret is base64-encoded before use as the SMSG password. This is handled internally by `encryptMessage()` and `decryptMessage()`.
|
||||
|
||||
## PeerConnection
|
||||
|
||||
Each active connection is wrapped in a `PeerConnection`:
|
||||
|
||||
```go
|
||||
type PeerConnection struct {
|
||||
Peer *Peer // Remote peer identity
|
||||
Conn *websocket.Conn // Underlying WebSocket
|
||||
SharedSecret []byte // From X25519 ECDH
|
||||
LastActivity time.Time
|
||||
}
|
||||
```
|
||||
|
||||
### Sending Messages
|
||||
|
||||
```go
|
||||
err := peerConn.Send(msg)
|
||||
```
|
||||
|
||||
`Send()` serialises the message to JSON, encrypts it with SMSG, sets a 10-second write deadline, and writes as a binary WebSocket frame. A `writeMu` mutex serialises concurrent writes.
|
||||
|
||||
### Graceful Close
|
||||
|
||||
```go
|
||||
err := peerConn.GracefulClose("shutting down", node.DisconnectShutdown)
|
||||
```
|
||||
|
||||
Sends a `disconnect` message (best-effort) before closing the connection. Uses `sync.Once` to ensure the connection is only closed once.
|
||||
|
||||
### Disconnect Codes
|
||||
|
||||
```go
|
||||
const (
|
||||
DisconnectNormal = 1000 // Normal closure
|
||||
DisconnectGoingAway = 1001 // Server/peer going away
|
||||
DisconnectProtocolErr = 1002 // Protocol error
|
||||
DisconnectTimeout = 1003 // Idle timeout
|
||||
DisconnectShutdown = 1004 // Server shutdown
|
||||
)
|
||||
```
|
||||
|
||||
## Incoming Connections
|
||||
|
||||
The transport exposes an HTTP handler at the configured `WSPath` that upgrades to WebSocket. Origin checks restrict browser clients to `localhost`, `127.0.0.1`, and `::1`; non-browser clients (no `Origin` header) are allowed.
|
||||
|
||||
The `MaxConns` limit is enforced before the WebSocket upgrade, counting both established and pending (mid-handshake) connections. Excess connections receive HTTP 503.
|
||||
|
||||
## Message Deduplication
|
||||
|
||||
`MessageDeduplicator` prevents duplicate message processing (amplification attack mitigation):
|
||||
|
||||
- Tracks message IDs with a configurable TTL (default 5 minutes)
|
||||
- Checked after decryption, before handler dispatch
|
||||
- Background cleanup runs every minute
|
||||
|
||||
## Rate Limiting
|
||||
|
||||
Each `PeerConnection` has a `PeerRateLimiter` implementing a token-bucket algorithm:
|
||||
|
||||
- **Burst:** 100 messages
|
||||
- **Refill:** 50 tokens per second
|
||||
|
||||
Messages exceeding the rate limit are silently dropped with a warning log. This prevents a single peer from overwhelming the node.
|
||||
|
||||
## Keepalive
|
||||
|
||||
A background goroutine per connection sends `MsgPing` at the configured `PingInterval`. If no activity is observed within `PingInterval + PongTimeout`, the connection is closed and removed.
|
||||
|
||||
The read loop also sets a deadline of `PingInterval + PongTimeout` on each read, preventing indefinitely blocked reads on unresponsive connections.
|
||||
|
||||
## Lifecycle
|
||||
|
||||
```go
|
||||
// Start listening and accepting connections
|
||||
err := transport.Start()
|
||||
|
||||
// Connect to a known peer (triggers handshake)
|
||||
pc, err := transport.Connect(peer)
|
||||
|
||||
// Send to a specific peer
|
||||
err = transport.Send(peerID, msg)
|
||||
|
||||
// Broadcast to all connected peers (excludes sender)
|
||||
err = transport.Broadcast(msg)
|
||||
|
||||
// Query connections
|
||||
count := transport.ConnectedPeers()
|
||||
conn := transport.GetConnection(peerID)
|
||||
|
||||
// Iterate over all connections
|
||||
for pc := range transport.Connections() {
|
||||
// ...
|
||||
}
|
||||
|
||||
// Graceful shutdown (sends disconnect to all peers, waits for goroutines)
|
||||
err = transport.Stop()
|
||||
```
|
||||
|
||||
## Buffer Pool
|
||||
|
||||
JSON encoding in hot paths uses `sync.Pool`-backed byte buffers (`bufpool.go`). The `MarshalJSON()` function:
|
||||
|
||||
- Uses pooled buffers (initial capacity 1024 bytes)
|
||||
- Disables HTML escaping
|
||||
- Returns a copy of the encoded bytes (safe after function return)
|
||||
- Discards buffers exceeding 64KB to prevent pool bloat
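The behaviour described above can be sketched as follows (an assumption-laden reconstruction, not the actual `bufpool.go`; note `json.Encoder` appends a trailing newline):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"sync"
)

var bufPool = sync.Pool{
	New: func() any { return bytes.NewBuffer(make([]byte, 0, 1024)) },
}

// MarshalJSON encodes v using a pooled buffer, with HTML escaping off,
// and returns a copy so the buffer can be safely reused.
func MarshalJSON(v any) ([]byte, error) {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	enc := json.NewEncoder(buf)
	enc.SetEscapeHTML(false)
	if err := enc.Encode(v); err != nil {
		bufPool.Put(buf)
		return nil, err
	}
	out := make([]byte, buf.Len())
	copy(out, buf.Bytes()) // copy: safe after the buffer returns to the pool
	if buf.Cap() <= 64*1024 {
		bufPool.Put(buf) // oversized buffers are discarded to avoid pool bloat
	}
	return out, nil
}

func main() {
	out, _ := MarshalJSON(map[string]int{"a": 1})
	fmt.Printf("%q\n", string(out)) // "{\"a\":1}\n"
}
```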
|
||||
172
docs/ueps.md
Normal file
|
|
@ -0,0 +1,172 @@
|
|||
---
|
||||
title: UEPS Wire Protocol
|
||||
description: TLV-encoded wire protocol with HMAC-SHA256 integrity verification (RFC-021).
|
||||
---
|
||||
|
||||
# UEPS Wire Protocol
|
||||
|
||||
The `ueps` package implements the Universal Encrypted Payload System -- a consent-gated TLV (Type-Length-Value) wire protocol with HMAC-SHA256 integrity verification. This is the low-level binary protocol that sits beneath the JSON-over-WebSocket mesh layer.
|
||||
|
||||
**Package:** `forge.lthn.ai/core/go-p2p/ueps`
|
||||
|
||||
## TLV Format
|
||||
|
||||
Each field is encoded as a 1-byte tag, 2-byte big-endian length (uint16), and variable-length value. Maximum field size is 65,535 bytes.
|
||||
|
||||
```
|
||||
+------+--------+--------+-----------+
|
||||
| Tag | Len-Hi | Len-Lo | Value |
|
||||
| 1B | 1B | 1B | 0..65535B |
|
||||
+------+--------+--------+-----------+
|
||||
```
|
||||
|
||||
## Tag Registry
|
||||
|
||||
| Tag | Constant | Value Size | Description |
|
||||
|-----|----------|------------|-------------|
|
||||
| `0x01` | `TagVersion` | 1 byte | Protocol version (default `0x09` for IPv9) |
|
||||
| `0x02` | `TagCurrentLay` | 1 byte | Current network layer |
|
||||
| `0x03` | `TagTargetLay` | 1 byte | Target network layer |
|
||||
| `0x04` | `TagIntent` | 1 byte | Semantic intent token (routes the packet) |
|
||||
| `0x05` | `TagThreatScore` | 2 bytes | Threat score (0--65535, big-endian uint16) |
|
||||
| `0x06` | `TagHMAC` | 32 bytes | HMAC-SHA256 signature |
|
||||
| `0xFF` | `TagPayload` | variable | Application data |
|
||||
|
||||
## Header
|
||||
|
||||
```go
|
||||
type UEPSHeader struct {
|
||||
Version uint8 // Default 0x09
|
||||
CurrentLayer uint8 // Source layer
|
||||
TargetLayer uint8 // Destination layer
|
||||
IntentID uint8 // Semantic intent token
|
||||
ThreatScore uint16 // 0--65535
|
||||
}
|
||||
```
|
||||
|
||||
## Building Packets
|
||||
|
||||
`PacketBuilder` constructs signed UEPS frames:
|
||||
|
||||
```go
|
||||
builder := ueps.NewBuilder(intentID, payload)
|
||||
builder.Header.ThreatScore = 100
|
||||
|
||||
frame, err := builder.MarshalAndSign(sharedSecret)
|
||||
```
|
||||
|
||||
### Defaults
|
||||
|
||||
`NewBuilder` sets:
|
||||
- `Version`: `0x09` (IPv9)
|
||||
- `CurrentLayer`: `5` (Application)
|
||||
- `TargetLayer`: `5` (Application)
|
||||
- `ThreatScore`: `0` (assumed innocent)
|
||||
|
||||
### Wire Layout
|
||||
|
||||
`MarshalAndSign` produces:
|
||||
|
||||
```
|
||||
[Version TLV][CurrentLayer TLV][TargetLayer TLV][Intent TLV][ThreatScore TLV][HMAC TLV][Payload TLV]
|
||||
```
|
||||
|
||||
1. Serialises header TLVs (tags `0x01`--`0x05`) into the buffer.
|
||||
2. Computes HMAC-SHA256 over `header_bytes + raw_payload` using the shared secret.
|
||||
3. Writes the HMAC TLV (`0x06`, 32 bytes).
|
||||
4. Writes the payload TLV (`0xFF`, with 2-byte length prefix).
|
||||
|
||||
### What the HMAC Covers
|
||||
|
||||
The signature covers:
|
||||
|
||||
- All header TLV bytes (tag + length + value for tags `0x01`--`0x05`)
|
||||
- The raw payload bytes
|
||||
|
||||
It does **not** cover the HMAC TLV itself or the payload's tag/length bytes (only the payload value).
|
||||
|
||||
## Reading and Verifying
|
||||
|
||||
`ReadAndVerify` parses and validates a UEPS frame from a buffered reader:
|
||||
|
||||
```go
|
||||
packet, err := ueps.ReadAndVerify(bufio.NewReader(data), sharedSecret)
|
||||
if err != nil {
|
||||
// Integrity violation or malformed packet
|
||||
}
|
||||
|
||||
fmt.Println(packet.Header.IntentID)
|
||||
fmt.Println(string(packet.Payload))
|
||||
```
|
||||
|
||||
### Verification Steps
|
||||
|
||||
1. Reads TLV fields sequentially, accumulating header bytes into a signed-data buffer.
|
||||
2. Stores the HMAC signature separately (not added to signed-data).
|
||||
3. On encountering `0xFF` (payload tag), reads the length-prefixed payload.
|
||||
4. Recomputes HMAC over `signed_data + payload`.
|
||||
5. Compares signatures using `hmac.Equal` (constant-time).
|
||||
|
||||
On HMAC mismatch, returns: `"integrity violation: HMAC mismatch (ThreatScore +100)"`.
|
||||
|
||||
### Parsed Result
|
||||
|
||||
```go
|
||||
type ParsedPacket struct {
|
||||
Header UEPSHeader
|
||||
Payload []byte
|
||||
}
|
||||
```
|
||||
|
||||
### Unknown Tags
|
||||
|
||||
Unknown tags between the header and HMAC are included in the signed-data buffer but ignored semantically. This provides forward compatibility -- older readers can verify packets that include newer header fields.
|
||||
|
||||
## Roundtrip Example
|
||||
|
||||
```go
|
||||
secret := []byte("shared-secret-32-bytes-here.....")
|
||||
payload := []byte(`{"action":"compute","params":{}}`)
|
||||
|
||||
// Build and sign
|
||||
builder := ueps.NewBuilder(0x20, payload) // IntentCompute
|
||||
frame, err := builder.MarshalAndSign(secret)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
// Read and verify
|
||||
reader := bufio.NewReader(bytes.NewReader(frame))
|
||||
packet, err := ueps.ReadAndVerify(reader, secret)
|
||||
if err != nil {
|
||||
log.Fatal(err) // Integrity violation
|
||||
}
|
||||
|
||||
fmt.Printf("Intent: 0x%02X\n", packet.Header.IntentID) // 0x20
|
||||
fmt.Printf("Payload: %s\n", string(packet.Payload))
|
||||
```
|
||||
|
||||
## Intent Routing
|
||||
|
||||
The `IntentID` field enables semantic routing at the application layer. The [dispatcher](routing.md) uses this field to route verified packets to registered handlers.
|
||||
|
||||
Reserved intent values:
|
||||
|
||||
| ID | Constant | Purpose |
|
||||
|----|----------|---------|
|
||||
| `0x01` | `IntentHandshake` | Connection establishment / hello |
|
||||
| `0x20` | `IntentCompute` | Compute job request |
|
||||
| `0x30` | `IntentRehab` | Benevolent intervention (pause execution) |
|
||||
| `0xFF` | `IntentCustom` | Extended / application-level sub-protocols |
|
||||
|
||||
## Threat Score
|
||||
|
||||
The `ThreatScore` field (0--65535) provides a mechanism for nodes to signal the perceived risk level of a packet. The dispatcher's circuit breaker drops packets exceeding a threshold of 50,000 (see [routing.md](routing.md)).
|
||||
|
||||
When the reader detects an HMAC mismatch, the error message includes `ThreatScore +100` as guidance for upstream threat tracking.
|
||||
|
||||
## Security Notes
|
||||
|
||||
- HMAC-SHA256 provides both integrity and authenticity (assuming the shared secret is known only to the two communicating nodes).
|
||||
- The constant-time comparison via `hmac.Equal` prevents timing side-channel attacks.
|
||||
- The 2-byte length prefix on all TLVs (including payload) prevents unbounded reads -- maximum 65,535 bytes per field.
|
||||
18
go.mod
|
|
@ -1,10 +1,12 @@
|
|||
module forge.lthn.ai/core/go-p2p
|
||||
module dappco.re/go/core/p2p
|
||||
|
||||
go 1.26.0
|
||||
|
||||
require (
|
||||
forge.lthn.ai/Snider/Borg v0.2.1
|
||||
forge.lthn.ai/Snider/Poindexter v0.0.2
|
||||
dappco.re/go/core/io v0.2.0
|
||||
dappco.re/go/core/log v0.1.0
|
||||
forge.lthn.ai/Snider/Borg v0.3.1
|
||||
forge.lthn.ai/Snider/Poindexter v0.0.3
|
||||
github.com/adrg/xdg v0.5.3
|
||||
github.com/google/uuid v1.6.0
|
||||
github.com/gorilla/websocket v1.5.3
|
||||
|
|
@ -13,15 +15,13 @@ require (
|
|||
|
||||
require (
|
||||
forge.lthn.ai/Snider/Enchantrix v0.0.4 // indirect
|
||||
github.com/ProtonMail/go-crypto v1.3.0 // indirect
|
||||
forge.lthn.ai/core/go-log v0.0.4 // indirect
|
||||
github.com/ProtonMail/go-crypto v1.4.0 // indirect
|
||||
github.com/cloudflare/circl v1.6.3 // indirect
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
|
||||
github.com/klauspost/compress v1.18.4 // indirect
|
||||
github.com/kr/pretty v0.3.1 // indirect
|
||||
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
|
||||
github.com/rogpeppe/go-internal v1.14.1 // indirect
|
||||
golang.org/x/crypto v0.48.0 // indirect
|
||||
golang.org/x/sys v0.41.0 // indirect
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
|
||||
golang.org/x/crypto v0.49.0 // indirect
|
||||
golang.org/x/sys v0.42.0 // indirect
|
||||
gopkg.in/yaml.v3 v3.0.1 // indirect
|
||||
)
|
||||
|
|
|
|||
32
go.sum
|
|
@ -1,16 +1,21 @@
|
|||
forge.lthn.ai/Snider/Borg v0.2.1 h1:Uf/YtUJLL8jlxTCjvP4J+5GHe3LLeALGtbh7zj8d8Qc=
|
||||
forge.lthn.ai/Snider/Borg v0.2.1/go.mod h1:MVfolb7F6/A2LOIijcbBhWImu5db5NSMcSjvShMoMCA=
|
||||
dappco.re/go/core/io v0.2.0 h1:zuudgIiTsQQ5ipVt97saWdGLROovbEB/zdVyy9/l+I4=
|
||||
dappco.re/go/core/io v0.2.0/go.mod h1:1QnQV6X9LNgFKfm8SkOtR9LLaj3bDcsOIeJOOyjbL5E=
|
||||
dappco.re/go/core/log v0.1.0 h1:pa71Vq2TD2aoEUQWFKwNcaJ3GBY8HbaNGqtE688Unyc=
|
||||
dappco.re/go/core/log v0.1.0/go.mod h1:Nkqb8gsXhZAO8VLpx7B8i1iAmohhzqA20b9Zr8VUcJs=
|
||||
forge.lthn.ai/Snider/Borg v0.3.1 h1:gfC1ZTpLoZai07oOWJiVeQ8+qJYK8A795tgVGJHbVL8=
|
||||
forge.lthn.ai/Snider/Borg v0.3.1/go.mod h1:Z7DJD0yHXsxSyM7Mjl6/g4gH1NBsIz44Bf5AFlV76Wg=
|
||||
forge.lthn.ai/Snider/Enchantrix v0.0.4 h1:biwpix/bdedfyc0iVeK15awhhJKH6TEMYOTXzHXx5TI=
|
||||
forge.lthn.ai/Snider/Enchantrix v0.0.4/go.mod h1:OGCwuVeZPq3OPe2h6TX/ZbgEjHU6B7owpIBeXQGbSe0=
|
||||
forge.lthn.ai/Snider/Poindexter v0.0.2 h1:XXzSKFjO6MeftQAnB9qR+IkOTp9f57Tg4sIx8Qzi/II=
|
||||
forge.lthn.ai/Snider/Poindexter v0.0.2/go.mod h1:ddzGia98k3HKkR0gl58IDzqz+MmgW2cQJOCNLfuWPpo=
|
||||
github.com/ProtonMail/go-crypto v1.3.0 h1:ILq8+Sf5If5DCpHQp4PbZdS1J7HDFRXz/+xKBiRGFrw=
|
||||
github.com/ProtonMail/go-crypto v1.3.0/go.mod h1:9whxjD8Rbs29b4XWbB8irEcE8KHMqaR2e7GWU1R+/PE=
|
||||
forge.lthn.ai/Snider/Poindexter v0.0.3 h1:cx5wRhuLRKBM8riIZyNVAT2a8rwRhn1dodFBktocsVE=
|
||||
forge.lthn.ai/Snider/Poindexter v0.0.3/go.mod h1:ddzGia98k3HKkR0gl58IDzqz+MmgW2cQJOCNLfuWPpo=
|
||||
forge.lthn.ai/core/go-log v0.0.4 h1:KTuCEPgFmuM8KJfnyQ8vPOU1Jg654W74h8IJvfQMfv0=
|
||||
forge.lthn.ai/core/go-log v0.0.4/go.mod h1:r14MXKOD3LF/sI8XUJQhRk/SZHBE7jAFVuCfgkXoZPw=
|
||||
github.com/ProtonMail/go-crypto v1.4.0 h1:Zq/pbM3F5DFgJiMouxEdSVY44MVoQNEKp5d5QxIQceQ=
|
||||
github.com/ProtonMail/go-crypto v1.4.0/go.mod h1:e1OaTyu5SYVrO9gKOEhTc+5UcXtTUa+P3uLudwcgPqo=
|
||||
github.com/adrg/xdg v0.5.3 h1:xRnxJXne7+oWDatRhR1JLnvuccuIeCoBu2rtuLqQB78=
|
||||
github.com/adrg/xdg v0.5.3/go.mod h1:nlTsY+NNiCBGCK2tpm09vRqfVzrc2fLmXGpBLF0zlTQ=
|
||||
github.com/cloudflare/circl v1.6.3 h1:9GPOhQGF9MCYUeXyMYlqTR6a5gTrgR/fBLXvUgtVcg8=
|
||||
github.com/cloudflare/circl v1.6.3/go.mod h1:2eXP6Qfat4O/Yhh8BznvKnJ+uzEoTQ6jVKJRn81BiS4=
|
||||
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
|
||||
|
|
@ -19,25 +24,20 @@ github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aN
|
|||
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
|
||||
github.com/klauspost/compress v1.18.4 h1:RPhnKRAQ4Fh8zU2FY/6ZFDwTVTxgJ/EMydqSTzE9a2c=
|
||||
github.com/klauspost/compress v1.18.4/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
|
||||
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
|
||||
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
|
||||
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
|
||||
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
|
||||
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
|
||||
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
|
||||
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
|
||||
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
|
||||
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
|
||||
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
|
||||
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
|
||||
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
|
||||
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
|
||||
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
|
||||
golang.org/x/crypto v0.48.0 h1:/VRzVqiRSggnhY7gNRxPauEQ5Drw9haKdM0jqfcCFts=
|
||||
golang.org/x/crypto v0.48.0/go.mod h1:r0kV5h3qnFPlQnBSrULhlsRfryS2pmewsg+XfMgkVos=
|
||||
golang.org/x/sys v0.41.0 h1:Ivj+2Cp/ylzLiEU89QhWblYnOE9zerudt9Ftecq2C6k=
|
||||
golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
|
||||
golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4=
|
||||
golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA=
|
||||
golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo=
|
||||
golang.org/x/sys v0.42.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
||||
|
|
|
|||
|
|
@ -9,6 +9,8 @@ import (
|
|||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
coreerr "dappco.re/go/core/log"
|
||||
)
|
||||
|
||||
// Level represents the severity of a log message.
|
||||
|
|
@ -278,6 +280,6 @@ func ParseLevel(s string) (Level, error) {
|
|||
case "ERROR":
|
||||
return LevelError, nil
|
||||
default:
|
||||
return LevelInfo, fmt.Errorf("unknown log level: %s", s)
|
||||
return LevelInfo, coreerr.E("logging.ParseLevel", "unknown log level: "+s, nil)
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@ -6,13 +6,14 @@ import (
|
|||
"crypto/sha256"
|
||||
"encoding/hex"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
coreio "dappco.re/go/core/io"
|
||||
coreerr "dappco.re/go/core/log"
|
||||
|
||||
"forge.lthn.ai/Snider/Borg/pkg/datanode"
|
||||
"forge.lthn.ai/Snider/Borg/pkg/tim"
|
||||
)
|
||||
|
|
@ -49,14 +50,14 @@ func CreateProfileBundle(profileJSON []byte, name string, password string) (*Bun
|
|||
// Create a TIM with just the profile config
|
||||
t, err := tim.New()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create TIM: %w", err)
|
||||
return nil, coreerr.E("CreateProfileBundle", "failed to create TIM", err)
|
||||
}
|
||||
t.Config = profileJSON
|
||||
|
||||
// Encrypt to STIM format
|
||||
stimData, err := t.ToSigil(password)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to encrypt bundle: %w", err)
|
||||
return nil, coreerr.E("CreateProfileBundle", "failed to encrypt bundle", err)
|
||||
}
|
||||
|
||||
// Calculate checksum
|
||||
|
|
@ -85,29 +86,30 @@ func CreateProfileBundleUnencrypted(profileJSON []byte, name string) (*Bundle, e
|
|||
// CreateMinerBundle creates an encrypted bundle containing a miner binary and optional profile.
|
||||
```diff
 func CreateMinerBundle(minerPath string, profileJSON []byte, name string, password string) (*Bundle, error) {
 	// Read miner binary
-	minerData, err := os.ReadFile(minerPath)
+	minerContent, err := coreio.Local.Read(minerPath)
 	if err != nil {
-		return nil, fmt.Errorf("failed to read miner binary: %w", err)
+		return nil, coreerr.E("CreateMinerBundle", "failed to read miner binary", err)
 	}
+	minerData := []byte(minerContent)

 	// Create a tarball with the miner binary
 	tarData, err := createTarball(map[string][]byte{
 		filepath.Base(minerPath): minerData,
 	})
 	if err != nil {
-		return nil, fmt.Errorf("failed to create tarball: %w", err)
+		return nil, coreerr.E("CreateMinerBundle", "failed to create tarball", err)
 	}

 	// Create DataNode from tarball
 	dn, err := datanode.FromTar(tarData)
 	if err != nil {
-		return nil, fmt.Errorf("failed to create datanode: %w", err)
+		return nil, coreerr.E("CreateMinerBundle", "failed to create datanode", err)
 	}

 	// Create TIM from DataNode
 	t, err := tim.FromDataNode(dn)
 	if err != nil {
-		return nil, fmt.Errorf("failed to create TIM: %w", err)
+		return nil, coreerr.E("CreateMinerBundle", "failed to create TIM", err)
 	}

 	// Set profile as config if provided
@@ -118,7 +120,7 @@ func CreateMinerBundle(minerPath string, profileJSON []byte, name string, passwo
 	// Encrypt to STIM format
 	stimData, err := t.ToSigil(password)
 	if err != nil {
-		return nil, fmt.Errorf("failed to encrypt bundle: %w", err)
+		return nil, coreerr.E("CreateMinerBundle", "failed to encrypt bundle", err)
 	}

 	checksum := calculateChecksum(stimData)
@@ -135,7 +137,7 @@ func CreateMinerBundle(minerPath string, profileJSON []byte, name string, passwo
 func ExtractProfileBundle(bundle *Bundle, password string) ([]byte, error) {
 	// Verify checksum first
 	if calculateChecksum(bundle.Data) != bundle.Checksum {
-		return nil, errors.New("checksum mismatch - bundle may be corrupted")
+		return nil, coreerr.E("ExtractProfileBundle", "checksum mismatch - bundle may be corrupted", nil)
 	}

 	// If it's unencrypted JSON, just return it
@@ -146,7 +148,7 @@ func ExtractProfileBundle(bundle *Bundle, password string) ([]byte, error) {
 	// Decrypt STIM format
 	t, err := tim.FromSigil(bundle.Data, password)
 	if err != nil {
-		return nil, fmt.Errorf("failed to decrypt bundle: %w", err)
+		return nil, coreerr.E("ExtractProfileBundle", "failed to decrypt bundle", err)
 	}

 	return t.Config, nil
@@ -156,25 +158,25 @@ func ExtractProfileBundle(bundle *Bundle, password string) ([]byte, error) {
 func ExtractMinerBundle(bundle *Bundle, password string, destDir string) (string, []byte, error) {
 	// Verify checksum
 	if calculateChecksum(bundle.Data) != bundle.Checksum {
-		return "", nil, errors.New("checksum mismatch - bundle may be corrupted")
+		return "", nil, coreerr.E("ExtractMinerBundle", "checksum mismatch - bundle may be corrupted", nil)
 	}

 	// Decrypt STIM format
 	t, err := tim.FromSigil(bundle.Data, password)
 	if err != nil {
-		return "", nil, fmt.Errorf("failed to decrypt bundle: %w", err)
+		return "", nil, coreerr.E("ExtractMinerBundle", "failed to decrypt bundle", err)
 	}

 	// Convert rootfs to tarball and extract
 	tarData, err := t.RootFS.ToTar()
 	if err != nil {
-		return "", nil, fmt.Errorf("failed to convert rootfs to tar: %w", err)
+		return "", nil, coreerr.E("ExtractMinerBundle", "failed to convert rootfs to tar", err)
 	}

 	// Extract tarball to destination
 	minerPath, err := extractTarball(tarData, destDir)
 	if err != nil {
-		return "", nil, fmt.Errorf("failed to extract tarball: %w", err)
+		return "", nil, coreerr.E("ExtractMinerBundle", "failed to extract tarball", err)
 	}

 	return minerPath, t.Config, nil
@@ -254,11 +256,11 @@ func extractTarball(tarData []byte, destDir string) (string, error) {
 	// Ensure destDir is an absolute, clean path for security checks
 	absDestDir, err := filepath.Abs(destDir)
 	if err != nil {
-		return "", fmt.Errorf("failed to resolve destination directory: %w", err)
+		return "", coreerr.E("extractTarball", "failed to resolve destination directory", err)
 	}
 	absDestDir = filepath.Clean(absDestDir)

-	if err := os.MkdirAll(absDestDir, 0755); err != nil {
+	if err := coreio.Local.EnsureDir(absDestDir); err != nil {
 		return "", err
 	}

@@ -279,12 +281,12 @@ func extractTarball(tarData []byte, destDir string) (string, error) {

 	// Reject absolute paths
 	if filepath.IsAbs(cleanName) {
-		return "", fmt.Errorf("invalid tar entry: absolute path not allowed: %s", hdr.Name)
+		return "", coreerr.E("extractTarball", "invalid tar entry: absolute path not allowed: "+hdr.Name, nil)
 	}

 	// Reject paths that escape the destination directory
 	if strings.HasPrefix(cleanName, ".."+string(os.PathSeparator)) || cleanName == ".." {
-		return "", fmt.Errorf("invalid tar entry: path traversal attempt: %s", hdr.Name)
+		return "", coreerr.E("extractTarball", "invalid tar entry: path traversal attempt: "+hdr.Name, nil)
 	}

 	// Build the full path and verify it's within destDir
@@ -293,23 +295,26 @@ func extractTarball(tarData []byte, destDir string) (string, error) {

 	// Final security check: ensure the path is still within destDir
 	if !strings.HasPrefix(fullPath, absDestDir+string(os.PathSeparator)) && fullPath != absDestDir {
-		return "", fmt.Errorf("invalid tar entry: path escape attempt: %s", hdr.Name)
+		return "", coreerr.E("extractTarball", "invalid tar entry: path escape attempt: "+hdr.Name, nil)
 	}

 	switch hdr.Typeflag {
 	case tar.TypeDir:
-		if err := os.MkdirAll(fullPath, os.FileMode(hdr.Mode)); err != nil {
+		if err := coreio.Local.EnsureDir(fullPath); err != nil {
 			return "", err
 		}
 	case tar.TypeReg:
 		// Ensure parent directory exists
-		if err := os.MkdirAll(filepath.Dir(fullPath), 0755); err != nil {
+		if err := coreio.Local.EnsureDir(filepath.Dir(fullPath)); err != nil {
 			return "", err
 		}

+		// os.OpenFile is used deliberately here instead of coreio.Local.Create/Write
+		// because coreio hardcodes file permissions (0644) and we need to preserve
+		// the tar header's mode bits — executable binaries require 0755.
 		f, err := os.OpenFile(fullPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
 		if err != nil {
-			return "", err
+			return "", coreerr.E("extractTarball", "failed to create file "+hdr.Name, err)
 		}

 		// Limit file size to prevent decompression bombs (100MB max per file)
@@ -318,11 +323,11 @@ func extractTarball(tarData []byte, destDir string) (string, error) {
 		written, err := io.Copy(f, limitedReader)
 		f.Close()
 		if err != nil {
-			return "", err
+			return "", coreerr.E("extractTarball", "failed to write file "+hdr.Name, err)
 		}
 		if written > maxFileSize {
-			os.Remove(fullPath)
-			return "", fmt.Errorf("file %s exceeds maximum size of %d bytes", hdr.Name, maxFileSize)
+			coreio.Local.Delete(fullPath)
+			return "", coreerr.E("extractTarball", "file "+hdr.Name+" exceeds maximum size", nil)
 		}

 		// Track first executable
```
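The three tar-entry guards in extractTarball (no absolute paths, no `..` traversal, no escape past the destination prefix) can be collapsed into one standalone predicate. A minimal sketch, with illustrative names that are not part of the package:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// safeJoin resolves a tar entry name against destDir and rejects
// absolute paths, ".." traversal, and anything escaping destDir.
func safeJoin(destDir, name string) (string, error) {
	clean := filepath.Clean(name)
	if filepath.IsAbs(clean) {
		return "", fmt.Errorf("absolute path not allowed: %s", name)
	}
	if clean == ".." || strings.HasPrefix(clean, ".."+string(filepath.Separator)) {
		return "", fmt.Errorf("path traversal attempt: %s", name)
	}
	full := filepath.Join(destDir, clean)
	if full != destDir && !strings.HasPrefix(full, destDir+string(filepath.Separator)) {
		return "", fmt.Errorf("path escape attempt: %s", name)
	}
	return full, nil
}

func main() {
	for _, name := range []string{"bin/miner", "../etc/passwd", "/etc/passwd"} {
		p, err := safeJoin("/tmp/dest", name)
		fmt.Println(name, "->", p, err)
	}
}
```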
```diff
@@ -3,12 +3,12 @@ package node
 import (
 	"context"
 	"encoding/json"
-	"errors"
-	"fmt"
 	"sync"
 	"time"

-	"forge.lthn.ai/core/go-p2p/logging"
+	coreerr "dappco.re/go/core/log"
+
+	"dappco.re/go/core/p2p/logging"
 )

 // Controller manages remote peer operations from a controller node.
@@ -67,11 +67,11 @@ func (c *Controller) sendRequest(peerID string, msg *Message, timeout time.Durat
 	if c.transport.GetConnection(peerID) == nil {
 		peer := c.peers.GetPeer(peerID)
 		if peer == nil {
-			return nil, fmt.Errorf("peer not found: %s", peerID)
+			return nil, coreerr.E("Controller.sendRequest", "peer not found: "+peerID, nil)
 		}
 		conn, err := c.transport.Connect(peer)
 		if err != nil {
-			return nil, fmt.Errorf("failed to connect to peer: %w", err)
+			return nil, coreerr.E("Controller.sendRequest", "failed to connect to peer", err)
 		}
 		// Use the real peer ID after handshake (it may have changed)
 		actualPeerID = conn.Peer.ID
@@ -96,7 +96,7 @@ func (c *Controller) sendRequest(peerID string, msg *Message, timeout time.Durat
 	// Send the message
 	if err := c.transport.Send(actualPeerID, msg); err != nil {
-		return nil, fmt.Errorf("failed to send message: %w", err)
+		return nil, coreerr.E("Controller.sendRequest", "failed to send message", err)
 	}

 	// Wait for response
@@ -107,7 +107,7 @@ func (c *Controller) sendRequest(peerID string, msg *Message, timeout time.Durat
 	case resp := <-respCh:
 		return resp, nil
 	case <-ctx.Done():
-		return nil, errors.New("request timeout")
+		return nil, coreerr.E("Controller.sendRequest", "request timeout", nil)
 	}
 }
@@ -120,7 +120,7 @@ func (c *Controller) GetRemoteStats(peerID string) (*StatsPayload, error) {
 	msg, err := NewMessage(MsgGetStats, identity.ID, peerID, nil)
 	if err != nil {
-		return nil, fmt.Errorf("failed to create message: %w", err)
+		return nil, coreerr.E("Controller.GetRemoteStats", "failed to create message", err)
 	}

 	resp, err := c.sendRequest(peerID, msg, 10*time.Second)
@@ -144,7 +144,7 @@ func (c *Controller) StartRemoteMiner(peerID, minerType, profileID string, confi
 	}

 	if minerType == "" {
-		return errors.New("miner type is required")
+		return coreerr.E("Controller.StartRemoteMiner", "miner type is required", nil)
 	}

 	payload := StartMinerPayload{
@@ -155,7 +155,7 @@ func (c *Controller) StartRemoteMiner(peerID, minerType, profileID string, confi
 	msg, err := NewMessage(MsgStartMiner, identity.ID, peerID, payload)
 	if err != nil {
-		return fmt.Errorf("failed to create message: %w", err)
+		return coreerr.E("Controller.StartRemoteMiner", "failed to create message", err)
 	}

 	resp, err := c.sendRequest(peerID, msg, 30*time.Second)
@@ -169,7 +169,7 @@ func (c *Controller) StartRemoteMiner(peerID, minerType, profileID string, confi
 	}

 	if !ack.Success {
-		return fmt.Errorf("miner start failed: %s", ack.Error)
+		return coreerr.E("Controller.StartRemoteMiner", "miner start failed: "+ack.Error, nil)
 	}

 	return nil
@@ -188,7 +188,7 @@ func (c *Controller) StopRemoteMiner(peerID, minerName string) error {
 	msg, err := NewMessage(MsgStopMiner, identity.ID, peerID, payload)
 	if err != nil {
-		return fmt.Errorf("failed to create message: %w", err)
+		return coreerr.E("Controller.StopRemoteMiner", "failed to create message", err)
 	}

 	resp, err := c.sendRequest(peerID, msg, 30*time.Second)
@@ -202,7 +202,7 @@ func (c *Controller) StopRemoteMiner(peerID, minerName string) error {
 	}

 	if !ack.Success {
-		return fmt.Errorf("miner stop failed: %s", ack.Error)
+		return coreerr.E("Controller.StopRemoteMiner", "miner stop failed: "+ack.Error, nil)
 	}

 	return nil
@@ -210,6 +210,11 @@ func (c *Controller) StopRemoteMiner(peerID, minerName string) error {

 // GetRemoteLogs requests console logs from a remote miner.
 func (c *Controller) GetRemoteLogs(peerID, minerName string, lines int) ([]string, error) {
+	return c.GetRemoteLogsSince(peerID, minerName, lines, time.Time{})
+}
+
+// GetRemoteLogsSince requests console logs from a remote miner after a point in time.
+func (c *Controller) GetRemoteLogsSince(peerID, minerName string, lines int, since time.Time) ([]string, error) {
 	identity := c.node.GetIdentity()
 	if identity == nil {
 		return nil, ErrIdentityNotInitialized
@@ -219,10 +224,13 @@ func (c *Controller) GetRemoteLogs(peerID, minerName string, lines int) ([]strin
 		MinerName: minerName,
 		Lines:     lines,
 	}
+	if !since.IsZero() {
+		payload.Since = since.UnixMilli()
+	}

 	msg, err := NewMessage(MsgGetLogs, identity.ID, peerID, payload)
 	if err != nil {
-		return nil, fmt.Errorf("failed to create message: %w", err)
+		return nil, coreerr.E("Controller.GetRemoteLogsSince", "failed to create message", err)
 	}

 	resp, err := c.sendRequest(peerID, msg, 10*time.Second)
@@ -281,7 +289,7 @@ func (c *Controller) PingPeer(peerID string) (float64, error) {
 	msg, err := NewMessage(MsgPing, identity.ID, peerID, payload)
 	if err != nil {
-		return 0, fmt.Errorf("failed to create message: %w", err)
+		return 0, coreerr.E("Controller.PingPeer", "failed to create message", err)
 	}

 	resp, err := c.sendRequest(peerID, msg, 5*time.Second)
@@ -309,7 +317,7 @@ func (c *Controller) PingPeer(peerID string) (float64, error) {
 func (c *Controller) ConnectToPeer(peerID string) error {
 	peer := c.peers.GetPeer(peerID)
 	if peer == nil {
-		return fmt.Errorf("peer not found: %s", peerID)
+		return coreerr.E("Controller.ConnectToPeer", "peer not found: "+peerID, nil)
 	}

 	_, err := c.transport.Connect(peer)
@@ -320,7 +328,7 @@ func (c *Controller) ConnectToPeer(peerID string) error {
 func (c *Controller) DisconnectFromPeer(peerID string) error {
 	conn := c.transport.GetConnection(peerID)
 	if conn == nil {
-		return fmt.Errorf("peer not connected: %s", peerID)
+		return coreerr.E("Controller.DisconnectFromPeer", "peer not connected: "+peerID, nil)
 	}

 	return conn.Close()
```
```diff
@@ -7,6 +7,7 @@ import (
 	"net/http/httptest"
 	"net/url"
 	"path/filepath"
+	"strings"
 	"sync"
 	"sync/atomic"
 	"testing"
@@ -514,6 +515,40 @@ type mockMinerFull struct {
 func (m *mockMinerFull) GetName() string        { return m.name }
 func (m *mockMinerFull) GetType() string        { return m.minerType }
 func (m *mockMinerFull) GetStats() (any, error) { return m.stats, nil }
+func (m *mockMinerFull) GetConsoleHistorySince(lines int, since time.Time) []string {
+	if since.IsZero() {
+		if lines >= len(m.consoleHistory) {
+			return m.consoleHistory
+		}
+		return m.consoleHistory[:lines]
+	}
+
+	filtered := make([]string, 0, len(m.consoleHistory))
+	for _, line := range m.consoleHistory {
+		if lineAfter(line, since) {
+			filtered = append(filtered, line)
+		}
+	}
+	if lines >= len(filtered) {
+		return filtered
+	}
+	return filtered[:lines]
+}
+
+func lineAfter(line string, since time.Time) bool {
+	start := strings.IndexByte(line, '[')
+	end := strings.IndexByte(line, ']')
+	if start != 0 || end <= start+1 {
+		return true
+	}
+
+	ts, err := time.Parse("2006-01-02 15:04:05", line[start+1:end])
+	if err != nil {
+		return true
+	}
+	return ts.After(since) || ts.Equal(since)
+}
+
 func (m *mockMinerFull) GetConsoleHistory(lines int) []string {
 	if lines >= len(m.consoleHistory) {
 		return m.consoleHistory
@@ -616,6 +651,20 @@ func TestController_GetRemoteLogs_LimitedLines(t *testing.T) {
 	assert.Len(t, lines, 1, "should return only 1 line")
 }

+func TestController_GetRemoteLogsSince(t *testing.T) {
+	controller, _, tp := setupControllerPairWithMiner(t)
+	serverID := tp.ServerNode.GetIdentity().ID
+
+	since, err := time.Parse("2006-01-02 15:04:05", "2026-02-20 10:00:01")
+	require.NoError(t, err)
+
+	lines, err := controller.GetRemoteLogsSince(serverID, "running-miner", 10, since)
+	require.NoError(t, err, "GetRemoteLogsSince should succeed")
+	require.Len(t, lines, 2, "should return only log lines on or after the requested timestamp")
+	assert.Contains(t, lines[0], "connected to pool")
+	assert.Contains(t, lines[1], "new job received")
+}
+
 func TestController_GetRemoteLogs_NoIdentity(t *testing.T) {
 	tp := setupTestTransportPair(t)
 	nmNoID, err := NewNodeManagerWithPaths(
```
```diff
@@ -1,13 +1,14 @@
 package node

 import (
-	"errors"
 	"fmt"
 	"iter"
 	"sync"

-	"forge.lthn.ai/core/go-p2p/logging"
-	"forge.lthn.ai/core/go-p2p/ueps"
+	coreerr "dappco.re/go/core/log"
+
+	"dappco.re/go/core/p2p/logging"
+	"dappco.re/go/core/p2p/ueps"
 )

 // ThreatScoreThreshold is the maximum allowable threat score. Packets exceeding
@@ -133,12 +134,12 @@ func (d *Dispatcher) Dispatch(pkt *ueps.ParsedPacket) error {
 var (
 	// ErrThreatScoreExceeded is returned when a packet's ThreatScore exceeds
 	// the safety threshold.
-	ErrThreatScoreExceeded = fmt.Errorf("packet rejected: threat score exceeds safety threshold (%d)", ThreatScoreThreshold)
+	ErrThreatScoreExceeded = coreerr.E("Dispatcher.Dispatch", fmt.Sprintf("packet rejected: threat score exceeds safety threshold (%d)", ThreatScoreThreshold), nil)

 	// ErrUnknownIntent is returned when no handler is registered for the
 	// packet's IntentID.
-	ErrUnknownIntent = errors.New("packet dropped: unknown intent")
+	ErrUnknownIntent = coreerr.E("Dispatcher.Dispatch", "packet dropped: unknown intent", nil)

 	// ErrNilPacket is returned when a nil packet is passed to Dispatch.
-	ErrNilPacket = errors.New("dispatch: nil packet")
+	ErrNilPacket = coreerr.E("Dispatcher.Dispatch", "nil packet", nil)
 )
```
```diff
@@ -6,7 +6,7 @@ import (
 	"sync/atomic"
 	"testing"

-	"forge.lthn.ai/core/go-p2p/ueps"
+	"dappco.re/go/core/p2p/ueps"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
```
```diff
@@ -1,14 +1,14 @@
 package node

-import "errors"
+import coreerr "dappco.re/go/core/log"

 // Sentinel errors shared across the node package.
 var (
 	// ErrIdentityNotInitialized is returned when a node operation requires
 	// a node identity but none has been generated or loaded.
-	ErrIdentityNotInitialized = errors.New("node identity not initialized")
+	ErrIdentityNotInitialized = coreerr.E("node", "node identity not initialized", nil)

 	// ErrMinerManagerNotConfigured is returned when a miner operation is
 	// attempted but no MinerManager has been set on the Worker.
-	ErrMinerManagerNotConfigured = errors.New("miner manager not configured")
+	ErrMinerManagerNotConfigured = coreerr.E("node", "miner manager not configured", nil)
 )
```
118 node/identity.go

```diff
@@ -8,12 +8,14 @@ import (
 	"crypto/sha256"
 	"encoding/hex"
 	"encoding/json"
-	"fmt"
 	"os"
 	"path/filepath"
 	"sync"
 	"time"

+	coreio "dappco.re/go/core/io"
+	coreerr "dappco.re/go/core/log"
+
 	"forge.lthn.ai/Snider/Borg/pkg/stmf"
 	"github.com/adrg/xdg"
 )
@@ -25,7 +27,7 @@ const ChallengeSize = 32
 func GenerateChallenge() ([]byte, error) {
 	challenge := make([]byte, ChallengeSize)
 	if _, err := rand.Read(challenge); err != nil {
-		return nil, fmt.Errorf("failed to generate challenge: %w", err)
+		return nil, coreerr.E("GenerateChallenge", "failed to generate challenge", err)
 	}
 	return challenge, nil
 }
@@ -79,12 +81,12 @@ type NodeManager struct {
 func NewNodeManager() (*NodeManager, error) {
 	keyPath, err := xdg.DataFile("lethean-desktop/node/private.key")
 	if err != nil {
-		return nil, fmt.Errorf("failed to get key path: %w", err)
+		return nil, coreerr.E("NodeManager.New", "failed to get key path", err)
 	}

 	configPath, err := xdg.ConfigFile("lethean-desktop/node.json")
 	if err != nil {
-		return nil, fmt.Errorf("failed to get config path: %w", err)
+		return nil, coreerr.E("NodeManager.New", "failed to get config path", err)
 	}

 	return NewNodeManagerWithPaths(keyPath, configPath)
@@ -107,6 +109,48 @@ func NewNodeManagerWithPaths(keyPath, configPath string) (*NodeManager, error) {
 	return nm, nil
 }

+// LoadOrCreateIdentity loads the node identity from the default XDG paths or
+// generates a new dual-role identity when none exists yet.
+func LoadOrCreateIdentity() (*NodeManager, error) {
+	keyPath, err := xdg.DataFile("lethean-desktop/node/private.key")
+	if err != nil {
+		return nil, coreerr.E("LoadOrCreateIdentity", "failed to get key path", err)
+	}
+
+	configPath, err := xdg.ConfigFile("lethean-desktop/node.json")
+	if err != nil {
+		return nil, coreerr.E("LoadOrCreateIdentity", "failed to get config path", err)
+	}
+
+	return LoadOrCreateIdentityWithPaths(keyPath, configPath)
+}
+
+// LoadOrCreateIdentityWithPaths loads an existing identity from the supplied
+// paths or creates a new dual-role identity if no persisted identity exists.
+// The generated identity name falls back to the host name, then a stable
+// project-specific default if the host name cannot be determined.
+func LoadOrCreateIdentityWithPaths(keyPath, configPath string) (*NodeManager, error) {
+	nm, err := NewNodeManagerWithPaths(keyPath, configPath)
+	if err != nil {
+		return nil, err
+	}
+
+	if nm.HasIdentity() {
+		return nm, nil
+	}
+
+	name, err := os.Hostname()
+	if err != nil || name == "" {
+		name = "lethean-node"
+	}
+
+	if err := nm.GenerateIdentity(name, RoleDual); err != nil {
+		return nil, coreerr.E("LoadOrCreateIdentityWithPaths", "failed to generate identity", err)
+	}
+
+	return nm, nil
+}
+
 // HasIdentity returns true if a node identity has been initialized.
 func (n *NodeManager) HasIdentity() bool {
 	n.mu.RLock()
@@ -134,7 +178,7 @@ func (n *NodeManager) GenerateIdentity(name string, role NodeRole) error {
 	// Generate X25519 keypair using STMF
 	keyPair, err := stmf.GenerateKeyPair()
 	if err != nil {
-		return fmt.Errorf("failed to generate keypair: %w", err)
+		return coreerr.E("NodeManager.GenerateIdentity", "failed to generate keypair", err)
 	}

 	// Derive node ID from public key (first 16 bytes as hex = 32 char ID)
@@ -155,12 +199,12 @@ func (n *NodeManager) GenerateIdentity(name string, role NodeRole) error {
 	// Save private key
 	if err := n.savePrivateKey(); err != nil {
-		return fmt.Errorf("failed to save private key: %w", err)
+		return coreerr.E("NodeManager.GenerateIdentity", "failed to save private key", err)
 	}

 	// Save identity config
 	if err := n.saveIdentity(); err != nil {
-		return fmt.Errorf("failed to save identity: %w", err)
+		return coreerr.E("NodeManager.GenerateIdentity", "failed to save identity", err)
 	}

 	return nil
@@ -179,19 +223,19 @@ func (n *NodeManager) DeriveSharedSecret(peerPubKeyBase64 string) ([]byte, error
 	// Load peer's public key
 	peerPubKey, err := stmf.LoadPublicKeyBase64(peerPubKeyBase64)
 	if err != nil {
-		return nil, fmt.Errorf("failed to load peer public key: %w", err)
+		return nil, coreerr.E("NodeManager.DeriveSharedSecret", "failed to load peer public key", err)
 	}

 	// Load our private key
 	privateKey, err := ecdh.X25519().NewPrivateKey(n.privateKey)
 	if err != nil {
-		return nil, fmt.Errorf("failed to load private key: %w", err)
+		return nil, coreerr.E("NodeManager.DeriveSharedSecret", "failed to load private key", err)
 	}

 	// Derive shared secret using ECDH
 	sharedSecret, err := privateKey.ECDH(peerPubKey)
 	if err != nil {
-		return nil, fmt.Errorf("failed to derive shared secret: %w", err)
+		return nil, coreerr.E("NodeManager.DeriveSharedSecret", "failed to derive shared secret", err)
 	}

 	// Hash the shared secret using SHA-256 (same pattern as Borg/trix)
@@ -203,13 +247,16 @@ func (n *NodeManager) savePrivateKey() error {
 	// Ensure directory exists
 	dir := filepath.Dir(n.keyPath)
-	if err := os.MkdirAll(dir, 0700); err != nil {
-		return fmt.Errorf("failed to create key directory: %w", err)
+	if err := coreio.Local.EnsureDir(dir); err != nil {
+		return coreerr.E("NodeManager.savePrivateKey", "failed to create key directory", err)
 	}

-	// Write private key with restricted permissions (0600)
-	if err := os.WriteFile(n.keyPath, n.privateKey, 0600); err != nil {
-		return fmt.Errorf("failed to write private key: %w", err)
+	// Write private key and then tighten permissions explicitly.
+	if err := coreio.Local.Write(n.keyPath, string(n.privateKey)); err != nil {
+		return coreerr.E("NodeManager.savePrivateKey", "failed to write private key", err)
+	}
+	if err := os.Chmod(n.keyPath, 0600); err != nil {
+		return coreerr.E("NodeManager.savePrivateKey", "failed to set private key permissions", err)
 	}

 	return nil
@@ -219,17 +266,17 @@ func (n *NodeManager) saveIdentity() error {
 	// Ensure directory exists
 	dir := filepath.Dir(n.configPath)
-	if err := os.MkdirAll(dir, 0755); err != nil {
-		return fmt.Errorf("failed to create config directory: %w", err)
+	if err := coreio.Local.EnsureDir(dir); err != nil {
+		return coreerr.E("NodeManager.saveIdentity", "failed to create config directory", err)
 	}

 	data, err := json.MarshalIndent(n.identity, "", " ")
 	if err != nil {
-		return fmt.Errorf("failed to marshal identity: %w", err)
+		return coreerr.E("NodeManager.saveIdentity", "failed to marshal identity", err)
 	}

-	if err := os.WriteFile(n.configPath, data, 0644); err != nil {
-		return fmt.Errorf("failed to write identity: %w", err)
+	if err := coreio.Local.Write(n.configPath, string(data)); err != nil {
+		return coreerr.E("NodeManager.saveIdentity", "failed to write identity", err)
 	}

 	return nil
@@ -238,26 +285,27 @@ func (n *NodeManager) saveIdentity() error {
 // loadIdentity loads the node identity from disk.
 func (n *NodeManager) loadIdentity() error {
 	// Load identity config
-	data, err := os.ReadFile(n.configPath)
+	content, err := coreio.Local.Read(n.configPath)
 	if err != nil {
-		return fmt.Errorf("failed to read identity: %w", err)
+		return coreerr.E("NodeManager.loadIdentity", "failed to read identity", err)
 	}

 	var identity NodeIdentity
-	if err := json.Unmarshal(data, &identity); err != nil {
-		return fmt.Errorf("failed to unmarshal identity: %w", err)
+	if err := json.Unmarshal([]byte(content), &identity); err != nil {
+		return coreerr.E("NodeManager.loadIdentity", "failed to unmarshal identity", err)
 	}

 	// Load private key
-	privateKey, err := os.ReadFile(n.keyPath)
+	keyContent, err := coreio.Local.Read(n.keyPath)
 	if err != nil {
-		return fmt.Errorf("failed to read private key: %w", err)
+		return coreerr.E("NodeManager.loadIdentity", "failed to read private key", err)
 	}
+	privateKey := []byte(keyContent)

 	// Reconstruct keypair from private key
 	keyPair, err := stmf.LoadKeyPair(privateKey)
 	if err != nil {
-		return fmt.Errorf("failed to load keypair: %w", err)
+		return coreerr.E("NodeManager.loadIdentity", "failed to load keypair", err)
 	}

 	n.identity = &identity
@@ -272,14 +320,18 @@ func (n *NodeManager) Delete() error {
 	n.mu.Lock()
 	defer n.mu.Unlock()

-	// Remove private key
-	if err := os.Remove(n.keyPath); err != nil && !os.IsNotExist(err) {
-		return fmt.Errorf("failed to remove private key: %w", err)
+	// Remove private key (ignore if already absent)
+	if coreio.Local.Exists(n.keyPath) {
+		if err := coreio.Local.Delete(n.keyPath); err != nil {
+			return coreerr.E("NodeManager.Delete", "failed to remove private key", err)
+		}
 	}

-	// Remove identity config
-	if err := os.Remove(n.configPath); err != nil && !os.IsNotExist(err) {
-		return fmt.Errorf("failed to remove identity: %w", err)
+	// Remove identity config (ignore if already absent)
+	if coreio.Local.Exists(n.configPath) {
+		if err := coreio.Local.Delete(n.configPath); err != nil {
+			return coreerr.E("NodeManager.Delete", "failed to remove identity", err)
+		}
 	}

 	n.identity = nil
```
```diff
@@ -74,6 +74,25 @@ func TestNodeIdentity(t *testing.T) {
 		}
 	})

+	t.Run("PrivateKeyPermissions", func(t *testing.T) {
+		nm, cleanup := setupTestNodeManager(t)
+		defer cleanup()
+
+		err := nm.GenerateIdentity("permission-test", RoleDual)
+		if err != nil {
+			t.Fatalf("failed to generate identity: %v", err)
+		}
+
+		info, err := os.Stat(nm.keyPath)
+		if err != nil {
+			t.Fatalf("failed to stat private key: %v", err)
+		}
+
+		if got := info.Mode().Perm(); got != 0600 {
+			t.Fatalf("expected private key permissions 0600, got %04o", got)
+		}
+	})
+
 	t.Run("LoadExistingIdentity", func(t *testing.T) {
 		tmpDir, err := os.MkdirTemp("", "node-load-test")
 		if err != nil {
@@ -196,6 +215,47 @@ func TestNodeIdentity(t *testing.T) {
 			t.Error("should not have identity after delete")
 		}
 	})

+	t.Run("LoadOrCreateIdentityWithPaths", func(t *testing.T) {
+		tmpDir, err := os.MkdirTemp("", "node-load-or-create-test")
+		if err != nil {
+			t.Fatalf("failed to create temp dir: %v", err)
+		}
+		defer os.RemoveAll(tmpDir)
+
+		keyPath := filepath.Join(tmpDir, "private.key")
+		configPath := filepath.Join(tmpDir, "node.json")
+
+		nm, err := LoadOrCreateIdentityWithPaths(keyPath, configPath)
+		if err != nil {
+			t.Fatalf("failed to load or create identity: %v", err)
+		}
+
+		if !nm.HasIdentity() {
+			t.Fatal("expected identity to be initialised")
+		}
+
+		identity := nm.GetIdentity()
+		if identity == nil {
+			t.Fatal("identity should not be nil")
+		}
+
+		if identity.Name == "" {
+			t.Error("identity name should be populated")
+		}
+
+		if identity.Role != RoleDual {
+			t.Errorf("expected default role dual, got %s", identity.Role)
+		}
+
+		if _, err := os.Stat(keyPath); err != nil {
+			t.Fatalf("expected private key to be persisted: %v", err)
+		}
+
+		if _, err := os.Stat(configPath); err != nil {
+			t.Fatalf("expected identity config to be persisted: %v", err)
+		}
+	})
 }

 func TestNodeRoles(t *testing.T) {
```
```diff
@@ -13,7 +13,7 @@ import (
 	"testing"
 	"time"

-	"forge.lthn.ai/core/go-p2p/ueps"
+	"dappco.re/go/core/p2p/ueps"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
@@ -370,7 +370,7 @@ func TestIntegration_IdentityPersistenceAndReload(t *testing.T) {
 	assert.Equal(t, original.Role, reloaded.Role, "Role should persist")

 	// Verify the reloaded key can derive the same shared secret.
-	kp, err := stmfGenerateKeyPair()
+	kp, err := stmfGenerateKeyPair(t.TempDir())
 	require.NoError(t, err)

 	secret1, err := nm1.DeriveSharedSecret(kp)
@@ -385,8 +385,7 @@ func TestIntegration_IdentityPersistenceAndReload(t *testing.T) {
 // stmfGenerateKeyPair is a helper that generates a keypair and returns
 // the public key as base64 (for use in DeriveSharedSecret tests).
-func stmfGenerateKeyPair() (string, error) {
-	dir, _ := filepath.Abs("/tmp/stmf-test-" + time.Now().Format("20060102150405.000"))
+func stmfGenerateKeyPair(dir string) (string, error) {
 	nm, err := NewNodeManagerWithPaths(
 		filepath.Join(dir, "private.key"),
 		filepath.Join(dir, "node.json"),
@@ -400,6 +399,7 @@ func stmfGenerateKeyPair() (string, error) {
 	return nm.GetIdentity().PublicKey, nil
 }

+
 // TestIntegration_UEPSFullRoundTrip exercises a complete UEPS packet
 // lifecycle: build, sign, transmit (simulated), read, verify, dispatch.
 func TestIntegration_UEPSFullRoundTrip(t *testing.T) {
```
@@ -7,7 +7,8 @@ package levin

 import (
 	"encoding/binary"
-	"errors"
+
+	coreerr "dappco.re/go/core/log"
 )

 // HeaderSize is the exact byte length of a serialised Levin header.
@@ -42,8 +43,8 @@ const (

 // Sentinel errors returned by DecodeHeader.
 var (
-	ErrBadSignature  = errors.New("levin: bad signature")
-	ErrPayloadTooBig = errors.New("levin: payload exceeds maximum size")
+	ErrBadSignature  = coreerr.E("levin", "bad signature", nil)
+	ErrPayloadTooBig = coreerr.E("levin", "payload exceeds maximum size", nil)
 )

 // Header is the 33-byte packed header that prefixes every Levin message.
@@ -5,11 +5,12 @@ package levin

 import (
 	"encoding/binary"
-	"errors"
 	"fmt"
 	"maps"
 	"math"
 	"slices"
+
+	coreerr "dappco.re/go/core/log"
 )

 // Portable storage signatures and version (9-byte header).
@@ -40,12 +41,12 @@ const (

 // Sentinel errors for storage encoding and decoding.
 var (
-	ErrStorageBadSignature = errors.New("levin: bad storage signature")
-	ErrStorageTruncated    = errors.New("levin: truncated storage data")
-	ErrStorageBadVersion   = errors.New("levin: unsupported storage version")
-	ErrStorageNameTooLong  = errors.New("levin: entry name exceeds 255 bytes")
-	ErrStorageTypeMismatch = errors.New("levin: value type mismatch")
-	ErrStorageUnknownType  = errors.New("levin: unknown type tag")
+	ErrStorageBadSignature = coreerr.E("levin.storage", "bad storage signature", nil)
+	ErrStorageTruncated    = coreerr.E("levin.storage", "truncated storage data", nil)
+	ErrStorageBadVersion   = coreerr.E("levin.storage", "unsupported storage version", nil)
+	ErrStorageNameTooLong  = coreerr.E("levin.storage", "entry name exceeds 255 bytes", nil)
+	ErrStorageTypeMismatch = coreerr.E("levin.storage", "value type mismatch", nil)
+	ErrStorageUnknownType  = coreerr.E("levin.storage", "unknown type tag", nil)
 )

 // Section is an ordered map of named values forming a portable storage section.
@@ -393,7 +394,7 @@ func encodeValue(buf []byte, v Value) ([]byte, error) {
 		return encodeSection(buf, v.objectVal)

 	default:
-		return nil, fmt.Errorf("%w: 0x%02x", ErrStorageUnknownType, v.Type)
+		return nil, coreerr.E("levin.encodeValue", fmt.Sprintf("unknown type tag: 0x%02x", v.Type), ErrStorageUnknownType)
 	}
 }
@@ -440,7 +441,7 @@ func encodeArray(buf []byte, v Value) ([]byte, error) {
 		return buf, nil

 	default:
-		return nil, fmt.Errorf("%w: array of 0x%02x", ErrStorageUnknownType, elemType)
+		return nil, coreerr.E("levin.encodeArray", fmt.Sprintf("unknown type tag: array of 0x%02x", elemType), ErrStorageUnknownType)
 	}
 }
@@ -475,7 +476,7 @@ func DecodeStorage(data []byte) (Section, error) {
 func decodeSection(buf []byte) (Section, int, error) {
 	count, n, err := UnpackVarint(buf)
 	if err != nil {
-		return nil, 0, fmt.Errorf("section entry count: %w", err)
+		return nil, 0, coreerr.E("levin.decodeSection", "section entry count", err)
 	}
 	off := n
@@ -506,7 +507,7 @@ func decodeSection(buf []byte) (Section, int, error) {
 	// Value.
 	val, consumed, err := decodeValue(buf[off:], tag)
 	if err != nil {
-		return nil, 0, fmt.Errorf("field %q: %w", name, err)
+		return nil, 0, coreerr.E("levin.decodeSection", "field "+name, err)
 	}
 	off += consumed
@@ -612,7 +613,7 @@ func decodeValue(buf []byte, tag uint8) (Value, int, error) {
 		return Value{Type: TypeObject, objectVal: sec}, consumed, nil

 	default:
-		return Value{}, 0, fmt.Errorf("%w: 0x%02x", ErrStorageUnknownType, tag)
+		return Value{}, 0, coreerr.E("levin.decodeValue", fmt.Sprintf("unknown type tag: 0x%02x", tag), ErrStorageUnknownType)
 	}
 }
@@ -680,6 +681,6 @@ func decodeArray(buf []byte, tag uint8) (Value, int, error) {
 		return Value{Type: tag, objectArray: arr}, off, nil

 	default:
-		return Value{}, 0, fmt.Errorf("%w: array of 0x%02x", ErrStorageUnknownType, elemType)
+		return Value{}, 0, coreerr.E("levin.decodeArray", fmt.Sprintf("unknown type tag: array of 0x%02x", elemType), ErrStorageUnknownType)
 	}
 }
@@ -5,7 +5,8 @@ package levin

 import (
 	"encoding/binary"
-	"errors"
+
+	coreerr "dappco.re/go/core/log"
 )

 // Size-mark bits occupying the two lowest bits of the first byte.
@@ -22,10 +23,10 @@ const (
 )

 // ErrVarintTruncated is returned when the buffer is too short.
-var ErrVarintTruncated = errors.New("levin: truncated varint")
+var ErrVarintTruncated = coreerr.E("levin", "truncated varint", nil)

 // ErrVarintOverflow is returned when the value is too large to encode.
-var ErrVarintOverflow = errors.New("levin: varint overflow")
+var ErrVarintOverflow = coreerr.E("levin", "varint overflow", nil)

 // PackVarint encodes v using the epee portable-storage varint scheme.
 // The low two bits of the first byte indicate the total encoded width;
150 node/peer.go
@@ -2,19 +2,19 @@ package node

 import (
 	"encoding/json"
-	"errors"
-	"fmt"
 	"iter"
 	"maps"
-	"os"
 	"path/filepath"
 	"regexp"
 	"slices"
 	"sync"
 	"time"

-	poindexter "forge.lthn.ai/Snider/Poindexter"
-	"forge.lthn.ai/core/go-p2p/logging"
+	coreio "dappco.re/go/core/io"
+	coreerr "dappco.re/go/core/log"
+	"dappco.re/go/core/p2p/logging"
+
 	"github.com/adrg/xdg"
 )
@@ -51,9 +51,8 @@ const (
 	PeerAuthAllowlist
 )

-// Peer name validation constants
+// Peer name validation constants.
 const (
 	PeerNameMinLength = 1
 	PeerNameMaxLength = 64
 )
@@ -72,20 +71,18 @@ func safeKeyPrefix(key string) string {
 }

 // validatePeerName checks if a peer name is valid.
-// Peer names must be 1-64 characters, start and end with alphanumeric,
-// and contain only alphanumeric, hyphens, underscores, and spaces.
+// Empty names are permitted. Non-empty names must be 1-64 characters,
+// start and end with alphanumeric, and contain only alphanumeric,
+// hyphens, underscores, and spaces.
 func validatePeerName(name string) error {
 	if name == "" {
-		return nil // Empty names are allowed (optional field)
-	}
-	if len(name) < PeerNameMinLength {
-		return fmt.Errorf("peer name too short (min %d characters)", PeerNameMinLength)
+		return nil
 	}
 	if len(name) > PeerNameMaxLength {
-		return fmt.Errorf("peer name too long (max %d characters)", PeerNameMaxLength)
+		return coreerr.E("validatePeerName", "peer name too long", nil)
 	}
 	if !peerNameRegex.MatchString(name) {
-		return errors.New("peer name contains invalid characters (use alphanumeric, hyphens, underscores, spaces)")
+		return coreerr.E("validatePeerName", "peer name contains invalid characters (use alphanumeric, hyphens, underscores, spaces)", nil)
 	}
 	return nil
 }
@@ -101,6 +98,7 @@ type PeerRegistry struct {
 	authMode           PeerAuthMode    // How to handle unknown peers
 	allowedPublicKeys  map[string]bool // Allowlist of public keys (when authMode is Allowlist)
 	allowedPublicKeyMu sync.RWMutex    // Protects allowedPublicKeys
+	allowlistPath      string          // Sidecar file for persisted allowlist keys

 	// Debounce disk writes
 	dirty bool // Whether there are unsaved changes
@@ -123,7 +121,7 @@ var (
 func NewPeerRegistry() (*PeerRegistry, error) {
 	peersPath, err := xdg.ConfigFile("lethean-desktop/peers.json")
 	if err != nil {
-		return nil, fmt.Errorf("failed to get peers path: %w", err)
+		return nil, coreerr.E("PeerRegistry.New", "failed to get peers path", err)
 	}

 	return NewPeerRegistryWithPath(peersPath)
@@ -135,6 +133,7 @@ func NewPeerRegistryWithPath(peersPath string) (*PeerRegistry, error) {
 	pr := &PeerRegistry{
 		peers:             make(map[string]*Peer),
 		path:              peersPath,
+		allowlistPath:     peersPath + ".allowlist.json",
 		stopChan:          make(chan struct{}),
 		authMode:          PeerAuthOpen, // Default to open for backward compatibility
 		allowedPublicKeys: make(map[string]bool),
@@ -144,7 +143,12 @@ func NewPeerRegistryWithPath(peersPath string) (*PeerRegistry, error) {
 	if err := pr.load(); err != nil {
 		// No existing peers, that's ok
 		pr.rebuildKDTree()
 		return pr, nil
 	}

+	// Load any persisted allowlist entries. This is best effort so that a
+	// missing or corrupt sidecar does not block peer registry startup.
+	if err := pr.loadAllowedPublicKeys(); err != nil {
+		logging.Warn("failed to load peer allowlist", logging.Fields{"error": err})
+	}
+
 	pr.rebuildKDTree()
@@ -169,17 +173,25 @@ func (r *PeerRegistry) GetAuthMode() PeerAuthMode {
 // AllowPublicKey adds a public key to the allowlist.
 func (r *PeerRegistry) AllowPublicKey(publicKey string) {
 	r.allowedPublicKeyMu.Lock()
-	defer r.allowedPublicKeyMu.Unlock()
 	r.allowedPublicKeys[publicKey] = true
+	r.allowedPublicKeyMu.Unlock()
 	logging.Debug("public key added to allowlist", logging.Fields{"key": safeKeyPrefix(publicKey)})
+
+	if err := r.saveAllowedPublicKeys(); err != nil {
+		logging.Warn("failed to persist peer allowlist", logging.Fields{"error": err})
+	}
 }

 // RevokePublicKey removes a public key from the allowlist.
 func (r *PeerRegistry) RevokePublicKey(publicKey string) {
 	r.allowedPublicKeyMu.Lock()
-	defer r.allowedPublicKeyMu.Unlock()
 	delete(r.allowedPublicKeys, publicKey)
+	r.allowedPublicKeyMu.Unlock()
 	logging.Debug("public key removed from allowlist", logging.Fields{"key": safeKeyPrefix(publicKey)})
+
+	if err := r.saveAllowedPublicKeys(); err != nil {
+		logging.Warn("failed to persist peer allowlist", logging.Fields{"error": err})
+	}
 }

 // IsPublicKeyAllowed checks if a public key is in the allowlist.
@@ -244,7 +256,7 @@ func (r *PeerRegistry) AddPeer(peer *Peer) error {

 	if peer.ID == "" {
 		r.mu.Unlock()
-		return errors.New("peer ID is required")
+		return coreerr.E("PeerRegistry.AddPeer", "peer ID is required", nil)
 	}

 	// Validate peer name (P2P-LOW-3)
@@ -255,7 +267,7 @@ func (r *PeerRegistry) AddPeer(peer *Peer) error {

 	if _, exists := r.peers[peer.ID]; exists {
 		r.mu.Unlock()
-		return fmt.Errorf("peer %s already exists", peer.ID)
+		return coreerr.E("PeerRegistry.AddPeer", "peer "+peer.ID+" already exists", nil)
 	}

 	// Set defaults
@@ -280,7 +292,7 @@ func (r *PeerRegistry) UpdatePeer(peer *Peer) error {

 	if _, exists := r.peers[peer.ID]; !exists {
 		r.mu.Unlock()
-		return fmt.Errorf("peer %s not found", peer.ID)
+		return coreerr.E("PeerRegistry.UpdatePeer", "peer "+peer.ID+" not found", nil)
 	}

 	r.peers[peer.ID] = peer
@@ -297,7 +309,7 @@ func (r *PeerRegistry) RemovePeer(id string) error {

 	if _, exists := r.peers[id]; !exists {
 		r.mu.Unlock()
-		return fmt.Errorf("peer %s not found", id)
+		return coreerr.E("PeerRegistry.RemovePeer", "peer "+id+" not found", nil)
 	}

 	delete(r.peers, id)
@@ -351,7 +363,7 @@ func (r *PeerRegistry) UpdateMetrics(id string, pingMS, geoKM float64, hops int)
 	peer, exists := r.peers[id]
 	if !exists {
 		r.mu.Unlock()
-		return fmt.Errorf("peer %s not found", id)
+		return coreerr.E("PeerRegistry.UpdateMetrics", "peer "+id+" not found", nil)
 	}

 	peer.PingMS = pingMS
@@ -373,7 +385,7 @@ func (r *PeerRegistry) UpdateScore(id string, score float64) error {
 	peer, exists := r.peers[id]
 	if !exists {
 		r.mu.Unlock()
-		return fmt.Errorf("peer %s not found", id)
+		return coreerr.E("PeerRegistry.UpdateScore", "peer "+id+" not found", nil)
 	}

 	// Clamp score to 0-100
@@ -656,8 +668,8 @@ func (r *PeerRegistry) scheduleSave() {
 func (r *PeerRegistry) saveNow() error {
 	// Ensure directory exists
 	dir := filepath.Dir(r.path)
-	if err := os.MkdirAll(dir, 0755); err != nil {
-		return fmt.Errorf("failed to create peers directory: %w", err)
+	if err := coreio.Local.EnsureDir(dir); err != nil {
+		return coreerr.E("PeerRegistry.saveNow", "failed to create peers directory", err)
 	}

 	// Convert to slice for JSON
@@ -665,18 +677,18 @@ func (r *PeerRegistry) saveNow() error {

 	data, err := json.MarshalIndent(peers, "", " ")
 	if err != nil {
-		return fmt.Errorf("failed to marshal peers: %w", err)
+		return coreerr.E("PeerRegistry.saveNow", "failed to marshal peers", err)
 	}

 	// Use atomic write pattern: write to temp file, then rename
 	tmpPath := r.path + ".tmp"
-	if err := os.WriteFile(tmpPath, data, 0644); err != nil {
-		return fmt.Errorf("failed to write peers temp file: %w", err)
+	if err := coreio.Local.Write(tmpPath, string(data)); err != nil {
+		return coreerr.E("PeerRegistry.saveNow", "failed to write peers temp file", err)
 	}

-	if err := os.Rename(tmpPath, r.path); err != nil {
-		os.Remove(tmpPath) // Clean up temp file
-		return fmt.Errorf("failed to rename peers file: %w", err)
+	if err := coreio.Local.Rename(tmpPath, r.path); err != nil {
+		coreio.Local.Delete(tmpPath) // Clean up temp file
+		return coreerr.E("PeerRegistry.saveNow", "failed to rename peers file", err)
 	}

 	return nil
@@ -708,6 +720,72 @@ func (r *PeerRegistry) Close() error {
 	return nil
 }

+// saveAllowedPublicKeys persists the allowlist to disk immediately.
+// It keeps the allowlist in a separate sidecar file so peer persistence remains
+// backwards compatible with the existing peers.json array format.
+func (r *PeerRegistry) saveAllowedPublicKeys() error {
+	r.allowedPublicKeyMu.RLock()
+	keys := make([]string, 0, len(r.allowedPublicKeys))
+	for key := range r.allowedPublicKeys {
+		keys = append(keys, key)
+	}
+	r.allowedPublicKeyMu.RUnlock()
+
+	slices.Sort(keys)
+
+	dir := filepath.Dir(r.allowlistPath)
+	if err := coreio.Local.EnsureDir(dir); err != nil {
+		return coreerr.E("PeerRegistry.saveAllowedPublicKeys", "failed to create allowlist directory", err)
+	}
+
+	data, err := json.MarshalIndent(keys, "", " ")
+	if err != nil {
+		return coreerr.E("PeerRegistry.saveAllowedPublicKeys", "failed to marshal allowlist", err)
+	}
+
+	tmpPath := r.allowlistPath + ".tmp"
+	if err := coreio.Local.Write(tmpPath, string(data)); err != nil {
+		return coreerr.E("PeerRegistry.saveAllowedPublicKeys", "failed to write allowlist temp file", err)
+	}
+
+	if err := coreio.Local.Rename(tmpPath, r.allowlistPath); err != nil {
+		coreio.Local.Delete(tmpPath)
+		return coreerr.E("PeerRegistry.saveAllowedPublicKeys", "failed to rename allowlist file", err)
+	}
+
+	return nil
+}
+
+// loadAllowedPublicKeys loads the allowlist from disk.
+func (r *PeerRegistry) loadAllowedPublicKeys() error {
+	if !coreio.Local.Exists(r.allowlistPath) {
+		return nil
+	}
+
+	content, err := coreio.Local.Read(r.allowlistPath)
+	if err != nil {
+		return coreerr.E("PeerRegistry.loadAllowedPublicKeys", "failed to read allowlist", err)
+	}
+
+	var keys []string
+	if err := json.Unmarshal([]byte(content), &keys); err != nil {
+		return coreerr.E("PeerRegistry.loadAllowedPublicKeys", "failed to unmarshal allowlist", err)
+	}
+
+	r.allowedPublicKeyMu.Lock()
+	defer r.allowedPublicKeyMu.Unlock()
+
+	r.allowedPublicKeys = make(map[string]bool, len(keys))
+	for _, key := range keys {
+		if key == "" {
+			continue
+		}
+		r.allowedPublicKeys[key] = true
+	}
+
+	return nil
+}
+
 // save is a helper that schedules a debounced save.
 // Kept for backward compatibility but now debounces writes.
 // Must NOT be called with r.mu held.
@@ -718,14 +796,14 @@ func (r *PeerRegistry) save() error {

 // load reads peers from disk.
 func (r *PeerRegistry) load() error {
-	data, err := os.ReadFile(r.path)
+	content, err := coreio.Local.Read(r.path)
 	if err != nil {
-		return fmt.Errorf("failed to read peers: %w", err)
+		return coreerr.E("PeerRegistry.load", "failed to read peers", err)
 	}

 	var peers []*Peer
-	if err := json.Unmarshal(data, &peers); err != nil {
-		return fmt.Errorf("failed to unmarshal peers: %w", err)
+	if err := json.Unmarshal([]byte(content), &peers); err != nil {
+		return coreerr.E("PeerRegistry.load", "failed to unmarshal peers", err)
 	}

 	r.peers = make(map[string]*Peer)
@@ -389,6 +389,39 @@ func TestPeerRegistry_Persistence(t *testing.T) {
 	}
 }

+func TestPeerRegistry_AllowlistPersistence(t *testing.T) {
+	tmpDir, _ := os.MkdirTemp("", "allowlist-persist-test")
+	defer os.RemoveAll(tmpDir)
+
+	peersPath := filepath.Join(tmpDir, "peers.json")
+
+	pr1, err := NewPeerRegistryWithPath(peersPath)
+	if err != nil {
+		t.Fatalf("failed to create first registry: %v", err)
+	}
+
+	key := "allowlist-key-1234567890"
+	pr1.AllowPublicKey(key)
+
+	if err := pr1.Close(); err != nil {
+		t.Fatalf("failed to close first registry: %v", err)
+	}
+
+	pr2, err := NewPeerRegistryWithPath(peersPath)
+	if err != nil {
+		t.Fatalf("failed to create second registry: %v", err)
+	}
+
+	if !pr2.IsPublicKeyAllowed(key) {
+		t.Fatal("expected allowlisted key to survive reload")
+	}
+
+	keys := pr2.ListAllowedPublicKeys()
+	if !slices.Contains(keys, key) {
+		t.Fatalf("expected allowlisted key to be listed after reload, got %v", keys)
+	}
+}
+
 // --- Security Feature Tests ---

 func TestPeerRegistry_AuthMode(t *testing.T) {
@@ -1,8 +1,9 @@
 package node

 import (
-	"errors"
 	"fmt"
+
+	coreerr "dappco.re/go/core/log"
 )

 // ProtocolError represents an error from the remote peer.
@@ -25,7 +26,7 @@ type ResponseHandler struct{}
 // 3. If response type matches expected (returns error if not)
 func (h *ResponseHandler) ValidateResponse(resp *Message, expectedType MessageType) error {
 	if resp == nil {
-		return errors.New("nil response")
+		return coreerr.E("ResponseHandler.ValidateResponse", "nil response", nil)
 	}

 	// Check for error response
@@ -39,7 +40,7 @@ func (h *ResponseHandler) ValidateResponse(resp *Message, expectedType MessageTy

 	// Check expected type
 	if resp.Type != expectedType {
-		return fmt.Errorf("unexpected response type: expected %s, got %s", expectedType, resp.Type)
+		return coreerr.E("ResponseHandler.ValidateResponse", "unexpected response type: expected "+string(expectedType)+", got "+string(resp.Type), nil)
 	}

 	return nil
@@ -54,7 +55,7 @@ func (h *ResponseHandler) ParseResponse(resp *Message, expectedType MessageType,

 	if target != nil {
 		if err := resp.ParsePayload(target); err != nil {
-			return fmt.Errorf("failed to parse %s payload: %w", expectedType, err)
+			return coreerr.E("ResponseHandler.ParseResponse", "failed to parse "+string(expectedType)+" payload", err)
 		}
 	}
@@ -5,7 +5,6 @@ import (
 	"crypto/tls"
 	"encoding/base64"
 	"encoding/json"
-	"errors"
 	"fmt"
 	"iter"
 	"maps"
@@ -16,8 +15,10 @@ import (
 	"sync/atomic"
 	"time"

+	coreerr "dappco.re/go/core/log"
+	"dappco.re/go/core/p2p/logging"
+
 	"forge.lthn.ai/Snider/Borg/pkg/smsg"
-	"forge.lthn.ai/core/go-p2p/logging"
 	"github.com/gorilla/websocket"
 )
@@ -75,10 +76,20 @@ func NewMessageDeduplicator(ttl time.Duration) *MessageDeduplicator {

 // IsDuplicate checks if a message ID has been seen recently
 func (d *MessageDeduplicator) IsDuplicate(msgID string) bool {
-	d.mu.RLock()
-	_, exists := d.seen[msgID]
-	d.mu.RUnlock()
-	return exists
+	d.mu.Lock()
+	defer d.mu.Unlock()
+
+	seenAt, exists := d.seen[msgID]
+	if !exists {
+		return false
+	}
+
+	if d.ttl > 0 && time.Since(seenAt) > d.ttl {
+		delete(d.seen, msgID)
+		return false
+	}
+
+	return true
 }

 // Mark records a message ID as seen
@@ -289,7 +300,7 @@ func (t *Transport) Stop() error {
 		defer cancel()

 		if err := t.server.Shutdown(ctx); err != nil {
-			return fmt.Errorf("server shutdown error: %w", err)
+			return coreerr.E("Transport.Stop", "server shutdown error", err)
 		}
 	}
@@ -320,7 +331,7 @@ func (t *Transport) Connect(peer *Peer) (*PeerConnection, error) {
 	}
 	conn, _, err := dialer.Dial(u.String(), nil)
 	if err != nil {
-		return nil, fmt.Errorf("failed to connect to peer: %w", err)
+		return nil, coreerr.E("Transport.Connect", "failed to connect to peer", err)
 	}

 	pc := &PeerConnection{
@@ -335,7 +346,7 @@ func (t *Transport) Connect(peer *Peer) (*PeerConnection, error) {
 	// This also derives and stores the shared secret in pc.SharedSecret
 	if err := t.performHandshake(pc); err != nil {
 		conn.Close()
-		return nil, fmt.Errorf("handshake failed: %w", err)
+		return nil, coreerr.E("Transport.Connect", "handshake failed", err)
 	}

 	// Store connection using the real peer ID from handshake
@@ -368,7 +379,7 @@ func (t *Transport) Send(peerID string, msg *Message) error {
 	t.mu.RUnlock()

 	if !exists {
-		return fmt.Errorf("peer %s not connected", peerID)
+		return coreerr.E("Transport.Send", "peer "+peerID+" not connected", nil)
 	}

 	return pc.Send(msg)
@@ -628,7 +639,7 @@ func (t *Transport) performHandshake(pc *PeerConnection) error {
 	// Generate challenge for the server to prove it has the matching private key
 	challenge, err := GenerateChallenge()
 	if err != nil {
-		return fmt.Errorf("generate challenge: %w", err)
+		return coreerr.E("Transport.performHandshake", "generate challenge", err)
 	}

 	payload := HandshakePayload{
@@ -639,41 +650,41 @@ func (t *Transport) performHandshake(pc *PeerConnection) error {

 	msg, err := NewMessage(MsgHandshake, identity.ID, pc.Peer.ID, payload)
 	if err != nil {
-		return fmt.Errorf("create handshake message: %w", err)
+		return coreerr.E("Transport.performHandshake", "create handshake message", err)
 	}

 	// First message is unencrypted (peer needs our public key)
 	data, err := MarshalJSON(msg)
 	if err != nil {
-		return fmt.Errorf("marshal handshake message: %w", err)
+		return coreerr.E("Transport.performHandshake", "marshal handshake message", err)
 	}

 	if err := pc.Conn.WriteMessage(websocket.TextMessage, data); err != nil {
-		return fmt.Errorf("send handshake: %w", err)
+		return coreerr.E("Transport.performHandshake", "send handshake", err)
 	}

 	// Wait for ack
 	_, ackData, err := pc.Conn.ReadMessage()
 	if err != nil {
-		return fmt.Errorf("read handshake ack: %w", err)
+		return coreerr.E("Transport.performHandshake", "read handshake ack", err)
 	}

 	var ackMsg Message
 	if err := json.Unmarshal(ackData, &ackMsg); err != nil {
-		return fmt.Errorf("unmarshal handshake ack: %w", err)
+		return coreerr.E("Transport.performHandshake", "unmarshal handshake ack", err)
 	}

 	if ackMsg.Type != MsgHandshakeAck {
-		return fmt.Errorf("expected handshake_ack, got %s", ackMsg.Type)
+		return coreerr.E("Transport.performHandshake", "expected handshake_ack, got "+string(ackMsg.Type), nil)
 	}

 	var ackPayload HandshakeAckPayload
 	if err := ackMsg.ParsePayload(&ackPayload); err != nil {
-		return fmt.Errorf("parse handshake ack payload: %w", err)
+		return coreerr.E("Transport.performHandshake", "parse handshake ack payload", err)
 	}

 	if !ackPayload.Accepted {
-		return fmt.Errorf("handshake rejected: %s", ackPayload.Reason)
+		return coreerr.E("Transport.performHandshake", "handshake rejected: "+ackPayload.Reason, nil)
 	}

 	// Update peer with the received identity info
@@ -685,15 +696,15 @@ func (t *Transport) performHandshake(pc *PeerConnection) error {
 	// Verify challenge response - derive shared secret first using the peer's public key
 	sharedSecret, err := t.node.DeriveSharedSecret(pc.Peer.PublicKey)
 	if err != nil {
-		return fmt.Errorf("derive shared secret for challenge verification: %w", err)
+		return coreerr.E("Transport.performHandshake", "derive shared secret for challenge verification", err)
 	}

 	// Verify the server's response to our challenge
 	if len(ackPayload.ChallengeResponse) == 0 {
-		return errors.New("server did not provide challenge response")
+		return coreerr.E("Transport.performHandshake", "server did not provide challenge response", nil)
 	}
 	if !VerifyChallenge(challenge, ackPayload.ChallengeResponse, sharedSecret) {
-		return errors.New("challenge response verification failed: server may not have matching private key")
+		return coreerr.E("Transport.performHandshake", "challenge response verification failed: server may not have matching private key", nil)
 	}

 	// Store the shared secret for later use
@@ -840,7 +851,7 @@ func (pc *PeerConnection) Send(msg *Message) error {

 	// Set write deadline to prevent blocking forever
 	if err := pc.Conn.SetWriteDeadline(time.Now().Add(10 * time.Second)); err != nil {
-		return fmt.Errorf("failed to set write deadline: %w", err)
+		return coreerr.E("PeerConnection.Send", "failed to set write deadline", err)
 	}
 	defer pc.Conn.SetWriteDeadline(time.Time{}) // Reset deadline after send
@@ -159,6 +159,17 @@ func TestMessageDeduplicator(t *testing.T) {
 		}
 	})

+	t.Run("ExpiredEntriesAreNotDuplicates", func(t *testing.T) {
+		d := NewMessageDeduplicator(25 * time.Millisecond)
+		d.Mark("msg-expired")
+
+		time.Sleep(40 * time.Millisecond)
+
+		if d.IsDuplicate("msg-expired") {
+			t.Error("expired message should not remain a duplicate")
+		}
+	})
+
 	t.Run("ConcurrentAccess", func(t *testing.T) {
 		d := NewMessageDeduplicator(5 * time.Minute)
 		var wg sync.WaitGroup
@@ -3,12 +3,12 @@ package node

 import (
 	"encoding/base64"
 	"encoding/json"
-	"errors"
 	"fmt"
 	"path/filepath"
 	"time"

-	"forge.lthn.ai/core/go-p2p/logging"
+	coreerr "dappco.re/go/core/log"
+	"dappco.re/go/core/p2p/logging"
 	"github.com/adrg/xdg"
 )
@@ -26,7 +26,7 @@ type MinerInstance interface {
 	GetName() string
 	GetType() string
 	GetStats() (any, error)
-	GetConsoleHistory(lines int) []string
+	GetConsoleHistorySince(lines int, since time.Time) []string
 }

 // ProfileManager interface for profile operations.
@@ -42,6 +42,7 @@ type Worker struct {
 	minerManager   MinerManager
 	profileManager ProfileManager
 	startTime      time.Time
+	DataDir        string // Base directory for deployments (defaults to xdg.DataHome)
 }

 // NewWorker creates a new Worker instance.
@@ -50,6 +51,7 @@ func NewWorker(node *NodeManager, transport *Transport) *Worker {
 		node:      node,
 		transport: transport,
 		startTime: time.Now(),
+		DataDir:   xdg.DataHome,
 	}
 }
@@ -116,7 +118,7 @@ func (w *Worker) HandleMessage(conn *PeerConnection, msg *Message) {
 func (w *Worker) handlePing(msg *Message) (*Message, error) {
 	var ping PingPayload
 	if err := msg.ParsePayload(&ping); err != nil {
-		return nil, fmt.Errorf("invalid ping payload: %w", err)
+		return nil, coreerr.E("Worker.handlePing", "invalid ping payload", err)
 	}

 	pong := PongPayload{
@@ -199,12 +201,12 @@ func (w *Worker) handleStartMiner(msg *Message) (*Message, error) {

 	var payload StartMinerPayload
 	if err := msg.ParsePayload(&payload); err != nil {
-		return nil, fmt.Errorf("invalid start miner payload: %w", err)
+		return nil, coreerr.E("Worker.handleStartMiner", "invalid start miner payload", err)
 	}

 	// Validate miner type is provided
 	if payload.MinerType == "" {
-		return nil, errors.New("miner type is required")
+		return nil, coreerr.E("Worker.handleStartMiner", "miner type is required", nil)
 	}

 	// Get the config from the profile or use the override
@@ -214,11 +216,11 @@ func (w *Worker) handleStartMiner(msg *Message) (*Message, error) {
 	} else if w.profileManager != nil {
 		profile, err := w.profileManager.GetProfile(payload.ProfileID)
 		if err != nil {
-			return nil, fmt.Errorf("profile not found: %s", payload.ProfileID)
+			return nil, coreerr.E("Worker.handleStartMiner", "profile not found: "+payload.ProfileID, nil)
 		}
 		config = profile
 	} else {
-		return nil, errors.New("no config provided and no profile manager configured")
+		return nil, coreerr.E("Worker.handleStartMiner", "no config provided and no profile manager configured", nil)
 	}

 	// Start the miner
@@ -246,7 +248,7 @@ func (w *Worker) handleStopMiner(msg *Message) (*Message, error) {

 	var payload StopMinerPayload
 	if err := msg.ParsePayload(&payload); err != nil {
-		return nil, fmt.Errorf("invalid stop miner payload: %w", err)
+		return nil, coreerr.E("Worker.handleStopMiner", "invalid stop miner payload", err)
 	}

 	err := w.minerManager.StopMiner(payload.MinerName)
@@ -269,7 +271,7 @@ func (w *Worker) handleGetLogs(msg *Message) (*Message, error) {

 	var payload GetLogsPayload
 	if err := msg.ParsePayload(&payload); err != nil {
-		return nil, fmt.Errorf("invalid get logs payload: %w", err)
+		return nil, coreerr.E("Worker.handleGetLogs", "invalid get logs payload", err)
 	}

 	// Validate and limit the Lines parameter to prevent resource exhaustion
@ -280,10 +282,15 @@ func (w *Worker) handleGetLogs(msg *Message) (*Message, error) {
|
|||
|
||||
miner, err := w.minerManager.GetMiner(payload.MinerName)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("miner not found: %s", payload.MinerName)
|
||||
return nil, coreerr.E("Worker.handleGetLogs", "miner not found: "+payload.MinerName, nil)
|
||||
}
|
||||
|
||||
lines := miner.GetConsoleHistory(payload.Lines)
|
||||
var since time.Time
|
||||
if payload.Since > 0 {
|
||||
since = time.UnixMilli(payload.Since)
|
||||
}
|
||||
|
||||
lines := miner.GetConsoleHistorySince(payload.Lines, since)
|
||||
|
||||
logs := LogsPayload{
|
||||
MinerName: payload.MinerName,
|
||||
|
|
@ -298,7 +305,7 @@ func (w *Worker) handleGetLogs(msg *Message) (*Message, error) {
|
|||
func (w *Worker) handleDeploy(conn *PeerConnection, msg *Message) (*Message, error) {
|
||||
var payload DeployPayload
|
||||
if err := msg.ParsePayload(&payload); err != nil {
|
||||
return nil, fmt.Errorf("invalid deploy payload: %w", err)
|
||||
return nil, coreerr.E("Worker.handleDeploy", "invalid deploy payload", err)
|
||||
}
|
||||
|
||||
// Reconstruct Bundle object from payload
|
||||
|
|
@ -318,19 +325,19 @@ func (w *Worker) handleDeploy(conn *PeerConnection, msg *Message) (*Message, err
|
|||
switch bundle.Type {
|
||||
case BundleProfile:
|
||||
if w.profileManager == nil {
|
||||
return nil, errors.New("profile manager not configured")
|
||||
return nil, coreerr.E("Worker.handleDeploy", "profile manager not configured", nil)
|
||||
}
|
||||
|
||||
// Decrypt and extract profile data
|
||||
profileData, err := ExtractProfileBundle(bundle, password)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to extract profile bundle: %w", err)
|
||||
return nil, coreerr.E("Worker.handleDeploy", "failed to extract profile bundle", err)
|
||||
}
|
||||
|
||||
// Unmarshal into interface{} to pass to ProfileManager
|
||||
var profile any
|
||||
if err := json.Unmarshal(profileData, &profile); err != nil {
|
||||
return nil, fmt.Errorf("invalid profile data JSON: %w", err)
|
||||
return nil, coreerr.E("Worker.handleDeploy", "invalid profile data JSON", err)
|
||||
}
|
||||
|
||||
if err := w.profileManager.SaveProfile(profile); err != nil {
|
||||
|
|
@ -350,8 +357,8 @@ func (w *Worker) handleDeploy(conn *PeerConnection, msg *Message) (*Message, err
|
|||
|
||||
case BundleMiner, BundleFull:
|
||||
// Determine installation directory
|
||||
// We use xdg.DataHome/lethean-desktop/miners/<bundle_name>
|
||||
minersDir := filepath.Join(xdg.DataHome, "lethean-desktop", "miners")
|
||||
// We use w.DataDir/lethean-desktop/miners/<bundle_name>
|
||||
minersDir := filepath.Join(w.DataDir, "lethean-desktop", "miners")
|
||||
installDir := filepath.Join(minersDir, payload.Name)
|
||||
|
||||
logging.Info("deploying miner bundle", logging.Fields{
|
||||
|
|
@ -363,7 +370,7 @@ func (w *Worker) handleDeploy(conn *PeerConnection, msg *Message) (*Message, err
|
|||
// Extract miner bundle
|
||||
minerPath, profileData, err := ExtractMinerBundle(bundle, password, installDir)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to extract miner bundle: %w", err)
|
||||
return nil, coreerr.E("Worker.handleDeploy", "failed to extract miner bundle", err)
|
||||
}
|
||||
|
||||
// If the bundle contained a profile config, save it
|
||||
|
|
@ -393,7 +400,7 @@ func (w *Worker) handleDeploy(conn *PeerConnection, msg *Message) (*Message, err
|
|||
return msg.Reply(MsgDeployAck, ack)
|
||||
|
||||
default:
|
||||
return nil, fmt.Errorf("unknown bundle type: %s", payload.BundleType)
|
||||
return nil, coreerr.E("Worker.handleDeploy", "unknown bundle type: "+payload.BundleType, nil)
|
||||
}
|
||||
}
|
||||
|
||||
|
|
@@ -25,7 +25,11 @@ func TestNewWorker(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -40,6 +44,7 @@ func TestNewWorker(t *testing.T) {
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()

 	if worker == nil {
 		t.Fatal("NewWorker returned nil")
@@ -56,7 +61,11 @@ func TestWorker_SetMinerManager(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -71,6 +80,7 @@ func TestWorker_SetMinerManager(t *testing.T) {
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()

 	mockManager := &mockMinerManager{}
 	worker.SetMinerManager(mockManager)
@@ -84,7 +94,11 @@ func TestWorker_SetProfileManager(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -99,6 +113,7 @@ func TestWorker_SetProfileManager(t *testing.T) {
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()

 	mockProfile := &mockProfileManager{}
 	worker.SetProfileManager(mockProfile)
@@ -112,7 +127,11 @@ func TestWorker_HandlePing(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -127,6 +146,7 @@ func TestWorker_HandlePing(t *testing.T) {
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()

 	// Create a ping message
 	identity := nm.GetIdentity()
@@ -171,7 +191,11 @@ func TestWorker_HandleGetStats(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -186,6 +210,7 @@ func TestWorker_HandleGetStats(t *testing.T) {
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()

 	// Create a get_stats message
 	identity := nm.GetIdentity()
@@ -229,7 +254,11 @@ func TestWorker_HandleStartMiner_NoManager(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -244,6 +273,7 @@ func TestWorker_HandleStartMiner_NoManager(t *testing.T) {
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()

 	// Create a start_miner message
 	identity := nm.GetIdentity()
@@ -267,7 +297,11 @@ func TestWorker_HandleStopMiner_NoManager(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -282,6 +316,7 @@ func TestWorker_HandleStopMiner_NoManager(t *testing.T) {
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()

 	// Create a stop_miner message
 	identity := nm.GetIdentity()
@@ -305,7 +340,11 @@ func TestWorker_HandleGetLogs_NoManager(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -320,6 +359,7 @@ func TestWorker_HandleGetLogs_NoManager(t *testing.T) {
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()

 	// Create a get_logs message
 	identity := nm.GetIdentity()
@@ -343,7 +383,11 @@ func TestWorker_HandleDeploy_Profile(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -358,6 +402,7 @@ func TestWorker_HandleDeploy_Profile(t *testing.T) {
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()

 	// Create a deploy message for profile
 	identity := nm.GetIdentity()
@@ -385,7 +430,11 @@ func TestWorker_HandleDeploy_UnknownType(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -400,6 +449,7 @@ func TestWorker_HandleDeploy_UnknownType(t *testing.T) {
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()

 	// Create a deploy message with unknown type
 	identity := nm.GetIdentity()
@@ -500,10 +550,14 @@ type mockMinerInstance struct {
 	stats any
 }

-func (m *mockMinerInstance) GetName() string { return m.name }
-func (m *mockMinerInstance) GetType() string { return m.minerType }
-func (m *mockMinerInstance) GetStats() (any, error) { return m.stats, nil }
-func (m *mockMinerInstance) GetConsoleHistory(lines int) []string { return []string{} }
+func (m *mockMinerInstance) GetName() string { return m.name }
+func (m *mockMinerInstance) GetType() string { return m.minerType }
+func (m *mockMinerInstance) GetStats() (any, error) {
+	return m.stats, nil
+}
+func (m *mockMinerInstance) GetConsoleHistorySince(lines int, since time.Time) []string {
+	return []string{}
+}

 type mockProfileManager struct{}
@@ -566,7 +620,11 @@ func TestWorker_HandleStartMiner_WithManager(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -581,6 +639,7 @@ func TestWorker_HandleStartMiner_WithManager(t *testing.T) {
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()

 	mm := &mockMinerManager{
 		miners: []MinerInstance{},
@@ -733,7 +792,11 @@ func TestWorker_HandleStopMiner_WithManager(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -746,6 +809,7 @@ func TestWorker_HandleStopMiner_WithManager(t *testing.T) {
 	}
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()
 	identity := nm.GetIdentity()

 	t.Run("Success", func(t *testing.T) {
@@ -795,7 +859,11 @@ func TestWorker_HandleGetLogs_WithManager(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -808,6 +876,7 @@ func TestWorker_HandleGetLogs_WithManager(t *testing.T) {
 	}
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()
 	identity := nm.GetIdentity()

 	t.Run("Success", func(t *testing.T) {
@@ -900,7 +969,11 @@ func TestWorker_HandleGetStats_WithMinerManager(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -913,6 +986,7 @@ func TestWorker_HandleGetStats_WithMinerManager(t *testing.T) {
 	}
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()
 	identity := nm.GetIdentity()

 	// Set miner manager with miners that have real stats
@@ -959,7 +1033,11 @@ func TestWorker_HandleMessage_UnknownType(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -972,6 +1050,7 @@ func TestWorker_HandleMessage_UnknownType(t *testing.T) {
 	}
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()

 	identity := nm.GetIdentity()
 	msg, _ := NewMessage("unknown_type", "sender-id", identity.ID, nil)
@@ -984,7 +1063,11 @@ func TestWorker_HandleDeploy_ProfileWithManager(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -997,6 +1080,7 @@ func TestWorker_HandleDeploy_ProfileWithManager(t *testing.T) {
 	}
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()

 	pm := &mockProfileManagerFull{profiles: make(map[string]any)}
 	worker.SetProfileManager(pm)
@@ -1037,7 +1121,11 @@ func TestWorker_HandleDeploy_ProfileSaveFails(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -1050,6 +1138,7 @@ func TestWorker_HandleDeploy_ProfileSaveFails(t *testing.T) {
 	}
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()
 	worker.SetProfileManager(&mockProfileManagerFailing{})

 	identity := nm.GetIdentity()
@@ -1081,7 +1170,11 @@ func TestWorker_HandleDeploy_MinerBundle(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -1094,6 +1187,7 @@ func TestWorker_HandleDeploy_MinerBundle(t *testing.T) {
 	}
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()
 	pm := &mockProfileManagerFull{profiles: make(map[string]any)}
 	worker.SetProfileManager(pm)

@@ -1143,7 +1237,11 @@ func TestWorker_HandleDeploy_FullBundle(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -1156,6 +1254,7 @@ func TestWorker_HandleDeploy_FullBundle(t *testing.T) {
 	}
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()

 	identity := nm.GetIdentity()

@@ -1197,7 +1296,11 @@ func TestWorker_HandleDeploy_MinerBundle_WithProfileManager(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, err := NewNodeManager()
+	dir := t.TempDir()
+	nm, err := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	if err != nil {
 		t.Fatalf("failed to create node manager: %v", err)
 	}
@@ -1210,6 +1313,7 @@ func TestWorker_HandleDeploy_MinerBundle_WithProfileManager(t *testing.T) {
 	}
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()

 	// Set a failing profile manager to exercise the warn-and-continue path
 	worker.SetProfileManager(&mockProfileManagerFailing{})
@@ -1256,11 +1360,16 @@ func TestWorker_HandleDeploy_InvalidPayload(t *testing.T) {
 	cleanup := setupTestEnv(t)
 	defer cleanup()

-	nm, _ := NewNodeManager()
+	dir := t.TempDir()
+	nm, _ := NewNodeManagerWithPaths(
+		filepath.Join(dir, "private.key"),
+		filepath.Join(dir, "node.json"),
+	)
 	nm.GenerateIdentity("test", RoleWorker)
 	pr, _ := NewPeerRegistryWithPath(t.TempDir() + "/peers.json")
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()
 	identity := nm.GetIdentity()

 	// Create a message with invalid payload
@@ -1284,6 +1393,7 @@ func TestWorker_HandleGetStats_NoIdentity(t *testing.T) {
 	pr, _ := NewPeerRegistryWithPath(t.TempDir() + "/peers.json")
 	transport := NewTransport(nm, pr, DefaultTransportConfig())
 	worker := NewWorker(nm, transport)
+	worker.DataDir = t.TempDir()

 	msg, _ := NewMessage(MsgGetStats, "sender-id", "target-id", nil)
 	_, err := worker.handleGetStats(msg)
@@ -5,8 +5,9 @@ import (
 	"crypto/hmac"
 	"crypto/sha256"
 	"encoding/binary"
-	"errors"
 	"io"
+
+	coreerr "dappco.re/go/core/log"
 )

 // TLV Types
@@ -90,35 +91,36 @@ func (p *PacketBuilder) MarshalAndSign(sharedSecret []byte) ([]byte, error) {
 	}

 	// 4. Write Payload TLV (0xFF)
-	// Note: 0xFF length is variable. For simplicity in this specialized reader,
-	// we might handle 0xFF as "read until EOF" or use a varint length.
-	// Implementing standard 1-byte length for payload is risky if payload > 255.
-	// Assuming your spec allows >255 bytes, we handle 0xFF differently.
-
-	buf.WriteByte(TagPayload)
-	// We don't write a 1-byte length for payload here assuming stream mode,
-	// but if strict TLV, we'd need a multi-byte length protocol.
-	// For this snippet, simply appending data:
-	buf.Write(p.Payload)
+	// Fixed: Now uses writeTLV which provides a 2-byte length prefix.
+	// This prevents the io.ReadAll DoS and allows multiple packets in a stream.
+	if err := writeTLV(buf, TagPayload, p.Payload); err != nil {
+		return nil, err
+	}

 	return buf.Bytes(), nil
 }

-// Helper to write a simple TLV
+// Helper to write a simple TLV.
+// Now uses 2-byte big-endian length (uint16) to support up to 64KB payloads.
 func writeTLV(w io.Writer, tag uint8, value []byte) error {
-	// Check strict length constraint (1 byte length = max 255 bytes)
-	if len(value) > 255 {
-		return errors.New("TLV value too large for 1-byte length header")
+	// Check length constraint (2 byte length = max 65535 bytes)
+	if len(value) > 65535 {
+		return coreerr.E("ueps.writeTLV", "TLV value too large for 2-byte length header", nil)
 	}

 	if _, err := w.Write([]byte{tag}); err != nil {
 		return err
 	}
-	if _, err := w.Write([]byte{uint8(len(value))}); err != nil {
+
+	lenBuf := make([]byte, 2)
+	binary.BigEndian.PutUint16(lenBuf, uint16(len(value)))
+	if _, err := w.Write(lenBuf); err != nil {
 		return err
 	}

 	if _, err := w.Write(value); err != nil {
 		return err
 	}
 	return nil
 }
@@ -112,26 +112,30 @@ func TestReadAndVerify_PayloadReadError(t *testing.T) {
 	assert.Equal(t, "connection reset", err.Error())
 }

-// TestReadAndVerify_PayloadReadError_EOF ensures that a clean EOF
-// (no payload bytes at all after 0xFF) is handled differently from
-// a hard I/O error — io.ReadAll treats io.EOF as success and returns
-// an empty slice, so the result should be an HMAC mismatch rather
-// than a raw read error.
+// TestReadAndVerify_PayloadReadError_EOF ensures that a truncated payload
+// (missing bytes after TagPayload) is handled as an I/O error (UnexpectedEOF)
+// because ReadAndVerify now uses io.ReadFull with the expected length prefix.
 func TestReadAndVerify_PayloadReadError_EOF(t *testing.T) {
 	payload := []byte("eof test")
 	builder := NewBuilder(0x20, payload)
 	frame, err := builder.MarshalAndSign(testSecret)
 	require.NoError(t, err)

-	// Truncate at 0xFF tag — the reader will see 0xFF then immediate
-	// EOF, which io.ReadAll treats as success with empty payload.
+	// Truncate at TagPayload tag + partial length — the reader will see 0xFF
+	// then EOF while trying to read the 2-byte length or the payload itself.
 	payloadTagIdx := bytes.IndexByte(frame, TagPayload)
 	require.NotEqual(t, -1, payloadTagIdx)

-	truncated := frame[:payloadTagIdx+1]
+	truncated := frame[:payloadTagIdx+1] // Only the tag, no length
 	_, err = ReadAndVerify(bufio.NewReader(bytes.NewReader(truncated)), testSecret)
 	require.Error(t, err)
-	assert.Contains(t, err.Error(), "integrity violation")
+	assert.ErrorIs(t, err, io.EOF) // Failed reading length
+
+	truncatedWithLen := frame[:payloadTagIdx+3] // Tag + Length, but no payload
+	_, err = ReadAndVerify(bufio.NewReader(bytes.NewReader(truncatedWithLen)), testSecret)
+	require.Error(t, err)
+	// io.ReadFull returns io.EOF if no bytes are read at all before EOF.
+	assert.ErrorIs(t, err, io.EOF)
 }

 // TestWriteTLV_AllWritesSucceed confirms the happy path still works
@@ -141,9 +145,11 @@ func TestWriteTLV_AllWritesSucceed(t *testing.T) {
 	var buf bytes.Buffer
 	err := writeTLV(&buf, TagVersion, []byte{0x09})
 	require.NoError(t, err)
-	assert.Equal(t, []byte{TagVersion, 0x01, 0x09}, buf.Bytes())
+	// Now uses 2-byte big-endian length: 0x00 0x01
+	assert.Equal(t, []byte{TagVersion, 0x00, 0x01, 0x09}, buf.Bytes())
 }

 // TestWriteTLV_FailWriterTable runs the three failure scenarios in
 // a table-driven fashion for completeness.
 func TestWriteTLV_FailWriterTable(t *testing.T) {
@@ -112,10 +112,11 @@ func TestHMACVerification_TamperedHeader(t *testing.T) {
 		t.Fatalf("MarshalAndSign failed: %v", err)
 	}

-	// Tamper with the Version TLV value (byte index 2: tag=0, len=1, val=2)
+	// Tamper with the Version TLV value
+	// New Index: Tag (1 byte) + Length (2 bytes) = 3. Value is at index 3.
 	tampered := make([]byte, len(frame))
 	copy(tampered, frame)
-	tampered[2] = 0x01 // Change version from 0x09 to 0x01
+	tampered[3] = 0x01 // Change version from 0x09 to 0x01

 	_, err = ReadAndVerify(bufio.NewReader(bytes.NewReader(tampered)), testSecret)
 	if err == nil {
@@ -206,9 +207,8 @@ func TestMissingHMACTag(t *testing.T) {
 	binary.BigEndian.PutUint16(tsBuf, 0)
 	writeTLV(&buf, TagThreatScore, tsBuf)

-	// Skip HMAC TLV entirely — go straight to payload
-	buf.WriteByte(TagPayload)
-	buf.Write([]byte("some data"))
+	// Skip HMAC TLV entirely — go straight to payload (with length prefix now)
+	writeTLV(&buf, TagPayload, []byte("some data"))

 	_, err := ReadAndVerify(bufio.NewReader(bytes.NewReader(buf.Bytes())), testSecret)
 	if err == nil {
@@ -221,10 +221,10 @@ func TestMissingHMACTag(t *testing.T) {

 func TestWriteTLV_ValueTooLarge(t *testing.T) {
 	var buf bytes.Buffer
-	oversized := make([]byte, 256) // 1 byte over the 255 limit
+	oversized := make([]byte, 65536) // 1 byte over the 65535 limit
 	err := writeTLV(&buf, TagVersion, oversized)
 	if err == nil {
-		t.Fatal("Expected error for TLV value > 255 bytes")
+		t.Fatal("Expected error for TLV value > 65535 bytes")
 	}
 	if !strings.Contains(err.Error(), "TLV value too large") {
 		t.Errorf("Expected 'TLV value too large' error, got: %v", err)
@@ -250,7 +250,7 @@ func TestTruncatedPacket(t *testing.T) {
 		},
 		{
 			name:    "CutInFirstTLVValue",
-			cutAt:   2, // Tag + length, but missing value
+			cutAt:   2, // Tag + partial length
 			wantErr: "EOF",
 		},
 		{
@@ -301,12 +301,11 @@ func TestUnknownTLVTag(t *testing.T) {
 	mac.Write(payload)
 	signature := mac.Sum(nil)

-	// Assemble full frame: headers + unknown + HMAC TLV + 0xFF + payload
+	// Assemble full frame: headers + unknown + HMAC TLV + 0xFF + payload (with length!)
 	var frame bytes.Buffer
 	frame.Write(headerBuf.Bytes())
 	writeTLV(&frame, TagHMAC, signature)
-	frame.WriteByte(TagPayload)
-	frame.Write(payload)
+	writeTLV(&frame, TagPayload, payload)

 	parsed, err := ReadAndVerify(bufio.NewReader(bytes.NewReader(frame.Bytes())), testSecret)
 	if err != nil {
@@ -387,9 +386,10 @@ func TestWriteTLV_BoundaryLengths(t *testing.T) {
 	}{
 		{"Empty", 0, false},
 		{"OneByte", 1, false},
-		{"MaxValid", 255, false},
-		{"OneOver", 256, true},
-		{"WayOver", 1024, true},
+		{"OldMax", 255, false},
+		{"NewMax", 65535, false},
+		{"OneOver", 65536, true},
+		{"WayOver", 100000, true},
 	}

 	for _, tc := range tests {
@@ -407,6 +407,7 @@ func TestWriteTLV_BoundaryLengths(t *testing.T) {
 	}
 }

+
 // TestReadAndVerify_EmptyReader verifies behaviour on completely empty input.
 func TestReadAndVerify_EmptyReader(t *testing.T) {
 	_, err := ReadAndVerify(bufio.NewReader(bytes.NewReader(nil)), testSecret)
@ -6,8 +6,9 @@ import (
|
|||
"crypto/hmac"
|
||||
"crypto/sha256"
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"io"
|
||||
|
||||
coreerr "dappco.re/go/core/log"
|
||||
)
|
||||
|
||||
// ParsedPacket holds the verified data
|
||||
|
|
@ -20,13 +21,12 @@ type ParsedPacket struct {
|
|||
// It consumes the stream up to the end of the packet.
|
||||
func ReadAndVerify(r *bufio.Reader, sharedSecret []byte) (*ParsedPacket, error) {
|
||||
// Buffer to reconstruct the data for HMAC verification
|
||||
// We have to "record" what we read to verify the signature later.
|
||||
var signedData bytes.Buffer
|
||||
header := UEPSHeader{}
|
||||
var signature []byte
|
||||
var payload []byte
|
||||
|
||||
// Loop through TLVs until we hit Payload (0xFF) or EOF
|
||||
// Loop through TLVs
|
||||
for {
|
||||
// 1. Read Tag
|
||||
tag, err := r.ReadByte()
|
||||
|
|
@@ -34,87 +34,66 @@ func ReadAndVerify(r *bufio.Reader, sharedSecret []byte) (*ParsedPacket, error)
 			return nil, err
 		}

-		// 2. Handle Payload Tag (0xFF) - The Exit Condition
-		if tag == TagPayload {
-			// Stop recording signedData here (HMAC covers headers + payload, but logic splits)
-			// Actually, wait. The HMAC covers (Headers + Payload).
-			// We need to read the payload to verify.
-
-			// For this implementation, we read until EOF or a specific delimiter?
-			// In a TCP stream, we need a length.
-			// If you are using standard TCP, you typically prefix the WHOLE frame with
-			// a 4-byte length. Assuming you handle that framing *before* calling this.
-
-			// Reading the rest as payload:
-			remaining, err := io.ReadAll(r)
-			if err != nil {
-				return nil, err
-			}
-			payload = remaining
-
-			// Add 0xFF and payload to the buffer for signature check?
-			// NO. In MarshalAndSign:
-			// mac.Write(buf.Bytes()) // Headers
-			// mac.Write(p.Payload) // Data
-			// It did NOT write the 0xFF tag into the HMAC.
-
-			break // Exit loop
-		}
-
-		// 3. Read Length (Standard TLV)
-		lengthByte, err := r.ReadByte()
-		if err != nil {
+		// 2. Read Length (2-byte big-endian uint16)
+		lenBuf := make([]byte, 2)
+		if _, err := io.ReadFull(r, lenBuf); err != nil {
 			return nil, err
 		}
-		length := int(lengthByte)
+		length := int(binary.BigEndian.Uint16(lenBuf))

-		// 4. Read Value
+		// 3. Read Value
 		value := make([]byte, length)
 		if _, err := io.ReadFull(r, value); err != nil {
 			return nil, err
 		}

-		// Store for processing
+		// 4. Handle Tag
 		switch tag {
 		case TagVersion:
 			header.Version = value[0]
 			// Reconstruct signed data: Tag + Len + Val
 			signedData.WriteByte(tag)
-			signedData.WriteByte(byte(length))
+			signedData.Write(lenBuf)
 			signedData.Write(value)
 		case TagCurrentLay:
 			header.CurrentLayer = value[0]
 			signedData.WriteByte(tag)
-			signedData.WriteByte(byte(length))
+			signedData.Write(lenBuf)
 			signedData.Write(value)
 		case TagTargetLay:
 			header.TargetLayer = value[0]
 			signedData.WriteByte(tag)
-			signedData.WriteByte(byte(length))
+			signedData.Write(lenBuf)
 			signedData.Write(value)
 		case TagIntent:
 			header.IntentID = value[0]
 			signedData.WriteByte(tag)
-			signedData.WriteByte(byte(length))
+			signedData.Write(lenBuf)
 			signedData.Write(value)
 		case TagThreatScore:
 			header.ThreatScore = binary.BigEndian.Uint16(value)
 			signedData.WriteByte(tag)
-			signedData.WriteByte(byte(length))
+			signedData.Write(lenBuf)
 			signedData.Write(value)
 		case TagHMAC:
 			signature = value
-			// We do NOT add the HMAC itself to signedData
+			// HMAC tag itself is not part of the signed data
+		case TagPayload:
+			payload = value
+			// Exit loop after payload (last tag in UEPS frame)
+			// Note: The HMAC covers the Payload but NOT the TagPayload/Length bytes
+			// to match the PacketBuilder.MarshalAndSign logic.
+			goto verify
 		default:
 			// Unknown tag (future proofing), verify it but ignore semantics
 			signedData.WriteByte(tag)
-			signedData.WriteByte(byte(length))
+			signedData.Write(lenBuf)
 			signedData.Write(value)
 		}
 	}

+verify:
 	if len(signature) == 0 {
-		return nil, errors.New("UEPS packet missing HMAC signature")
+		return nil, coreerr.E("ueps.ReadAndVerify", "UEPS packet missing HMAC signature", nil)
 	}

 	// 5. Verify HMAC
@@ -125,9 +104,7 @@ func ReadAndVerify(r *bufio.Reader, sharedSecret []byte) (*ParsedPacket, error)
 	expectedMAC := mac.Sum(nil)

 	if !hmac.Equal(signature, expectedMAC) {
-		// Log this. This is a Threat Event.
-		// "Axiom Violation: Integrity Check Failed"
-		return nil, errors.New("integrity violation: HMAC mismatch (ThreatScore +100)")
+		return nil, coreerr.E("ueps.ReadAndVerify", "integrity violation: HMAC mismatch (ThreatScore +100)", nil)
 	}

 	return &ParsedPacket{
@@ -135,3 +112,4 @@ func ReadAndVerify(r *bufio.Reader, sharedSecret []byte) (*ParsedPacket, error)
 		Payload: payload,
 	}, nil
 }
+