feat: scaffold go-proxy from RFC spec
Stratum mining proxy library skeleton with 18 Go source files, type declarations, event bus, NiceHash/simple splitter packages, pool client, HTTP API types, access/share logging, and rate limiter. No function implementations — ready for agent dispatch. Co-Authored-By: Virgil <virgil@lethean.io>
This commit is contained in: commit 3d64079f91
24 changed files with 2788 additions and 0 deletions

.core/reference/RFC-025-AGENT-EXPERIENCE.md — 440 lines (normal file)
@@ -0,0 +1,440 @@
# RFC-025: Agent Experience (AX) Design Principles

- **Status:** Draft
- **Authors:** Snider, Cladius
- **Date:** 2026-03-19
- **Applies to:** All Core ecosystem packages (CoreGO, CorePHP, CoreTS, core-agent)

## Abstract

Agent Experience (AX) is a design paradigm for software systems where the primary code consumer is an AI agent, not a human developer. AX sits alongside User Experience (UX) and Developer Experience (DX) as the third era of interface design.

This RFC establishes AX as a formal design principle for the Core ecosystem and defines the conventions that follow from it.

## Motivation

As of early 2026, AI agents write, review, and maintain the majority of code in the Core ecosystem. The original author has not manually edited code (outside of Core struct design) since October 2025. Code is processed semantically — agents reason about intent, not characters.

Design patterns inherited from the human-developer era optimise for the wrong consumer:

- **Short names** save keystrokes but increase semantic ambiguity
- **Functional option chains** are fluent for humans but opaque for agents tracing configuration
- **Error-at-every-call-site** produces 50% boilerplate that obscures intent
- **Generic type parameters** force agents to carry type context that the runtime already has
- **Panic-hiding conventions** (`Must*`) create implicit control flow that agents must special-case

AX acknowledges this shift and provides principles for designing code, APIs, file structures, and conventions that serve AI agents as first-class consumers.

## The Three Eras

| Era | Primary Consumer | Optimises For | Key Metric |
|-----|-----------------|---------------|------------|
| UX | End users | Discoverability, forgiveness, visual clarity | Task completion time |
| DX | Developers | Typing speed, IDE support, convention familiarity | Time to first commit |
| AX | AI agents | Predictability, composability, semantic navigation | Correct-on-first-pass rate |

AX does not replace UX or DX. End users still need good UX. Developers still need good DX. But when the primary code author and maintainer is an AI agent, the codebase should be designed for that consumer first.

## Principles

### 1. Predictable Names Over Short Names

Names are tokens that agents pattern-match across languages and contexts. Abbreviations introduce mapping overhead.

```
Config   not Cfg
Service  not Srv
Embed    not Emb
Error    not Err (as a subsystem name; err for local variables is fine)
Options  not Opts
```

**Rule:** If a name would require a comment to explain, it is too short.

**Exception:** Industry-standard abbreviations that are universally understood (`HTTP`, `URL`, `ID`, `IPC`, `I18n`) are acceptable. The test: would an agent trained on any mainstream language recognise it without context?

### 2. Comments as Usage Examples

The function signature tells WHAT. The comment shows HOW with real values.

```go
// Detect the project type from files present
setup.Detect("/path/to/project")

// Set up a workspace with auto-detected template
setup.Run(setup.Options{Path: ".", Template: "auto"})

// Scaffold a PHP module workspace
setup.Run(setup.Options{Path: "./my-module", Template: "php"})
```

**Rule:** If a comment restates what the type signature already says, delete it. If a comment shows a concrete usage with realistic values, keep it.

**Rationale:** Agents learn from examples more effectively than from descriptions. A comment like "Run executes the setup process" adds zero information. A comment like `setup.Run(setup.Options{Path: ".", Template: "auto"})` teaches an agent exactly how to call the function.

### 3. Path Is Documentation

File and directory paths should be self-describing. An agent navigating the filesystem should understand what it is looking at without reading a README.

```
flow/deploy/to/homelab.yaml      — deploy TO the homelab
flow/deploy/from/github.yaml     — deploy FROM GitHub
flow/code/review.yaml            — code review flow
template/file/go/struct.go.tmpl  — Go struct file template
template/dir/workspace/php/      — PHP workspace scaffold
```

**Rule:** If an agent needs to read a file to understand what a directory contains, the directory naming has failed.

**Corollary:** The unified path convention (folder structure = HTTP route = CLI command = test path) is AX-native. One path, every surface.

### 4. Templates Over Freeform

When an agent generates code from a template, the output is constrained to known-good shapes. When an agent writes freeform, the output varies.

```go
// Template-driven — consistent output
lib.RenderFile("php/action", data)
lib.ExtractDir("php", targetDir, data)

// Freeform — variance in output
"write a PHP action class that..."
```

**Rule:** For any code pattern that recurs, provide a template. Templates are guardrails for agents.

**Scope:** Templates apply to file generation, workspace scaffolding, config generation, and commit messages. They do NOT apply to novel logic — agents should write business logic freeform with the domain knowledge available.

### 5. Declarative Over Imperative

Agents reason better about declarations of intent than sequences of operations.

```yaml
# Declarative — agent sees what should happen
steps:
  - name: build
    flow: tools/docker-build
    with:
      context: "{{ .app_dir }}"
      image_name: "{{ .image_name }}"

  - name: deploy
    flow: deploy/with/docker
    with:
      host: "{{ .host }}"
```

```go
// Imperative — agent must trace execution
cmd := exec.Command("docker", "build", "--platform", "linux/amd64", "-t", imageName, ".")
cmd.Dir = appDir
if err := cmd.Run(); err != nil {
    return fmt.Errorf("docker build: %w", err)
}
```

**Rule:** Orchestration, configuration, and pipeline logic should be declarative (YAML/JSON). Implementation logic should be imperative (Go/PHP/TS). The boundary is: if an agent needs to compose or modify the logic, make it declarative.

### 6. Universal Types (Core Primitives)

Every component in the ecosystem accepts and returns the same primitive types. An agent processing any level of the tree sees identical shapes.

```go
// Universal contract
setup.Run(core.Options{Path: ".", Template: "auto"})
brain.New(core.Options{Name: "openbrain"})
deploy.Run(core.Options{Flow: "deploy/to/homelab"})

// Fractal — Core itself is a Service
core.New(core.Options{
    Services: []core.Service{
        process.New(core.Options{Name: "process"}),
        brain.New(core.Options{Name: "brain"}),
    },
})
```

**Core primitive types:**

| Type | Purpose |
|------|---------|
| `core.Options` | Input configuration (what you want) |
| `core.Config` | Runtime settings (what is active) |
| `core.Data` | Embedded or stored content |
| `core.Service` | A managed component with lifecycle |
| `core.Result[T]` | Return value with OK/fail state |

**What this replaces:**

| Go Convention | Core AX | Why |
|--------------|---------|-----|
| `func With*(v) Option` | `core.Options{Field: v}` | Struct literal is parseable; option chain requires tracing |
| `func Must*(v) T` | `core.Result[T]` | No hidden panics; errors flow through Core |
| `func *For[T](c) T` | `c.Service("name")` | String lookup is greppable; generics require type context |
| `val, err :=` everywhere | Single return via `core.Result` | Intent not obscured by error handling |
| `_ = err` | Never needed | Core handles all errors internally |
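The `core.Result[T]` rows above imply a small generic carrier with an OK/fail state. A minimal sketch of that shape — the field names (`Value`, `Err`, `OK`) and constructors here are illustrative assumptions, not the actual CoreGO definition:

```go
package main

import "fmt"

// Result is a sketch of what core.Result[T] could look like: a single
// return value carrying either a value or an error, never a hidden panic.
type Result[T any] struct {
	Value T
	Err   error
	OK    bool
}

// Ok wraps a successful value.
func Ok[T any](v T) Result[T] { return Result[T]{Value: v, OK: true} }

// Fail wraps an error.
func Fail[T any](err error) Result[T] { return Result[T]{Err: err} }

func main() {
	r := Ok(42)
	if r.OK {
		fmt.Println(r.Value) // 42
	}
}
```

Call sites read as a single assignment (`r := Ok(42)`), which is what lets intent stay visible instead of being interleaved with `if err != nil` blocks.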
### 7. Directory as Semantics

The directory structure tells an agent the intent before it reads a word. Top-level directories are semantic categories, not organisational bins.

```
plans/
├── code/      # Pure primitives — read for WHAT exists
├── project/   # Products — read for WHAT we're building and WHY
└── rfc/       # Contracts — read for constraints and rules
```

**Rule:** An agent should know what kind of document it's reading from the path alone. `code/core/go/io/RFC.md` = a lib primitive spec. `project/ofm/RFC.md` = a product spec that cross-references code/. `rfc/snider/borg/RFC-BORG-006-SMSG-FORMAT.md` = an immutable contract for the Borg SMSG protocol.

**Corollary:** The three-way split (code/project/rfc) extends principle 3 (Path Is Documentation) from files to entire subtrees. The path IS the metadata.

### 8. Lib Never Imports Consumer

Dependency flows one direction. Libraries define primitives. Consumers compose from them. A new feature in a consumer can never break a library.

```
code/core/go/*    → lib tier (stable foundation)
code/core/agent/  → consumer tier (composes from go/*)
code/core/cli/    → consumer tier (composes from go/*)
code/core/gui/    → consumer tier (composes from go/*)
```

**Rule:** If package A is in `go/` and package B is in the consumer tier, B may import A but A must never import B. The repo naming convention enforces this: `go-{name}` = lib, bare `{name}` = consumer.

**Why this matters for agents:** When an agent is dispatched to implement a feature in `core/agent`, it can freely import from `go-io`, `go-scm`, `go-process`. But if an agent is dispatched to `go-io`, it knows its changes are foundational — every consumer depends on it, so the contract must not break.

### 9. Issues Are N+(rounds) Deep

Problems in code and specs are layered. Surface issues mask deeper issues. Fixing the surface reveals the next layer. This is not a failure mode — it is the discovery process.

```
Pass 1: Find 16 issues (surface — naming, imports, obvious errors)
Pass 2: Find 11 issues (structural — contradictions, missing types)
Pass 3: Find 5 issues (architectural — signature mismatches, registration gaps)
Pass 4: Find 4 issues (contract — cross-spec API mismatches)
Pass 5: Find 2 issues (mechanical — path format, nil safety)
Pass N: Findings are trivial → spec/code is complete
```

**Rule:** Iteration is required, not a failure. Each pass sees what the previous pass could not, because the context changed. An agent dispatched with the same task on the same repo will find different things each time — this is correct behaviour.

**Corollary:** The cheapest model should do the most passes (surface work). The frontier model should arrive last, when only deep issues remain. Tiered iteration: grunt model grinds → mid model pre-warms → frontier model polishes.

**Anti-pattern:** One-shot generation expecting valid output. No model and no human produces correct-on-first-pass results for non-trivial work. Expecting it wastes the first pass on surface issues that a cheaper pass would have caught.

### 10. CLI Tests as Artifact Validation

Unit tests verify the code. CLI tests verify the binary. The directory structure IS the command structure — path maps to command, Taskfile runs the test.

```
tests/cli/
├── core/
│   └── lint/
│       ├── Taskfile.yaml        ← test `core-lint` (root)
│       ├── run/
│       │   ├── Taskfile.yaml    ← test `core-lint run`
│       │   └── fixtures/
│       ├── go/
│       │   ├── Taskfile.yaml    ← test `core-lint go`
│       │   └── fixtures/
│       └── security/
│           ├── Taskfile.yaml    ← test `core-lint security`
│           └── fixtures/
```

**Rule:** Every CLI command has a matching `tests/cli/{path}/Taskfile.yaml`. The Taskfile runs the compiled binary against fixtures with known inputs and validates the output. If the CLI test passes, the underlying actions work — because CLI commands call actions, MCP tools call actions, API endpoints call actions. Test the CLI, trust the rest.

**Pattern:**

```yaml
# tests/cli/core/lint/go/Taskfile.yaml
version: '3'
tasks:
  test:
    cmds:
      - core-lint go --output json fixtures/ > /tmp/result.json
      - jq -e '.findings | length > 0' /tmp/result.json
      - jq -e '.summary.passed == false' /tmp/result.json
```

**Why this matters for agents:** An agent can validate its own work by running `task test` in the matching `tests/cli/` directory. No test framework, no mocking, no setup — just the binary, fixtures, and `jq` assertions. The agent builds the binary, runs the test, sees the result. If it fails, the agent can read the fixture, read the output, and fix the code.

**Corollary:** Fixtures are planted bugs. Each fixture file has a known issue that the linter must find. If the linter doesn't find it, the test fails. Fixtures are the spec for what the tool must detect — they ARE the test cases, not descriptions of test cases.
## Applying AX to Existing Patterns

### File Structure

```
# AX-native: path describes content
core/agent/
├── go/       # Go source
├── php/      # PHP source
├── ui/       # Frontend source
├── claude/   # Claude Code plugin
└── codex/    # Codex plugin

# Not AX: generic names requiring a README
src/
├── lib/
├── utils/
└── helpers/
```

### Error Handling

```go
// AX-native: errors are infrastructure, not application logic
svc := c.Service("brain")
cfg := c.Config().Get("database.host")
// Errors logged by Core. Code reads like a spec.

// Not AX: errors dominate the code
svc, err := c.ServiceFor[brain.Service]()
if err != nil {
    return fmt.Errorf("get brain service: %w", err)
}
cfg, err := c.Config().Get("database.host")
if err != nil {
    _ = err // silenced because "it'll be fine"
}
```

### API Design

```go
// AX-native: one shape, every surface
core.New(core.Options{
    Name:     "my-app",
    Services: []core.Service{...},
    Config:   core.Config{...},
})

// Not AX: multiple patterns for the same thing
core.New(
    core.WithName("my-app"),
    core.WithService(factory1),
    core.WithService(factory2),
    core.WithConfig(cfg),
)
```

## The Plans Convention — AX Development Lifecycle

The `plans/` directory structure encodes a development methodology designed for how generative AI actually works: iterative refinement across structured phases, not one-shot generation.

### The Three-Way Split

```
plans/
├── project/   # 1. WHAT and WHY — start here
├── rfc/       # 2. CONSTRAINTS — immutable contracts
└── code/      # 3. HOW — implementation specs
```

Each directory is a phase. Work flows from project → rfc → code. Each transition forces a refinement pass — you cannot write a code spec without discovering gaps in the project spec, and you cannot write an RFC without discovering assumptions in both.

**Three places for data that can't be written simultaneously = three guaranteed iterations of "actually, this needs changing."** Refinement is baked into the structure, not bolted on as a review step.

### Phase 1: Project (Vision)

Start with `project/`. No code exists yet. Define:

- What the product IS and who it serves
- What existing primitives it consumes (cross-ref to `code/`)
- What constraints it operates under (cross-ref to `rfc/`)

This is where creativity lives. Map features to building blocks. Connect systems. The project spec is integrative — it references everything else.

### Phase 2: RFC (Contracts)

Extract the immutable rules into `rfc/`. These are constraints that don't change with implementation:

- Wire formats, protocols, hash algorithms
- Security properties that must hold
- Compatibility guarantees

RFCs are numbered per component (`RFC-BORG-006-SMSG-FORMAT.md`) and never modified after acceptance. If the contract changes, write a new RFC.

### Phase 3: Code (Implementation Specs)

Define the implementation in `code/`. Each component gets an RFC.md that an agent can implement from:

- Struct definitions (the DTOs — see principle 6)
- Method signatures and behaviour
- Error conditions and edge cases
- Cross-references to other code/ specs

The code spec IS the product. Write the spec → dispatch to an agent → review output → iterate.

### Pre-Launch: Alignment Protocol

Before dispatching for implementation, verify spec-model alignment:

```
1. REVIEW — The implementation model (Codex/Jules) reads the spec
   and reports missing elements. This surfaces the delta between
   the model's training and the spec's assumptions.

   "I need X, Y, Z to implement this" is the model saying
   "I hear you but I'm missing context" — without asking.

2. ADJUST — Update the spec to close the gaps. Add examples,
   clarify ambiguities, provide the context the model needs.
   This is shared alignment, not compromise.

3. VERIFY — A different model (or sub-agent) reviews the adjusted
   spec without the planner's bias. Fresh eyes on the contract.
   "Does this make sense to someone who wasn't in the room?"

4. READY — When the review findings are trivial or deployment-
   related (not architectural), the spec is ready to dispatch.
```

### Implementation: Iterative Dispatch

Same prompt, multiple runs. Each pass sees deeper because the context evolved:

```
Round 1: Build features (the obvious gaps)
Round 2: Write tests (verify what was built)
Round 3: Harden security (what can go wrong?)
Round 4: Next RFC section (what's still missing?)
Round N: Findings are trivial → implementation is complete
```

Re-running is not failure. It is the process. Each pass changes the codebase, which changes what the next pass can see. The iteration IS the refinement.

### Post-Implementation: Auto-Documentation

The QA/verify chain produces artefacts that feed forward:

- Test results document the contract (what works, what doesn't)
- Coverage reports surface untested paths
- Diff summaries prep the changelog for the next release
- The doc site updates from the spec (the spec IS the documentation)

The output of one cycle is the input to the next. The plans repo stays current because the specs drive the code, not the other way round.

## Compatibility

AX conventions are valid, idiomatic Go/PHP/TS. They do not require language extensions, code generation, or non-standard tooling. An AX-designed codebase compiles, tests, and deploys with standard toolchains.

The conventions diverge from community patterns (functional options, Must/For, etc.) but do not violate language specifications. This is a style choice, not a fork.

## Adoption

AX applies to all new code in the Core ecosystem. Existing code migrates incrementally as it is touched — no big-bang rewrite.

Priority order:

1. **Public APIs** (package-level functions, struct constructors)
2. **File structure** (path naming, template locations)
3. **Internal fields** (struct field names, local variables)

## References

- dAppServer unified path convention (2024)
- CoreGO DTO pattern refactor (2026-03-18)
- Core primitives design (2026-03-19)
- Go Proverbs, Rob Pike (2015) — AX provides an updated lens

## Changelog

- 2026-03-19: Initial draft
.gitignore — 30 lines (vendored, normal file)
@@ -0,0 +1,30 @@

# Binaries
*.exe
*.exe~
*.dll
*.so
*.dylib

# Test binary
*.test

# Output of go coverage
*.out

# Go workspace
go.work
go.work.sum

# IDE
.idea/
.vscode/
*.swp
*.swo
*~

# OS
.DS_Store
Thumbs.db

# Dependency directories
vendor/
CLAUDE.md — 86 lines (normal file)
@@ -0,0 +1,86 @@

# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## What This Is

CryptoNote stratum mining proxy library with NiceHash nonce-splitting (256-slot), simple passthrough mode, pool failover, TLS, HTTP monitoring API, and per-IP rate limiting. Module: `dappco.re/go/core/proxy`

## Spec

The complete RFC lives at `docs/RFC.md` (1475 lines). Read it fully before implementing any feature. AX design principles live at `.core/reference/RFC-025-AGENT-EXPERIENCE.md`.

## Commands

```bash
go test ./...                                        # Run all tests
go test -v -run TestJob_BlobWithFixedByte_Good ./... # Run single test
go test -race ./...                                  # Race detector (must pass before commit)
go test -cover ./...                                 # Coverage
go test -bench=. -benchmem ./...                     # Benchmarks
go vet ./...                                         # Vet
```

## Architecture

**Event-driven proxy.** The `Proxy` orchestrator owns the tick loop (1s), TCP servers, splitter, stats, workers, and optional HTTP API. Events flow through a synchronous `EventBus`.
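A synchronous bus of that kind can be sketched in a few lines. The string-keyed registration below is an illustrative assumption — the real `events.go` declares typed events (`LoginEvent`, `SubmitEvent`, ...) rather than `any` payloads:

```go
package main

import "fmt"

// EventBus is a minimal sketch of a synchronous bus: handlers are
// registered per event name and invoked inline during Publish.
type EventBus struct {
	handlers map[string][]func(any)
}

func NewEventBus() *EventBus {
	return &EventBus{handlers: map[string][]func(any){}}
}

// Subscribe registers a handler for one event name.
func (b *EventBus) Subscribe(name string, fn func(any)) {
	b.handlers[name] = append(b.handlers[name], fn)
}

// Publish calls every handler inline — synchronous by design, so the
// caller knows all side effects completed before Publish returns.
func (b *EventBus) Publish(name string, ev any) {
	for _, fn := range b.handlers[name] {
		fn(ev)
	}
}

func main() {
	bus := NewEventBus()
	bus.Subscribe("login", func(ev any) { fmt.Println("login:", ev) })
	bus.Publish("login", "worker-1") // prints "login: worker-1"
}
```

The synchronous dispatch is the key design choice: no goroutines inside the bus means event ordering matches publish ordering, at the cost of handlers blocking the publisher.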
**Two splitter modes:**

- `nicehash` — NonceSplitter partitions the 32-bit nonce space via fixed byte 39. Up to 256 miners per upstream pool connection.
- `simple` — SimpleSplitter creates one upstream per miner, with an optional reuse pool on disconnect.
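The 256-slot allocation behind `nicehash` mode can be sketched as follows — one slot per possible value of the fixed byte. The method names (`Allocate`, `Release`) are assumptions; the actual `splitter/nicehash/storage.go` API may differ:

```go
package main

import "fmt"

// NonceStorage sketch: 256 slots, one per value of the fixed nonce byte,
// so each miner on an upstream gets a disjoint slice of the nonce space.
type NonceStorage struct {
	used [256]bool
}

// Allocate returns the lowest free fixed-byte value, or false when all
// 256 slots are taken (the per-upstream miner limit).
func (s *NonceStorage) Allocate() (byte, bool) {
	for i := 0; i < 256; i++ {
		if !s.used[i] {
			s.used[i] = true
			return byte(i), true
		}
	}
	return 0, false
}

// Release frees a slot when its miner disconnects.
func (s *NonceStorage) Release(b byte) { s.used[b] = false }

func main() {
	var s NonceStorage
	a, _ := s.Allocate()
	b, _ := s.Allocate()
	fmt.Println(a, b) // 0 1
}
```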
**Per-miner goroutines.** Each accepted TCP connection runs a read loop in its own goroutine. Writes are serialised by `Miner.sendMu`.

**File layout:**

```
proxy.go                       # Proxy orchestrator — tick loop, listeners, splitter, stats
config.go                      # Config struct, JSON unmarshal, hot-reload watcher
server.go                      # TCP server — accepts connections, applies rate limiter
miner.go                       # Miner state machine — one per connection
job.go                         # Job value type — blob, job_id, target, algo, height
worker.go                      # Worker aggregate — rolling hashrate, share counts
stats.go                       # Stats aggregate — global counters, hashrate windows
events.go                      # Event bus — LoginEvent, AcceptEvent, SubmitEvent, CloseEvent
splitter/nicehash/splitter.go  # NonceSplitter — owns mapper pool, routes miners
splitter/nicehash/mapper.go    # NonceMapper — one upstream connection, owns NonceStorage
splitter/nicehash/storage.go   # NonceStorage — 256-slot table, fixed-byte allocation
splitter/simple/splitter.go    # SimpleSplitter — passthrough, upstream reuse pool
splitter/simple/mapper.go      # SimpleMapper — one upstream per miner group
pool/client.go                 # StratumClient — outbound pool TCP/TLS connection
pool/strategy.go               # FailoverStrategy — primary + ordered fallbacks
log/access.go                  # AccessLog — connection open/close lines
log/share.go                   # ShareLog — accept/reject lines per share
api/router.go                  # HTTP handlers — /1/summary, /1/workers, /1/miners
```

## Banned Imports

| Banned Import | Use Instead |
|---------------|-------------|
| `fmt` | `core.Sprintf`, `core.Print` |
| `log` | `core.Print`, `core.Error` |
| `errors` | `core.E(scope, message, cause)` |
| `os` | `c.Fs()` |
| `os/exec` | `c.Process()` |
| `strings` | `core.Contains`, `core.TrimPrefix`, etc. |
| `path/filepath` | `core.JoinPath`, `core.PathBase` |
| `encoding/json` | `core.JSONMarshalString`, `core.JSONUnmarshalString` |
## Coding Standards

- **UK English** in all code, comments, and docs (colour, behaviour, serialise, organisation)
- **Strict error handling**: all errors use `core.E(scope, message, cause)` — never `fmt.Errorf` or `errors.New`
- **Comments as usage examples** with concrete values, not prose descriptions
- **Type hints**: all parameters and return types declared
- **Test naming**: `TestFilename_Function_{Good,Bad,Ugly}` — all three mandatory per function
- **Race detector**: `go test -race ./...` must pass before commit
- **Conventional commits**: `type(scope): description`
- **Co-Author**: `Co-Authored-By: Virgil <virgil@lethean.io>`
- **Licence**: EUPL-1.2

## Test Conventions

- Test names follow `TestFilename_Function_{Good,Bad,Ugly}`
- Good = happy path, Bad = expected failures, Ugly = edge cases and recovery
- Use `require` for preconditions, `assert` for verifications (`testify`)
- All three categories are mandatory per function under test
api/router.go — 63 lines (normal file)
@@ -0,0 +1,63 @@

// Package api implements the HTTP monitoring endpoints for the proxy.
//
// Registered routes:
//
//	GET /1/summary — aggregated proxy stats
//	GET /1/workers — per-worker hashrate table
//	GET /1/miners  — per-connection state table
//
//	proxyapi.RegisterRoutes(apiRouter, p)
package api

// SummaryResponse is the /1/summary JSON body.
//
//	{"version":"1.0.0","mode":"nicehash","hashrate":{"total":[...]}, ...}
type SummaryResponse struct {
	Version   string              `json:"version"`
	Mode      string              `json:"mode"`
	Hashrate  HashrateResponse    `json:"hashrate"`
	Miners    MinersCountResponse `json:"miners"`
	Workers   uint64              `json:"workers"`
	Upstreams UpstreamResponse    `json:"upstreams"`
	Results   ResultsResponse     `json:"results"`
}

// HashrateResponse carries the per-window hashrate array.
//
//	HashrateResponse{Total: [6]float64{12345.67, 11900.00, 12100.00, 11800.00, 12000.00, 12200.00}}
type HashrateResponse struct {
	Total [6]float64 `json:"total"`
}

// MinersCountResponse carries current and peak miner counts.
//
//	MinersCountResponse{Now: 142, Max: 200}
type MinersCountResponse struct {
	Now uint64 `json:"now"`
	Max uint64 `json:"max"`
}

// UpstreamResponse carries pool connection state counts.
//
//	UpstreamResponse{Active: 1, Sleep: 0, Error: 0, Total: 1, Ratio: 142.0}
type UpstreamResponse struct {
	Active uint64  `json:"active"`
	Sleep  uint64  `json:"sleep"`
	Error  uint64  `json:"error"`
	Total  uint64  `json:"total"`
	Ratio  float64 `json:"ratio"`
}

// ResultsResponse carries share acceptance statistics.
//
//	ResultsResponse{Accepted: 4821, Rejected: 3, Invalid: 0, Expired: 12}
type ResultsResponse struct {
	Accepted    uint64     `json:"accepted"`
	Rejected    uint64     `json:"rejected"`
	Invalid     uint64     `json:"invalid"`
	Expired     uint64     `json:"expired"`
	AvgTime     uint32     `json:"avg_time"`
	Latency     uint32     `json:"latency"`
	HashesTotal uint64     `json:"hashes_total"`
	Best        [10]uint64 `json:"best"`
}
config.go — 90 lines (normal file)
@@ -0,0 +1,90 @@

package proxy

// Config is the top-level proxy configuration, loaded from JSON and hot-reloaded on change.
//
//	cfg, result := proxy.LoadConfig("config.json")
//	if !result.OK { log.Fatal(result.Error) }
type Config struct {
	Mode            string       `json:"mode"`              // "nicehash" or "simple"
	Bind            []BindAddr   `json:"bind"`              // listen addresses
	Pools           []PoolConfig `json:"pools"`             // ordered primary + fallbacks
	TLS             TLSConfig    `json:"tls"`               // inbound TLS (miner-facing)
	HTTP            HTTPConfig   `json:"http"`              // monitoring API
	AccessPassword  string       `json:"access-password"`   // "" = no auth required
	CustomDiff      uint64       `json:"custom-diff"`       // 0 = disabled
	CustomDiffStats bool         `json:"custom-diff-stats"` // report per custom-diff bucket
	AlgoExtension   bool         `json:"algo-ext"`          // forward algo field in jobs
	Workers         WorkersMode  `json:"workers"`           // "rig-id", "user", "password", "agent", "ip", "false"
	AccessLogFile   string       `json:"access-log-file"`   // "" = disabled
	ReuseTimeout    int          `json:"reuse-timeout"`     // seconds; simple mode upstream reuse
	Retries         int          `json:"retries"`           // pool reconnect attempts
	RetryPause      int          `json:"retry-pause"`       // seconds between retries
	Watch           bool         `json:"watch"`             // hot-reload on file change
	RateLimit       RateLimit    `json:"rate-limit"`        // per-IP connection rate limit
}
|
||||||
|
|
||||||
|
// BindAddr is one TCP listen endpoint.
|
||||||
|
//
|
||||||
|
// proxy.BindAddr{Host: "0.0.0.0", Port: 3333, TLS: false}
|
||||||
|
type BindAddr struct {
|
||||||
|
Host string `json:"host"`
|
||||||
|
Port uint16 `json:"port"`
|
||||||
|
TLS bool `json:"tls"`
|
||||||
|
}
|
||||||
|
|
||||||
|
// PoolConfig is one upstream pool entry.
|
||||||
|
//
|
||||||
|
// proxy.PoolConfig{URL: "pool.lthn.io:3333", User: "WALLET", Pass: "x", Enabled: true}
|
||||||
|
type PoolConfig struct {
|
||||||
|
URL string `json:"url"`
|
||||||
|
User string `json:"user"`
|
||||||
|
Pass string `json:"pass"`
|
||||||
|
RigID string `json:"rig-id"`
|
||||||
|
Algo string `json:"algo"`
|
||||||
|
TLS bool `json:"tls"`
|
||||||
|
TLSFingerprint string `json:"tls-fingerprint"` // SHA-256 hex; "" = skip pin
|
||||||
|
Keepalive bool `json:"keepalive"`
|
||||||
|
Enabled bool `json:"enabled"`
|
||||||
|
}
|
||||||
|
|
||||||
|
// TLSConfig controls inbound TLS on bind addresses that have TLS: true.
|
||||||
|
//
|
||||||
|
// proxy.TLSConfig{Enabled: true, CertFile: "/etc/proxy/cert.pem", KeyFile: "/etc/proxy/key.pem"}
|
||||||
|
type TLSConfig struct {
|
||||||
|
Enabled bool `json:"enabled"`
|
||||||
|
CertFile string `json:"cert"`
|
||||||
|
KeyFile string `json:"cert_key"`
|
||||||
|
Ciphers string `json:"ciphers"` // OpenSSL cipher string; "" = default
|
||||||
|
Protocols string `json:"protocols"` // TLS version string; "" = default
|
||||||
|
}
|
||||||
|
|
||||||
|
// HTTPConfig controls the monitoring API server.
|
||||||
|
//
|
||||||
|
// proxy.HTTPConfig{Enabled: true, Host: "127.0.0.1", Port: 8080, Restricted: true}
|
||||||
|
type HTTPConfig struct {
|
||||||
|
Enabled bool `json:"enabled"`
|
||||||
|
Host string `json:"host"`
|
||||||
|
Port uint16 `json:"port"`
|
||||||
|
AccessToken string `json:"access-token"` // Bearer token; "" = no auth
|
||||||
|
Restricted bool `json:"restricted"` // true = read-only GET only
|
||||||
|
}
|
||||||
|
|
||||||
|
// RateLimit controls per-IP connection rate limiting using a token bucket.
|
||||||
|
//
|
||||||
|
// proxy.RateLimit{MaxConnectionsPerMinute: 30, BanDurationSeconds: 300}
|
||||||
|
type RateLimit struct {
|
||||||
|
MaxConnectionsPerMinute int `json:"max-connections-per-minute"` // 0 = disabled
|
||||||
|
BanDurationSeconds int `json:"ban-duration"` // 0 = no ban
|
||||||
|
}
|
||||||
|
|
||||||
|
// WorkersMode controls which login field becomes the worker name.
|
||||||
|
type WorkersMode string
|
||||||
|
|
||||||
|
const (
|
||||||
|
WorkersByRigID WorkersMode = "rig-id" // rigid field, fallback to user
|
||||||
|
WorkersByUser WorkersMode = "user"
|
||||||
|
WorkersByPass WorkersMode = "password"
|
||||||
|
WorkersByAgent WorkersMode = "agent"
|
||||||
|
WorkersByIP WorkersMode = "ip"
|
||||||
|
WorkersDisabled WorkersMode = "false"
|
||||||
|
)
|
||||||
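The skeleton's `LoadConfig` is not yet implemented, but the struct tags above fully determine the JSON shape. A minimal sketch of the decoding step — using a hypothetical stand-in struct with two of the tagged fields, not the real `proxy.Config` — might look like:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// config is a stand-in for the two Config fields exercised here; the real
// struct carries many more fields with the same json tag convention.
type config struct {
	Mode       string `json:"mode"`
	CustomDiff uint64 `json:"custom-diff"`
}

// parseConfig decodes a JSON document into the config struct; the kebab-case
// keys in the file map onto Go fields via the struct tags.
func parseConfig(data []byte) (config, error) {
	var c config
	err := json.Unmarshal(data, &c)
	return c, err
}

func main() {
	raw := []byte(`{"mode": "nicehash", "custom-diff": 50000}`)
	c, err := parseConfig(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Mode, c.CustomDiff) // nicehash 50000
}
```

The kebab-case tags ("custom-diff", "rate-limit") mean field names never need to match JSON keys, so the Go side can stay idiomatic CamelCase.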
1475 docs/RFC.md Normal file
File diff suppressed because it is too large
41 events.go Normal file
@@ -0,0 +1,41 @@
package proxy

import "sync"

// EventBus dispatches proxy lifecycle events to registered listeners.
// Dispatch is synchronous on the calling goroutine. Listeners must not block.
//
//	bus := proxy.NewEventBus()
//	bus.Subscribe(proxy.EventLogin, customDiff.OnLogin)
//	bus.Subscribe(proxy.EventAccept, stats.OnAccept)
type EventBus struct {
	listeners map[EventType][]EventHandler
	mu        sync.RWMutex
}

// EventType identifies the proxy lifecycle event.
type EventType int

const (
	EventLogin  EventType = iota // miner completed login
	EventAccept                  // pool accepted a submitted share
	EventReject                  // pool rejected a share (or share expired)
	EventClose                   // miner TCP connection closed
)

// EventHandler is the callback signature for all event types.
type EventHandler func(Event)

// Event carries the data for any proxy lifecycle event.
// Fields not relevant to the event type are zero/nil.
//
//	bus.Dispatch(proxy.Event{Type: proxy.EventLogin, Miner: m})
type Event struct {
	Type    EventType
	Miner   *Miner // always set
	Job     *Job   // set for Accept and Reject events
	Diff    uint64 // effective difficulty of the share (Accept and Reject)
	Error   string // rejection reason (Reject only)
	Latency uint16 // pool response time in ms (Accept and Reject)
	Expired bool   // true if the share was accepted but against the previous job
}
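The `Subscribe`/`Dispatch` contract the struct implies can be sketched in a few lines — a minimal standalone version (type names local to the sketch, since the real methods are not yet implemented):

```go
package main

import (
	"fmt"
	"sync"
)

type EventType int

const (
	EventLogin EventType = iota
	EventAccept
)

type Event struct {
	Type EventType
	Diff uint64
}

type EventHandler func(Event)

// Bus mirrors the EventBus shape: Subscribe appends a handler under the
// event type; Dispatch invokes every handler for that type synchronously.
type Bus struct {
	listeners map[EventType][]EventHandler
	mu        sync.RWMutex
}

func NewBus() *Bus {
	return &Bus{listeners: make(map[EventType][]EventHandler)}
}

func (b *Bus) Subscribe(t EventType, h EventHandler) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.listeners[t] = append(b.listeners[t], h)
}

func (b *Bus) Dispatch(e Event) {
	b.mu.RLock()
	defer b.mu.RUnlock()
	for _, h := range b.listeners[e.Type] {
		h(e) // synchronous: handlers must not block
	}
}

func main() {
	bus := NewBus()
	var total uint64
	bus.Subscribe(EventAccept, func(e Event) { total += e.Diff })
	bus.Dispatch(Event{Type: EventAccept, Diff: 10000})
	bus.Dispatch(Event{Type: EventAccept, Diff: 5000})
	fmt.Println(total) // 15000
}
```

Synchronous dispatch keeps ordering deterministic and avoids channel plumbing, which is why the doc comment pushes the non-blocking requirement onto listeners.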
3 go.mod Normal file
@@ -0,0 +1,3 @@
module dappco.re/go/core/proxy

go 1.26.0
0 go.sum Normal file
19 job.go Normal file
@@ -0,0 +1,19 @@
package proxy

// Job holds the current work unit received from a pool. Immutable once assigned.
//
//	j := proxy.Job{
//		Blob:   "0707d5ef...b01",
//		JobID:  "4BiGm3/RgGQzgkTI",
//		Target: "b88d0600",
//		Algo:   "cn/r",
//	}
type Job struct {
	Blob     string // hex-encoded block template (160 hex chars = 80 bytes)
	JobID    string // pool-assigned identifier
	Target   string // 8-char hex little-endian uint32 difficulty target
	Algo     string // algorithm e.g. "cn/r", "rx/0"; "" if not negotiated
	Height   uint64 // block height (0 if pool did not provide)
	SeedHash string // RandomX seed hash hex (empty if not RandomX)
	ClientID string // pool session ID that issued this job (for stale detection)
}
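The compact `Target` encoding deserves a worked example. Under the conventional CryptoNote stratum interpretation (the skeleton does not implement this yet), the 8 hex chars are read as a little-endian uint32 and difficulty is `0xFFFFFFFF / target`:

```go
package main

import (
	"encoding/binary"
	"encoding/hex"
	"fmt"
)

// targetToDiff interprets an 8-char hex little-endian uint32 target and
// returns the corresponding difficulty (0xFFFFFFFF / target).
func targetToDiff(target string) (uint64, error) {
	raw, err := hex.DecodeString(target)
	if err != nil || len(raw) != 4 {
		return 0, fmt.Errorf("bad target %q", target)
	}
	t := binary.LittleEndian.Uint32(raw)
	if t == 0 {
		return 0, fmt.Errorf("zero target %q", target)
	}
	return uint64(0xFFFFFFFF) / uint64(t), nil
}

func main() {
	// "b88d0600" is the target from the Job example above:
	// bytes b8 8d 06 00 → uint32 0x00068db8 = 429496 → diff 10000.
	d, err := targetToDiff("b88d0600")
	if err != nil {
		panic(err)
	}
	fmt.Println(d) // 10000
}
```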
23 log/access.go Normal file
@@ -0,0 +1,23 @@
// Package log implements append-only access and share logging for the proxy.
//
//	al, result := log.NewAccessLog("/var/log/proxy-access.log")
//	bus.Subscribe(proxy.EventLogin, al.OnLogin)
//	bus.Subscribe(proxy.EventClose, al.OnClose)
package log

import "sync"

// AccessLog writes connection lifecycle lines to an append-only text file.
//
// Line format (connect): 2026-04-04T12:00:00Z CONNECT <ip> <user> <agent>
// Line format (close):   2026-04-04T12:00:00Z CLOSE <ip> <user> rx=<bytes> tx=<bytes>
//
//	al, result := log.NewAccessLog("/var/log/proxy-access.log")
//	bus.Subscribe(proxy.EventLogin, al.OnLogin)
//	bus.Subscribe(proxy.EventClose, al.OnClose)
type AccessLog struct {
	path string
	mu   sync.Mutex
	// f is opened append-only on first write; nil until first event.
	// Uses core.File for I/O abstraction.
}
18 log/share.go Normal file
@@ -0,0 +1,18 @@
package log

import "sync"

// ShareLog writes share result lines to an append-only text file.
//
// Line format (accept): 2026-04-04T12:00:00Z ACCEPT <user> diff=<diff> latency=<ms>ms
// Line format (reject): 2026-04-04T12:00:00Z REJECT <user> reason="<message>"
//
//	sl := log.NewShareLog("/var/log/proxy-shares.log")
//	bus.Subscribe(proxy.EventAccept, sl.OnAccept)
//	bus.Subscribe(proxy.EventReject, sl.OnReject)
type ShareLog struct {
	path string
	mu   sync.Mutex
	// f is opened append-only on first write; nil until first event.
	// Uses core.File for I/O abstraction.
}
52 miner.go Normal file
@@ -0,0 +1,52 @@
package proxy

import (
	"crypto/tls"
	"net"
	"sync"
	"time"
)

// MinerState represents the lifecycle state of one miner connection.
//
//	WaitLogin → WaitReady → Ready → Closing
type MinerState int

const (
	MinerStateWaitLogin MinerState = iota // connection open, awaiting login request (10s timeout)
	MinerStateWaitReady                   // login validated, awaiting upstream pool job (600s timeout)
	MinerStateReady                       // receiving jobs, accepting submit requests
	MinerStateClosing                     // TCP close in progress
)

// Miner is the state machine for one miner TCP connection.
//
//	// created by Server on accept:
//	m := proxy.NewMiner(conn, 3333, nil)
//	m.Start()
type Miner struct {
	id             int64  // monotonically increasing per-process; atomic assignment
	rpcID          string // UUID v4 sent to miner as session id
	state          MinerState
	extAlgo        bool   // miner sent algo list in login params
	extNH          bool   // NiceHash mode active (fixed byte splitting)
	ip             string // remote IP (without port, for logging)
	localPort      uint16
	user           string // login params.login (wallet address), custom diff suffix stripped
	password       string // login params.pass
	agent          string // login params.agent
	rigID          string // login params.rigid (optional extension)
	fixedByte      uint8  // NiceHash slot index (0-255)
	mapperID       int64  // which NonceMapper owns this miner; -1 = unassigned
	routeID        int64  // SimpleMapper ID in simple mode; -1 = unassigned
	customDiff     uint64 // 0 = use pool diff; non-zero = cap diff to this value
	diff           uint64 // last difficulty sent to this miner from the pool
	rx             uint64 // bytes received from miner
	tx             uint64 // bytes sent to miner
	connectedAt    time.Time
	lastActivityAt time.Time
	conn           net.Conn
	tlsConn        *tls.Conn   // nil if plain TCP
	sendMu         sync.Mutex  // serialises writes to conn
	buf            [16384]byte // per-miner send buffer; avoids per-write allocations
}
41 pool/client.go Normal file
@@ -0,0 +1,41 @@
// Package pool implements the outbound stratum pool client and failover strategy.
//
//	client := pool.NewStratumClient(poolCfg, listener)
//	client.Connect()
package pool

import (
	"crypto/tls"
	"net"
	"sync"

	"dappco.re/go/core/proxy"
)

// StratumClient is one outbound stratum TCP (optionally TLS) connection to a pool.
// The proxy presents itself to the pool as a standard stratum miner using the
// wallet address and password from PoolConfig.
//
//	client := pool.NewStratumClient(poolCfg, listener)
//	client.Connect()
type StratumClient struct {
	cfg       proxy.PoolConfig
	listener  StratumListener
	conn      net.Conn
	tlsConn   *tls.Conn // nil if plain TCP
	sessionID string    // pool-assigned session id from login reply
	seq       int64     // atomic JSON-RPC request id counter
	active    bool      // true once first job received
	sendMu    sync.Mutex
}

// StratumListener receives events from the pool connection.
type StratumListener interface {
	// OnJob is called when the pool pushes a new job notification or the login reply contains a job.
	OnJob(job proxy.Job)

	// OnResultAccepted is called when the pool accepts or rejects a submitted share.
	// sequence matches the value returned by Submit(). errorMessage is "" on accept.
	OnResultAccepted(sequence int64, accepted bool, errorMessage string)

	// OnDisconnect is called when the pool TCP connection closes for any reason.
	OnDisconnect()
}
37 pool/strategy.go Normal file
@@ -0,0 +1,37 @@
package pool

import (
	"sync"

	"dappco.re/go/core/proxy"
)

// FailoverStrategy wraps an ordered slice of PoolConfig entries.
// It connects to the first enabled pool and fails over in order on error.
// On reconnect it always retries from the primary first.
//
//	strategy := pool.NewFailoverStrategy(cfg.Pools, listener, cfg)
//	strategy.Connect()
type FailoverStrategy struct {
	pools    []proxy.PoolConfig
	current  int
	client   *StratumClient
	listener StratumListener
	cfg      *proxy.Config
	mu       sync.Mutex
}

// StrategyFactory creates a new FailoverStrategy for a given StratumListener.
// Used by splitters to create per-mapper strategies without coupling to Config.
//
//	factory := pool.NewStrategyFactory(cfg)
//	strategy := factory(listener) // each mapper calls this
type StrategyFactory func(listener StratumListener) Strategy

// Strategy is the interface the splitters use to submit shares and check pool state.
type Strategy interface {
	Connect()
	Submit(jobID, nonce, result, algo string) int64
	Disconnect()
	IsActive() bool
}
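The failover order described in the doc comment — first enabled pool wins, scan forward on error, restart from the primary on reconnect — reduces to a small index function. A sketch under those assumptions (names local to the sketch):

```go
package main

import "fmt"

type poolEntry struct {
	URL     string
	Enabled bool
}

// nextEnabled returns the index of the first enabled pool at or after from,
// scanning in order; -1 if none remain. A fresh reconnect calls this with
// from = 0 so the primary is always retried first.
func nextEnabled(pools []poolEntry, from int) int {
	for i := from; i < len(pools); i++ {
		if pools[i].Enabled {
			return i
		}
	}
	return -1
}

func main() {
	pools := []poolEntry{
		{URL: "primary:3333", Enabled: false}, // disabled primary is skipped
		{URL: "backup1:3333", Enabled: true},
		{URL: "backup2:3333", Enabled: true},
	}
	i := nextEnabled(pools, 0)
	fmt.Println(i, pools[i].URL) // 1 backup1:3333
	fmt.Println(nextEnabled(pools, i+1)) // 2
}
```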
122 proxy.go Normal file
@@ -0,0 +1,122 @@
// Package proxy is a CryptoNote stratum mining proxy library.
//
// It accepts miner connections over TCP (optionally TLS), splits the 32-bit nonce
// space across up to 256 simultaneous miners per upstream pool connection (NiceHash
// mode), and presents a small monitoring API.
//
// Full specification: docs/RFC.md
//
//	p, result := proxy.New(cfg)
//	if result.OK { p.Start() }
package proxy

import (
	"sync"
	"time"
)

// Proxy is the top-level orchestrator. It owns the server, splitter, stats, workers,
// event bus, tick goroutine, and optional HTTP API.
//
//	p, result := proxy.New(cfg)
//	if result.OK { p.Start() }
type Proxy struct {
	config   *Config
	splitter Splitter
	stats    *Stats
	workers  *Workers
	events   *EventBus
	servers  []*Server
	ticker   *time.Ticker
	watcher  *ConfigWatcher
	done     chan struct{}
}

// Splitter is the interface both NonceSplitter and SimpleSplitter satisfy.
type Splitter interface {
	// Connect establishes the first pool upstream connection.
	Connect()

	// OnLogin routes a newly authenticated miner to an upstream slot.
	OnLogin(event *LoginEvent)

	// OnSubmit routes a share submission to the correct upstream.
	OnSubmit(event *SubmitEvent)

	// OnClose releases the upstream slot for a disconnecting miner.
	OnClose(event *CloseEvent)

	// Tick is called every second for keepalive and GC housekeeping.
	Tick(ticks uint64)

	// GC runs every 60 ticks to reclaim disconnected upstream slots.
	GC()

	// Upstreams returns current upstream pool connection counts.
	Upstreams() UpstreamStats
}

// UpstreamStats carries pool connection state counts for monitoring.
type UpstreamStats struct {
	Active uint64 // connections currently receiving jobs
	Sleep  uint64 // idle connections (simple mode reuse pool)
	Error  uint64 // connections in error/reconnecting state
	Total  uint64 // Active + Sleep + Error
}

// LoginEvent is dispatched when a miner completes the login handshake.
type LoginEvent struct {
	Miner *Miner
}

// SubmitEvent is dispatched when a miner submits a share.
type SubmitEvent struct {
	Miner     *Miner
	JobID     string
	Nonce     string
	Result    string
	Algo      string
	RequestID int64
}

// CloseEvent is dispatched when a miner TCP connection closes.
type CloseEvent struct {
	Miner *Miner
}

// ConfigWatcher polls a config file for mtime changes and calls onChange on modification.
// Uses 1-second polling; does not require fsnotify.
//
//	w := proxy.NewConfigWatcher("config.json", func(cfg *proxy.Config) {
//		p.Reload(cfg)
//	})
//	w.Start()
type ConfigWatcher struct {
	path     string
	onChange func(*Config)
	lastMod  time.Time
	done     chan struct{}
}
// RateLimiter implements per-IP token bucket connection rate limiting.
// Each unique IP has a bucket initialised to MaxConnectionsPerMinute tokens.
// Each connection attempt consumes one token. Tokens refill at 1 per (60/max) seconds.
// An IP that empties its bucket is added to a ban list for BanDurationSeconds.
//
//	rl := proxy.NewRateLimiter(cfg.RateLimit)
//	if !rl.Allow("1.2.3.4") { conn.Close(); return }
type RateLimiter struct {
	cfg     RateLimit
	buckets map[string]*tokenBucket
	banned  map[string]time.Time
	mu      sync.Mutex
}

// tokenBucket is a simple token bucket for one IP.
type tokenBucket struct {
	tokens     int
	lastRefill time.Time
}
// CustomDiff resolves and applies per-miner difficulty overrides at login time.
// Resolution order: user-suffix (+N) > Config.CustomDiff > pool difficulty.
//
//	cd := proxy.NewCustomDiff(cfg.CustomDiff)
//	bus.Subscribe(proxy.EventLogin, cd.OnLogin)
type CustomDiff struct {
	globalDiff uint64
}
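The "+N" suffix convention and the resolution order can be sketched as two small helpers (both hypothetical — the skeleton leaves the parsing unimplemented):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// splitDiffSuffix strips a trailing "+N" difficulty suffix from a login
// string, returning the bare user and the requested difficulty
// (0 if absent or unparsable).
func splitDiffSuffix(login string) (user string, diff uint64) {
	i := strings.LastIndex(login, "+")
	if i < 0 {
		return login, 0
	}
	n, err := strconv.ParseUint(login[i+1:], 10, 64)
	if err != nil || n == 0 {
		return login, 0
	}
	return login[:i], n
}

// resolveDiff applies the resolution order documented on CustomDiff:
// user suffix > global custom diff > pool difficulty.
func resolveDiff(suffix, global, pool uint64) uint64 {
	if suffix != 0 {
		return suffix
	}
	if global != 0 {
		return global
	}
	return pool
}

func main() {
	user, d := splitDiffSuffix("WALLET+5000")
	fmt.Println(user, d)                       // WALLET 5000
	fmt.Println(resolveDiff(d, 20000, 100000)) // 5000
	fmt.Println(resolveDiff(0, 20000, 100000)) // 20000
}
```

Stripping the suffix before the login reaches the pool keeps the wallet address clean, which is why Miner.user is documented as "custom diff suffix stripped".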
19 server.go Normal file
@@ -0,0 +1,19 @@
package proxy

import (
	"crypto/tls"
	"net"
)

// Server listens on one BindAddr and creates a Miner for each accepted connection.
//
//	srv, result := proxy.NewServer(bind, tlsCfg, rateLimiter, onAccept)
//	srv.Start()
type Server struct {
	addr     BindAddr
	tlsCfg   *tls.Config // nil for plain TCP
	limiter  *RateLimiter
	onAccept func(net.Conn, uint16)
	listener net.Listener
	done     chan struct{}
}
32 splitter/nicehash/mapper.go Normal file
@@ -0,0 +1,32 @@
package nicehash

import (
	"sync"

	"dappco.re/go/core/proxy"
	"dappco.re/go/core/proxy/pool"
)

// NonceMapper manages one outbound pool connection and the 256-slot NonceStorage.
// It implements pool.StratumListener to receive job and result events from the pool.
//
//	m := nicehash.NewNonceMapper(id, cfg, strategy)
//	m.Start()
type NonceMapper struct {
	id        int64
	storage   *NonceStorage
	strategy  pool.Strategy           // manages pool client lifecycle and failover
	pending   map[int64]SubmitContext // sequence → {requestID, minerID}
	cfg       *proxy.Config
	active    bool // true once pool has sent at least one job
	suspended int  // > 0 when pool connection is in error/reconnecting
	mu        sync.Mutex
}

// SubmitContext tracks one in-flight share submission waiting for pool reply.
//
//	ctx := SubmitContext{RequestID: 42, MinerID: 7}
type SubmitContext struct {
	RequestID int64 // JSON-RPC id from the miner's submit request
	MinerID   int64 // miner that submitted
}
30 splitter/nicehash/splitter.go Normal file
@@ -0,0 +1,30 @@
// Package nicehash implements the NiceHash nonce-splitting mode.
//
// It partitions the 32-bit nonce space among miners by fixing one byte (byte 39
// of the blob). Each upstream pool connection (NonceMapper) owns a 256-slot table.
// Up to 256 miners share one pool connection. The 257th miner triggers creation
// of a new NonceMapper with a new pool connection.
//
//	s := nicehash.NewNonceSplitter(cfg, eventBus, strategyFactory)
//	s.Connect()
package nicehash

import (
	"sync"

	"dappco.re/go/core/proxy"
	"dappco.re/go/core/proxy/pool"
)

// NonceSplitter is the Splitter implementation for NiceHash mode.
// It manages a dynamic slice of NonceMapper upstreams, creating new ones on demand.
//
//	s := nicehash.NewNonceSplitter(cfg, eventBus, strategyFactory)
//	s.Connect()
type NonceSplitter struct {
	mappers         []*NonceMapper
	cfg             *proxy.Config
	events          *proxy.EventBus
	strategyFactory pool.StrategyFactory
	mu              sync.RWMutex
}
25 splitter/nicehash/storage.go Normal file
@@ -0,0 +1,25 @@
package nicehash

import (
	"sync"

	"dappco.re/go/core/proxy"
)

// NonceStorage is the 256-slot fixed-byte allocation table for one NonceMapper.
//
// Slot encoding:
//
//	 0       = free
//	+minerID = active miner
//	-minerID = disconnected miner (dead slot, cleared on next SetJob)
//
//	storage := nicehash.NewNonceStorage()
type NonceStorage struct {
	slots   [256]int64             // slot state per above encoding
	miners  map[int64]*proxy.Miner // minerID → Miner pointer for active miners
	job     proxy.Job              // current job from pool
	prevJob proxy.Job              // previous job (for stale submit validation)
	cursor  int                    // search starts here (round-robin allocation)
	mu      sync.Mutex
}
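The slot encoding plus the round-robin cursor implies a specific allocate/release shape. A minimal standalone sketch of that shape (method names hypothetical; the skeleton declares only the struct):

```go
package main

import "fmt"

// storage sketches the 256-slot table: 0 = free, +minerID = active,
// -minerID = dead until the next job clears it.
type storage struct {
	slots  [256]int64
	cursor int
}

// allocate scans round-robin from cursor for a free slot, claims it for
// minerID, and returns its index (the miner's fixed byte); -1 when all
// 256 slots are taken.
func (s *storage) allocate(minerID int64) int {
	for i := 0; i < 256; i++ {
		idx := (s.cursor + i) % 256
		if s.slots[idx] == 0 {
			s.slots[idx] = minerID
			s.cursor = (idx + 1) % 256
			return idx
		}
	}
	return -1
}

// release marks an active slot dead (negative) so its nonce range is not
// handed to another miner before the next job invalidates old work.
func (s *storage) release(idx int) {
	if s.slots[idx] > 0 {
		s.slots[idx] = -s.slots[idx]
	}
}

func main() {
	var s storage
	a := s.allocate(7)
	b := s.allocate(8)
	fmt.Println(a, b) // 0 1
	s.release(a)
	fmt.Println(s.slots[a]) // -7
}
```

Keeping dead slots negative rather than zero is what lets a share submitted just before disconnect still be attributed, while guaranteeing no reuse within the same job.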
21 splitter/simple/mapper.go Normal file
@@ -0,0 +1,21 @@
package simple

import (
	"time"

	"dappco.re/go/core/proxy"
	"dappco.re/go/core/proxy/pool"
)

// SimpleMapper holds one outbound pool connection and serves at most one active miner
// at a time. It becomes idle when the miner disconnects and may be reclaimed for the
// next login.
//
//	m := simple.NewSimpleMapper(id, strategy)
type SimpleMapper struct {
	id       int64
	miner    *proxy.Miner // nil when idle
	strategy pool.Strategy
	idleAt   time.Time // zero when active
	stopped  bool
}
28 splitter/simple/splitter.go Normal file
@@ -0,0 +1,28 @@
// Package simple implements the passthrough splitter mode.
//
// Simple mode creates one upstream pool connection per miner. When ReuseTimeout > 0,
// the upstream connection is held idle for that many seconds after the miner disconnects,
// allowing the next miner to inherit it and avoid reconnect latency.
//
//	s := simple.NewSimpleSplitter(cfg, eventBus, strategyFactory)
package simple

import (
	"sync"

	"dappco.re/go/core/proxy"
	"dappco.re/go/core/proxy/pool"
)

// SimpleSplitter is the Splitter implementation for simple (passthrough) mode.
//
//	s := simple.NewSimpleSplitter(cfg, eventBus, strategyFactory)
type SimpleSplitter struct {
	active  map[int64]*SimpleMapper // minerID → mapper
	idle    map[int64]*SimpleMapper // mapperID → mapper (reuse pool, keyed by mapper seq)
	cfg     *proxy.Config
	events  *proxy.EventBus
	factory pool.StrategyFactory
	mu      sync.Mutex
	seq     int64 // monotonic mapper sequence counter
}
60 stats.go Normal file
@@ -0,0 +1,60 @@
package proxy

import (
	"sync"
	"sync/atomic"
	"time"
)

// Stats tracks global proxy metrics. Hot-path counters are atomic. Hashrate windows
// use a ring buffer per window size, advanced by Tick().
//
//	s := proxy.NewStats()
//	bus.Subscribe(proxy.EventAccept, s.OnAccept)
//	bus.Subscribe(proxy.EventReject, s.OnReject)
type Stats struct {
	accepted    atomic.Uint64
	rejected    atomic.Uint64
	invalid     atomic.Uint64
	expired     atomic.Uint64
	hashes      atomic.Uint64 // cumulative sum of accepted share difficulties
	connections atomic.Uint64 // total TCP connections accepted (ever)
	maxMiners   atomic.Uint64 // peak concurrent miner count
	topDiff     [10]uint64    // top-10 accepted difficulties, sorted descending; guarded by mu
	latency     []uint16      // pool response latencies in ms; capped at 10000 samples; guarded by mu
	windows     [6]tickWindow // one per hashrate reporting period
	startTime   time.Time
	mu          sync.Mutex
}

// Hashrate window sizes in seconds. Index maps to Stats.windows and SummaryResponse.Hashrate.
const (
	HashrateWindow60s   = 0 // 1 minute
	HashrateWindow600s  = 1 // 10 minutes
	HashrateWindow3600s = 2 // 1 hour
	HashrateWindow12h   = 3 // 12 hours
	HashrateWindow24h   = 4 // 24 hours
	HashrateWindowAll   = 5 // all-time (single accumulator, no window)
)

// tickWindow is a fixed-capacity ring buffer of per-second difficulty sums.
type tickWindow struct {
	buckets []uint64
	pos     int
	size    int // window size in seconds = len(buckets)
}

// StatsSummary is the serialisable snapshot returned by Summary().
//
//	summary := stats.Summary()
type StatsSummary struct {
	Accepted   uint64     `json:"accepted"`
	Rejected   uint64     `json:"rejected"`
	Invalid    uint64     `json:"invalid"`
	Expired    uint64     `json:"expired"`
	Hashes     uint64     `json:"hashes_total"`
	AvgTime    uint32     `json:"avg_time"` // seconds per accepted share
	AvgLatency uint32     `json:"latency"`  // median pool response latency in ms
	Hashrate   [6]float64 `json:"hashrate"` // H/s per window (index = HashrateWindow* constants)
	TopDiff    [10]uint64 `json:"best"`
}
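The tickWindow mechanism — a ring of per-second difficulty sums, advanced once per Tick(), read out as sum/window-size — can be sketched standalone (names local to the sketch; the real struct keeps an explicit `size` field):

```go
package main

import "fmt"

// window sketches tickWindow: a ring of per-second difficulty sums.
// add accumulates into the current second; tick advances one second and
// clears the bucket it lands on (the oldest second falls out of the window).
type window struct {
	buckets []uint64
	pos     int
}

func newWindow(seconds int) *window {
	return &window{buckets: make([]uint64, seconds)}
}

func (w *window) add(diff uint64) { w.buckets[w.pos] += diff }

func (w *window) tick() {
	w.pos = (w.pos + 1) % len(w.buckets)
	w.buckets[w.pos] = 0 // overwrite the oldest second
}

// hashrate returns H/s over the window: total accepted difficulty divided
// by the window size in seconds.
func (w *window) hashrate() float64 {
	var sum uint64
	for _, b := range w.buckets {
		sum += b
	}
	return float64(sum) / float64(len(w.buckets))
}

func main() {
	w := newWindow(60)
	for i := 0; i < 60; i++ {
		w.tick()     // one second elapses
		w.add(10000) // one accepted diff-10000 share that second
	}
	fmt.Println(w.hashrate()) // 10000
}
```

The estimator rests on the usual pool approximation: each accepted share of difficulty D represents ~D hashes of work, so summed difficulty per second approximates H/s.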
33 worker.go Normal file
@@ -0,0 +1,33 @@
package proxy

import (
	"sync"
	"time"
)

// Workers maintains per-worker aggregate stats. Workers are identified by name,
// derived from the miner's login fields per WorkersMode.
//
//	w := proxy.NewWorkers(proxy.WorkersByRigID, bus)
type Workers struct {
	mode      WorkersMode
	entries   []WorkerRecord // ordered by first-seen (stable)
	nameIndex map[string]int // workerName → entries index
	idIndex   map[int64]int  // minerID → entries index
	mu        sync.RWMutex
}

// WorkerRecord is the per-identity aggregate.
//
//	hr60 := record.Hashrate(60)
type WorkerRecord struct {
	Name        string
	LastIP      string
	Connections uint64
	Accepted    uint64
	Rejected    uint64
	Invalid     uint64
	Hashes      uint64 // sum of accepted share difficulties
	LastHashAt  time.Time
	windows     [5]tickWindow // 60s, 600s, 3600s, 12h, 24h
}