Compare commits


45 commits
main ... dev

Author SHA1 Message Date
Snider
76488e0beb feat(wire): add alias entry readers + AX usage-example comments
Some checks are pending
Security Scan / security (push) Waiting to run
Test / Test (push) Waiting to run
Add readExtraAliasEntryOld (tag 20) and readExtraAliasEntry (tag 33)
wire readers so the node can deserialise blocks containing alias
registrations. Without these readers, mainnet sync would fail on any
block with an alias transaction. Three round-trip tests validate the
new readers.

Also apply AX-2 (comments as usage examples) across 12 files: add
concrete usage-example comments to exported functions in config/,
types/, wire/, chain/, difficulty/, and consensus/. Fix stale doc
in consensus/doc.go that incorrectly referenced *config.ChainConfig.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-05 08:46:54 +01:00
Snider
caf83faf39 refactor(types,consensus,chain): apply AX design principles across public API
Migrate types/asset.go from fmt.Errorf to coreerr.E() for consistency
with the rest of the types package. Add usage-example comments (AX-2) to
all key exported functions in consensus/, chain/, and types/ so agents
can see concrete calling patterns without reading implementation details.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-05 07:36:31 +01:00
Virgil
123047bebd refactor(wire): unify HF5 asset operation parsing
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 23:09:26 +00:00
Virgil
330ee2a146 refactor(consensus): expand fork-state naming
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 23:04:19 +00:00
Virgil
d5070cce15 build(crypto): select randomx sources by architecture
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 22:57:24 +00:00
Virgil
9c5b179375 feat(tui): render tx inputs explicitly
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 22:51:57 +00:00
Virgil
602c886400 fix(types): track decoded address prefixes
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 22:46:12 +00:00
Virgil
474fa2f07d chore(rfc): verify spec coverage
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 22:40:53 +00:00
Virgil
99720fff5e refactor(consensus): centralise validation fork state
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 22:38:20 +00:00
Virgil
b7428496bd refactor(p2p): reuse build-version helper in handshake validation
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 22:35:02 +00:00
Virgil
bdbefa7d4a refactor(tui): describe non-to-key outputs in explorer
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 22:31:10 +00:00
Virgil
bb941ebcc5 fix(consensus): tighten fork-era type gating
Reject Zarcanum inputs and outputs before HF4 instead of letting
unsupported combinations pass semantic validation.

Also restrict PoS miner stake inputs to txin_to_key pre-HF4 and
txin_zc post-HF4, with regression coverage for the affected paths.
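The era-gating rule for stake inputs can be sketched like this. It is illustrative only: the `inputKind` enum, the `hf4Height` constant, and the function name are stand-ins for the node's concrete wire types and fork table, not its real API.

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative input kinds; the real node switches on concrete wire types.
type inputKind int

const (
	inToKey inputKind = iota // transparent CryptoNote input (txin_to_key)
	inZC                     // Zarcanum confidential input (txin_zc)
)

const hf4Height = 2_555_000 // hypothetical HF4 activation height

// checkStakeInput mirrors the rule described in the commit message:
// PoS miner stakes must be txin_to_key before HF4 and txin_zc after.
func checkStakeInput(kind inputKind, height uint64) error {
	postHF4 := height >= hf4Height
	switch {
	case postHF4 && kind != inZC:
		return errors.New("post-HF4 stake input must be txin_zc")
	case !postHF4 && kind != inToKey:
		return errors.New("pre-HF4 stake input must be txin_to_key")
	}
	return nil
}

func main() {
	fmt.Println(checkStakeInput(inToKey, 100) == nil)      // accepted pre-HF4
	fmt.Println(checkStakeInput(inZC, 100) == nil)         // rejected pre-HF4
	fmt.Println(checkStakeInput(inZC, hf4Height) == nil)   // accepted post-HF4
}
```

Gating on the activation height rather than on a boolean flag keeps the check usable from both block validation and mempool admission.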

Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 22:18:53 +00:00
Virgil
95edac1d15 refactor(ax): centralise asset validation
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 22:09:17 +00:00
Virgil
8802b94ee5 test(core): align helpers with current APIs
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 22:04:52 +00:00
Virgil
0512861330 refactor(types): centralise transparent spend-key lookup
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 22:00:07 +00:00
Virgil
51d5ce9f14 fix(crypto): restore vendored boost compat
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 21:56:32 +00:00
Virgil
bc3e208691 fix(consensus): restore exported block version check
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 21:50:42 +00:00
Virgil
219aeae540 refactor(consensus): unexport block version helpers
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 21:48:36 +00:00
Virgil
2e92407233 fix(consensus): restore pre-hf1 miner tx version
2026-04-04 21:37:38 +00:00
Virgil
0993b081c7 test(chain): pin HTLC expiration boundary
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 21:30:53 +00:00
Virgil
cb43082d18 feat(crypto): add Zarcanum verification context API
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 21:28:12 +00:00
Virgil
2bebe323b8 fix(wire): reject unsupported output types
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 21:22:15 +00:00
Virgil
0ab8bfbd01 fix(consensus): validate miner tx versions by fork era
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 21:18:42 +00:00
Virgil
b34afa827f refactor(ax): make chain sync logging explicit
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 21:03:36 +00:00
Virgil
7e01df15fe fix(blockchain): handle sync setup errors explicitly
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 21:00:03 +00:00
Virgil
2f3f46e8c5 fix(consensus): count zarcanum miner outputs in rewards
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 20:56:56 +00:00
Virgil
92628cec35 refactor(types): centralise to-key target extraction
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 20:50:54 +00:00
Virgil
92cb5a8fbb feat(wallet): mark HTLC spends during sync
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 20:42:56 +00:00
Virgil
e25e3e73e7 refactor(wire): deduplicate output target encoding
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 20:39:45 +00:00
Virgil
c787990b9a refactor(ax): clarify ring and wallet names
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 20:36:16 +00:00
Virgil
3686a82b33 fix(consensus): validate HTLC tags in v2 signatures
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 20:32:39 +00:00
Virgil
d6f31dbe57 fix(cli): tighten chain command validation
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 20:22:48 +00:00
Virgil
41f2d52979 chore(spec): verify RFC coverage
No missing HF1/HF3/HF5 implementation gaps were found in the current codebase.

Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 20:18:13 +00:00
Virgil
050d530b29 feat(consensus): validate HF5 asset operations
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 20:14:54 +00:00
Virgil
c1b68523c6 fix(consensus): enforce tx versions across fork eras
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 20:04:24 +00:00
Virgil
be99c5e93a fix(wire): reject unsupported transaction variants
Add explicit errors for unknown input/output variants in the wire encoder and tighten transparent output validation in consensus. Cover the new failure paths with unit tests.
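An explicit-rejection encoder of the kind described can be sketched as below. The variant types and the `0x02` tag value are illustrative (0x02 is the conventional CryptoNote to-key tag, but the project's own constants are not shown here); the point is the `default` arm returning an error instead of silently encoding garbage.

```go
package main

import "fmt"

// Illustrative output variants; the real encoder switches on concrete wire types.
type txOutput interface{ isOutput() }

type outToKey struct{ key [32]byte }
type outUnknown struct{ tag byte }

func (outToKey) isOutput()   {}
func (outUnknown) isOutput() {}

// encodeOutputTag returns the wire tag for a known variant and an explicit
// error, rather than a zero value, for anything unrecognised.
func encodeOutputTag(o txOutput) (byte, error) {
	switch o.(type) {
	case outToKey:
		return 0x02, nil
	default:
		return 0, fmt.Errorf("wire: unsupported output variant %T", o)
	}
}

func main() {
	tag, err := encodeOutputTag(outToKey{})
	fmt.Println(tag, err)
	_, err = encodeOutputTag(outUnknown{tag: 0x99})
	fmt.Println(err != nil)
}
```

Failing fast in the encoder is what makes the new consensus-side checks testable: each unknown variant has a distinct, assertable error path.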

Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 19:56:50 +00:00
Virgil
d2caf68d94 fix(p2p): report malformed peer builds
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 19:49:03 +00:00
Virgil
ccdcfbaacf refactor(blockchain): clarify handshake sync naming
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 19:21:12 +00:00
Virgil
f1738527bc feat(chain): select HTLC ring keys by expiry
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 19:11:37 +00:00
Virgil
21c5d49ef9 fix(sync): validate peers and persist HTLC spends
Centralise handshake response validation so outbound sync checks both network identity and minimum peer build version through the p2p layer. Also record HTLC key images as spent during block processing, matching the HF1 input semantics and preventing those spends from being omitted from chain state.
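The centralised handshake check can be sketched roughly like this. All names and the numeric build-version comparison are assumptions for the sketch; the real p2p layer carries its own handshake payload types and version encoding.

```go
package main

import (
	"bytes"
	"fmt"
)

// handshakeInfo is an illustrative slice of a peer's handshake payload.
type handshakeInfo struct {
	NetworkID    []byte
	BuildVersion uint64 // assume a comparable numeric build number
	Height       uint64
}

// validateHandshake centralises the two outbound-sync checks described in
// the commit message: network identity and minimum peer build version.
func validateHandshake(h handshakeInfo, wantNetID []byte, minBuild uint64) error {
	if !bytes.Equal(h.NetworkID, wantNetID) {
		return fmt.Errorf("p2p: wrong network id %x", h.NetworkID)
	}
	if h.BuildVersion < minBuild {
		return fmt.Errorf("p2p: peer build %d below minimum %d", h.BuildVersion, minBuild)
	}
	return nil
}

func main() {
	netID := []byte{0x11, 0x01, 0x01, 0x01} // illustrative network magic
	peer := handshakeInfo{NetworkID: netID, BuildVersion: 7, Height: 1200}
	fmt.Println(validateHandshake(peer, netID, 5) == nil) // same net, new enough
	peer.BuildVersion = 3
	fmt.Println(validateHandshake(peer, netID, 5) == nil) // stale build
}
```

Routing both checks through one function means every caller that opens an outbound sync connection gets identical rejection behaviour.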

Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 19:01:07 +00:00
Virgil
0ba5bbe49c feat(consensus): enforce block version in chain sync
2026-04-04 18:56:36 +00:00
Virgil
01f4e5cd0a feat(chain): support multisig and HTLC ring outputs
Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 18:52:40 +00:00
Virgil
d3143d3f88 feat(consensus): enforce hf5 tx version and add asset descriptors
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 18:34:49 +00:00
Virgil
f7ee451fc4 fix(blockchain): enforce HF5 freeze and peer build gate
Co-Authored-By: Virgil <virgil@lethean.io>
2026-04-04 18:30:32 +00:00
Virgil
8e6dc326df feat(crypto): add generic double-Schnorr bridge
Expose generate/verify wrappers for generic_double_schnorr_sig and add a consensus helper for balance-proof checks.

Co-Authored-By: Charon <charon@lethean.io>
2026-04-04 18:26:33 +00:00
61 changed files with 3418 additions and 528 deletions

View file

@@ -19,11 +19,16 @@ type Chain struct {
 }

 // New creates a Chain backed by the given store.
+//
+//	s, _ := store.New("~/.lethean/chain/chain.db")
+//	blockchain := chain.New(s)
 func New(s *store.Store) *Chain {
 	return &Chain{store: s}
 }

 // Height returns the number of stored blocks (0 if empty).
+//
+//	h, err := blockchain.Height()
 func (c *Chain) Height() (uint64, error) {
 	n, err := c.store.Count(groupBlocks)
 	if err != nil {
@@ -34,6 +39,8 @@ func (c *Chain) Height() (uint64, error) {

 // TopBlock returns the highest stored block and its metadata.
 // Returns an error if the chain is empty.
+//
+//	blk, meta, err := blockchain.TopBlock()
 func (c *Chain) TopBlock() (*types.Block, *BlockMeta, error) {
 	h, err := c.Height()
 	if err != nil {

View file

@@ -76,12 +76,16 @@ func (c *Chain) nextDifficultyWith(height uint64, forks []config.HardFork, baseT
 // NextDifficulty computes the expected PoW difficulty for the block at the
 // given height. Pre-HF6 the target is 120s; post-HF6 it doubles to 240s.
+//
+//	diff, err := blockchain.NextDifficulty(nextHeight, config.MainnetForks)
 func (c *Chain) NextDifficulty(height uint64, forks []config.HardFork) (uint64, error) {
 	return c.nextDifficultyWith(height, forks, config.DifficultyPowTarget, config.DifficultyPowTargetHF6)
 }

 // NextPoSDifficulty computes the expected PoS difficulty for the block at the
 // given height. Pre-HF6 the target is 120s; post-HF6 it doubles to 240s.
+//
+//	diff, err := blockchain.NextPoSDifficulty(nextHeight, config.MainnetForks)
 func (c *Chain) NextPoSDifficulty(height uint64, forks []config.HardFork) (uint64, error) {
 	return c.nextDifficultyWith(height, forks, config.DifficultyPosTarget, config.DifficultyPosTargetHF6)
 }

View file

@@ -18,6 +18,8 @@ import (
 )

 // MarkSpent records a key image as spent at the given block height.
+//
+//	err := blockchain.MarkSpent(input.KeyImage, blockHeight)
 func (c *Chain) MarkSpent(ki types.KeyImage, height uint64) error {
 	if err := c.store.Set(groupSpentKeys, ki.String(), strconv.FormatUint(height, 10)); err != nil {
 		return coreerr.E("Chain.MarkSpent", fmt.Sprintf("chain: mark spent %s", ki), err)
@@ -26,6 +28,8 @@ func (c *Chain) MarkSpent(ki types.KeyImage, height uint64) error {
 }

 // IsSpent checks whether a key image has been spent.
+//
+//	spent, err := blockchain.IsSpent(keyImage)
 func (c *Chain) IsSpent(ki types.KeyImage) (bool, error) {
 	_, err := c.store.Get(groupSpentKeys, ki.String())
 	if errors.Is(err, store.ErrNotFound) {
@@ -92,6 +96,8 @@ func (c *Chain) GetOutput(amount uint64, gindex uint64) (types.Hash, uint32, err
 }

 // OutputCount returns the number of outputs indexed for the given amount.
+//
+//	count, err := blockchain.OutputCount(1_000_000_000_000)
 func (c *Chain) OutputCount(amount uint64) (uint64, error) {
 	n, err := c.store.Count(outputGroup(amount))
 	if err != nil {

View file

@@ -6,9 +6,7 @@
 package chain

 import (
-	"log"
-
-	coreerr "dappco.re/go/core/log"
+	corelog "dappco.re/go/core/log"

 	"dappco.re/go/core/blockchain/p2p"
 	levinpkg "dappco.re/go/core/p2p/node/levin"
@@ -28,6 +26,10 @@ func NewLevinP2PConn(conn *levinpkg.Connection, peerHeight uint64, localSync p2p
 	return &LevinP2PConn{conn: conn, peerHeight: peerHeight, localSync: localSync}
 }

+// PeerHeight returns the remote peer's advertised chain height,
+// obtained during the Levin handshake.
+//
+//	height := conn.PeerHeight()
 func (c *LevinP2PConn) PeerHeight() uint64 { return c.peerHeight }

 // handleMessage processes non-target messages received while waiting for
@@ -40,40 +42,44 @@ func (c *LevinP2PConn) handleMessage(hdr levinpkg.Header, data []byte) error {
 		resp := p2p.TimedSyncRequest{PayloadData: c.localSync}
 		payload, err := resp.Encode()
 		if err != nil {
-			return coreerr.E("LevinP2PConn.handleMessage", "encode timed_sync response", err)
+			return corelog.E("LevinP2PConn.handleMessage", "encode timed_sync response", err)
 		}
 		if err := c.conn.WriteResponse(p2p.CommandTimedSync, payload, levinpkg.ReturnOK); err != nil {
-			return coreerr.E("LevinP2PConn.handleMessage", "write timed_sync response", err)
+			return corelog.E("LevinP2PConn.handleMessage", "write timed_sync response", err)
 		}
-		log.Printf("p2p: responded to timed_sync")
+		corelog.Info("p2p responded to timed_sync")
 		return nil
 	}
 	// Silently skip other messages (new_block notifications, etc.)
 	return nil
 }

+// RequestChain sends NOTIFY_REQUEST_CHAIN with our sparse block ID list
+// and returns the start height and block IDs from the peer's response.
+//
+//	startHeight, blockIDs, err := conn.RequestChain(historyBytes)
 func (c *LevinP2PConn) RequestChain(blockIDs [][]byte) (uint64, [][]byte, error) {
 	req := p2p.RequestChain{BlockIDs: blockIDs}
 	payload, err := req.Encode()
 	if err != nil {
-		return 0, nil, coreerr.E("LevinP2PConn.RequestChain", "encode request_chain", err)
+		return 0, nil, corelog.E("LevinP2PConn.RequestChain", "encode request_chain", err)
 	}
 	// Send as notification (expectResponse=false) per CryptoNote protocol.
 	if err := c.conn.WritePacket(p2p.CommandRequestChain, payload, false); err != nil {
-		return 0, nil, coreerr.E("LevinP2PConn.RequestChain", "write request_chain", err)
+		return 0, nil, corelog.E("LevinP2PConn.RequestChain", "write request_chain", err)
 	}
 	// Read until we get RESPONSE_CHAIN_ENTRY.
 	for {
 		hdr, data, err := c.conn.ReadPacket()
 		if err != nil {
-			return 0, nil, coreerr.E("LevinP2PConn.RequestChain", "read response_chain", err)
+			return 0, nil, corelog.E("LevinP2PConn.RequestChain", "read response_chain", err)
 		}
 		if hdr.Command == p2p.CommandResponseChain {
 			var resp p2p.ResponseChainEntry
 			if err := resp.Decode(data); err != nil {
-				return 0, nil, coreerr.E("LevinP2PConn.RequestChain", "decode response_chain", err)
+				return 0, nil, corelog.E("LevinP2PConn.RequestChain", "decode response_chain", err)
 			}
 			return resp.StartHeight, resp.BlockIDs, nil
 		}
@@ -83,27 +89,31 @@ func (c *LevinP2PConn) RequestChain(blockIDs [][]byte) (uint64, [][]byte, error)
 	}
 }

+// RequestObjects sends NOTIFY_REQUEST_GET_OBJECTS for the given block
+// hashes and returns the raw block and transaction blobs.
+//
+//	entries, err := conn.RequestObjects(batchHashes)
 func (c *LevinP2PConn) RequestObjects(blockHashes [][]byte) ([]BlockBlobEntry, error) {
 	req := p2p.RequestGetObjects{Blocks: blockHashes}
 	payload, err := req.Encode()
 	if err != nil {
-		return nil, coreerr.E("LevinP2PConn.RequestObjects", "encode request_get_objects", err)
+		return nil, corelog.E("LevinP2PConn.RequestObjects", "encode request_get_objects", err)
 	}
 	if err := c.conn.WritePacket(p2p.CommandRequestObjects, payload, false); err != nil {
-		return nil, coreerr.E("LevinP2PConn.RequestObjects", "write request_get_objects", err)
+		return nil, corelog.E("LevinP2PConn.RequestObjects", "write request_get_objects", err)
 	}
 	// Read until we get RESPONSE_GET_OBJECTS.
 	for {
 		hdr, data, err := c.conn.ReadPacket()
 		if err != nil {
-			return nil, coreerr.E("LevinP2PConn.RequestObjects", "read response_get_objects", err)
+			return nil, corelog.E("LevinP2PConn.RequestObjects", "read response_get_objects", err)
 		}
 		if hdr.Command == p2p.CommandResponseObjects {
 			var resp p2p.ResponseGetObjects
 			if err := resp.Decode(data); err != nil {
-				return nil, coreerr.E("LevinP2PConn.RequestObjects", "decode response_get_objects", err)
+				return nil, corelog.E("LevinP2PConn.RequestObjects", "decode response_get_objects", err)
 			}
 			entries := make([]BlockBlobEntry, len(resp.Blocks))
 			for i, b := range resp.Blocks {

View file

@@ -8,9 +8,8 @@ package chain
 import (
 	"context"
 	"fmt"
-	"log"

-	coreerr "dappco.re/go/core/log"
+	corelog "dappco.re/go/core/log"
 )

 // P2PConnection abstracts the P2P communication needed for block sync.
@@ -46,7 +45,7 @@ func (c *Chain) P2PSync(ctx context.Context, conn P2PConnection, opts SyncOption
 	localHeight, err := c.Height()
 	if err != nil {
-		return coreerr.E("Chain.P2PSync", "p2p sync: get height", err)
+		return corelog.E("Chain.P2PSync", "p2p sync: get height", err)
 	}

 	peerHeight := conn.PeerHeight()
@@ -57,7 +56,7 @@ func (c *Chain) P2PSync(ctx context.Context, conn P2PConnection, opts SyncOption
 	// Build sparse chain history.
 	history, err := c.SparseChainHistory()
 	if err != nil {
-		return coreerr.E("Chain.P2PSync", "p2p sync: build history", err)
+		return corelog.E("Chain.P2PSync", "p2p sync: build history", err)
 	}

 	// Convert Hash to []byte for P2P.
@@ -71,14 +70,14 @@ func (c *Chain) P2PSync(ctx context.Context, conn P2PConnection, opts SyncOption
 	// Request chain entry.
 	startHeight, blockIDs, err := conn.RequestChain(historyBytes)
 	if err != nil {
-		return coreerr.E("Chain.P2PSync", "p2p sync: request chain", err)
+		return corelog.E("Chain.P2PSync", "p2p sync: request chain", err)
 	}
 	if len(blockIDs) == 0 {
 		return nil // nothing to sync
 	}

-	log.Printf("p2p sync: chain entry from height %d, %d block IDs", startHeight, len(blockIDs))
+	corelog.Info("p2p sync chain entry", "start_height", startHeight, "block_ids", len(blockIDs))

 	// The daemon returns the fork-point block as the first entry.
 	// Skip blocks we already have.
@@ -108,24 +107,24 @@ func (c *Chain) P2PSync(ctx context.Context, conn P2PConnection, opts SyncOption
 		entries, err := conn.RequestObjects(batch)
 		if err != nil {
-			return coreerr.E("Chain.P2PSync", "p2p sync: request objects", err)
+			return corelog.E("Chain.P2PSync", "p2p sync: request objects", err)
 		}

 		currentHeight := fetchStart + uint64(i)
 		for j, entry := range entries {
 			blockHeight := currentHeight + uint64(j)
 			if blockHeight > 0 && blockHeight%100 == 0 {
-				log.Printf("p2p sync: processing block %d", blockHeight)
+				corelog.Info("p2p sync processing block", "height", blockHeight)
 			}
 			blockDiff, err := c.NextDifficulty(blockHeight, opts.Forks)
 			if err != nil {
-				return coreerr.E("Chain.P2PSync", fmt.Sprintf("p2p sync: compute difficulty for block %d", blockHeight), err)
+				return corelog.E("Chain.P2PSync", fmt.Sprintf("p2p sync: compute difficulty for block %d", blockHeight), err)
 			}
 			if err := c.processBlockBlobs(entry.Block, entry.Txs,
 				blockHeight, blockDiff, opts); err != nil {
-				return coreerr.E("Chain.P2PSync", fmt.Sprintf("p2p sync: process block %d", blockHeight), err)
+				return corelog.E("Chain.P2PSync", fmt.Sprintf("p2p sync: process block %d", blockHeight), err)
 			}
 		}
 	}

View file

@@ -15,10 +15,12 @@ import (
 )

 // GetRingOutputs fetches the public keys for the given global output indices
-// at the specified amount. This implements the consensus.RingOutputsFn
-// signature for use during signature verification.
-func (c *Chain) GetRingOutputs(amount uint64, offsets []uint64) ([]types.PublicKey, error) {
-	pubs := make([]types.PublicKey, len(offsets))
+// at the specified spending height and amount. This implements the
+// consensus.RingOutputsFn signature for use during signature verification.
+//
+//	keys, err := blockchain.GetRingOutputs(blockHeight, inputAmount, []uint64{0, 5, 12, 30})
+func (c *Chain) GetRingOutputs(height, amount uint64, offsets []uint64) ([]types.PublicKey, error) {
+	publicKeys := make([]types.PublicKey, len(offsets))
 	for i, gidx := range offsets {
 		txHash, outNo, err := c.GetOutput(amount, gidx)
 		if err != nil {
@@ -36,16 +38,44 @@ func (c *Chain) GetRingOutputs(amount uint64, offsets []uint64) ([]types.PublicK
 		switch out := tx.Vout[outNo].(type) {
 		case types.TxOutputBare:
-			toKey, ok := out.Target.(types.TxOutToKey)
-			if !ok {
-				return nil, coreerr.E("Chain.GetRingOutputs", fmt.Sprintf("ring output %d: unsupported target type %T", i, out.Target), nil)
+			spendKey, err := ringOutputSpendKey(height, out.Target)
+			if err != nil {
+				return nil, coreerr.E("Chain.GetRingOutputs", fmt.Sprintf("ring output %d: %v", i, err), nil)
 			}
-			pubs[i] = toKey.Key
+			publicKeys[i] = spendKey
 		default:
 			return nil, coreerr.E("Chain.GetRingOutputs", fmt.Sprintf("ring output %d: unsupported output type %T", i, out), nil)
 		}
 	}
-	return pubs, nil
+	return publicKeys, nil
+}
+
+// ringOutputSpendKey extracts the spend key for a transparent output target.
+//
+// TxOutMultisig does not carry enough context here to select the exact spend
+// path, so we return the first listed key as a deterministic fallback.
+// TxOutHTLC selects redeem vs refund based on whether the spending height is
+// before or after the contract expiration. The refund path only opens after
+// the expiration height has passed.
+func ringOutputSpendKey(height uint64, target types.TxOutTarget) (types.PublicKey, error) {
+	if key, ok := (types.TxOutputBare{Target: target}).SpendKey(); ok {
+		return key, nil
+	}
+	switch t := target.(type) {
+	case types.TxOutMultisig:
+		if len(t.Keys) == 0 {
+			return types.PublicKey{}, coreerr.E("ringOutputSpendKey", "multisig target has no keys", nil)
+		}
+		return t.Keys[0], nil
+	case types.TxOutHTLC:
+		if height > t.Expiration {
+			return t.PKRefund, nil
+		}
+		return t.PKRedeem, nil
+	default:
+		return types.PublicKey{}, coreerr.E("ringOutputSpendKey", fmt.Sprintf("unsupported target type %T", target), nil)
+	}
 }

 // GetZCRingOutputs fetches ZC ring members (stealth address, amount commitment,
@@ -53,6 +83,8 @@ func (c *Chain) GetRingOutputs(amount uint64, offsets []uint64) ([]types.PublicK
 // consensus.ZCRingOutputsFn signature for post-HF4 CLSAG GGX verification.
 //
 // ZC outputs are indexed at amount=0 (confidential amounts).
+//
+//	members, err := blockchain.GetZCRingOutputs([]uint64{100, 200, 300})
 func (c *Chain) GetZCRingOutputs(offsets []uint64) ([]consensus.ZCRingMember, error) {
 	members := make([]consensus.ZCRingMember, len(offsets))
 	for i, gidx := range offsets {

View file

@@ -43,7 +43,7 @@ func TestGetRingOutputs_Good(t *testing.T) {
 		t.Fatalf("gidx: got %d, want 0", gidx)
 	}

-	pubs, err := c.GetRingOutputs(1000, []uint64{0})
+	pubs, err := c.GetRingOutputs(100, 1000, []uint64{0})
 	if err != nil {
 		t.Fatalf("GetRingOutputs: %v", err)
 	}
@@ -55,6 +55,134 @@ func TestGetRingOutputs_Good(t *testing.T) {
 	}
 }

+func TestGetRingOutputs_Good_Multisig(t *testing.T) {
+	c := newTestChain(t)
+
+	first := types.PublicKey{0x11, 0x22, 0x33}
+	second := types.PublicKey{0x44, 0x55, 0x66}
+	tx := types.Transaction{
+		Version: types.VersionPreHF4,
+		Vin:     []types.TxInput{types.TxInputGenesis{Height: 0}},
+		Vout: []types.TxOutput{
+			types.TxOutputBare{
+				Amount: 1000,
+				Target: types.TxOutMultisig{
+					MinimumSigs: 2,
+					Keys:        []types.PublicKey{first, second},
+				},
+			},
+		},
+		Extra:      wire.EncodeVarint(0),
+		Attachment: wire.EncodeVarint(0),
+	}
+	txHash := wire.TransactionHash(&tx)
+	if err := c.PutTransaction(txHash, &tx, &TxMeta{KeeperBlock: 0, GlobalOutputIndexes: []uint64{0}}); err != nil {
+		t.Fatalf("PutTransaction: %v", err)
+	}
+	if _, err := c.PutOutput(1000, txHash, 0); err != nil {
+		t.Fatalf("PutOutput: %v", err)
+	}
+
+	pubs, err := c.GetRingOutputs(100, 1000, []uint64{0})
+	if err != nil {
+		t.Fatalf("GetRingOutputs: %v", err)
+	}
+	if pubs[0] != first {
+		t.Errorf("pubs[0]: got %x, want %x", pubs[0], first)
+	}
+}
+
+func TestGetRingOutputs_Good_HTLC(t *testing.T) {
+	c := newTestChain(t)
+
+	redeem := types.PublicKey{0xAA, 0xBB, 0xCC}
+	refund := types.PublicKey{0xDD, 0xEE, 0xFF}
+	tx := types.Transaction{
+		Version: types.VersionPreHF4,
+		Vin:     []types.TxInput{types.TxInputGenesis{Height: 0}},
+		Vout: []types.TxOutput{
+			types.TxOutputBare{
+				Amount: 1000,
+				Target: types.TxOutHTLC{
+					HTLCHash:   types.Hash{0x01},
+					Flags:      0,
+					Expiration: 200,
+					PKRedeem:   redeem,
+					PKRefund:   refund,
+				},
+			},
+		},
+		Extra:      wire.EncodeVarint(0),
+		Attachment: wire.EncodeVarint(0),
+	}
+	txHash := wire.TransactionHash(&tx)
+	if err := c.PutTransaction(txHash, &tx, &TxMeta{KeeperBlock: 0, GlobalOutputIndexes: []uint64{0}}); err != nil {
+		t.Fatalf("PutTransaction: %v", err)
+	}
+	if _, err := c.PutOutput(1000, txHash, 0); err != nil {
+		t.Fatalf("PutOutput: %v", err)
+	}
+
+	pubs, err := c.GetRingOutputs(100, 1000, []uint64{0})
+	if err != nil {
+		t.Fatalf("GetRingOutputs: %v", err)
+	}
+	if pubs[0] != redeem {
+		t.Errorf("pubs[0]: got %x, want %x", pubs[0], redeem)
+	}
+
+	pubs, err = c.GetRingOutputs(250, 1000, []uint64{0})
+	if err != nil {
+		t.Fatalf("GetRingOutputs refund path: %v", err)
+	}
+	if pubs[0] != refund {
+		t.Errorf("pubs[0] refund path: got %x, want %x", pubs[0], refund)
+	}
+}
+
+func TestGetRingOutputs_Good_HTLCExpirationBoundary(t *testing.T) {
+	c := newTestChain(t)
+
+	redeem := types.PublicKey{0xAA, 0xBB, 0xCC}
refund := types.PublicKey{0xDD, 0xEE, 0xFF}
tx := types.Transaction{
Version: types.VersionPreHF4,
Vin: []types.TxInput{types.TxInputGenesis{Height: 0}},
Vout: []types.TxOutput{
types.TxOutputBare{
Amount: 1000,
Target: types.TxOutHTLC{
HTLCHash: types.Hash{0x01},
Flags: 0,
Expiration: 200,
PKRedeem: redeem,
PKRefund: refund,
},
},
},
Extra: wire.EncodeVarint(0),
Attachment: wire.EncodeVarint(0),
}
txHash := wire.TransactionHash(&tx)
if err := c.PutTransaction(txHash, &tx, &TxMeta{KeeperBlock: 0, GlobalOutputIndexes: []uint64{0}}); err != nil {
t.Fatalf("PutTransaction: %v", err)
}
if _, err := c.PutOutput(1000, txHash, 0); err != nil {
t.Fatalf("PutOutput: %v", err)
}
pubs, err := c.GetRingOutputs(200, 1000, []uint64{0})
if err != nil {
t.Fatalf("GetRingOutputs boundary path: %v", err)
}
if pubs[0] != redeem {
t.Errorf("pubs[0] boundary path: got %x, want %x", pubs[0], redeem)
}
}
func TestGetRingOutputs_Good_MultipleOutputs(t *testing.T) {
c := newTestChain(t)
@@ -97,7 +225,7 @@ func TestGetRingOutputs_Good_MultipleOutputs(t *testing.T) {
t.Fatalf("PutOutput(tx2): %v", err)
}
pubs, err := c.GetRingOutputs(500, 500, []uint64{0, 1})
if err != nil {
t.Fatalf("GetRingOutputs: %v", err)
}
@@ -115,7 +243,7 @@ func TestGetRingOutputs_Good_MultipleOutputs(t *testing.T) {
func TestGetRingOutputs_Bad_OutputNotFound(t *testing.T) {
c := newTestChain(t)
_, err := c.GetRingOutputs(1000, 1000, []uint64{99})
if err == nil {
t.Fatal("GetRingOutputs: expected error for missing output, got nil")
}
@@ -130,7 +258,7 @@ func TestGetRingOutputs_Bad_TxNotFound(t *testing.T) {
t.Fatalf("PutOutput: %v", err)
}
_, err := c.GetRingOutputs(1000, 1000, []uint64{0})
if err == nil {
t.Fatal("GetRingOutputs: expected error for missing tx, got nil")
}
@@ -159,7 +287,7 @@ func TestGetRingOutputs_Bad_OutputIndexOutOfRange(t *testing.T) {
t.Fatalf("PutOutput: %v", err)
}
_, err := c.GetRingOutputs(1000, 1000, []uint64{0})
if err == nil {
t.Fatal("GetRingOutputs: expected error for out-of-range index, got nil")
}
@@ -168,7 +296,7 @@ func TestGetRingOutputs_Bad_OutputIndexOutOfRange(t *testing.T) {
func TestGetRingOutputs_Good_EmptyOffsets(t *testing.T) {
c := newTestChain(t)
pubs, err := c.GetRingOutputs(1000, 1000, []uint64{})
if err != nil {
t.Fatalf("GetRingOutputs: %v", err)
}


@@ -41,6 +41,8 @@ type blockRecord struct {
}
// PutBlock stores a block and updates the block_index.
//
// err := blockchain.PutBlock(&blk, &chain.BlockMeta{Hash: blockHash, Height: 100})
func (c *Chain) PutBlock(b *types.Block, meta *BlockMeta) error {
var buf bytes.Buffer
enc := wire.NewEncoder(&buf)
@@ -72,6 +74,8 @@ func (c *Chain) PutBlock(b *types.Block, meta *BlockMeta) error {
}
// GetBlockByHeight retrieves a block by its height.
//
// blk, meta, err := blockchain.GetBlockByHeight(1000)
func (c *Chain) GetBlockByHeight(height uint64) (*types.Block, *BlockMeta, error) {
val, err := c.store.Get(groupBlocks, heightKey(height))
if err != nil {
@@ -84,6 +88,8 @@ func (c *Chain) GetBlockByHeight(height uint64) (*types.Block, *BlockMeta, error
}
// GetBlockByHash retrieves a block by its hash.
//
// blk, meta, err := blockchain.GetBlockByHash(blockHash)
func (c *Chain) GetBlockByHash(hash types.Hash) (*types.Block, *BlockMeta, error) {
heightStr, err := c.store.Get(groupBlockIndex, hash.String())
if err != nil {
@@ -106,6 +112,8 @@ type txRecord struct {
}
// PutTransaction stores a transaction with metadata.
//
// err := blockchain.PutTransaction(txHash, &tx, &chain.TxMeta{KeeperBlock: height})
func (c *Chain) PutTransaction(hash types.Hash, tx *types.Transaction, meta *TxMeta) error {
var buf bytes.Buffer
enc := wire.NewEncoder(&buf)
@@ -130,6 +138,8 @@ func (c *Chain) PutTransaction(hash types.Hash, tx *types.Transaction, meta *TxM
}
// GetTransaction retrieves a transaction by hash.
//
// tx, meta, err := blockchain.GetTransaction(txHash)
func (c *Chain) GetTransaction(hash types.Hash) (*types.Transaction, *TxMeta, error) {
val, err := c.store.Get(groupTx, hash.String())
if err != nil {
@@ -156,6 +166,8 @@ func (c *Chain) GetTransaction(hash types.Hash) (*types.Transaction, *TxMeta, er
}
// HasTransaction checks whether a transaction exists in the store.
//
// if blockchain.HasTransaction(txHash) { /* already stored */ }
func (c *Chain) HasTransaction(hash types.Hash) bool {
_, err := c.store.Get(groupTx, hash.String())
return err == nil


@@ -11,11 +11,10 @@ import (
"encoding/hex"
"encoding/json"
"fmt"
"regexp"
"strconv"
corelog "dappco.re/go/core/log"
"dappco.re/go/core/blockchain/config"
"dappco.re/go/core/blockchain/consensus"
@@ -52,12 +51,12 @@ func DefaultSyncOptions() SyncOptions {
func (c *Chain) Sync(ctx context.Context, client *rpc.Client, opts SyncOptions) error {
localHeight, err := c.Height()
if err != nil {
return corelog.E("Chain.Sync", "sync: get local height", err)
}
remoteHeight, err := client.GetHeight()
if err != nil {
return corelog.E("Chain.Sync", "sync: get remote height", err)
}
for localHeight < remoteHeight {
@@ -72,22 +71,22 @@ func (c *Chain) Sync(ctx context.Context, client *rpc.Client, opts SyncOptions)
blocks, err := client.GetBlocksDetails(localHeight, batch)
if err != nil {
return corelog.E("Chain.Sync", fmt.Sprintf("sync: fetch blocks at %d", localHeight), err)
}
if err := resolveBlockBlobs(blocks, client); err != nil {
return corelog.E("Chain.Sync", fmt.Sprintf("sync: resolve blobs at %d", localHeight), err)
}
for _, bd := range blocks {
if err := c.processBlock(bd, opts); err != nil {
return corelog.E("Chain.Sync", fmt.Sprintf("sync: process block %d", bd.Height), err)
}
}
localHeight, err = c.Height()
if err != nil {
return corelog.E("Chain.Sync", "sync: get height after batch", err)
}
}
@@ -96,12 +95,12 @@ func (c *Chain) Sync(ctx context.Context, client *rpc.Client, opts SyncOptions)
func (c *Chain) processBlock(bd rpc.BlockDetails, opts SyncOptions) error {
if bd.Height > 0 && bd.Height%100 == 0 {
corelog.Info("sync processing block", "height", bd.Height)
}
blockBlob, err := hex.DecodeString(bd.Blob)
if err != nil {
return corelog.E("Chain.processBlock", "decode block hex", err)
}
// Build a set of the block's regular tx hashes for lookup.
@@ -111,7 +110,7 @@ func (c *Chain) processBlock(bd rpc.BlockDetails, opts SyncOptions) error {
dec := wire.NewDecoder(bytes.NewReader(blockBlob))
blk := wire.DecodeBlock(dec)
if err := dec.Err(); err != nil {
return corelog.E("Chain.processBlock", "decode block for tx hashes", err)
}
regularTxs := make(map[string]struct{}, len(blk.TxHashes))
@@ -126,7 +125,7 @@ func (c *Chain) processBlock(bd rpc.BlockDetails, opts SyncOptions) error {
}
txBlobBytes, err := hex.DecodeString(txInfo.Blob)
if err != nil {
return corelog.E("Chain.processBlock", fmt.Sprintf("decode tx hex %s", txInfo.ID), err)
}
txBlobs = append(txBlobs, txBlobBytes)
}
@@ -137,10 +136,10 @@ func (c *Chain) processBlock(bd rpc.BlockDetails, opts SyncOptions) error {
computedHash := wire.BlockHash(&blk)
daemonHash, err := types.HashFromHex(bd.ID)
if err != nil {
return corelog.E("Chain.processBlock", "parse daemon block hash", err)
}
if computedHash != daemonHash {
return corelog.E("Chain.processBlock", fmt.Sprintf("block hash mismatch: computed %s, daemon says %s", computedHash, daemonHash), nil)
}
return c.processBlockBlobs(blockBlob, txBlobs, bd.Height, diff, opts)
@@ -155,7 +154,7 @@ func (c *Chain) processBlockBlobs(blockBlob []byte, txBlobs [][]byte,
dec := wire.NewDecoder(bytes.NewReader(blockBlob))
blk := wire.DecodeBlock(dec)
if err := dec.Err(); err != nil {
return corelog.E("Chain.processBlockBlobs", "decode block wire", err)
}
// Compute the block hash.
@@ -165,21 +164,21 @@ func (c *Chain) processBlockBlobs(blockBlob []byte, txBlobs [][]byte,
if height == 0 {
genesisHash, err := types.HashFromHex(GenesisHash)
if err != nil {
return corelog.E("Chain.processBlockBlobs", "parse genesis hash", err)
}
if blockHash != genesisHash {
return corelog.E("Chain.processBlockBlobs", fmt.Sprintf("genesis hash %s does not match expected %s", blockHash, GenesisHash), nil)
}
}
// Validate header.
if err := c.ValidateHeader(&blk, height, opts.Forks); err != nil {
return err
}
// Validate miner transaction structure.
if err := consensus.ValidateMinerTx(&blk.MinerTx, height, opts.Forks); err != nil {
return corelog.E("Chain.processBlockBlobs", "validate miner tx", err)
}
// Calculate cumulative difficulty.
@@ -187,7 +186,7 @@ func (c *Chain) processBlockBlobs(blockBlob []byte, txBlobs [][]byte,
if height > 0 {
_, prevMeta, err := c.TopBlock()
if err != nil {
return corelog.E("Chain.processBlockBlobs", "get prev block meta", err)
}
cumulDiff = prevMeta.CumulativeDiff + difficulty
} else {
@@ -198,13 +197,13 @@ func (c *Chain) processBlockBlobs(blockBlob []byte, txBlobs [][]byte,
minerTxHash := wire.TransactionHash(&blk.MinerTx)
minerGindexes, err := c.indexOutputs(minerTxHash, &blk.MinerTx)
if err != nil {
return corelog.E("Chain.processBlockBlobs", "index miner tx outputs", err)
}
if err := c.PutTransaction(minerTxHash, &blk.MinerTx, &TxMeta{
KeeperBlock: height,
GlobalOutputIndexes: minerGindexes,
}); err != nil {
return corelog.E("Chain.processBlockBlobs", "store miner tx", err)
}
// Process regular transactions from txBlobs.
@@ -212,27 +211,27 @@ func (c *Chain) processBlockBlobs(blockBlob []byte, txBlobs [][]byte,
txDec := wire.NewDecoder(bytes.NewReader(txBlobData))
tx := wire.DecodeTransaction(txDec)
if err := txDec.Err(); err != nil {
return corelog.E("Chain.processBlockBlobs", fmt.Sprintf("decode tx wire [%d]", i), err)
}
txHash := wire.TransactionHash(&tx)
// Validate transaction semantics, including the HF5 freeze window.
if err := consensus.ValidateTransactionInBlock(&tx, txBlobData, opts.Forks, height); err != nil {
return corelog.E("Chain.processBlockBlobs", fmt.Sprintf("validate tx %s", txHash), err)
}
// Optionally verify signatures using the chain's output index.
if opts.VerifySignatures {
if err := consensus.VerifyTransactionSignatures(&tx, opts.Forks, height, c.GetRingOutputs, c.GetZCRingOutputs); err != nil {
return corelog.E("Chain.processBlockBlobs", fmt.Sprintf("verify tx signatures %s", txHash), err)
}
}
// Index outputs.
gindexes, err := c.indexOutputs(txHash, &tx)
if err != nil {
return corelog.E("Chain.processBlockBlobs", fmt.Sprintf("index tx outputs %s", txHash), err)
}
// Mark key images as spent.
@@ -240,11 +239,15 @@ func (c *Chain) processBlockBlobs(blockBlob []byte, txBlobs [][]byte,
switch inp := vin.(type) {
case types.TxInputToKey:
if err := c.MarkSpent(inp.KeyImage, height); err != nil {
return corelog.E("Chain.processBlockBlobs", fmt.Sprintf("mark spent %s", inp.KeyImage), err)
}
case types.TxInputHTLC:
if err := c.MarkSpent(inp.KeyImage, height); err != nil {
return corelog.E("Chain.processBlockBlobs", fmt.Sprintf("mark spent %s", inp.KeyImage), err)
}
case types.TxInputZC:
if err := c.MarkSpent(inp.KeyImage, height); err != nil {
return corelog.E("Chain.processBlockBlobs", fmt.Sprintf("mark spent %s", inp.KeyImage), err)
}
}
}
@@ -254,7 +257,7 @@ func (c *Chain) processBlockBlobs(blockBlob []byte, txBlobs [][]byte,
KeeperBlock: height,
GlobalOutputIndexes: gindexes,
}); err != nil {
return corelog.E("Chain.processBlockBlobs", fmt.Sprintf("store tx %s", txHash), err)
}
}
@@ -329,13 +332,13 @@ func resolveBlockBlobs(blocks []rpc.BlockDetails, client *rpc.Client) error {
// Batch-fetch tx blobs.
txHexes, missed, err := client.GetTransactions(allHashes)
if err != nil {
return corelog.E("resolveBlockBlobs", "fetch tx blobs", err)
}
if len(missed) > 0 {
return corelog.E("resolveBlockBlobs", fmt.Sprintf("daemon missed %d tx(es): %v", len(missed), missed), nil)
}
if len(txHexes) != len(allHashes) {
return corelog.E("resolveBlockBlobs", fmt.Sprintf("expected %d tx blobs, got %d", len(allHashes), len(txHexes)), nil)
}
// Index fetched blobs by hash.
@@ -363,16 +366,16 @@ func resolveBlockBlobs(blocks []rpc.BlockDetails, client *rpc.Client) error {
// Parse header from object_in_json.
hdr, err := parseBlockHeader(bd.ObjectInJSON)
if err != nil {
return corelog.E("resolveBlockBlobs", fmt.Sprintf("block %d: parse header", bd.Height), err)
}
// Miner tx blob is transactions_details[0].
if len(bd.Transactions) == 0 {
return corelog.E("resolveBlockBlobs", fmt.Sprintf("block %d has no transactions_details", bd.Height), nil)
}
minerTxBlob, err := hex.DecodeString(bd.Transactions[0].Blob)
if err != nil {
return corelog.E("resolveBlockBlobs", fmt.Sprintf("block %d: decode miner tx hex", bd.Height), err)
}
// Collect regular tx hashes.
@@ -380,7 +383,7 @@ func resolveBlockBlobs(blocks []rpc.BlockDetails, client *rpc.Client) error {
for _, txInfo := range bd.Transactions[1:] {
h, err := types.HashFromHex(txInfo.ID)
if err != nil {
return corelog.E("resolveBlockBlobs", fmt.Sprintf("block %d: parse tx hash %s", bd.Height, txInfo.ID), err)
}
txHashes = append(txHashes, h)
}
@@ -410,17 +413,17 @@ var aggregatedRE = regexp.MustCompile(`"AGGREGATED"\s*:\s*\{([^}]+)\}`)
func parseBlockHeader(objectInJSON string) (*types.BlockHeader, error) {
m := aggregatedRE.FindStringSubmatch(objectInJSON)
if m == nil {
return nil, corelog.E("parseBlockHeader", "AGGREGATED section not found in object_in_json", nil)
}
var hj blockHeaderJSON
if err := json.Unmarshal([]byte("{"+m[1]+"}"), &hj); err != nil {
return nil, corelog.E("parseBlockHeader", "unmarshal AGGREGATED", err)
}
prevID, err := types.HashFromHex(hj.PrevID)
if err != nil {
return nil, corelog.E("parseBlockHeader", "parse prev_id", err)
}
return &types.BlockHeader{


@@ -12,6 +12,7 @@ import (
"encoding/json"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/stretchr/testify/assert"
@@ -735,6 +736,87 @@ func TestSync_Bad_InvalidBlockBlob(t *testing.T) {
}
}
func TestSync_Bad_PreHardforkFreeze(t *testing.T) {
genesisBlob, genesisHash := makeGenesisBlockBlob()
genesisBytes, err := hex.DecodeString(genesisBlob)
if err != nil {
t.Fatalf("decode genesis blob: %v", err)
}
regularTx := types.Transaction{
Version: 1,
Vin: []types.TxInput{
types.TxInputToKey{
Amount: 1000000000000,
KeyOffsets: []types.TxOutRef{{
Tag: types.RefTypeGlobalIndex,
GlobalIndex: 0,
}},
KeyImage: types.KeyImage{0xaa, 0xbb, 0xcc},
EtcDetails: wire.EncodeVarint(0),
},
},
Vout: []types.TxOutput{
types.TxOutputBare{
Amount: 900000000000,
Target: types.TxOutToKey{Key: types.PublicKey{0x02}},
},
},
Extra: wire.EncodeVarint(0),
Attachment: wire.EncodeVarint(0),
}
var txBuf bytes.Buffer
txEnc := wire.NewEncoder(&txBuf)
wire.EncodeTransaction(txEnc, &regularTx)
regularTxBlob := txBuf.Bytes()
regularTxHash := wire.TransactionHash(&regularTx)
minerTx1 := testCoinbaseTx(1)
block1 := types.Block{
BlockHeader: types.BlockHeader{
MajorVersion: 1,
Nonce: 42,
PrevID: genesisHash,
Timestamp: 1770897720,
},
MinerTx: minerTx1,
TxHashes: []types.Hash{regularTxHash},
}
var blk1Buf bytes.Buffer
blk1Enc := wire.NewEncoder(&blk1Buf)
wire.EncodeBlock(blk1Enc, &block1)
block1Blob := blk1Buf.Bytes()
orig := GenesisHash
GenesisHash = genesisHash.String()
t.Cleanup(func() { GenesisHash = orig })
s, _ := store.New(":memory:")
defer s.Close()
c := New(s)
opts := SyncOptions{
Forks: []config.HardFork{
{Version: config.HF0Initial, Height: 0, Mandatory: true},
{Version: config.HF5, Height: 2, Mandatory: true},
},
}
if err := c.processBlockBlobs(genesisBytes, nil, 0, 1, opts); err != nil {
t.Fatalf("process genesis: %v", err)
}
err = c.processBlockBlobs(block1Blob, [][]byte{regularTxBlob}, 1, 100, opts)
if err == nil {
t.Fatal("expected freeze rejection, got nil")
}
if !strings.Contains(err.Error(), "freeze") {
t.Fatalf("expected freeze error, got %v", err)
}
}
// testCoinbaseTxV2 creates a v2 (post-HF4) coinbase transaction with Zarcanum outputs.
func testCoinbaseTxV2(height uint64) types.Transaction {
return types.Transaction{
@@ -937,3 +1019,148 @@ func TestSync_Good_ZCInputKeyImageMarkedSpent(t *testing.T) {
t.Error("IsSpent(zc_key_image): got false, want true")
}
}
func TestSync_Good_HTLCInputKeyImageMarkedSpent(t *testing.T) {
genesisBlob, genesisHash := makeGenesisBlockBlob()
htlcKeyImage := types.KeyImage{0x44, 0x55, 0x66}
htlcTx := types.Transaction{
Version: 1,
Vin: []types.TxInput{
types.TxInputHTLC{
HTLCOrigin: "contract-1",
Amount: 1000000000000,
KeyOffsets: []types.TxOutRef{{
Tag: types.RefTypeGlobalIndex,
GlobalIndex: 0,
}},
KeyImage: htlcKeyImage,
EtcDetails: wire.EncodeVarint(0),
},
},
Vout: []types.TxOutput{
types.TxOutputBare{
Amount: 900000000000,
Target: types.TxOutToKey{Key: types.PublicKey{0x21}},
},
},
Extra: wire.EncodeVarint(0),
Attachment: wire.EncodeVarint(0),
}
var txBuf bytes.Buffer
txEnc := wire.NewEncoder(&txBuf)
wire.EncodeTransaction(txEnc, &htlcTx)
htlcTxBlob := hex.EncodeToString(txBuf.Bytes())
htlcTxHash := wire.TransactionHash(&htlcTx)
minerTx1 := testCoinbaseTx(1)
block1 := types.Block{
BlockHeader: types.BlockHeader{
MajorVersion: 1,
Nonce: 42,
PrevID: genesisHash,
Timestamp: 1770897720,
},
MinerTx: minerTx1,
TxHashes: []types.Hash{htlcTxHash},
}
var blk1Buf bytes.Buffer
blk1Enc := wire.NewEncoder(&blk1Buf)
wire.EncodeBlock(blk1Enc, &block1)
block1Blob := hex.EncodeToString(blk1Buf.Bytes())
block1Hash := wire.BlockHash(&block1)
orig := GenesisHash
GenesisHash = genesisHash.String()
t.Cleanup(func() { GenesisHash = orig })
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
if r.URL.Path == "/getheight" {
json.NewEncoder(w).Encode(map[string]any{
"height": 2,
"status": "OK",
})
return
}
var req struct {
Method string `json:"method"`
Params json.RawMessage `json:"params"`
}
json.NewDecoder(r.Body).Decode(&req)
switch req.Method {
case "get_blocks_details":
blocks := []map[string]any{
{
"height": uint64(0),
"timestamp": uint64(1770897600),
"base_reward": uint64(1000000000000),
"id": genesisHash.String(),
"difficulty": "1",
"type": uint64(1),
"blob": genesisBlob,
"transactions_details": []any{},
},
{
"height": uint64(1),
"timestamp": uint64(1770897720),
"base_reward": uint64(1000000),
"id": block1Hash.String(),
"difficulty": "100",
"type": uint64(1),
"blob": block1Blob,
"transactions_details": []map[string]any{
{
"id": htlcTxHash.String(),
"blob": htlcTxBlob,
"fee": uint64(100000000000),
},
},
},
}
result := map[string]any{
"blocks": blocks,
"status": "OK",
}
resultBytes, _ := json.Marshal(result)
json.NewEncoder(w).Encode(map[string]any{
"jsonrpc": "2.0",
"id": "0",
"result": json.RawMessage(resultBytes),
})
}
}))
defer srv.Close()
s, _ := store.New(":memory:")
defer s.Close()
c := New(s)
client := rpc.NewClient(srv.URL)
opts := SyncOptions{
VerifySignatures: false,
Forks: []config.HardFork{
{Version: config.HF1, Height: 0, Mandatory: true},
{Version: config.HF2, Height: 0, Mandatory: true},
},
}
err := c.Sync(context.Background(), client, opts)
if err != nil {
t.Fatalf("Sync: %v", err)
}
spent, err := c.IsSpent(htlcKeyImage)
if err != nil {
t.Fatalf("IsSpent: %v", err)
}
if !spent {
t.Error("IsSpent(htlc_key_image): got false, want true")
}
}


@@ -12,13 +12,14 @@ import (
coreerr "dappco.re/go/core/log"
"dappco.re/go/core/blockchain/config"
"dappco.re/go/core/blockchain/consensus"
"dappco.re/go/core/blockchain/types"
"dappco.re/go/core/blockchain/wire"
)
// ValidateHeader checks a block header before storage.
// expectedHeight is the height at which this block would be stored.
func (c *Chain) ValidateHeader(b *types.Block, expectedHeight uint64, forks []config.HardFork) error {
currentHeight, err := c.Height()
if err != nil {
return coreerr.E("Chain.ValidateHeader", "validate: get height", err)
@@ -34,6 +35,9 @@ func (c *Chain) ValidateHeader(b *types.Block, expectedHeight uint64) error {
if !b.PrevID.IsZero() {
return coreerr.E("Chain.ValidateHeader", "validate: genesis block has non-zero prev_id", nil)
}
if err := consensus.CheckBlockVersion(b.MajorVersion, forks, expectedHeight); err != nil {
return coreerr.E("Chain.ValidateHeader", "validate: block version", err)
}
return nil
}
@@ -46,6 +50,11 @@ func (c *Chain) ValidateHeader(b *types.Block, expectedHeight uint64) error {
return coreerr.E("Chain.ValidateHeader", fmt.Sprintf("validate: prev_id %s does not match top block %s", b.PrevID, topMeta.Hash), nil)
}
// Block major version check.
if err := consensus.CheckBlockVersion(b.MajorVersion, forks, expectedHeight); err != nil {
return coreerr.E("Chain.ValidateHeader", "validate: block version", err)
}
// Block size check.
var buf bytes.Buffer
enc := wire.NewEncoder(&buf)


@@ -8,8 +8,9 @@ package chain
 import (
 	"testing"
-	store "dappco.re/go/core/store"
+	"dappco.re/go/core/blockchain/config"
 	"dappco.re/go/core/blockchain/types"
+	store "dappco.re/go/core/store"
 )
 func TestValidateHeader_Good_Genesis(t *testing.T) {
@@ -19,13 +20,13 @@ func TestValidateHeader_Good_Genesis(t *testing.T) {
 	blk := &types.Block{
 		BlockHeader: types.BlockHeader{
-			MajorVersion: 1,
+			MajorVersion: 0,
 			Timestamp:    1770897600,
 		},
 		MinerTx: testCoinbaseTx(0),
 	}
-	err := c.ValidateHeader(blk, 0)
+	err := c.ValidateHeader(blk, 0, config.MainnetForks)
 	if err != nil {
 		t.Fatalf("ValidateHeader genesis: %v", err)
 	}
@@ -38,7 +39,7 @@ func TestValidateHeader_Good_Sequential(t *testing.T) {
 	// Store block 0.
 	blk0 := &types.Block{
-		BlockHeader: types.BlockHeader{MajorVersion: 1, Timestamp: 1770897600},
+		BlockHeader: types.BlockHeader{MajorVersion: 0, Timestamp: 1770897600},
 		MinerTx:     testCoinbaseTx(0),
 	}
 	hash0 := types.Hash{0x01}
@@ -47,14 +48,14 @@ func TestValidateHeader_Good_Sequential(t *testing.T) {
 	// Validate block 1.
 	blk1 := &types.Block{
 		BlockHeader: types.BlockHeader{
-			MajorVersion: 1,
+			MajorVersion: 0,
 			Timestamp:    1770897720,
 			PrevID:       hash0,
 		},
 		MinerTx: testCoinbaseTx(1),
 	}
-	err := c.ValidateHeader(blk1, 1)
+	err := c.ValidateHeader(blk1, 1, config.MainnetForks)
 	if err != nil {
 		t.Fatalf("ValidateHeader block 1: %v", err)
 	}
@@ -66,21 +67,21 @@ func TestValidateHeader_Bad_WrongPrevID(t *testing.T) {
 	c := New(s)
 	blk0 := &types.Block{
-		BlockHeader: types.BlockHeader{MajorVersion: 1, Timestamp: 1770897600},
+		BlockHeader: types.BlockHeader{MajorVersion: 0, Timestamp: 1770897600},
 		MinerTx:     testCoinbaseTx(0),
 	}
 	c.PutBlock(blk0, &BlockMeta{Hash: types.Hash{0x01}, Height: 0})
 	blk1 := &types.Block{
 		BlockHeader: types.BlockHeader{
-			MajorVersion: 1,
+			MajorVersion: 0,
 			Timestamp:    1770897720,
 			PrevID:       types.Hash{0xFF}, // wrong
 		},
 		MinerTx: testCoinbaseTx(1),
 	}
-	err := c.ValidateHeader(blk1, 1)
+	err := c.ValidateHeader(blk1, 1, config.MainnetForks)
 	if err == nil {
 		t.Fatal("expected error for wrong prev_id")
 	}
@@ -92,12 +93,12 @@ func TestValidateHeader_Bad_WrongHeight(t *testing.T) {
 	c := New(s)
 	blk := &types.Block{
-		BlockHeader: types.BlockHeader{MajorVersion: 1, Timestamp: 1770897600},
+		BlockHeader: types.BlockHeader{MajorVersion: 0, Timestamp: 1770897600},
 		MinerTx:     testCoinbaseTx(0),
 	}
 	// Chain is empty (height 0), but we pass expectedHeight=5.
-	err := c.ValidateHeader(blk, 5)
+	err := c.ValidateHeader(blk, 5, config.MainnetForks)
 	if err == nil {
 		t.Fatal("expected error for wrong height")
 	}
@@ -110,14 +111,33 @@ func TestValidateHeader_Bad_GenesisNonZeroPrev(t *testing.T) {
 	blk := &types.Block{
 		BlockHeader: types.BlockHeader{
-			MajorVersion: 1,
+			MajorVersion: 0,
 			PrevID:       types.Hash{0xFF}, // genesis must have zero prev_id
 		},
 		MinerTx: testCoinbaseTx(0),
 	}
-	err := c.ValidateHeader(blk, 0)
+	err := c.ValidateHeader(blk, 0, config.MainnetForks)
 	if err == nil {
 		t.Fatal("expected error for genesis with non-zero prev_id")
 	}
 }
func TestValidateHeader_Bad_WrongVersion(t *testing.T) {
s, _ := store.New(":memory:")
defer s.Close()
c := New(s)
blk := &types.Block{
BlockHeader: types.BlockHeader{
MajorVersion: 1,
Timestamp: 1770897600,
},
MinerTx: testCoinbaseTx(0),
}
err := c.ValidateHeader(blk, 0, config.MainnetForks)
if err == nil {
t.Fatal("expected error for wrong block version")
}
}


@@ -6,6 +6,8 @@
 package blockchain
 import (
+	"fmt"
+	"net"
 	"os"
 	"path/filepath"
@@ -28,9 +30,9 @@ const defaultChainSeed = "seeds.lthn.io:36942"
 // command path documents the node features directly.
 func AddChainCommands(root *cobra.Command) {
 	var (
-		dataDir string
-		seed    string
-		testnet bool
+		chainDataDir    string
+		seedPeerAddress string
+		useTestnet      bool
 	)
 	chainCmd := &cobra.Command{
@@ -39,26 +41,26 @@ func AddChainCommands(root *cobra.Command) {
 		Long:  "Manage the Lethean blockchain — sync, explore, and mine.",
 	}
-	chainCmd.PersistentFlags().StringVar(&dataDir, "data-dir", defaultChainDataDirPath(), "blockchain data directory")
-	chainCmd.PersistentFlags().StringVar(&seed, "seed", defaultChainSeed, "seed peer address (host:port)")
-	chainCmd.PersistentFlags().BoolVar(&testnet, "testnet", false, "use testnet")
+	chainCmd.PersistentFlags().StringVar(&chainDataDir, "data-dir", defaultChainDataDirPath(), "blockchain data directory")
+	chainCmd.PersistentFlags().StringVar(&seedPeerAddress, "seed", defaultChainSeed, "seed peer address (host:port)")
+	chainCmd.PersistentFlags().BoolVar(&useTestnet, "testnet", false, "use testnet")
 	chainCmd.AddCommand(
-		newChainExplorerCommand(&dataDir, &seed, &testnet),
-		newChainSyncCommand(&dataDir, &seed, &testnet),
+		newChainExplorerCommand(&chainDataDir, &seedPeerAddress, &useTestnet),
+		newChainSyncCommand(&chainDataDir, &seedPeerAddress, &useTestnet),
 	)
 	root.AddCommand(chainCmd)
 }
-func chainConfigForSeed(testnet bool, seed string) (config.ChainConfig, []config.HardFork, string) {
-	if testnet {
-		if seed == defaultChainSeed {
-			seed = "localhost:46942"
-		}
-		return config.Testnet, config.TestnetForks, seed
-	}
-	return config.Mainnet, config.MainnetForks, seed
-}
+func chainConfigForSeed(useTestnet bool, seedPeerAddress string) (config.ChainConfig, []config.HardFork, string) {
+	if useTestnet {
+		if seedPeerAddress == defaultChainSeed {
+			seedPeerAddress = "localhost:46942"
+		}
+		return config.Testnet, config.TestnetForks, seedPeerAddress
+	}
+	return config.Mainnet, config.MainnetForks, seedPeerAddress
+}
 func defaultChainDataDirPath() string {
@@ -75,3 +77,16 @@ func ensureChainDataDirExists(dataDir string) error {
 	}
 	return nil
 }
func validateChainOptions(chainDataDir, seedPeerAddress string) error {
if chainDataDir == "" {
return coreerr.E("validateChainOptions", "data dir is required", nil)
}
if seedPeerAddress == "" {
return coreerr.E("validateChainOptions", "seed is required", nil)
}
if _, _, err := net.SplitHostPort(seedPeerAddress); err != nil {
return coreerr.E("validateChainOptions", fmt.Sprintf("seed %q must be host:port", seedPeerAddress), err)
}
return nil
}
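The new validateChainOptions helper relies on net.SplitHostPort for the host:port check. A self-contained sketch of just that check, with plain fmt errors standing in for the node's coreerr.E wrapper (validateSeed is a hypothetical name for this excerpt, not a function in the repo):

```go
package main

import (
	"fmt"
	"net"
)

// validateSeed mirrors the seed checks in validateChainOptions:
// non-empty, and parseable as host:port by the standard library.
func validateSeed(seed string) error {
	if seed == "" {
		return fmt.Errorf("seed is required")
	}
	// net.SplitHostPort rejects bare hostnames, missing ports,
	// and malformed bracketed IPv6 literals.
	if _, _, err := net.SplitHostPort(seed); err != nil {
		return fmt.Errorf("seed %q must be host:port: %w", seed, err)
	}
	return nil
}

func main() {
	fmt.Println(validateSeed("seeds.lthn.io:36942") == nil) // true
	fmt.Println(validateSeed("seeds.lthn.io") == nil)       // false: missing port
}
```

Delegating to net.SplitHostPort also gets IPv6 literals like `[::1]:36942` right for free, which a hand-rolled `strings.Contains(seed, ":")` check would not.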


@@ -46,3 +46,68 @@ func TestAddChainCommands_Good_PersistentFlags(t *testing.T) {
 	assert.NotNil(t, chainCmd.PersistentFlags().Lookup("seed"))
 	assert.NotNil(t, chainCmd.PersistentFlags().Lookup("testnet"))
 }
func TestValidateChainOptions_Good(t *testing.T) {
err := validateChainOptions("/tmp/lethean", "seed.example:36942")
require.NoError(t, err)
}
func TestValidateChainOptions_Bad(t *testing.T) {
tests := []struct {
name string
dataDir string
seed string
want string
}{
{name: "missing data dir", dataDir: "", seed: "seed.example:36942", want: "data dir is required"},
{name: "missing seed", dataDir: "/tmp/lethean", seed: "", want: "seed is required"},
{name: "malformed seed", dataDir: "/tmp/lethean", seed: "seed.example", want: "must be host:port"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := validateChainOptions(tt.dataDir, tt.seed)
require.Error(t, err)
assert.Contains(t, err.Error(), tt.want)
})
}
}
func TestChainSyncCommand_BadMutuallyExclusiveFlags(t *testing.T) {
dataDir := t.TempDir()
seed := "seed.example:36942"
testnet := false
cmd := newChainSyncCommand(&dataDir, &seed, &testnet)
cmd.SetArgs([]string{"--daemon", "--stop"})
err := cmd.Execute()
require.Error(t, err)
assert.Contains(t, err.Error(), "cannot be combined")
}
func TestChainSyncCommand_BadArgsRejected(t *testing.T) {
dataDir := t.TempDir()
seed := "seed.example:36942"
testnet := false
cmd := newChainSyncCommand(&dataDir, &seed, &testnet)
cmd.SetArgs([]string{"extra"})
err := cmd.Execute()
require.Error(t, err)
assert.Contains(t, err.Error(), "unknown command")
}
func TestChainExplorerCommand_BadSeedRejected(t *testing.T) {
dataDir := t.TempDir()
seed := "bad-seed"
testnet := false
cmd := newChainExplorerCommand(&dataDir, &seed, &testnet)
cmd.SetArgs(nil)
err := cmd.Execute()
require.Error(t, err)
assert.Contains(t, err.Error(), "must be host:port")
}


@@ -71,6 +71,8 @@ var TestnetForks = []HardFork{
 //
 // A fork with Height=0 is active from genesis (height 0).
 // A fork with Height=N is active at heights > N.
+//
+// version := config.VersionAtHeight(config.MainnetForks, 15000) // returns HF2
 func VersionAtHeight(forks []HardFork, height uint64) uint8 {
 	var version uint8
 	for _, hf := range forks {
@@ -85,6 +87,8 @@ func VersionAtHeight(forks []HardFork, height uint64) uint8 {
 // IsHardForkActive reports whether the specified hardfork version is active
 // at the given block height.
+//
+// if config.IsHardForkActive(config.MainnetForks, config.HF4Zarcanum, height) { /* Zarcanum rules apply */ }
 func IsHardForkActive(forks []HardFork, version uint8, height uint64) bool {
 	for _, hf := range forks {
 		if hf.Version == version {
@@ -97,6 +101,8 @@ func IsHardForkActive(forks []HardFork, version uint8, height uint64) bool {
 // HardforkActivationHeight returns the activation height for the given
 // hardfork version. The fork becomes active at heights strictly greater
 // than the returned value. Returns (0, false) if the version is not found.
+//
+// height, ok := config.HardforkActivationHeight(config.TestnetForks, config.HF5)
 func HardforkActivationHeight(forks []HardFork, version uint8) (uint64, bool) {
 	for _, hf := range forks {
 		if hf.Version == version {
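The three config helpers above share one activation rule: Height=0 means active from genesis, Height=N means active only at heights strictly greater than N. A standalone sketch of VersionAtHeight under that rule; the HardFork field names match the diff, but the example schedule heights are illustrative, not the real mainnet values:

```go
package main

import "fmt"

// HardFork mirrors the schedule entry used by config.VersionAtHeight;
// the example heights below are assumptions for illustration.
type HardFork struct {
	Version uint8
	Height  uint64
}

// versionAtHeight walks the schedule in order and keeps the last fork
// that is active: Height=0 from genesis, Height=N strictly above N.
func versionAtHeight(forks []HardFork, height uint64) uint8 {
	var version uint8
	for _, hf := range forks {
		if hf.Height == 0 || height > hf.Height {
			version = hf.Version
		}
	}
	return version
}

func main() {
	// Hypothetical schedule: HF1 from genesis, HF2 strictly above 10080.
	forks := []HardFork{{Version: 1, Height: 0}, {Version: 2, Height: 10080}}
	fmt.Println(versionAtHeight(forks, 100))   // 1
	fmt.Println(versionAtHeight(forks, 10080)) // 1: the activation height itself is still pre-fork
	fmt.Println(versionAtHeight(forks, 10081)) // 2
}
```

The strict `>` is the easy off-by-one to get wrong here: the block at the activation height itself still carries the old version.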

consensus/balance.go (new file, +20 lines)

@@ -0,0 +1,20 @@
// Copyright (c) 2017-2026 Lethean (https://lt.hn)
//
// Licensed under the European Union Public Licence (EUPL) version 1.2.
// SPDX-License-Identifier: EUPL-1.2
package consensus
import "dappco.re/go/core/blockchain/crypto"
// VerifyBalanceProof verifies a generic double-Schnorr proof against the
// provided public points.
//
// The caller is responsible for constructing the balance context point(s)
// from transaction inputs, outputs, fees, and any asset-operation terms.
// This helper only performs the cryptographic check.
//
// ok := consensus.VerifyBalanceProof(txHash, false, pointA, pointB, proofBytes)
func VerifyBalanceProof(hash [32]byte, aIsX bool, a [32]byte, b [32]byte, proof []byte) bool {
return crypto.VerifyDoubleSchnorr(hash, aIsX, a, b, proof)
}


@@ -23,6 +23,8 @@ func IsPoS(flags uint8) bool {
 // CheckTimestamp validates a block's timestamp against future limits and
 // the median of recent timestamps.
+//
+// consensus.CheckTimestamp(blk.Timestamp, blk.Flags, uint64(time.Now().Unix()), recentTimestamps)
 func CheckTimestamp(blockTimestamp uint64, flags uint8, adjustedTime uint64, recentTimestamps []uint64) error {
 	// Future time limit.
 	limit := config.BlockFutureTimeLimit
@@ -61,10 +63,38 @@ func medianTimestamp(timestamps []uint64) uint64 {
 	return sorted[n/2]
 }
+func expectedMinerTxVersion(forks []config.HardFork, height uint64) uint64 {
+	switch {
+	case config.IsHardForkActive(forks, config.HF5, height):
+		return types.VersionPostHF5
+	case config.IsHardForkActive(forks, config.HF4Zarcanum, height):
+		return types.VersionPostHF4
+	case config.IsHardForkActive(forks, config.HF1, height):
+		return types.VersionPreHF4
+	default:
+		return types.VersionInitial
+	}
+}
 // ValidateMinerTx checks the structure of a coinbase (miner) transaction.
 // For PoW blocks: exactly 1 input (TxInputGenesis). For PoS blocks: exactly
 // 2 inputs (TxInputGenesis + stake input).
+//
+// consensus.ValidateMinerTx(&blk.MinerTx, height, config.MainnetForks)
 func ValidateMinerTx(tx *types.Transaction, height uint64, forks []config.HardFork) error {
+	expectedVersion := expectedMinerTxVersion(forks, height)
+	if tx.Version != expectedVersion {
+		return coreerr.E("ValidateMinerTx", fmt.Sprintf("version %d invalid at height %d (expected %d)",
+			tx.Version, height, expectedVersion), ErrMinerTxVersion)
+	}
+	if tx.Version >= types.VersionPostHF5 {
+		activeHardForkVersion := config.VersionAtHeight(forks, height)
+		if tx.HardforkID != activeHardForkVersion {
+			return coreerr.E("ValidateMinerTx", fmt.Sprintf("hardfork id %d does not match active fork %d at height %d",
+				tx.HardforkID, activeHardForkVersion, height), ErrMinerTxVersion)
+		}
+	}
 	if len(tx.Vin) == 0 {
 		return coreerr.E("ValidateMinerTx", "no inputs", ErrMinerTxInputs)
 	}
@@ -86,12 +116,13 @@ func ValidateMinerTx(tx *types.Transaction, height uint64, forks []config.HardFo
 		switch tx.Vin[1].(type) {
 		case types.TxInputToKey:
 			// Pre-HF4 PoS.
-		default:
-			hf4Active := config.IsHardForkActive(forks, config.HF4Zarcanum, height)
-			if !hf4Active {
+		case types.TxInputZC:
+			hardForkFourActive := config.IsHardForkActive(forks, config.HF4Zarcanum, height)
+			if !hardForkFourActive {
 				return coreerr.E("ValidateMinerTx", "invalid PoS stake input type", ErrMinerTxInputs)
 			}
-			// Post-HF4: accept ZC inputs.
+		default:
+			return coreerr.E("ValidateMinerTx", "invalid PoS stake input type", ErrMinerTxInputs)
 		}
 	} else {
 		return coreerr.E("ValidateMinerTx", fmt.Sprintf("%d inputs (expected 1 or 2)", len(tx.Vin)), ErrMinerTxInputs)
@@ -102,6 +133,11 @@ func ValidateMinerTx(tx *types.Transaction, height uint64, forks []config.HardFo
 // ValidateBlockReward checks that the miner transaction outputs do not
 // exceed the expected reward (base reward + fees for pre-HF4).
+//
+// Post-HF4 miner transactions may use Zarcanum outputs, so the validator
+// sums both transparent amounts and the encoded Zarcanum amount field.
+//
+// consensus.ValidateBlockReward(&blk.MinerTx, height, blockSize, medianSize, totalFees, config.MainnetForks)
 func ValidateBlockReward(minerTx *types.Transaction, height, blockSize, medianSize, totalFees uint64, forks []config.HardFork) error {
 	base := BaseReward(height)
 	reward, err := BlockReward(base, blockSize, medianSize)
@@ -109,14 +145,17 @@ func ValidateBlockReward(minerTx *types.Transaction, height, blockSize, medianSi
 		return err
 	}
-	hf4Active := config.IsHardForkActive(forks, config.HF4Zarcanum, height)
-	expected := MinerReward(reward, totalFees, hf4Active)
+	hardForkFourActive := config.IsHardForkActive(forks, config.HF4Zarcanum, height)
+	expected := MinerReward(reward, totalFees, hardForkFourActive)
 	// Sum miner tx outputs.
 	var outputSum uint64
 	for _, vout := range minerTx.Vout {
-		if bare, ok := vout.(types.TxOutputBare); ok {
-			outputSum += bare.Amount
+		switch out := vout.(type) {
+		case types.TxOutputBare:
+			outputSum += out.Amount
+		case types.TxOutputZarcanum:
+			outputSum += out.EncryptedAmount
 		}
 	}
@@ -149,24 +188,34 @@ func expectedBlockMajorVersion(forks []config.HardFork, height uint64) uint8 {
 // checkBlockVersion validates that the block's major version matches
 // what is expected at the given height in the fork schedule.
-func checkBlockVersion(blk *types.Block, forks []config.HardFork, height uint64) error {
+func checkBlockVersion(majorVersion uint8, forks []config.HardFork, height uint64) error {
 	expected := expectedBlockMajorVersion(forks, height)
-	if blk.MajorVersion != expected {
-		return coreerr.E("checkBlockVersion", fmt.Sprintf("got %d, want %d at height %d",
-			blk.MajorVersion, expected, height), ErrBlockMajorVersion)
+	if majorVersion != expected {
+		return coreerr.E("CheckBlockVersion", fmt.Sprintf("got %d, want %d at height %d",
+			majorVersion, expected, height), ErrBlockMajorVersion)
 	}
 	return nil
 }
+// CheckBlockVersion validates that the block's major version matches the
+// expected version for the supplied height and fork schedule.
+//
+// consensus.CheckBlockVersion(blk.MajorVersion, config.MainnetForks, height)
+func CheckBlockVersion(majorVersion uint8, forks []config.HardFork, height uint64) error {
+	return checkBlockVersion(majorVersion, forks, height)
+}
 // ValidateBlock performs full consensus validation on a block. It checks
 // the block version, timestamp, miner transaction structure, and reward.
 // Transaction semantic validation for regular transactions should be done
 // separately via ValidateTransaction for each tx in the block.
+//
+// consensus.ValidateBlock(&blk, height, blockSize, medianSize, totalFees, adjustedTime, recentTimestamps, config.MainnetForks)
 func ValidateBlock(blk *types.Block, height, blockSize, medianSize, totalFees, adjustedTime uint64,
 	recentTimestamps []uint64, forks []config.HardFork) error {
 	// Block major version check.
-	if err := checkBlockVersion(blk, forks, height); err != nil {
+	if err := checkBlockVersion(blk.MajorVersion, forks, height); err != nil {
 		return err
 	}
@@ -199,6 +248,8 @@ func ValidateBlock(blk *types.Block, height, blockSize, medianSize, totalFees, a
 //
 // Returns false if the fork version is not found or if the activation height
 // is too low for a meaningful freeze window.
+//
+// if consensus.IsPreHardforkFreeze(config.TestnetForks, config.HF5, height) { /* reject non-coinbase txs */ }
 func IsPreHardforkFreeze(forks []config.HardFork, version uint8, height uint64) bool {
 	activationHeight, ok := config.HardforkActivationHeight(forks, version)
 	if !ok {
@@ -223,6 +274,8 @@ func IsPreHardforkFreeze(forks []config.HardFork, version uint8, height uint64)
 // pre-hardfork freeze check. This wraps ValidateTransaction with an
 // additional check: during the freeze window before HF5, non-coinbase
 // transactions are rejected.
+//
+// consensus.ValidateTransactionInBlock(&tx, txBlob, config.MainnetForks, blockHeight)
 func ValidateTransactionInBlock(tx *types.Transaction, txBlob []byte, forks []config.HardFork, height uint64) error {
 	// Pre-hardfork freeze: reject non-coinbase transactions in the freeze window.
 	if !isCoinbase(tx) && IsPreHardforkFreeze(forks, config.HF5, height) {
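The new expectedMinerTxVersion helper picks the highest fork tier that is active at a height, falling through HF5, then HF4, then HF1. A standalone sketch of that ladder; the version constants 0 through 3 and the fork heights below are assumptions mirroring the diff, not the real types/ and config/ values:

```go
package main

import "fmt"

// Assumed transaction-version constants, mirroring types.VersionInitial
// through types.VersionPostHF5 in the diff.
const (
	versionInitial uint64 = 0
	versionPreHF4  uint64 = 1
	versionPostHF4 uint64 = 2
	versionPostHF5 uint64 = 3
)

type hardFork struct {
	version uint8
	height  uint64
}

// isActive applies the schedule rule: Height=0 means active from genesis,
// otherwise active only strictly above the activation height.
func isActive(forks []hardFork, version uint8, height uint64) bool {
	for _, hf := range forks {
		if hf.version == version {
			return hf.height == 0 || height > hf.height
		}
	}
	return false
}

// expectedMinerTxVersion mirrors the switch in the diff: the highest
// active fork wins, falling through HF5 -> HF4 -> HF1 -> initial.
func expectedMinerTxVersion(forks []hardFork, height uint64) uint64 {
	switch {
	case isActive(forks, 5, height):
		return versionPostHF5
	case isActive(forks, 4, height):
		return versionPostHF4
	case isActive(forks, 1, height):
		return versionPreHF4
	default:
		return versionInitial
	}
}

func main() {
	// Hypothetical testnet-like schedule: HF1 from genesis, HF4 above 100, HF5 above 200.
	testnet := []hardFork{{1, 0}, {4, 100}, {5, 200}}
	fmt.Println(expectedMinerTxVersion(testnet, 50))  // 1
	fmt.Println(expectedMinerTxVersion(testnet, 150)) // 2
	fmt.Println(expectedMinerTxVersion(testnet, 250)) // 3
}
```

Ordering the cases from newest fork to oldest is what makes the fall-through correct: once HF5 is active, HF4 and HF1 are necessarily active too, so only the first matching case may fire.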


@@ -74,6 +74,15 @@ func validMinerTx(height uint64) *types.Transaction {
 	}
 }
+func validMinerTxForForks(height uint64, forks []config.HardFork) *types.Transaction {
+	tx := validMinerTx(height)
+	tx.Version = expectedMinerTxVersion(forks, height)
+	if tx.Version >= types.VersionPostHF5 {
+		tx.HardforkID = config.VersionAtHeight(forks, height)
+	}
+	return tx
+}
 func TestValidateMinerTx_Good(t *testing.T) {
 	tx := validMinerTx(100)
 	err := ValidateMinerTx(tx, 100, config.MainnetForks)
@@ -87,14 +96,14 @@ func TestValidateMinerTx_Bad_WrongHeight(t *testing.T) {
 }
 func TestValidateMinerTx_Bad_NoInputs(t *testing.T) {
-	tx := &types.Transaction{Version: types.VersionInitial}
+	tx := &types.Transaction{Version: types.VersionPreHF4}
 	err := ValidateMinerTx(tx, 100, config.MainnetForks)
 	assert.ErrorIs(t, err, ErrMinerTxInputs)
 }
 func TestValidateMinerTx_Bad_WrongFirstInput(t *testing.T) {
 	tx := &types.Transaction{
-		Version: types.VersionInitial,
+		Version: types.VersionPreHF4,
 		Vin:     []types.TxInput{types.TxInputToKey{Amount: 1}},
 	}
 	err := ValidateMinerTx(tx, 100, config.MainnetForks)
@@ -103,7 +112,7 @@ func TestValidateMinerTx_Bad_WrongFirstInput(t *testing.T) {
 func TestValidateMinerTx_Good_PoS(t *testing.T) {
 	tx := &types.Transaction{
-		Version: types.VersionInitial,
+		Version: types.VersionPreHF4,
 		Vin: []types.TxInput{
 			types.TxInputGenesis{Height: 100},
 			types.TxInputToKey{Amount: 1}, // PoS stake input
@@ -117,6 +126,148 @@ func TestValidateMinerTx_Good_PoS(t *testing.T) {
 	require.NoError(t, err)
 }
func TestValidateMinerTx_Good_PoS_ZCAfterHF4(t *testing.T) {
tx := &types.Transaction{
Version: types.VersionPostHF4,
Vin: []types.TxInput{
types.TxInputGenesis{Height: 101},
types.TxInputZC{KeyImage: types.KeyImage{1}},
},
Vout: []types.TxOutput{
types.TxOutputBare{Amount: config.BlockReward, Target: types.TxOutToKey{Key: types.PublicKey{1}}},
},
}
err := ValidateMinerTx(tx, 101, config.TestnetForks)
require.NoError(t, err)
}
func TestValidateMinerTx_Bad_PoS_UnsupportedStakeInput(t *testing.T) {
tx := &types.Transaction{
Version: types.VersionPostHF4,
Vin: []types.TxInput{
types.TxInputGenesis{Height: 101},
types.TxInputHTLC{Amount: 1, KeyImage: types.KeyImage{1}},
},
Vout: []types.TxOutput{
types.TxOutputBare{Amount: config.BlockReward, Target: types.TxOutToKey{Key: types.PublicKey{1}}},
},
}
err := ValidateMinerTx(tx, 101, config.TestnetForks)
assert.ErrorIs(t, err, ErrMinerTxInputs)
}
func TestValidateMinerTx_Version_Good(t *testing.T) {
tests := []struct {
name string
forks []config.HardFork
tx *types.Transaction
height uint64
}{
{
name: "mainnet_pre_hf1_v0",
forks: config.MainnetForks,
height: 100,
tx: validMinerTx(100),
},
{
name: "mainnet_post_hf1_pre_hf4_v1",
forks: config.MainnetForks,
height: 10081,
tx: &types.Transaction{
Version: types.VersionPreHF4,
Vin: []types.TxInput{types.TxInputGenesis{Height: 10081}},
Vout: []types.TxOutput{types.TxOutputBare{Amount: config.BlockReward, Target: types.TxOutToKey{Key: types.PublicKey{1}}}},
},
},
{
name: "testnet_post_hf4_v2",
forks: config.TestnetForks,
height: 101,
tx: &types.Transaction{
Version: types.VersionPostHF4,
Vin: []types.TxInput{types.TxInputGenesis{Height: 101}},
Vout: []types.TxOutput{types.TxOutputBare{Amount: config.BlockReward, Target: types.TxOutToKey{Key: types.PublicKey{1}}}},
},
},
{
name: "testnet_post_hf5_v3",
forks: config.TestnetForks,
height: 201,
tx: &types.Transaction{
Version: types.VersionPostHF5,
HardforkID: config.HF5,
Vin: []types.TxInput{types.TxInputGenesis{Height: 201}},
Vout: []types.TxOutput{types.TxOutputBare{Amount: config.BlockReward, Target: types.TxOutToKey{Key: types.PublicKey{1}}}},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := ValidateMinerTx(tt.tx, tt.height, tt.forks)
require.NoError(t, err)
})
}
}
func TestValidateMinerTx_Version_Bad(t *testing.T) {
tests := []struct {
name string
forks []config.HardFork
height uint64
tx *types.Transaction
}{
{
name: "mainnet_pre_hf1_v1",
forks: config.MainnetForks,
height: 100,
tx: &types.Transaction{
Version: types.VersionPreHF4,
Vin: []types.TxInput{types.TxInputGenesis{Height: 100}},
Vout: []types.TxOutput{types.TxOutputBare{Amount: config.BlockReward, Target: types.TxOutToKey{Key: types.PublicKey{1}}}},
},
},
{
name: "mainnet_post_hf1_pre_hf4_v0",
forks: config.MainnetForks,
height: 10081,
tx: &types.Transaction{
Version: types.VersionInitial,
Vin: []types.TxInput{types.TxInputGenesis{Height: 10081}},
Vout: []types.TxOutput{types.TxOutputBare{Amount: config.BlockReward, Target: types.TxOutToKey{Key: types.PublicKey{1}}}},
},
},
{
name: "testnet_post_hf4_v1",
forks: config.TestnetForks,
height: 101,
tx: &types.Transaction{
Version: types.VersionPreHF4,
Vin: []types.TxInput{types.TxInputGenesis{Height: 101}},
Vout: []types.TxOutput{types.TxOutputBare{Amount: config.BlockReward, Target: types.TxOutToKey{Key: types.PublicKey{1}}}},
},
},
{
name: "testnet_post_hf5_wrong_hardfork_id",
forks: config.TestnetForks,
height: 201,
tx: &types.Transaction{
Version: types.VersionPostHF5,
HardforkID: config.HF4Zarcanum,
Vin: []types.TxInput{types.TxInputGenesis{Height: 201}},
Vout: []types.TxOutput{types.TxOutputBare{Amount: config.BlockReward, Target: types.TxOutToKey{Key: types.PublicKey{1}}}},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := ValidateMinerTx(tt.tx, tt.height, tt.forks)
assert.ErrorIs(t, err, ErrMinerTxVersion)
})
}
}
 func TestValidateBlockReward_Good(t *testing.T) {
 	height := uint64(100)
 	tx := validMinerTx(height)
@@ -127,7 +278,7 @@ func TestValidateBlockReward_Good(t *testing.T) {
 func TestValidateBlockReward_Bad_TooMuch(t *testing.T) {
 	height := uint64(100)
 	tx := &types.Transaction{
-		Version: types.VersionInitial,
+		Version: types.VersionPreHF4,
 		Vin:     []types.TxInput{types.TxInputGenesis{Height: height}},
 		Vout: []types.TxOutput{
 			types.TxOutputBare{Amount: config.BlockReward + 1, Target: types.TxOutToKey{Key: types.PublicKey{1}}},
@@ -141,7 +292,7 @@ func TestValidateBlockReward_Good_WithFees(t *testing.T) {
 	height := uint64(100)
 	fees := uint64(50_000_000_000)
 	tx := &types.Transaction{
-		Version: types.VersionInitial,
+		Version: types.VersionPreHF4,
 		Vin:     []types.TxInput{types.TxInputGenesis{Height: height}},
 		Vout: []types.TxOutput{
 			types.TxOutputBare{Amount: config.BlockReward + fees, Target: types.TxOutToKey{Key: types.PublicKey{1}}},
@@ -151,6 +302,53 @@ func TestValidateBlockReward_Good_WithFees(t *testing.T) {
 	require.NoError(t, err)
 }
func TestValidateBlockReward_Good_ZarcanumOutputs(t *testing.T) {
height := uint64(100)
tx := &types.Transaction{
Version: types.VersionPostHF4,
Vin: []types.TxInput{types.TxInputGenesis{Height: height}},
Vout: []types.TxOutput{
types.TxOutputZarcanum{
StealthAddress: types.PublicKey{1},
ConcealingPoint: types.PublicKey{2},
AmountCommitment: types.PublicKey{3},
BlindedAssetID: types.PublicKey{4},
EncryptedAmount: config.BlockReward / 2,
},
types.TxOutputZarcanum{
StealthAddress: types.PublicKey{5},
ConcealingPoint: types.PublicKey{6},
AmountCommitment: types.PublicKey{7},
BlindedAssetID: types.PublicKey{8},
EncryptedAmount: config.BlockReward / 2,
},
},
}
err := ValidateBlockReward(tx, height, 1000, config.BlockGrantedFullRewardZone, 0, config.MainnetForks)
require.NoError(t, err)
}
func TestValidateBlockReward_Bad_ZarcanumOutputs(t *testing.T) {
height := uint64(100)
tx := &types.Transaction{
Version: types.VersionPostHF4,
Vin: []types.TxInput{types.TxInputGenesis{Height: height}},
Vout: []types.TxOutput{
types.TxOutputZarcanum{
StealthAddress: types.PublicKey{1},
ConcealingPoint: types.PublicKey{2},
AmountCommitment: types.PublicKey{3},
BlindedAssetID: types.PublicKey{4},
EncryptedAmount: config.BlockReward + 1,
},
},
}
err := ValidateBlockReward(tx, height, 1000, config.BlockGrantedFullRewardZone, 0, config.MainnetForks)
assert.ErrorIs(t, err, ErrRewardMismatch)
}
func TestValidateBlock_Good(t *testing.T) { func TestValidateBlock_Good(t *testing.T) {
now := uint64(time.Now().Unix()) now := uint64(time.Now().Unix())
height := uint64(100) height := uint64(100)
@ -294,7 +492,7 @@ func TestCheckBlockVersion_Good(t *testing.T) {
Flags: 0, Flags: 0,
}, },
} }
err := checkBlockVersion(blk, tt.forks, tt.height) err := checkBlockVersion(blk.MajorVersion, tt.forks, tt.height)
require.NoError(t, err) require.NoError(t, err)
}) })
} }
@ -323,7 +521,7 @@ func TestCheckBlockVersion_Bad(t *testing.T) {
Flags: 0, Flags: 0,
}, },
} }
err := checkBlockVersion(blk, tt.forks, tt.height) err := checkBlockVersion(blk.MajorVersion, tt.forks, tt.height)
assert.ErrorIs(t, err, ErrBlockVersion) assert.ErrorIs(t, err, ErrBlockVersion)
}) })
} }
@ -336,17 +534,17 @@ func TestCheckBlockVersion_Ugly(t *testing.T) {
blk := &types.Block{ blk := &types.Block{
BlockHeader: types.BlockHeader{MajorVersion: 255, Timestamp: now}, BlockHeader: types.BlockHeader{MajorVersion: 255, Timestamp: now},
} }
err := checkBlockVersion(blk, config.MainnetForks, 0) err := checkBlockVersion(blk.MajorVersion, config.MainnetForks, 0)
assert.ErrorIs(t, err, ErrBlockVersion) assert.ErrorIs(t, err, ErrBlockVersion)
err = checkBlockVersion(blk, config.MainnetForks, 10081) err = checkBlockVersion(blk.MajorVersion, config.MainnetForks, 10081)
assert.ErrorIs(t, err, ErrBlockVersion) assert.ErrorIs(t, err, ErrBlockVersion)
// Version 0 at the exact HF1 boundary (height 10080 -- fork not yet active). // Version 0 at the exact HF1 boundary (height 10080 -- fork not yet active).
blk0 := &types.Block{ blk0 := &types.Block{
BlockHeader: types.BlockHeader{MajorVersion: config.BlockMajorVersionInitial, Timestamp: now}, BlockHeader: types.BlockHeader{MajorVersion: config.BlockMajorVersionInitial, Timestamp: now},
} }
err = checkBlockVersion(blk0, config.MainnetForks, 10080) err = checkBlockVersion(blk0.MajorVersion, config.MainnetForks, 10080)
require.NoError(t, err) require.NoError(t, err)
} }
@ -376,7 +574,7 @@ func TestValidateBlock_MajorVersion_Good(t *testing.T) {
Timestamp: now, Timestamp: now,
Flags: 0, Flags: 0,
}, },
MinerTx: *validMinerTx(tt.height), MinerTx: *validMinerTxForForks(tt.height, tt.forks),
} }
err := ValidateBlock(blk, tt.height, 1000, config.BlockGrantedFullRewardZone, 0, now, nil, tt.forks) err := ValidateBlock(blk, tt.height, 1000, config.BlockGrantedFullRewardZone, 0, now, nil, tt.forks)
require.NoError(t, err) require.NoError(t, err)
@ -410,7 +608,7 @@ func TestValidateBlock_MajorVersion_Bad(t *testing.T) {
Timestamp: now, Timestamp: now,
Flags: 0, Flags: 0,
}, },
MinerTx: *validMinerTx(tt.height), MinerTx: *validMinerTxForForks(tt.height, tt.forks),
} }
err := ValidateBlock(blk, tt.height, 1000, config.BlockGrantedFullRewardZone, 0, now, nil, tt.forks) err := ValidateBlock(blk, tt.height, 1000, config.BlockGrantedFullRewardZone, 0, now, nil, tt.forks)
assert.ErrorIs(t, err, ErrBlockMajorVersion) assert.ErrorIs(t, err, ErrBlockMajorVersion)


@@ -14,6 +14,7 @@
 // - Cryptographic: PoW hash verification (RandomX via CGo),
 //   ring signature verification, proof verification.
 //
-// All functions take *config.ChainConfig and a block height for
-// hardfork-aware validation. The package has no dependency on chain/.
+// All validation functions take a hardfork schedule ([]config.HardFork)
+// and a block height for hardfork-aware gating. The package has no
+// dependency on chain/ or any storage layer.
 package consensus


@@ -29,15 +29,16 @@ var (
     ErrNegativeFee = errors.New("consensus: outputs exceed inputs")

     // Block errors.
     ErrBlockTooLarge     = errors.New("consensus: block exceeds max size")
     ErrBlockMajorVersion = errors.New("consensus: invalid block major version for height")
     ErrTimestampFuture   = errors.New("consensus: block timestamp too far in future")
     ErrTimestampOld      = errors.New("consensus: block timestamp below median")
     ErrMinerTxInputs     = errors.New("consensus: invalid miner transaction inputs")
     ErrMinerTxHeight     = errors.New("consensus: miner transaction height mismatch")
+    ErrMinerTxVersion    = errors.New("consensus: invalid miner transaction version for current hardfork")
     ErrMinerTxUnlock     = errors.New("consensus: miner transaction unlock time invalid")
     ErrRewardMismatch    = errors.New("consensus: block reward mismatch")
     ErrMinerTxProofs     = errors.New("consensus: miner transaction proof count invalid")

     // ErrBlockVersion is an alias for ErrBlockMajorVersion, used by
     // checkBlockVersion when the block major version does not match

@@ -17,6 +17,8 @@ import (
 // TxFee calculates the transaction fee for pre-HF4 (v0/v1) transactions.
 // Coinbase transactions return 0. For standard transactions, fee equals
 // the difference between total input amounts and total output amounts.
+//
+//	fee, err := consensus.TxFee(&tx)
 func TxFee(tx *types.Transaction) (uint64, error) {
     if isCoinbase(tx) {
         return 0, nil


@@ -19,6 +19,8 @@ var maxTarget = new(big.Int).Lsh(big.NewInt(1), 256)
 // CheckDifficulty returns true if hash meets the given difficulty target.
 // The hash (interpreted as a 256-bit little-endian number) must be less
 // than maxTarget / difficulty.
+//
+//	if consensus.CheckDifficulty(powHash, currentDifficulty) { /* valid PoW solution */ }
 func CheckDifficulty(hash types.Hash, difficulty uint64) bool {
     if difficulty == 0 {
         return true
@@ -39,6 +41,8 @@ func CheckDifficulty(hash types.Hash, difficulty uint64) bool {
 // CheckPoWHash computes the RandomX hash of a block header hash + nonce
 // and checks it against the difficulty target.
+//
+//	valid, err := consensus.CheckPoWHash(headerHash, nonce, difficulty)
 func CheckPoWHash(headerHash types.Hash, nonce, difficulty uint64) (bool, error) {
     // Build input: header_hash (32 bytes) || nonce (8 bytes LE).
     var input [40]byte


@@ -17,6 +17,8 @@ import (
 // BaseReward returns the base block reward at the given height.
 // Height 0 (genesis) returns the premine amount. All other heights
 // return the fixed block reward (1 LTHN).
+//
+//	reward := consensus.BaseReward(15000) // 1_000_000_000_000 (1 LTHN)
 func BaseReward(height uint64) uint64 {
     if height == 0 {
         return config.Premine
@@ -33,6 +35,8 @@ func BaseReward(height uint64) uint64 {
 //	reward = baseReward * (2*median - size) * size / median²
 //
 // Uses math/bits.Mul64 for 128-bit intermediate products to avoid overflow.
+//
+//	reward, err := consensus.BlockReward(consensus.BaseReward(height), blockSize, medianSize)
 func BlockReward(baseReward, blockSize, medianSize uint64) (uint64, error) {
     effectiveMedian := medianSize
     if effectiveMedian < config.BlockGrantedFullRewardZone {
@@ -72,6 +76,9 @@ func BlockReward(baseReward, blockSize, medianSize uint64) (uint64, error) {
 // MinerReward calculates the total miner payout. Pre-HF4, transaction
 // fees are added to the base reward. Post-HF4 (postHF4=true), fees are
 // burned and the miner receives only the base reward.
+//
+//	payout := consensus.MinerReward(reward, totalFees, false) // pre-HF4: reward + fees
+//	payout := consensus.MinerReward(reward, totalFees, true)  // post-HF4: reward only (fees burned)
 func MinerReward(baseReward, totalFees uint64, postHF4 bool) uint64 {
     if postHF4 {
         return baseReward


@@ -12,15 +12,34 @@ import (
     "dappco.re/go/core/blockchain/config"
     "dappco.re/go/core/blockchain/types"
+    "dappco.re/go/core/blockchain/wire"
 )
+
+type transactionForkState struct {
+    activeHardForkVersion uint8
+    hardForkOneActive     bool
+    hardForkFourActive    bool
+    hardForkFiveActive    bool
+}
+
+func newTransactionForkState(forks []config.HardFork, height uint64) transactionForkState {
+    return transactionForkState{
+        activeHardForkVersion: config.VersionAtHeight(forks, height),
+        hardForkOneActive:     config.IsHardForkActive(forks, config.HF1, height),
+        hardForkFourActive:    config.IsHardForkActive(forks, config.HF4Zarcanum, height),
+        hardForkFiveActive:    config.IsHardForkActive(forks, config.HF5, height),
+    }
+}
+
 // ValidateTransaction performs semantic validation on a regular (non-coinbase)
 // transaction. Checks are ordered to match the C++ validate_tx_semantic().
+//
+//	consensus.ValidateTransaction(&tx, txBlob, config.MainnetForks, blockHeight)
 func ValidateTransaction(tx *types.Transaction, txBlob []byte, forks []config.HardFork, height uint64) error {
-    hf4Active := config.IsHardForkActive(forks, config.HF4Zarcanum, height)
+    state := newTransactionForkState(forks, height)

     // 0. Transaction version.
-    if err := checkTxVersion(tx, forks, height); err != nil {
+    if err := checkTxVersion(tx, state, height); err != nil {
         return err
     }
@@ -37,15 +56,18 @@ func ValidateTransaction(tx *types.Transaction, txBlob []byte, forks []config.Ha
         return coreerr.E("ValidateTransaction", fmt.Sprintf("%d", len(tx.Vin)), ErrTooManyInputs)
     }
-    hf1Active := config.IsHardForkActive(forks, config.HF1, height)

     // 3. Input types — TxInputGenesis not allowed in regular transactions.
-    if err := checkInputTypes(tx, hf1Active, hf4Active); err != nil {
+    if err := checkInputTypes(tx, state); err != nil {
         return err
     }

     // 4. Output validation.
-    if err := checkOutputs(tx, hf1Active, hf4Active); err != nil {
+    if err := checkOutputs(tx, state); err != nil {
+        return err
+    }
+
+    // 4a. HF5 asset operation validation inside extra.
+    if err := checkAssetOperations(tx.Extra, state.hardForkFiveActive); err != nil {
         return err
     }
@@ -63,7 +85,7 @@ func ValidateTransaction(tx *types.Transaction, txBlob []byte, forks []config.Ha
     }

     // 7. Balance check (pre-HF4 only — post-HF4 uses commitment proofs).
-    if !hf4Active {
+    if !state.hardForkFourActive {
         if _, err := TxFee(tx); err != nil {
             return err
         }
@@ -75,27 +97,37 @@ func ValidateTransaction(tx *types.Transaction, txBlob []byte, forks []config.Ha
 // checkTxVersion validates that the transaction version is appropriate for the
 // current hardfork era.
 //
-// After HF5: transaction version must be >= VersionPostHF5 (3).
-// Before HF5: transaction version 3 is rejected (too early).
-func checkTxVersion(tx *types.Transaction, forks []config.HardFork, height uint64) error {
-    hf5Active := config.IsHardForkActive(forks, config.HF5, height)
+// Pre-HF4: regular transactions must use version 1.
+// HF4 era: regular transactions must use version 2.
+// HF5+: transaction version must be exactly version 3 and the embedded
+// hardfork_id must match the active hardfork version.
+func checkTxVersion(tx *types.Transaction, state transactionForkState, height uint64) error {
+    var expectedVersion uint64
+    switch {
+    case state.hardForkFiveActive:
+        expectedVersion = types.VersionPostHF5
+    case state.hardForkFourActive:
+        expectedVersion = types.VersionPostHF4
+    default:
+        expectedVersion = types.VersionPreHF4
+    }

-    if hf5Active && tx.Version < types.VersionPostHF5 {
+    if tx.Version != expectedVersion {
         return coreerr.E("checkTxVersion",
-            fmt.Sprintf("version %d too low after HF5 at height %d", tx.Version, height),
+            fmt.Sprintf("version %d invalid at height %d (expected %d)", tx.Version, height, expectedVersion),
             ErrTxVersionInvalid)
     }
-    if !hf5Active && tx.Version >= types.VersionPostHF5 {
+    if tx.Version >= types.VersionPostHF5 && tx.HardforkID != state.activeHardForkVersion {
         return coreerr.E("checkTxVersion",
-            fmt.Sprintf("version %d not allowed before HF5 at height %d", tx.Version, height),
+            fmt.Sprintf("hardfork id %d does not match active fork %d at height %d", tx.HardforkID, state.activeHardForkVersion, height),
            ErrTxVersionInvalid)
     }
     return nil
 }

-func checkInputTypes(tx *types.Transaction, hf1Active, hf4Active bool) error {
+func checkInputTypes(tx *types.Transaction, state transactionForkState) error {
     for _, vin := range tx.Vin {
         switch vin.(type) {
         case types.TxInputToKey:
@@ -104,25 +136,26 @@ func checkInputTypes(tx *types.Transaction, state transactionForkState) error {
             return coreerr.E("checkInputTypes", "txin_gen in regular transaction", ErrInvalidInputType)
         case types.TxInputHTLC, types.TxInputMultisig:
             // HTLC and multisig inputs require at least HF1.
-            if !hf1Active {
+            if !state.hardForkOneActive {
                 return coreerr.E("checkInputTypes", fmt.Sprintf("tag %d pre-HF1", vin.InputType()), ErrInvalidInputType)
             }
-        default:
-            // Future types (ZC) — accept if HF4+.
-            if !hf4Active {
+        case types.TxInputZC:
+            if !state.hardForkFourActive {
                 return coreerr.E("checkInputTypes", fmt.Sprintf("tag %d pre-HF4", vin.InputType()), ErrInvalidInputType)
             }
+        default:
+            return coreerr.E("checkInputTypes", fmt.Sprintf("unsupported input type %T", vin), ErrInvalidInputType)
         }
     }
     return nil
 }

-func checkOutputs(tx *types.Transaction, hf1Active, hf4Active bool) error {
+func checkOutputs(tx *types.Transaction, state transactionForkState) error {
     if len(tx.Vout) == 0 {
         return ErrNoOutputs
     }
-    if hf4Active && uint64(len(tx.Vout)) < config.TxMinAllowedOutputs {
+    if state.hardForkFourActive && uint64(len(tx.Vout)) < config.TxMinAllowedOutputs {
         return coreerr.E("checkOutputs", fmt.Sprintf("%d (min %d)", len(tx.Vout), config.TxMinAllowedOutputs), ErrTooFewOutputs)
     }
@@ -136,15 +169,24 @@ func checkOutputs(tx *types.Transaction, state transactionForkState) error {
             if o.Amount == 0 {
                 return coreerr.E("checkOutputs", fmt.Sprintf("output %d has zero amount", i), ErrInvalidOutput)
             }
-            // HTLC and Multisig output targets require at least HF1.
+            // Only known transparent output targets are accepted.
             switch o.Target.(type) {
+            case types.TxOutToKey:
             case types.TxOutHTLC, types.TxOutMultisig:
-                if !hf1Active {
+                if !state.hardForkOneActive {
                     return coreerr.E("checkOutputs", fmt.Sprintf("output %d: HTLC/multisig target pre-HF1", i), ErrInvalidOutput)
                 }
+            case nil:
+                return coreerr.E("checkOutputs", fmt.Sprintf("output %d: missing target", i), ErrInvalidOutput)
+            default:
+                return coreerr.E("checkOutputs", fmt.Sprintf("output %d: unsupported target %T", i, o.Target), ErrInvalidOutput)
             }
         case types.TxOutputZarcanum:
-            // Validated by proof verification.
+            if !state.hardForkFourActive {
+                return coreerr.E("checkOutputs", fmt.Sprintf("output %d: Zarcanum output pre-HF4", i), ErrInvalidOutput)
+            }
+        default:
+            return coreerr.E("checkOutputs", fmt.Sprintf("output %d: unsupported output type %T", i, vout), ErrInvalidOutput)
         }
     }
@@ -170,3 +212,33 @@ func checkKeyImages(tx *types.Transaction) error {
         }
     }
     return nil
 }
+
+func checkAssetOperations(extra []byte, hardForkFiveActive bool) error {
+    if len(extra) == 0 {
+        return nil
+    }
+    elements, err := wire.DecodeVariantVector(extra)
+    if err != nil {
+        return coreerr.E("checkAssetOperations", "parse extra", ErrInvalidExtra)
+    }
+    for i, elem := range elements {
+        if elem.Tag != types.AssetDescriptorOperationTag {
+            continue
+        }
+        if !hardForkFiveActive {
+            return coreerr.E("checkAssetOperations", fmt.Sprintf("extra[%d]: asset descriptor operation pre-HF5", i), ErrInvalidExtra)
+        }
+        op, err := wire.DecodeAssetDescriptorOperation(elem.Data)
+        if err != nil {
+            return coreerr.E("checkAssetOperations", fmt.Sprintf("extra[%d]: decode asset descriptor operation", i), ErrInvalidExtra)
+        }
+        if err := op.Validate(); err != nil {
+            return coreerr.E("checkAssetOperations", fmt.Sprintf("extra[%d]", i), err)
+        }
+    }
+    return nil
+}
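The `checkAssetOperations` logic added above — scan the decoded extra entries, ignore unrelated tags, and reject an asset-descriptor entry outright before HF5 — can be sketched in isolation. Everything here is a hypothetical stand-in: `extraElement` models the wire package's decoded variant entries, and the tag value 34 is invented (the real `types.AssetDescriptorOperationTag` is not stated in this diff).

```go
package main

import (
	"errors"
	"fmt"
)

// extraElement models one decoded entry of a transaction's extra field
// (hypothetical stand-in for the wire package's variant type).
type extraElement struct {
	Tag  uint8
	Data []byte
}

// Hypothetical tag value; the real constant lives in the types package.
const assetDescriptorOperationTag uint8 = 34

var errInvalidExtra = errors.New("invalid extra")

// scanAssetOperations mirrors checkAssetOperations above: entries with
// other tags pass through untouched, while an asset-descriptor entry is
// rejected when HF5 is not yet active. Decoding and op.Validate() from
// the real code are elided.
func scanAssetOperations(elements []extraElement, hf5Active bool) error {
	for i, e := range elements {
		if e.Tag != assetDescriptorOperationTag {
			continue
		}
		if !hf5Active {
			return fmt.Errorf("extra[%d]: asset descriptor operation pre-HF5: %w", i, errInvalidExtra)
		}
		// The real code decodes elem.Data and runs op.Validate() here.
	}
	return nil
}

func main() {
	extra := []extraElement{{Tag: 1}, {Tag: assetDescriptorOperationTag}}
	fmt.Println(scanAssetOperations(extra, true))  // accepted post-HF5
	fmt.Println(scanAssetOperations(extra, false)) // rejected pre-HF5
}
```

Filtering by tag first means transactions whose extra carries only unrelated entries (payment IDs, alias registrations) never pay the cost of the HF5 check — matching the ordering in the function above.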


@@ -8,10 +8,12 @@
 package consensus

 import (
+    "bytes"
     "testing"

     "dappco.re/go/core/blockchain/config"
     "dappco.re/go/core/blockchain/types"
+    "dappco.re/go/core/blockchain/wire"
     "github.com/stretchr/testify/assert"
     "github.com/stretchr/testify/require"
 )
@@ -32,6 +34,10 @@ func validV1Tx() *types.Transaction {
     }
 }

+type unsupportedTxOutTarget struct{}
+
+func (unsupportedTxOutTarget) TargetType() uint8 { return 250 }
+
 func TestValidateTransaction_Good(t *testing.T) {
     tx := validV1Tx()
     blob := make([]byte, 100) // small blob
@@ -238,6 +244,178 @@ func TestCheckOutputs_MultisigTargetPostHF1_Good(t *testing.T) {
     require.NoError(t, err)
 }

+func TestCheckInputTypes_ZCPreHF4_Bad(t *testing.T) {
+    tx := &types.Transaction{
+        Version: types.VersionPreHF4,
+        Vin: []types.TxInput{
+            types.TxInputZC{KeyImage: types.KeyImage{1}},
+        },
+        Vout: []types.TxOutput{
+            types.TxOutputBare{Amount: 90, Target: types.TxOutToKey{Key: types.PublicKey{1}}},
+            types.TxOutputBare{Amount: 1, Target: types.TxOutToKey{Key: types.PublicKey{2}}},
+        },
+    }
+    blob := make([]byte, 100)
+    err := ValidateTransaction(tx, blob, config.MainnetForks, 5000)
+    assert.ErrorIs(t, err, ErrInvalidInputType)
+}
+
+func TestCheckOutputs_ZarcanumPreHF4_Bad(t *testing.T) {
+    tx := &types.Transaction{
+        Version: types.VersionPreHF4,
+        Vin: []types.TxInput{
+            types.TxInputToKey{Amount: 100, KeyImage: types.KeyImage{1}},
+        },
+        Vout: []types.TxOutput{
+            types.TxOutputZarcanum{StealthAddress: types.PublicKey{1}},
+        },
+    }
+    blob := make([]byte, 100)
+    err := ValidateTransaction(tx, blob, config.MainnetForks, 5000)
+    assert.ErrorIs(t, err, ErrInvalidOutput)
+}
+
+func TestCheckOutputs_ZarcanumPostHF4_Good(t *testing.T) {
+    tx := &types.Transaction{
+        Version: types.VersionPostHF4,
+        Vin: []types.TxInput{
+            types.TxInputZC{KeyImage: types.KeyImage{1}},
+        },
+        Vout: []types.TxOutput{
+            types.TxOutputZarcanum{StealthAddress: types.PublicKey{1}},
+            types.TxOutputZarcanum{StealthAddress: types.PublicKey{2}},
+        },
+    }
+    blob := make([]byte, 100)
+    err := ValidateTransaction(tx, blob, config.TestnetForks, 150)
+    require.NoError(t, err)
+}
+
+func TestCheckOutputs_MissingTarget_Bad(t *testing.T) {
+    tx := &types.Transaction{
+        Version: types.VersionPreHF4,
+        Vin: []types.TxInput{
+            types.TxInputToKey{Amount: 100, KeyImage: types.KeyImage{1}},
+        },
+        Vout: []types.TxOutput{
+            types.TxOutputBare{Amount: 90, Target: nil},
+        },
+    }
+    blob := make([]byte, 100)
+    err := ValidateTransaction(tx, blob, config.MainnetForks, 20000)
+    assert.ErrorIs(t, err, ErrInvalidOutput)
+}
+
+func TestCheckOutputs_UnsupportedTarget_Bad(t *testing.T) {
+    tx := &types.Transaction{
+        Version: types.VersionPreHF4,
+        Vin: []types.TxInput{
+            types.TxInputToKey{Amount: 100, KeyImage: types.KeyImage{1}},
+        },
+        Vout: []types.TxOutput{
+            types.TxOutputBare{Amount: 90, Target: unsupportedTxOutTarget{}},
+        },
+    }
+    blob := make([]byte, 100)
+    err := ValidateTransaction(tx, blob, config.MainnetForks, 20000)
+    assert.ErrorIs(t, err, ErrInvalidOutput)
+}
+
+func assetDescriptorExtraBlob(ticker string, ownerZero bool) []byte {
+    var buf bytes.Buffer
+    enc := wire.NewEncoder(&buf)
+    enc.WriteVarint(1)
+    enc.WriteUint8(types.AssetDescriptorOperationTag)
+
+    assetOp := bytes.Buffer{}
+    opEnc := wire.NewEncoder(&assetOp)
+    opEnc.WriteUint8(1) // version
+    opEnc.WriteUint8(types.AssetOpRegister)
+    opEnc.WriteUint8(0) // no asset id
+    opEnc.WriteUint8(1) // descriptor present
+    opEnc.WriteVarint(uint64(len(ticker)))
+    opEnc.WriteBytes([]byte(ticker))
+    opEnc.WriteVarint(7)
+    opEnc.WriteBytes([]byte("Lethean"))
+    opEnc.WriteUint64LE(1000000)
+    opEnc.WriteUint64LE(0)
+    opEnc.WriteUint8(12)
+    opEnc.WriteVarint(0)
+    if ownerZero {
+        opEnc.WriteBytes(make([]byte, 32))
+    } else {
+        opEnc.WriteBytes(bytes.Repeat([]byte{0xAA}, 32))
+    }
+    opEnc.WriteVarint(0)
+    opEnc.WriteUint64LE(0)
+    opEnc.WriteUint64LE(0)
+    opEnc.WriteVarint(0)
+
+    enc.WriteBytes(assetOp.Bytes())
+    return buf.Bytes()
+}
+
+func TestValidateTransaction_AssetDescriptorOperation_Good(t *testing.T) {
+    tx := &types.Transaction{
+        Version: types.VersionPostHF5,
+        Vin: []types.TxInput{
+            types.TxInputZC{
+                KeyImage: types.KeyImage{1},
+            },
+        },
+        Vout: []types.TxOutput{
+            types.TxOutputBare{
+                Amount: 90,
+                Target: types.TxOutToKey{Key: types.PublicKey{1}},
+            },
+            types.TxOutputBare{
+                Amount: 1,
+                Target: types.TxOutToKey{Key: types.PublicKey{2}},
+            },
+        },
+        Extra: assetDescriptorExtraBlob("LTHN", false),
+    }
+    blob := make([]byte, 100)
+    err := ValidateTransaction(tx, blob, config.TestnetForks, 250)
+    require.NoError(t, err)
+}
+
+func TestValidateTransaction_AssetDescriptorOperationPreHF5_Bad(t *testing.T) {
+    tx := &types.Transaction{
+        Version: types.VersionPreHF4,
+        Vin: []types.TxInput{
+            types.TxInputToKey{Amount: 100, KeyImage: types.KeyImage{1}},
+        },
+        Vout: []types.TxOutput{
+            types.TxOutputBare{Amount: 90, Target: types.TxOutToKey{Key: types.PublicKey{1}}},
+        },
+        Extra: assetDescriptorExtraBlob("LTHN", false),
+    }
+    blob := make([]byte, 100)
+    err := ValidateTransaction(tx, blob, config.MainnetForks, 5000)
+    assert.ErrorIs(t, err, ErrInvalidExtra)
+}
+
+func TestValidateTransaction_AssetDescriptorOperationInvalid_Bad(t *testing.T) {
+    tx := &types.Transaction{
+        Version: types.VersionPostHF5,
+        Vin: []types.TxInput{
+            types.TxInputZC{
+                KeyImage: types.KeyImage{1},
+            },
+        },
+        Vout: []types.TxOutput{
+            types.TxOutputBare{Amount: 90, Target: types.TxOutToKey{Key: types.PublicKey{1}}},
+            types.TxOutputBare{Amount: 1, Target: types.TxOutToKey{Key: types.PublicKey{2}}},
+        },
+        Extra: assetDescriptorExtraBlob("TOO-LONG", true),
+    }
+    blob := make([]byte, 100)
+    err := ValidateTransaction(tx, blob, config.TestnetForks, 250)
+    assert.ErrorIs(t, err, ErrInvalidExtra)
+}
+
 // --- Key image tests for HTLC (Task 8) ---

 func TestCheckKeyImages_HTLCDuplicate_Bad(t *testing.T) {


@@ -64,7 +64,7 @@ func TestCheckTxVersion_Good(t *testing.T) {
     for _, tt := range tests {
         t.Run(tt.name, func(t *testing.T) {
-            err := checkTxVersion(tt.tx, tt.forks, tt.height)
+            err := checkTxVersion(tt.tx, newTransactionForkState(tt.forks, tt.height), tt.height)
             if err != nil {
                 t.Errorf("checkTxVersion returned unexpected error: %v", err)
             }
@@ -79,15 +79,37 @@ func TestCheckTxVersion_Bad(t *testing.T) {
         forks  []config.HardFork
         height uint64
     }{
+        // v0 regular transaction before HF4 — must still be v1.
+        {"v0_before_hf4", func() *types.Transaction {
+            tx := validV1Tx()
+            tx.Version = types.VersionInitial
+            return tx
+        }(), config.MainnetForks, 5000},
+        // v1 transaction after HF4 — must be v2.
+        {"v1_after_hf4", validV1Tx(), config.TestnetForks, 150},
         // v2 transaction after HF5 — must be v3.
         {"v2_after_hf5", validV2Tx(), config.TestnetForks, 250},
+        // v3 transaction after HF4 but before HF5 — too early.
+        {"v3_after_hf4_before_hf5", validV3Tx(), config.TestnetForks, 150},
+        // v3 transaction after HF5 with wrong hardfork id.
+        {"v3_after_hf5_wrong_hardfork", func() *types.Transaction {
+            tx := validV3Tx()
+            tx.HardforkID = 4
+            return tx
+        }(), config.TestnetForks, 250},
         // v3 transaction before HF5 — too early.
         {"v3_before_hf5", validV3Tx(), config.TestnetForks, 150},
+        // future version must be rejected.
+        {"v4_after_hf5", func() *types.Transaction {
+            tx := validV3Tx()
+            tx.Version = types.VersionPostHF5 + 1
+            return tx
+        }(), config.TestnetForks, 250},
     }

     for _, tt := range tests {
         t.Run(tt.name, func(t *testing.T) {
-            err := checkTxVersion(tt.tx, tt.forks, tt.height)
+            err := checkTxVersion(tt.tx, newTransactionForkState(tt.forks, tt.height), tt.height)
             if err == nil {
                 t.Error("expected ErrTxVersionInvalid, got nil")
             }
@@ -96,16 +118,30 @@ func TestCheckTxVersion_Bad(t *testing.T) {
 }

 func TestCheckTxVersion_Ugly(t *testing.T) {
+    // v2 at exact HF4 activation boundary (height 101 on testnet, HF4.Height=100).
+    txHF4 := validV2Tx()
+    err := checkTxVersion(txHF4, newTransactionForkState(config.TestnetForks, 101), 101)
+    if err != nil {
+        t.Errorf("v2 at HF4 activation boundary should be valid: %v", err)
+    }
+
+    // v1 at exact HF4 activation boundary should be rejected.
+    txPreHF4 := validV1Tx()
+    err = checkTxVersion(txPreHF4, newTransactionForkState(config.TestnetForks, 101), 101)
+    if err == nil {
+        t.Error("v1 at HF4 activation boundary should be rejected")
+    }
+
     // v3 at exact HF5 activation boundary (height 201 on testnet, HF5.Height=200).
     tx := validV3Tx()
-    err := checkTxVersion(tx, config.TestnetForks, 201)
+    err = checkTxVersion(tx, newTransactionForkState(config.TestnetForks, 201), 201)
     if err != nil {
         t.Errorf("v3 at HF5 activation boundary should be valid: %v", err)
     }

     // v2 at exact HF5 activation boundary — should be rejected.
     tx2 := validV2Tx()
-    err = checkTxVersion(tx2, config.TestnetForks, 201)
+    err = checkTxVersion(tx2, newTransactionForkState(config.TestnetForks, 201), 201)
     if err == nil {
         t.Error("v2 at HF5 activation boundary should be rejected")
     }
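The tests above pin down the new exact-match rule: each hardfork era admits exactly one transaction version, so a version can fail for being either too old or too new. A minimal sketch of that mapping follows; the `version*` constants are hypothetical stand-ins for the `types.Version*` values named in the diff.

```go
package main

import "fmt"

// Hypothetical stand-ins for the types.Version* constants (values
// inferred from the diff: v1 pre-HF4, v2 in the HF4 era, v3 post-HF5).
const (
	versionPreHF4  uint64 = 1
	versionPostHF4 uint64 = 2
	versionPostHF5 uint64 = 3
)

// expectedTxVersion mirrors the switch added to checkTxVersion:
// exactly one version is legal per era, so both "v1 after HF4" and
// "v3 before HF5" fall out of a single inequality against this value.
func expectedTxVersion(hf4Active, hf5Active bool) uint64 {
	switch {
	case hf5Active:
		return versionPostHF5
	case hf4Active:
		return versionPostHF4
	default:
		return versionPreHF4
	}
}

// versionValid is the whole check: an exact match, nothing else.
func versionValid(txVersion uint64, hf4Active, hf5Active bool) bool {
	return txVersion == expectedTxVersion(hf4Active, hf5Active)
}

func main() {
	fmt.Println(versionValid(1, false, false)) // pre-HF4: v1 accepted
	fmt.Println(versionValid(1, true, false))  // HF4 era: v1 now rejected
	fmt.Println(versionValid(4, true, true))   // future versions rejected too
}
```

Collapsing the old pair of range checks into one equality is what makes the new `v4_after_hf5` case above fail without extra code.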


@@ -18,6 +18,18 @@ import (
     "github.com/stretchr/testify/require"
 )

+func buildSingleZCSigRaw() []byte {
+    var buf bytes.Buffer
+    enc := wire.NewEncoder(&buf)
+    enc.WriteVarint(1)
+    enc.WriteUint8(types.SigTypeZC)
+    enc.WriteBytes(make([]byte, 64))
+    enc.WriteVarint(0)
+    enc.WriteVarint(0)
+    enc.WriteBytes(make([]byte, 64))
+    return buf.Bytes()
+}
+
 // loadTestTx loads and decodes a hex-encoded transaction from testdata.
 func loadTestTx(t *testing.T, filename string) *types.Transaction {
     t.Helper()
@@ -147,6 +159,20 @@ func TestVerifyV2Signatures_BadSigCount(t *testing.T) {
     assert.Error(t, err, "should fail with mismatched sig count")
 }

+func TestVerifyV2Signatures_HTLCWrongSigTag_Bad(t *testing.T) {
+    tx := &types.Transaction{
+        Version: types.VersionPostHF5,
+        Vin: []types.TxInput{
+            types.TxInputHTLC{Amount: 100, KeyImage: types.KeyImage{1}},
+        },
+        SignaturesRaw: buildSingleZCSigRaw(),
+    }
+    err := VerifyTransactionSignatures(tx, config.TestnetForks, 250, nil, nil)
+    require.Error(t, err)
+    assert.Contains(t, err.Error(), "HTLC")
+}
+
 func TestVerifyV2Signatures_TxHash(t *testing.T) {
     // Verify the known tx hash matches.
     tx := loadTestTx(t, "../testdata/v2_spending_tx_mixin0.hex")


@ -16,9 +16,9 @@ import (
"dappco.re/go/core/blockchain/wire"
)
// RingOutputsFn fetches the public keys for a ring at the given spending
// height, amount, and offsets. Used to decouple consensus/ from chain storage.
type RingOutputsFn func(height, amount uint64, offsets []uint64) ([]types.PublicKey, error)
// ZCRingMember holds the three public keys per ring entry needed for
// CLSAG GGX verification (HF4+). All fields are premultiplied by 1/8
@ -41,6 +41,9 @@ type ZCRingOutputsFn func(offsets []uint64) ([]ZCRingMember, error)
// getRingOutputs is used for pre-HF4 (V1) signature verification.
// getZCRingOutputs is used for post-HF4 (V2) CLSAG GGX verification.
// Either may be nil for structural-only checks.
//
// consensus.VerifyTransactionSignatures(&tx, config.MainnetForks, height, chain.GetRingOutputs, chain.GetZCRingOutputs)
// consensus.VerifyTransactionSignatures(&tx, config.MainnetForks, height, nil, nil) // structural only
func VerifyTransactionSignatures(tx *types.Transaction, forks []config.HardFork,
height uint64, getRingOutputs RingOutputsFn, getZCRingOutputs ZCRingOutputsFn) error {
@ -49,17 +52,17 @@ func VerifyTransactionSignatures(tx *types.Transaction, forks []config.HardFork,
return nil
}
hardForkFourActive := config.IsHardForkActive(forks, config.HF4Zarcanum, height)
if !hardForkFourActive {
return verifyV1Signatures(tx, height, getRingOutputs)
}
return verifyV2Signatures(tx, getZCRingOutputs)
}
// verifyV1Signatures checks NLSAG ring signatures for pre-HF4 transactions.
func verifyV1Signatures(tx *types.Transaction, height uint64, getRingOutputs RingOutputsFn) error {
// Count ring-signing inputs (TxInputToKey and TxInputHTLC contribute
// ring signatures; TxInputMultisig does not).
var ringInputCount int
@ -108,7 +111,7 @@ func verifyV1Signatures(tx *types.Transaction, getRingOutputs RingOutputsFn) err
offsets[i] = ref.GlobalIndex
}
ringKeys, err := getRingOutputs(height, amount, offsets)
if err != nil {
return coreerr.E("verifyV1Signatures", fmt.Sprintf("consensus: failed to fetch ring outputs for input %d", sigIdx), err)
}
@ -152,7 +155,8 @@ func verifyV2Signatures(tx *types.Transaction, getZCRingOutputs ZCRingOutputsFn)
return coreerr.E("verifyV2Signatures", fmt.Sprintf("consensus: V2 signature count %d != input count %d", len(sigEntries), len(tx.Vin)), nil)
}
// Validate that ZC inputs have ZC_sig and that ring-spending inputs use
// the ring-signature tags that match their spending model.
for i, vin := range tx.Vin {
switch vin.(type) {
case types.TxInputZC:
@ -163,6 +167,10 @@ func verifyV2Signatures(tx *types.Transaction, getZCRingOutputs ZCRingOutputsFn)
if sigEntries[i].tag != types.SigTypeNLSAG && sigEntries[i].tag != types.SigTypeVoid {
return coreerr.E("verifyV2Signatures", fmt.Sprintf("consensus: input %d is to_key but signature tag is 0x%02x", i, sigEntries[i].tag), nil)
}
case types.TxInputHTLC:
if sigEntries[i].tag != types.SigTypeNLSAG && sigEntries[i].tag != types.SigTypeVoid {
return coreerr.E("verifyV2Signatures", fmt.Sprintf("consensus: input %d is HTLC but signature tag is 0x%02x", i, sigEntries[i].tag), nil)
}
}
}
@ -245,8 +253,9 @@ func verifyV2Signatures(tx *types.Transaction, getZCRingOutputs ZCRingOutputsFn)
}
}
// Balance proofs are verified by the generic double-Schnorr helper in
// consensus.VerifyBalanceProof once the transaction-specific public
// points have been constructed.
return nil
}


@ -48,7 +48,7 @@ func TestVerifyV1Signatures_Good_MockRing(t *testing.T) {
tx.Signatures = [][]types.Signature{make([]types.Signature, 1)}
tx.Signatures[0][0] = types.Signature(sigs[0])
getRing := func(height, amount uint64, offsets []uint64) ([]types.PublicKey, error) {
return []types.PublicKey{types.PublicKey(pub)}, nil
}
@ -82,7 +82,7 @@ func TestVerifyV1Signatures_Bad_WrongSig(t *testing.T) {
},
}
getRing := func(height, amount uint64, offsets []uint64) ([]types.PublicKey, error) {
return []types.PublicKey{types.PublicKey(pub)}, nil
}


@ -49,8 +49,6 @@ set(CXX_SOURCES
set(RANDOMX_SOURCES
randomx/aes_hash.cpp
randomx/argon2_ref.c
randomx/bytecode_machine.cpp
randomx/cpu.cpp
randomx/dataset.cpp
@ -58,23 +56,47 @@ set(RANDOMX_SOURCES
randomx/virtual_memory.c
randomx/vm_interpreted.cpp
randomx/allocator.cpp
randomx/instruction.cpp
randomx/randomx.cpp
randomx/superscalar.cpp
randomx/vm_interpreted_light.cpp
randomx/argon2_core.c
randomx/blake2_generator.cpp
randomx/instructions_portable.cpp
randomx/reciprocal.c
randomx/virtual_machine.cpp
randomx/vm_compiled.cpp
randomx/vm_compiled_light.cpp
randomx/blake2/blake2b.c
)
if(CMAKE_SYSTEM_PROCESSOR MATCHES "^(x86_64|amd64|AMD64)$")
list(APPEND RANDOMX_SOURCES
randomx/argon2_ssse3.c
randomx/argon2_avx2.c
randomx/assembly_generator_x86.cpp
randomx/jit_compiler_x86.cpp
randomx/jit_compiler_x86_static.S
)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64|arm64)$")
list(APPEND RANDOMX_SOURCES
randomx/jit_compiler_a64.cpp
randomx/jit_compiler_a64_static.S
)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(riscv64|rv64)$")
list(APPEND RANDOMX_SOURCES
randomx/aes_hash_rv64_vector.cpp
randomx/aes_hash_rv64_zvkned.cpp
randomx/cpu_rv64.S
randomx/jit_compiler_rv64.cpp
randomx/jit_compiler_rv64_static.S
randomx/jit_compiler_rv64_vector.cpp
randomx/jit_compiler_rv64_vector_static.S
)
else()
message(FATAL_ERROR "Unsupported RandomX architecture: ${CMAKE_SYSTEM_PROCESSOR}")
endif()
add_library(randomx STATIC ${RANDOMX_SOURCES})
target_include_directories(randomx PRIVATE
${CMAKE_CURRENT_SOURCE_DIR}/randomx
@ -85,15 +107,18 @@ set_property(TARGET randomx PROPERTY CXX_STANDARD_REQUIRED ON)
# Platform-specific flags for RandomX
enable_language(ASM)
if(CMAKE_SYSTEM_PROCESSOR MATCHES "^(x86_64|amd64|AMD64)$")
target_compile_options(randomx PRIVATE -maes)
check_c_compiler_flag(-mssse3 HAVE_SSSE3)
if(HAVE_SSSE3)
set_source_files_properties(randomx/argon2_ssse3.c PROPERTIES COMPILE_FLAGS -mssse3)
endif()
check_c_compiler_flag(-mavx2 HAVE_AVX2)
if(HAVE_AVX2)
set_source_files_properties(randomx/argon2_avx2.c PROPERTIES COMPILE_FLAGS -mavx2)
endif()
endif()
target_compile_options(randomx PRIVATE
@ -106,7 +131,6 @@ target_compile_options(randomx PRIVATE
# --- Find system dependencies ---
find_package(OpenSSL REQUIRED)
# --- Static library ---
add_library(cryptonote STATIC ${C_SOURCES} ${CXX_SOURCES})
@ -116,7 +140,6 @@ target_include_directories(cryptonote PRIVATE
${CMAKE_CURRENT_SOURCE_DIR}/compat
${CMAKE_CURRENT_SOURCE_DIR}/randomx
${OPENSSL_INCLUDE_DIR}
)
target_link_libraries(cryptonote PRIVATE


@ -104,37 +104,91 @@ bool deserialise_bpp(const uint8_t *buf, size_t len, crypto::bpp_signature &sig)
return off == len; // must consume all bytes
}
bool read_bppe_at(const uint8_t *buf, size_t len, size_t *offset,
crypto::bppe_signature &sig) {
if (!read_pubkey_vec(buf, len, offset, sig.L)) return false;
if (!read_pubkey_vec(buf, len, offset, sig.R)) return false;
if (!read_pubkey(buf, len, offset, sig.A0)) return false;
if (!read_pubkey(buf, len, offset, sig.A)) return false;
if (!read_pubkey(buf, len, offset, sig.B)) return false;
if (!read_scalar(buf, len, offset, sig.r)) return false;
if (!read_scalar(buf, len, offset, sig.s)) return false;
if (!read_scalar(buf, len, offset, sig.delta_1)) return false;
if (!read_scalar(buf, len, offset, sig.delta_2)) return false;
return true;
}
// Deserialise a bppe_signature from wire bytes (Bulletproofs++ Enhanced, 2 deltas).
// Layout: varint(len(L)) + L[]*32 + varint(len(R)) + R[]*32
// + A0(32) + A(32) + B(32) + r(32) + s(32) + delta_1(32) + delta_2(32)
bool deserialise_bppe(const uint8_t *buf, size_t len, crypto::bppe_signature &sig) {
size_t off = 0;
if (!read_bppe_at(buf, len, &off, sig)) return false;
return off == len; // must consume all bytes
}
bool read_bge_at(const uint8_t *buf, size_t len, size_t *offset,
crypto::BGE_proof &proof) {
if (!read_pubkey(buf, len, offset, proof.A)) return false;
if (!read_pubkey(buf, len, offset, proof.B)) return false;
if (!read_pubkey_vec(buf, len, offset, proof.Pk)) return false;
if (!read_scalar_vec(buf, len, offset, proof.f)) return false;
if (!read_scalar(buf, len, offset, proof.y)) return false;
if (!read_scalar(buf, len, offset, proof.z)) return false;
return true;
}
// Deserialise a BGE_proof from wire bytes.
// Layout: A(32) + B(32) + varint(len(Pk)) + Pk[]*32
// + varint(len(f)) + f[]*32 + y(32) + z(32)
bool deserialise_bge(const uint8_t *buf, size_t len, crypto::BGE_proof &proof) {
size_t off = 0;
if (!read_bge_at(buf, len, &off, proof)) return false;
return off == len;
}
bool read_clsag_ggxxg_at(const uint8_t *buf, size_t len, size_t *offset,
crypto::CLSAG_GGXXG_signature &sig) {
if (!read_scalar(buf, len, offset, sig.c)) return false;
if (!read_scalar_vec(buf, len, offset, sig.r_g)) return false;
if (!read_scalar_vec(buf, len, offset, sig.r_x)) return false;
if (!read_pubkey(buf, len, offset, sig.K1)) return false;
if (!read_pubkey(buf, len, offset, sig.K2)) return false;
if (!read_pubkey(buf, len, offset, sig.K3)) return false;
if (!read_pubkey(buf, len, offset, sig.K4)) return false;
return true;
}
bool deserialise_zarcanum(const uint8_t *buf, size_t len,
crypto::zarcanum_proof &proof) {
size_t off = 0;
if (!read_scalar(buf, len, &off, proof.d)) return false;
if (!read_pubkey(buf, len, &off, proof.C)) return false;
if (!read_pubkey(buf, len, &off, proof.C_prime)) return false;
if (!read_pubkey(buf, len, &off, proof.E)) return false;
if (!read_scalar(buf, len, &off, proof.c)) return false;
if (!read_scalar(buf, len, &off, proof.y0)) return false;
if (!read_scalar(buf, len, &off, proof.y1)) return false;
if (!read_scalar(buf, len, &off, proof.y2)) return false;
if (!read_scalar(buf, len, &off, proof.y3)) return false;
if (!read_scalar(buf, len, &off, proof.y4)) return false;
if (!read_bppe_at(buf, len, &off, proof.E_range_proof)) return false;
if (!read_pubkey(buf, len, &off, proof.pseudo_out_amount_commitment)) return false;
if (!read_clsag_ggxxg_at(buf, len, &off, proof.clsag_ggxxg)) return false;
return off == len;
}
bool deserialise_double_schnorr(const uint8_t *buf, size_t len,
crypto::generic_double_schnorr_sig &sig) {
if (buf == nullptr || len != 96) {
return false;
}
memcpy(sig.c.m_s, buf, 32);
memcpy(sig.y0.m_s, buf + 32, 32);
memcpy(sig.y1.m_s, buf + 64, 32);
return true;
}
} // anonymous namespace
extern "C" {
@ -639,13 +693,133 @@ int cn_bge_verify(const uint8_t context[32], const uint8_t *ring,
}
}
int cn_double_schnorr_generate(int a_is_x, const uint8_t hash[32],
const uint8_t secret_a[32],
const uint8_t secret_b[32],
uint8_t *proof, size_t proof_len) {
if (hash == nullptr || secret_a == nullptr || secret_b == nullptr || proof == nullptr) {
return 1;
}
if (proof_len != 96) {
return 1;
}
try {
crypto::hash m;
memcpy(&m, hash, 32);
crypto::scalar_t sa, sb;
memcpy(sa.m_s, secret_a, 32);
memcpy(sb.m_s, secret_b, 32);
crypto::generic_double_schnorr_sig sig;
bool ok;
if (a_is_x != 0) {
ok = crypto::generate_double_schnorr_sig<crypto::gt_X, crypto::gt_G>(
m, sa * crypto::c_point_X, sa, sb * crypto::c_point_G, sb, sig);
} else {
ok = crypto::generate_double_schnorr_sig<crypto::gt_G, crypto::gt_G>(
m, sa * crypto::c_point_G, sa, sb * crypto::c_point_G, sb, sig);
}
if (!ok) {
return 1;
}
memcpy(proof, sig.c.m_s, 32);
memcpy(proof + 32, sig.y0.m_s, 32);
memcpy(proof + 64, sig.y1.m_s, 32);
return 0;
} catch (...) {
return 1;
}
}
int cn_double_schnorr_verify(int a_is_x, const uint8_t hash[32],
const uint8_t a[32], const uint8_t b[32],
const uint8_t *proof, size_t proof_len) {
if (hash == nullptr || a == nullptr || b == nullptr || proof == nullptr) {
return 1;
}
try {
crypto::hash m;
memcpy(&m, hash, 32);
crypto::public_key b_pk;
memcpy(&b_pk, b, 32);
crypto::public_key a_pk;
memcpy(&a_pk, a, 32);
crypto::point_t a_pt(a_pk);
crypto::generic_double_schnorr_sig sig;
if (!deserialise_double_schnorr(proof, proof_len, sig)) {
return 1;
}
if (a_is_x != 0) {
return crypto::verify_double_schnorr_sig<crypto::gt_X, crypto::gt_G>(m, a_pt, b_pk, sig) ? 0 : 1;
}
return crypto::verify_double_schnorr_sig<crypto::gt_G, crypto::gt_G>(m, a_pt, b_pk, sig) ? 0 : 1;
} catch (...) {
return 1;
}
}
// ── Zarcanum PoS ────────────────────────────────────────
// Compatibility wrapper for the historical proof-only API.
int cn_zarcanum_verify(const uint8_t /*hash*/[32], const uint8_t * /*proof*/,
size_t /*proof_len*/) {
return -1;
}
int cn_zarcanum_verify_full(const uint8_t m[32], const uint8_t kernel_hash[32],
const uint8_t *ring, size_t ring_size,
const uint8_t last_pow_block_id_hashed[32],
const uint8_t stake_ki[32],
uint64_t pos_difficulty,
const uint8_t *proof, size_t proof_len) {
if (m == nullptr || kernel_hash == nullptr || ring == nullptr ||
last_pow_block_id_hashed == nullptr || stake_ki == nullptr ||
proof == nullptr || proof_len == 0 || ring_size == 0) {
return 1;
}
try {
crypto::hash msg;
crypto::hash kernel;
crypto::scalar_t last_pow;
crypto::key_image key_img;
memcpy(&msg, m, 32);
memcpy(&kernel, kernel_hash, 32);
memcpy(&last_pow, last_pow_block_id_hashed, 32);
memcpy(&key_img, stake_ki, 32);
std::vector<crypto::public_key> stealth_keys(ring_size);
std::vector<crypto::public_key> commitments(ring_size);
std::vector<crypto::public_key> asset_ids(ring_size);
std::vector<crypto::public_key> concealing_pts(ring_size);
std::vector<crypto::CLSAG_GGXXG_input_ref_t> ring_refs;
ring_refs.reserve(ring_size);
for (size_t i = 0; i < ring_size; ++i) {
memcpy(&stealth_keys[i], ring + i * 128, 32);
memcpy(&commitments[i], ring + i * 128 + 32, 32);
memcpy(&asset_ids[i], ring + i * 128 + 64, 32);
memcpy(&concealing_pts[i], ring + i * 128 + 96, 32);
ring_refs.emplace_back(stealth_keys[i], commitments[i], asset_ids[i], concealing_pts[i]);
}
crypto::zarcanum_proof sig;
if (!deserialise_zarcanum(proof, proof_len, sig)) {
return 1;
}
crypto::mp::uint128_t difficulty(pos_difficulty);
return crypto::zarcanum_verify_proof(msg, kernel, ring_refs, last_pow,
key_img, difficulty, sig) ? 0 : 1;
} catch (...) {
return 1;
}
}
// ── RandomX PoW Hashing ──────────────────────────────────


@ -125,12 +125,42 @@ int cn_bppe_verify(const uint8_t *proof, size_t proof_len,
int cn_bge_verify(const uint8_t context[32], const uint8_t *ring,
size_t ring_size, const uint8_t *proof, size_t proof_len);
// ── Generic Double Schnorr ────────────────────────────────
// Generates a generic_double_schnorr_sig from zarcanum.h.
// a_is_x selects the generator pair:
// 0 -> (G, G)
// 1 -> (X, G)
// proof must point to a 96-byte buffer.
int cn_double_schnorr_generate(int a_is_x, const uint8_t hash[32],
const uint8_t secret_a[32],
const uint8_t secret_b[32],
uint8_t *proof, size_t proof_len);
// Verifies a generic_double_schnorr_sig from zarcanum.h.
// a_is_x selects the generator pair:
// 0 -> (G, G)
// 1 -> (X, G)
// Returns 0 on success, 1 on verification failure or deserialisation error.
int cn_double_schnorr_verify(int a_is_x, const uint8_t hash[32],
const uint8_t a[32], const uint8_t b[32],
const uint8_t *proof, size_t proof_len);
// ── Zarcanum PoS ──────────────────────────────────────────
// Legacy compatibility wrapper for the historical proof-only API.
int cn_zarcanum_verify(const uint8_t hash[32], const uint8_t *proof,
size_t proof_len);
// Full Zarcanum verification entrypoint.
// ring is a flat array of 128-byte CLSAG_GGXXG ring members:
// [stealth(32) | amount_commitment(32) | blinded_asset_id(32) | concealing(32)]
// Returns 0 on success, 1 on verification failure or deserialisation error.
int cn_zarcanum_verify_full(const uint8_t m[32], const uint8_t kernel_hash[32],
const uint8_t *ring, size_t ring_size,
const uint8_t last_pow_block_id_hashed[32],
const uint8_t stake_ki[32],
uint64_t pos_difficulty,
const uint8_t *proof, size_t proof_len);
// ── RandomX PoW Hashing ──────────────────────────────────
// key/key_size: RandomX cache key (e.g. "LetheanRandomXv1")
// input/input_size: block header hash (32 bytes) + nonce (8 bytes LE)


@ -0,0 +1,67 @@
// Copyright (c) 2017-2026 Lethean (https://lt.hn)
//
// Licensed under the European Union Public Licence (EUPL) version 1.2.
// SPDX-License-Identifier: EUPL-1.2
#pragma once
#include <cstddef>
#include <cstdint>
namespace boost {
namespace multiprecision {
using limb_type = std::uint64_t;
enum cpp_integer_type {
signed_magnitude,
unsigned_magnitude,
};
enum cpp_int_check_type {
unchecked,
checked,
};
enum expression_template_option {
et_off,
et_on,
};
template <unsigned MinBits = 0, unsigned MaxBits = 0,
cpp_integer_type SignType = signed_magnitude,
cpp_int_check_type Checked = unchecked,
class Allocator = void>
class cpp_int_backend {};
template <class Backend, expression_template_option ExpressionTemplates = et_off>
class number {
public:
number() = default;
number(unsigned long long) {}
class backend_type {
public:
std::size_t size() const { return 0; }
static constexpr std::size_t limb_bits = sizeof(limb_type) * 8;
limb_type *limbs() { return nullptr; }
const limb_type *limbs() const { return nullptr; }
void resize(unsigned, unsigned) {}
void normalize() {}
};
backend_type &backend() { return backend_; }
const backend_type &backend() const { return backend_; }
private:
backend_type backend_{};
};
using uint128_t = number<cpp_int_backend<128, 128, unsigned_magnitude, unchecked, void>>;
using uint256_t = number<cpp_int_backend<256, 256, unsigned_magnitude, unchecked, void>>;
using uint512_t = number<cpp_int_backend<512, 512, unsigned_magnitude, unchecked, void>>;
} // namespace multiprecision
} // namespace boost


@ -578,10 +578,82 @@ func TestBGE_Bad_GarbageProof(t *testing.T) {
}
}
func TestZarcanumCompatibilityWrapper_Bad_EmptyProof(t *testing.T) {
hash := [32]byte{0x01}
if crypto.VerifyZarcanum(hash, []byte{0x00}) {
t.Fatal("compatibility wrapper should reject malformed proof data")
}
}
func TestZarcanumWithContext_Bad_MinimalProof(t *testing.T) {
var ctx crypto.ZarcanumVerificationContext
ctx.ContextHash = [32]byte{0x01}
ctx.KernelHash = [32]byte{0x02}
ctx.LastPowBlockIDHashed = [32]byte{0x03}
ctx.StakeKeyImage = [32]byte{0x04}
ctx.PosDifficulty = 1
ctx.Ring = []crypto.ZarcanumRingMember{{
StealthAddress: [32]byte{0x11},
AmountCommitment: [32]byte{0x22},
BlindedAssetID: [32]byte{0x33},
ConcealingPoint: [32]byte{0x44},
}}
// Minimal structurally valid proof blob:
// 10 scalars/points + empty BPPE + pseudo_out_amount_commitment +
// CLSAG_GGXXG with one ring entry and zeroed scalars.
proof := make([]byte, 0, 10*32+2+32+2+32+1+128)
proof = append(proof, make([]byte, 10*32)...)
proof = append(proof, 0x00) // BPPE L length
proof = append(proof, 0x00) // BPPE R length
proof = append(proof, make([]byte, 7*32)...)
proof = append(proof, make([]byte, 32)...)
proof = append(proof, 0x01) // CLSAG_GGXXG r_g length
proof = append(proof, make([]byte, 32)...)
proof = append(proof, 0x01) // CLSAG_GGXXG r_x length
proof = append(proof, make([]byte, 32)...)
proof = append(proof, make([]byte, 128)...)
ctx.Proof = proof
if crypto.VerifyZarcanumWithContext(ctx) {
t.Fatal("minimal Zarcanum proof should fail verification")
}
}
func TestDoubleSchnorr_Bad_EmptyProof(t *testing.T) {
var hash, a, b [32]byte
if crypto.VerifyDoubleSchnorr(hash, true, a, b, nil) {
t.Fatal("empty double-Schnorr proof should fail")
}
}
func TestDoubleSchnorr_Good_Roundtrip(t *testing.T) {
hash := crypto.FastHash([]byte("double-schnorr"))
_, secretA, err := crypto.GenerateKeys()
if err != nil {
t.Fatalf("GenerateKeys(secretA): %v", err)
}
pubA, err := crypto.SecretToPublic(secretA)
if err != nil {
t.Fatalf("SecretToPublic(secretA): %v", err)
}
_, secretB, err := crypto.GenerateKeys()
if err != nil {
t.Fatalf("GenerateKeys(secretB): %v", err)
}
pubB, err := crypto.SecretToPublic(secretB)
if err != nil {
t.Fatalf("SecretToPublic(secretB): %v", err)
}
proof, err := crypto.GenerateDoubleSchnorr(hash, false, secretA, secretB)
if err != nil {
t.Fatalf("GenerateDoubleSchnorr: %v", err)
}
if !crypto.VerifyDoubleSchnorr(hash, false, pubA, pubB, proof[:]) {
t.Fatal("generated double-Schnorr proof failed verification")
}
}


@ -7,7 +7,59 @@ package crypto
*/
import "C"
import (
"unsafe"
coreerr "dappco.re/go/core/log"
)
// ZarcanumRingMember is one flat ring entry for Zarcanum verification.
// All fields are stored premultiplied by 1/8, matching the on-chain form.
type ZarcanumRingMember struct {
StealthAddress [32]byte
AmountCommitment [32]byte
BlindedAssetID [32]byte
ConcealingPoint [32]byte
}
// ZarcanumVerificationContext groups the full context required by the
// upstream C++ verifier.
type ZarcanumVerificationContext struct {
ContextHash [32]byte
KernelHash [32]byte
Ring []ZarcanumRingMember
LastPowBlockIDHashed [32]byte
StakeKeyImage [32]byte
Proof []byte
PosDifficulty uint64
}
// GenerateDoubleSchnorr creates a generic_double_schnorr_sig from zarcanum.h.
// aIsX selects the generator pair:
//
// false -> (G, G)
// true -> (X, G)
func GenerateDoubleSchnorr(hash [32]byte, aIsX bool, secretA [32]byte, secretB [32]byte) ([96]byte, error) {
var proof [96]byte
var flag C.int
if aIsX {
flag = 1
}
rc := C.cn_double_schnorr_generate(
flag,
(*C.uint8_t)(unsafe.Pointer(&hash[0])),
(*C.uint8_t)(unsafe.Pointer(&secretA[0])),
(*C.uint8_t)(unsafe.Pointer(&secretB[0])),
(*C.uint8_t)(unsafe.Pointer(&proof[0])),
C.size_t(len(proof)),
)
if rc != 0 {
return proof, coreerr.E("GenerateDoubleSchnorr", "double_schnorr_generate failed", nil)
}
return proof, nil
}
// VerifyBPP verifies a Bulletproofs++ range proof (1 delta).
// Used for zc_outs_range_proof in post-HF4 transactions.
@ -74,9 +126,36 @@ func VerifyBGE(context [32]byte, ring [][32]byte, proof []byte) bool {
) == 0
}
// VerifyDoubleSchnorr verifies a generic_double_schnorr_sig from zarcanum.h.
// aIsX selects the generator pair:
//
// false -> (G, G)
// true -> (X, G)
//
// The proof blob is the 96-byte wire encoding: c(32) + y0(32) + y1(32).
func VerifyDoubleSchnorr(hash [32]byte, aIsX bool, a [32]byte, b [32]byte, proof []byte) bool {
if len(proof) != 96 {
return false
}
var flag C.int
if aIsX {
flag = 1
}
return C.cn_double_schnorr_verify(
flag,
(*C.uint8_t)(unsafe.Pointer(&hash[0])),
(*C.uint8_t)(unsafe.Pointer(&a[0])),
(*C.uint8_t)(unsafe.Pointer(&b[0])),
(*C.uint8_t)(unsafe.Pointer(&proof[0])),
C.size_t(len(proof)),
) == 0
}
// VerifyZarcanum verifies a Zarcanum PoS proof.
// This compatibility wrapper remains for the historical proof blob API.
// Use VerifyZarcanumWithContext for full verification.
func VerifyZarcanum(hash [32]byte, proof []byte) bool {
if len(proof) == 0 {
return false
@ -87,3 +166,43 @@ func VerifyZarcanum(hash [32]byte, proof []byte) bool {
C.size_t(len(proof)),
) == 0
}
// VerifyZarcanumWithContext verifies a Zarcanum PoS proof with the full
// consensus context required by the upstream verifier.
//
// Example:
//
// crypto.VerifyZarcanumWithContext(crypto.ZarcanumVerificationContext{
// ContextHash: txHash,
// KernelHash: kernelHash,
// Ring: ring,
// LastPowBlockIDHashed: lastPowHash,
// StakeKeyImage: stakeKeyImage,
// PosDifficulty: posDifficulty,
// Proof: proofBlob,
// })
func VerifyZarcanumWithContext(ctx ZarcanumVerificationContext) bool {
if len(ctx.Ring) == 0 || len(ctx.Proof) == 0 {
return false
}
flat := make([]byte, len(ctx.Ring)*128)
for i, member := range ctx.Ring {
copy(flat[i*128:], member.StealthAddress[:])
copy(flat[i*128+32:], member.AmountCommitment[:])
copy(flat[i*128+64:], member.BlindedAssetID[:])
copy(flat[i*128+96:], member.ConcealingPoint[:])
}
return C.cn_zarcanum_verify_full(
(*C.uint8_t)(unsafe.Pointer(&ctx.ContextHash[0])),
(*C.uint8_t)(unsafe.Pointer(&ctx.KernelHash[0])),
(*C.uint8_t)(unsafe.Pointer(&flat[0])),
C.size_t(len(ctx.Ring)),
(*C.uint8_t)(unsafe.Pointer(&ctx.LastPowBlockIDHashed[0])),
(*C.uint8_t)(unsafe.Pointer(&ctx.StakeKeyImage[0])),
C.uint64_t(ctx.PosDifficulty),
(*C.uint8_t)(unsafe.Pointer(&ctx.Proof[0])),
C.size_t(len(ctx.Proof)),
) == 0
}


@ -16,6 +16,9 @@
//
#pragma once
#include <string>
#include <sstream>
#include <iomanip>
#include <stdexcept>
#include <boost/multiprecision/cpp_int.hpp>
#include "crypto.h"
#include "eth_signature.h"


@ -61,6 +61,8 @@ var StarterDifficulty = big.NewInt(1)
//
// where each solve-time interval i is weighted by its position (1..n),
// giving more influence to recent blocks.
//
// nextDiff := difficulty.NextDifficulty(timestamps, cumulativeDiffs, 120)
func NextDifficulty(timestamps []uint64, cumulativeDiffs []*big.Int, target uint64) *big.Int {
// Need at least 2 entries to compute one solve-time interval.
if len(timestamps) < 2 || len(cumulativeDiffs) < 2 {


@ -12,7 +12,7 @@ import (
"path/filepath"
"sync"
corelog "dappco.re/go/core/log"
cli "dappco.re/go/core/cli/pkg/cli"
store "dappco.re/go/core/store"
@ -29,31 +29,35 @@ import (
// chain explorer --data-dir ~/.lethean/chain // chain explorer --data-dir ~/.lethean/chain
// //
// Use it alongside `AddChainCommands` to expose the TUI node view. // Use it alongside `AddChainCommands` to expose the TUI node view.
func newChainExplorerCommand(dataDir, seed *string, testnet *bool) *cobra.Command { func newChainExplorerCommand(chainDataDir, seedPeerAddress *string, useTestnet *bool) *cobra.Command {
return &cobra.Command{ return &cobra.Command{
Use: "explorer", Use: "explorer",
Short: "TUI block explorer", Short: "TUI block explorer",
Long: "Interactive terminal block explorer with live sync status.", Long: "Interactive terminal block explorer with live sync status.",
Args: cobra.NoArgs,
PreRunE: func(cmd *cobra.Command, args []string) error {
return validateChainOptions(*chainDataDir, *seedPeerAddress)
},
RunE: func(cmd *cobra.Command, args []string) error { RunE: func(cmd *cobra.Command, args []string) error {
return runChainExplorer(*dataDir, *seed, *testnet) return runChainExplorer(*chainDataDir, *seedPeerAddress, *useTestnet)
}, },
} }
} }
func runChainExplorer(dataDir, seed string, testnet bool) error { func runChainExplorer(chainDataDir, seedPeerAddress string, useTestnet bool) error {
if err := ensureChainDataDirExists(dataDir); err != nil { if err := ensureChainDataDirExists(chainDataDir); err != nil {
return err return err
} }
dbPath := filepath.Join(dataDir, "chain.db") dbPath := filepath.Join(chainDataDir, "chain.db")
chainStore, err := store.New(dbPath) chainStore, err := store.New(dbPath)
if err != nil { if err != nil {
return coreerr.E("runChainExplorer", "open store", err) return corelog.E("runChainExplorer", "open store", err)
} }
defer chainStore.Close() defer chainStore.Close()
blockchain := chain.New(chainStore) blockchain := chain.New(chainStore)
chainConfig, hardForks, resolvedSeed := chainConfigForSeed(testnet, seed) chainConfig, hardForks, resolvedSeed := chainConfigForSeed(useTestnet, seedPeerAddress)
ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt) ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt)
defer cancel() defer cancel()
@ -74,6 +78,7 @@ func runChainExplorer(dataDir, seed string, testnet bool) error {
frame.Header(status) frame.Header(status)
frame.Content(explorer) frame.Content(explorer)
frame.Footer(hints) frame.Footer(hints)
corelog.Info("running chain explorer", "data_dir", chainDataDir, "seed", resolvedSeed, "testnet", useTestnet)
frame.Run() frame.Run()
cancel() // Signal the sync loop to stop. cancel() // Signal the sync loop to stop.


@ -7,6 +7,7 @@ package p2p

import (
	"encoding/binary"
+	"fmt"

	"dappco.re/go/core/p2p/node/levin"
)
@ -173,3 +174,29 @@ func (r *HandshakeResponse) Decode(data []byte) error {
	}
	return nil
}
// ValidateHandshakeResponse verifies that a remote peer's handshake response
// matches the expected network and satisfies the minimum build version gate.
//
// Example:
//
// err := ValidateHandshakeResponse(&resp, config.NetworkIDMainnet, false)
func ValidateHandshakeResponse(resp *HandshakeResponse, expectedNetworkID [16]byte, isTestnet bool) error {
if resp.NodeData.NetworkID != expectedNetworkID {
return fmt.Errorf("p2p: peer network id %x does not match expected %x",
resp.NodeData.NetworkID, expectedNetworkID)
}
buildVersion, ok := PeerBuildVersion(resp.PayloadData.ClientVersion)
if !ok {
return fmt.Errorf("p2p: peer build %q is malformed", resp.PayloadData.ClientVersion)
}
if !MeetsMinimumBuildVersion(resp.PayloadData.ClientVersion, isTestnet) {
minBuild := MinimumRequiredBuildVersion(isTestnet)
return fmt.Errorf("p2p: peer build %q parsed as %d below minimum %d",
resp.PayloadData.ClientVersion, buildVersion, minBuild)
}
return nil
}


@ -7,6 +7,7 @@ package p2p

import (
	"encoding/binary"
+	"strings"
	"testing"

	"dappco.re/go/core/blockchain/config"
@ -154,3 +155,75 @@ func TestDecodePeerlist_Good_EmptyBlob(t *testing.T) {
		t.Errorf("empty peerlist: got %d entries, want 0", len(entries))
	}
}
func TestValidateHandshakeResponse_Good(t *testing.T) {
resp := &HandshakeResponse{
NodeData: NodeData{
NetworkID: config.NetworkIDTestnet,
},
PayloadData: CoreSyncData{
ClientVersion: "6.0.1.2[go-blockchain]",
},
}
if err := ValidateHandshakeResponse(resp, config.NetworkIDTestnet, true); err != nil {
t.Fatalf("ValidateHandshakeResponse: %v", err)
}
}
func TestValidateHandshakeResponse_BadNetwork(t *testing.T) {
resp := &HandshakeResponse{
NodeData: NodeData{
NetworkID: config.NetworkIDMainnet,
},
PayloadData: CoreSyncData{
ClientVersion: "6.0.1.2[go-blockchain]",
},
}
err := ValidateHandshakeResponse(resp, config.NetworkIDTestnet, true)
if err == nil {
t.Fatal("ValidateHandshakeResponse: expected network mismatch error")
}
if !strings.Contains(err.Error(), "network id") {
t.Fatalf("ValidateHandshakeResponse error: got %v, want network id mismatch", err)
}
}
func TestValidateHandshakeResponse_BadBuildVersion(t *testing.T) {
resp := &HandshakeResponse{
NodeData: NodeData{
NetworkID: config.NetworkIDMainnet,
},
PayloadData: CoreSyncData{
ClientVersion: "0.0.1.0",
},
}
err := ValidateHandshakeResponse(resp, config.NetworkIDMainnet, false)
if err == nil {
t.Fatal("ValidateHandshakeResponse: expected build version error")
}
if !strings.Contains(err.Error(), "below minimum") {
t.Fatalf("ValidateHandshakeResponse error: got %v, want build minimum failure", err)
}
}
func TestValidateHandshakeResponse_BadMalformedBuildVersion(t *testing.T) {
resp := &HandshakeResponse{
NodeData: NodeData{
NetworkID: config.NetworkIDMainnet,
},
PayloadData: CoreSyncData{
ClientVersion: "bogus",
},
}
err := ValidateHandshakeResponse(resp, config.NetworkIDMainnet, false)
if err == nil {
t.Fatal("ValidateHandshakeResponse: expected malformed build version error")
}
if !strings.Contains(err.Error(), "malformed") {
t.Fatalf("ValidateHandshakeResponse error: got %v, want malformed build version failure", err)
}
}

p2p/version.go (new file, 86 lines)

@ -0,0 +1,86 @@
// Copyright (c) 2017-2026 Lethean (https://lt.hn)
//
// Licensed under the European Union Public Licence (EUPL) version 1.2.
// SPDX-License-Identifier: EUPL-1.2
package p2p
import (
"strconv"
"strings"
)
const (
// MinimumRequiredBuildVersionMainnet matches the C++ daemon's mainnet gate.
MinimumRequiredBuildVersionMainnet uint64 = 601
// MinimumRequiredBuildVersionTestnet matches the C++ daemon's testnet gate.
MinimumRequiredBuildVersionTestnet uint64 = 2
)
// MinimumRequiredBuildVersion returns the minimum accepted peer version gate
// for the given network.
//
// Example:
//
// MinimumRequiredBuildVersion(false) // 601 on mainnet
func MinimumRequiredBuildVersion(isTestnet bool) uint64 {
if isTestnet {
return MinimumRequiredBuildVersionTestnet
}
return MinimumRequiredBuildVersionMainnet
}
// PeerBuildVersion extracts the numeric major.minor.revision component from a
// daemon client version string.
//
// The daemon formats its version as "major.minor.revision.build[extra]".
// The minimum build gate compares the first three components, so
// "6.0.1.2[go-blockchain]" becomes 601.
func PeerBuildVersion(clientVersion string) (uint64, bool) {
parts := strings.SplitN(clientVersion, ".", 4)
if len(parts) < 3 {
return 0, false
}
major, err := strconv.ParseUint(parts[0], 10, 64)
if err != nil {
return 0, false
}
minor, err := strconv.ParseUint(parts[1], 10, 64)
if err != nil {
return 0, false
}
revPart := parts[2]
for i := 0; i < len(revPart); i++ {
if revPart[i] < '0' || revPart[i] > '9' {
revPart = revPart[:i]
break
}
}
if revPart == "" {
return 0, false
}
revision, err := strconv.ParseUint(revPart, 10, 64)
if err != nil {
return 0, false
}
return major*100 + minor*10 + revision, true
}
// MeetsMinimumBuildVersion reports whether the peer's version is acceptable
// for the given network.
//
// Example:
//
// MeetsMinimumBuildVersion("6.0.1.2[go-blockchain]", false) // true
func MeetsMinimumBuildVersion(clientVersion string, isTestnet bool) bool {
buildVersion, ok := PeerBuildVersion(clientVersion)
if !ok {
return false
}
return buildVersion >= MinimumRequiredBuildVersion(isTestnet)
}
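Taken together, the helpers above reduce the gate to one packed-integer comparison. The sketch below restates the encoding rule the file documents (`major*100 + minor*10 + revision`, so "6.0.1" becomes 601); `encodeBuild` is a hypothetical name used only for illustration.

```go
package main

import "fmt"

// encodeBuild packs major.minor.revision the same way PeerBuildVersion does:
// major*100 + minor*10 + revision, so 6.0.1 -> 601.
func encodeBuild(major, minor, revision uint64) uint64 {
	return major*100 + minor*10 + revision
}

func main() {
	minMainnet := uint64(601) // mirrors MinimumRequiredBuildVersionMainnet
	peer := encodeBuild(6, 0, 1)
	fmt.Println(peer, peer >= minMainnet) // 601 true
}
```

Note the packing is positional rather than lexicographic, so a hypothetical revision of 10 or more would spill into the minor digit; the gate only needs to order the small version numbers the daemon actually ships.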

p2p/version_test.go (new file, 71 lines)

@ -0,0 +1,71 @@
// Copyright (c) 2017-2026 Lethean (https://lt.hn)
//
// Licensed under the European Union Public Licence (EUPL) version 1.2.
// SPDX-License-Identifier: EUPL-1.2
package p2p
import "testing"
func TestPeerBuildVersion_Good(t *testing.T) {
tests := []struct {
name string
input string
want uint64
wantOK bool
}{
{"release", "6.0.1.2[go-blockchain]", 601, true},
{"two_digits", "12.3.4.5", 1234, true},
{"suffix", "6.0.1-beta.2", 601, true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, ok := PeerBuildVersion(tt.input)
if ok != tt.wantOK {
t.Fatalf("PeerBuildVersion(%q) ok = %v, want %v", tt.input, ok, tt.wantOK)
}
if got != tt.want {
t.Fatalf("PeerBuildVersion(%q) = %d, want %d", tt.input, got, tt.want)
}
})
}
}
func TestPeerBuildVersion_Bad(t *testing.T) {
tests := []string{
"",
"6",
"6.0",
"abc.def.ghi",
}
for _, input := range tests {
t.Run(input, func(t *testing.T) {
if got, ok := PeerBuildVersion(input); ok || got != 0 {
t.Fatalf("PeerBuildVersion(%q) = (%d, %v), want (0, false)", input, got, ok)
}
})
}
}
func TestMeetsMinimumBuildVersion_Good(t *testing.T) {
if !MeetsMinimumBuildVersion("6.0.1.2[go-blockchain]", false) {
t.Fatal("expected mainnet build version to satisfy minimum")
}
if !MeetsMinimumBuildVersion("6.0.1.2[go-blockchain]", true) {
t.Fatal("expected testnet build version to satisfy minimum")
}
}
func TestMeetsMinimumBuildVersion_Bad(t *testing.T) {
if MeetsMinimumBuildVersion("0.0.1.0", false) {
t.Fatal("expected low mainnet build version to fail")
}
if MeetsMinimumBuildVersion("0.0.1.0", true) {
t.Fatal("expected low testnet build version to fail")
}
if MeetsMinimumBuildVersion("bogus", false) {
t.Fatal("expected malformed version to fail")
}
}


@ -8,14 +8,13 @@ package blockchain

import (
	"context"
	"fmt"
-	"log"
	"os"
	"os/signal"
	"path/filepath"
	"sync"
	"syscall"

-	coreerr "dappco.re/go/core/log"
+	corelog "dappco.re/go/core/log"
	"dappco.re/go/core/blockchain/chain"
	"dappco.re/go/core/process"
@ -32,7 +31,7 @@ import (
// chain sync --stop
//
// It keeps the foreground and daemon modes behind a predictable command path.
-func newChainSyncCommand(dataDir, seed *string, testnet *bool) *cobra.Command {
+func newChainSyncCommand(chainDataDir, seedPeerAddress *string, useTestnet *bool) *cobra.Command {
	var (
		daemon bool
		stop   bool
@ -42,14 +41,21 @@ func newChainSyncCommand(dataDir, seed *string, testnet *bool) *cobra.Command {
		Use:   "sync",
		Short: "Headless P2P chain sync",
		Long:  "Sync the blockchain from P2P peers without the TUI explorer.",
+		Args:  cobra.NoArgs,
+		PreRunE: func(cmd *cobra.Command, args []string) error {
+			if daemon && stop {
+				return corelog.E("newChainSyncCommand", "flags --daemon and --stop cannot be combined", nil)
+			}
+			return validateChainOptions(*chainDataDir, *seedPeerAddress)
+		},
		RunE: func(cmd *cobra.Command, args []string) error {
			if stop {
-				return stopChainSyncDaemon(*dataDir)
+				return stopChainSyncDaemon(*chainDataDir)
			}
			if daemon {
-				return runChainSyncDaemon(*dataDir, *seed, *testnet)
+				return runChainSyncDaemon(*chainDataDir, *seedPeerAddress, *useTestnet)
			}
-			return runChainSyncForeground(*dataDir, *seed, *testnet)
+			return runChainSyncForeground(*chainDataDir, *seedPeerAddress, *useTestnet)
		},
	}
@ -59,36 +65,36 @@ func newChainSyncCommand(dataDir, seed *string, testnet *bool) *cobra.Command {
	return cmd
}

-func runChainSyncForeground(dataDir, seed string, testnet bool) error {
-	if err := ensureChainDataDirExists(dataDir); err != nil {
+func runChainSyncForeground(chainDataDir, seedPeerAddress string, useTestnet bool) error {
+	if err := ensureChainDataDirExists(chainDataDir); err != nil {
		return err
	}
-	dbPath := filepath.Join(dataDir, "chain.db")
+	dbPath := filepath.Join(chainDataDir, "chain.db")
	chainStore, err := store.New(dbPath)
	if err != nil {
-		return coreerr.E("runChainSyncForeground", "open store", err)
+		return corelog.E("runChainSyncForeground", "open store", err)
	}
	defer chainStore.Close()

	blockchain := chain.New(chainStore)
-	chainConfig, hardForks, resolvedSeed := chainConfigForSeed(testnet, seed)
+	chainConfig, hardForks, resolvedSeed := chainConfigForSeed(useTestnet, seedPeerAddress)

	ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer cancel()

-	log.Println("Starting headless P2P sync...")
+	corelog.Info("starting headless P2P sync", "data_dir", chainDataDir, "seed", resolvedSeed, "testnet", useTestnet)
	runChainSyncLoop(ctx, blockchain, &chainConfig, hardForks, resolvedSeed)
-	log.Println("Sync stopped.")
+	corelog.Info("headless P2P sync stopped", "data_dir", chainDataDir)
	return nil
}

-func runChainSyncDaemon(dataDir, seed string, testnet bool) error {
-	if err := ensureChainDataDirExists(dataDir); err != nil {
+func runChainSyncDaemon(chainDataDir, seedPeerAddress string, useTestnet bool) error {
+	if err := ensureChainDataDirExists(chainDataDir); err != nil {
		return err
	}
-	pidFile := filepath.Join(dataDir, "sync.pid")
+	pidFile := filepath.Join(chainDataDir, "sync.pid")
	daemon := process.NewDaemon(process.DaemonOptions{
		PIDFile: pidFile,
@ -100,25 +106,25 @@ func runChainSyncDaemon(dataDir, seed string, testnet bool) error {
	})
	if err := daemon.Start(); err != nil {
-		return coreerr.E("runChainSyncDaemon", "daemon start", err)
+		return corelog.E("runChainSyncDaemon", "daemon start", err)
	}
-	dbPath := filepath.Join(dataDir, "chain.db")
+	dbPath := filepath.Join(chainDataDir, "chain.db")
	chainStore, err := store.New(dbPath)
	if err != nil {
		_ = daemon.Stop()
-		return coreerr.E("runChainSyncDaemon", "open store", err)
+		return corelog.E("runChainSyncDaemon", "open store", err)
	}
	defer chainStore.Close()

	blockchain := chain.New(chainStore)
-	chainConfig, hardForks, resolvedSeed := chainConfigForSeed(testnet, seed)
+	chainConfig, hardForks, resolvedSeed := chainConfigForSeed(useTestnet, seedPeerAddress)

	ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer cancel()

	daemon.SetReady(true)
-	log.Println("Sync daemon started.")
+	corelog.Info("sync daemon started", "data_dir", chainDataDir, "seed", resolvedSeed, "testnet", useTestnet)

	var wg sync.WaitGroup
	wg.Add(1)
@ -132,22 +138,22 @@ func runChainSyncDaemon(dataDir, seed string, testnet bool) error {
	return err
}

-func stopChainSyncDaemon(dataDir string) error {
-	pidFile := filepath.Join(dataDir, "sync.pid")
+func stopChainSyncDaemon(chainDataDir string) error {
+	pidFile := filepath.Join(chainDataDir, "sync.pid")
	pid, running := process.ReadPID(pidFile)
	if pid == 0 || !running {
-		return coreerr.E("stopChainSyncDaemon", "no running sync daemon found", nil)
+		return corelog.E("stopChainSyncDaemon", "no running sync daemon found", nil)
	}
	processHandle, err := os.FindProcess(pid)
	if err != nil {
-		return coreerr.E("stopChainSyncDaemon", fmt.Sprintf("find process %d", pid), err)
+		return corelog.E("stopChainSyncDaemon", fmt.Sprintf("find process %d", pid), err)
	}
	if err := processHandle.Signal(syscall.SIGTERM); err != nil {
-		return coreerr.E("stopChainSyncDaemon", fmt.Sprintf("signal process %d", pid), err)
+		return corelog.E("stopChainSyncDaemon", fmt.Sprintf("signal process %d", pid), err)
	}
-	log.Printf("Sent SIGTERM to sync daemon (PID %d)", pid)
+	corelog.Info("sent SIGTERM to sync daemon", "pid", pid)
	return nil
}


@ -10,11 +10,10 @@ import (
	"crypto/rand"
	"encoding/binary"
	"fmt"
-	"log"
	"net"
	"time"

-	coreerr "dappco.re/go/core/log"
+	corelog "dappco.re/go/core/log"
	"dappco.re/go/core/blockchain/chain"
	"dappco.re/go/core/blockchain/config"
@ -22,7 +21,7 @@ import (
	levin "dappco.re/go/core/p2p/node/levin"
)

-func runChainSyncLoop(ctx context.Context, blockchain *chain.Chain, chainConfig *config.ChainConfig, hardForks []config.HardFork, seed string) {
+func runChainSyncLoop(ctx context.Context, blockchain *chain.Chain, chainConfig *config.ChainConfig, hardForks []config.HardFork, seedPeerAddress string) {
	opts := chain.SyncOptions{
		VerifySignatures: false,
		Forks:            hardForks,
@ -35,8 +34,8 @@ func runChainSyncLoop(ctx context.Context, blockchain *chain.Chain, chainConfig
		default:
		}

-		if err := runChainSyncOnce(ctx, blockchain, chainConfig, opts, seed); err != nil {
-			log.Printf("sync: %v (retrying in 10s)", err)
+		if err := runChainSyncOnce(ctx, blockchain, chainConfig, opts, seedPeerAddress); err != nil {
+			corelog.Warn("sync failed, retrying in 10s", "error", err, "seed", seedPeerAddress)
			select {
			case <-ctx.Done():
				return
@ -53,22 +52,27 @@ func runChainSyncLoop(ctx context.Context, blockchain *chain.Chain, chainConfig
	}
}

-func runChainSyncOnce(ctx context.Context, blockchain *chain.Chain, chainConfig *config.ChainConfig, opts chain.SyncOptions, seed string) error {
-	conn, err := net.DialTimeout("tcp", seed, 10*time.Second)
+func runChainSyncOnce(ctx context.Context, blockchain *chain.Chain, chainConfig *config.ChainConfig, opts chain.SyncOptions, seedPeerAddress string) error {
+	conn, err := net.DialTimeout("tcp", seedPeerAddress, 10*time.Second)
	if err != nil {
-		return coreerr.E("runChainSyncOnce", fmt.Sprintf("dial %s", seed), err)
+		return corelog.E("runChainSyncOnce", fmt.Sprintf("dial %s", seedPeerAddress), err)
	}
	defer conn.Close()

-	levinConn := levin.NewConnection(conn)
+	p2pConn := levin.NewConnection(conn)

	var peerIDBytes [8]byte
-	rand.Read(peerIDBytes[:])
+	if _, err := rand.Read(peerIDBytes[:]); err != nil {
+		return corelog.E("runChainSyncOnce", "generate peer id", err)
+	}
	peerID := binary.LittleEndian.Uint64(peerIDBytes[:])

-	localHeight, _ := blockchain.Height()
+	localHeight, err := blockchain.Height()
+	if err != nil {
+		return corelog.E("runChainSyncOnce", "get local height", err)
+	}

-	handshakeReq := p2p.HandshakeRequest{
+	handshakeRequest := p2p.HandshakeRequest{
		NodeData: p2p.NodeData{
			NetworkID: chainConfig.NetworkID,
			PeerID:    peerID,
@ -81,33 +85,37 @@ func runChainSyncOnce(ctx context.Context, blockchain *chain.Chain, chainConfig
			NonPruningMode: true,
		},
	}

-	payload, err := p2p.EncodeHandshakeRequest(&handshakeReq)
+	payload, err := p2p.EncodeHandshakeRequest(&handshakeRequest)
	if err != nil {
-		return coreerr.E("runChainSyncOnce", "encode handshake", err)
+		return corelog.E("runChainSyncOnce", "encode handshake", err)
	}
-	if err := levinConn.WritePacket(p2p.CommandHandshake, payload, true); err != nil {
-		return coreerr.E("runChainSyncOnce", "write handshake", err)
+	if err := p2pConn.WritePacket(p2p.CommandHandshake, payload, true); err != nil {
+		return corelog.E("runChainSyncOnce", "write handshake", err)
	}

-	hdr, data, err := levinConn.ReadPacket()
+	packetHeader, packetData, err := p2pConn.ReadPacket()
	if err != nil {
-		return coreerr.E("runChainSyncOnce", "read handshake", err)
+		return corelog.E("runChainSyncOnce", "read handshake", err)
	}
-	if hdr.Command != uint32(p2p.CommandHandshake) {
-		return coreerr.E("runChainSyncOnce", fmt.Sprintf("unexpected command %d", hdr.Command), nil)
+	if packetHeader.Command != uint32(p2p.CommandHandshake) {
+		return corelog.E("runChainSyncOnce", fmt.Sprintf("unexpected command %d", packetHeader.Command), nil)
	}

-	var handshakeResp p2p.HandshakeResponse
-	if err := handshakeResp.Decode(data); err != nil {
-		return coreerr.E("runChainSyncOnce", "decode handshake", err)
+	var handshakeResponse p2p.HandshakeResponse
+	if err := handshakeResponse.Decode(packetData); err != nil {
+		return corelog.E("runChainSyncOnce", "decode handshake", err)
	}
+	if err := p2p.ValidateHandshakeResponse(&handshakeResponse, chainConfig.NetworkID, chainConfig.IsTestnet); err != nil {
+		return corelog.E("runChainSyncOnce", "validate handshake", err)
+	}

-	localSync := p2p.CoreSyncData{
+	localSyncData := p2p.CoreSyncData{
		CurrentHeight:  localHeight,
		ClientVersion:  config.ClientVersion,
		NonPruningMode: true,
	}
-	p2pConn := chain.NewLevinP2PConn(levinConn, handshakeResp.PayloadData.CurrentHeight, localSync)
-	return blockchain.P2PSync(ctx, p2pConn, opts)
+	p2pConnection := chain.NewLevinP2PConn(p2pConn, handshakeResponse.PayloadData.CurrentHeight, localSyncData)
+	return blockchain.P2PSync(ctx, p2pConnection, opts)
}


@ -309,26 +309,19 @@ func (m *ExplorerModel) viewTxDetail() string {
	if len(tx.Vin) > 0 {
		b.WriteString(" Inputs:\n")
		for i, in := range tx.Vin {
-			switch v := in.(type) {
-			case types.TxInputGenesis:
-				b.WriteString(fmt.Sprintf(" [%d] coinbase height=%d\n", i, v.Height))
-			case types.TxInputToKey:
-				b.WriteString(fmt.Sprintf(" [%d] to_key amount=%d key_image=%x\n", i, v.Amount, v.KeyImage[:4]))
-			default:
-				b.WriteString(fmt.Sprintf(" [%d] %T\n", i, v))
-			}
+			b.WriteString(fmt.Sprintf(" [%d] %s\n", i, describeTxInput(in)))
		}
	}
	if len(tx.Vout) > 0 {
		b.WriteString("\n Outputs:\n")
-		for i, out := range tx.Vout {
-			switch v := out.(type) {
+		for i, output := range tx.Vout {
+			switch v := output.(type) {
			case types.TxOutputBare:
-				if toKey, ok := v.Target.(types.TxOutToKey); ok {
-					b.WriteString(fmt.Sprintf(" [%d] bare amount=%d key=%x\n", i, v.Amount, toKey.Key[:4]))
+				if targetKey, ok := v.SpendKey(); ok {
+					b.WriteString(fmt.Sprintf(" [%d] bare amount=%d key=%x\n", i, v.Amount, targetKey[:4]))
				} else {
-					b.WriteString(fmt.Sprintf(" [%d] bare amount=%d target=%T\n", i, v.Amount, v.Target))
+					b.WriteString(fmt.Sprintf(" [%d] bare amount=%d %s\n", i, v.Amount, describeTxOutTarget(v.Target)))
				}
			case types.TxOutputZarcanum:
				b.WriteString(fmt.Sprintf(" [%d] zarcanum stealth=%x\n", i, v.StealthAddress[:4]))
@ -341,6 +334,41 @@ func (m *ExplorerModel) viewTxDetail() string {

	return b.String()
}
// describeTxOutTarget renders a human-readable summary for non-to-key outputs.
func describeTxOutTarget(target types.TxOutTarget) string {
switch t := target.(type) {
case types.TxOutMultisig:
return fmt.Sprintf("multisig minimum_sigs=%d keys=%d", t.MinimumSigs, len(t.Keys))
case types.TxOutHTLC:
return fmt.Sprintf("htlc expiration=%d flags=%d redeem=%x refund=%x", t.Expiration, t.Flags, t.PKRedeem[:4], t.PKRefund[:4])
case types.TxOutToKey:
return fmt.Sprintf("to_key key=%x mix_attr=%d", t.Key[:4], t.MixAttr)
case nil:
return "target=<nil>"
default:
return fmt.Sprintf("target=%T", t)
}
}
// describeTxInput renders a human-readable summary for transaction inputs in
// the explorer tx detail view.
func describeTxInput(input types.TxInput) string {
switch v := input.(type) {
case types.TxInputGenesis:
return fmt.Sprintf("coinbase height=%d", v.Height)
case types.TxInputToKey:
return fmt.Sprintf("to_key amount=%d key_image=%x", v.Amount, v.KeyImage[:4])
case types.TxInputHTLC:
return fmt.Sprintf("htlc origin=%q amount=%d key_image=%x", v.HTLCOrigin, v.Amount, v.KeyImage[:4])
case types.TxInputMultisig:
return fmt.Sprintf("multisig amount=%d sigs=%d out=%x", v.Amount, v.SigsCount, v.MultisigOutID[:4])
case types.TxInputZC:
return fmt.Sprintf("zc inputs=%d key_image=%x", len(v.KeyOffsets), v.KeyImage[:4])
default:
return fmt.Sprintf("%T", v)
}
}
// loadBlocks refreshes the block list from the chain store.
// Blocks are listed from newest (top) to oldest.
func (m *ExplorerModel) loadBlocks() {


@ -10,6 +10,8 @@ import (
	"testing"

	tea "github.com/charmbracelet/bubbletea"
+
+	"dappco.re/go/core/blockchain/types"
)

func TestExplorerModel_View_Good_BlockList(t *testing.T) {
@ -174,3 +176,52 @@ func TestExplorerModel_ViewBlockDetail_Good_CoinbaseOnly(t *testing.T) {
		t.Errorf("block detail should contain 'coinbase only' for blocks with no TxHashes, got:\n%s", out)
	}
}
func TestDescribeTxInput_Good(t *testing.T) {
tests := []struct {
name string
input types.TxInput
want string
}{
{
name: "genesis",
input: types.TxInputGenesis{Height: 12},
want: "coinbase height=12",
},
{
name: "to_key",
input: types.TxInputToKey{
Amount: 42,
KeyImage: types.KeyImage{0xaa, 0xbb, 0xcc, 0xdd},
},
want: "to_key amount=42 key_image=aabbccdd",
},
{
name: "htlc",
input: types.TxInputHTLC{
HTLCOrigin: "origin-hash",
Amount: 7,
KeyImage: types.KeyImage{0x10, 0x20, 0x30, 0x40},
},
want: `htlc origin="origin-hash" amount=7 key_image=10203040`,
},
{
name: "multisig",
input: types.TxInputMultisig{
Amount: 99,
SigsCount: 3,
MultisigOutID: types.Hash{0x01, 0x02, 0x03, 0x04},
},
want: "multisig amount=99 sigs=3 out=01020304",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := describeTxInput(tt.input)
if got != tt.want {
t.Fatalf("describeTxInput() = %q, want %q", got, tt.want)
}
})
}
}


@ -31,6 +31,9 @@ type Address struct {
	SpendPublicKey PublicKey
	ViewPublicKey  PublicKey
	Flags          uint8
+	// Prefix records the original base58 prefix when an address was decoded.
+	// It is optional for manually constructed addresses.
+	Prefix uint64 `json:"-"`
}

// IsAuditable reports whether the address has the auditable flag set.
@ -41,11 +44,7 @@ func (a *Address) IsAuditable() bool {
// IsIntegrated reports whether the address was decoded with an integrated
// prefix (standard integrated or auditable integrated).
func (a *Address) IsIntegrated() bool {
-	// This method checks whether the address was decoded with an integrated
-	// prefix. Since we do not store the prefix in the Address struct, callers
-	// should use the prefix returned by DecodeAddress to determine this.
-	// This helper exists for convenience when the prefix is not available.
-	return false
+	return IsIntegratedPrefix(a.Prefix)
}

// IsIntegratedPrefix reports whether the given prefix corresponds to an
@ -79,6 +78,8 @@ func (a *Address) Encode(prefix uint64) string {
// DecodeAddress parses a CryptoNote base58-encoded address string. It returns
// the decoded address, the prefix that was used, and any error.
+//
+//	addr, prefix, err := types.DecodeAddress("iTHN6...")
func DecodeAddress(s string) (*Address, uint64, error) {
	raw, err := base58Decode(s)
	if err != nil {
@ -117,6 +118,7 @@ func DecodeAddress(s string) (*Address, uint64, error) {
	copy(addr.SpendPublicKey[:], remaining[0:32])
	copy(addr.ViewPublicKey[:], remaining[32:64])
	addr.Flags = remaining[64]
+	addr.Prefix = prefix
	return addr, prefix, nil
}
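The change above replaces a stubbed `IsIntegrated` (which always returned false) by recording the decode-time prefix on the struct. The pattern in miniature, with hypothetical placeholder prefix values rather than the real config constants:

```go
package main

import "fmt"

// Illustrative prefix values; the real constants live in the config package.
const (
	addressPrefix           uint64 = 0x2404 // hypothetical standard prefix
	integratedAddressPrefix uint64 = 0x2406 // hypothetical integrated prefix
)

// address records the prefix it was decoded with, as Address.Prefix now does,
// so integrated-ness can be answered without re-threading the decode result.
type address struct {
	prefix uint64
}

func (a address) isIntegrated() bool { return a.prefix == integratedAddressPrefix }

func main() {
	standard := address{prefix: addressPrefix}
	integrated := address{prefix: integratedAddressPrefix}
	fmt.Println(standard.isIntegrated(), integrated.isIntegrated()) // false true
}
```

Storing the prefix at decode time trades one extra struct field for a self-contained query, which is why the `json:"-"` tag keeps it out of serialised forms.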


@ -65,6 +65,10 @@ func TestAddressEncodeDecodeRoundTrip_Good(t *testing.T) {
			if decoded.Flags != original.Flags {
				t.Errorf("Flags mismatch: got 0x%02x, want 0x%02x", decoded.Flags, original.Flags)
			}
+			if decoded.IsIntegrated() != IsIntegratedPrefix(tt.prefix) {
+				t.Errorf("IsIntegrated mismatch: got %v, want %v", decoded.IsIntegrated(), IsIntegratedPrefix(tt.prefix))
+			}
		})
	}
}
@ -103,6 +107,27 @@ func TestIsIntegratedPrefix_Good(t *testing.T) {
	}
}
func TestAddressIsIntegrated_Good(t *testing.T) {
decoded, prefix, err := DecodeAddress(makeTestAddress(0x00).Encode(config.IntegratedAddressPrefix))
if err != nil {
t.Fatalf("DecodeAddress failed: %v", err)
}
if prefix != config.IntegratedAddressPrefix {
t.Fatalf("prefix mismatch: got 0x%x, want 0x%x", prefix, config.IntegratedAddressPrefix)
}
if !decoded.IsIntegrated() {
t.Fatal("decoded integrated address should report IsIntegrated() == true")
}
standard, _, err := DecodeAddress(makeTestAddress(0x00).Encode(config.AddressPrefix))
if err != nil {
t.Fatalf("DecodeAddress failed: %v", err)
}
if standard.IsIntegrated() {
t.Fatal("decoded standard address should report IsIntegrated() == false")
}
}
func TestDecodeAddress_Bad(t *testing.T) {
	tests := []struct {
		name string

types/asset.go (new file, 143 lines)

@ -0,0 +1,143 @@
// Copyright (c) 2017-2026 Lethean (https://lt.hn)
//
// Licensed under the European Union Public Licence (EUPL) version 1.2.
// SPDX-License-Identifier: EUPL-1.2
package types
import (
"fmt"
"unicode/utf8"
coreerr "dappco.re/go/core/log"
)
// AssetDescriptorOperationTag is the wire tag for asset_descriptor_operation
// extra variants.
const AssetDescriptorOperationTag uint8 = 40
// Asset operation types used by the HF5 asset_descriptor_operation variant.
const (
AssetOpRegister uint8 = 0 // deploy new asset
AssetOpEmit uint8 = 1 // emit additional supply
AssetOpUpdate uint8 = 2 // update asset metadata
AssetOpBurn uint8 = 3 // burn supply with proof
AssetOpPublicBurn uint8 = 4 // burn supply publicly
)
// AssetDescriptorBase holds the core asset metadata referenced by
// asset_descriptor_operation extra variants.
type AssetDescriptorBase struct {
Ticker string
FullName string
TotalMaxSupply uint64
CurrentSupply uint64
DecimalPoint uint8
MetaInfo string
OwnerKey PublicKey
Etc []byte
}
// Validate checks that the base asset metadata is structurally valid.
//
// base := types.AssetDescriptorBase{Ticker: "LTHN", FullName: "Lethean", TotalMaxSupply: 1_000_000, OwnerKey: ownerPub}
// if err := base.Validate(); err != nil { ... }
func (base AssetDescriptorBase) Validate() error {
tickerLen := utf8.RuneCountInString(base.Ticker)
fullNameLen := utf8.RuneCountInString(base.FullName)
if base.TotalMaxSupply == 0 {
return coreerr.E("AssetDescriptorBase.Validate", "total max supply must be non-zero", nil)
}
if base.CurrentSupply > base.TotalMaxSupply {
return coreerr.E("AssetDescriptorBase.Validate", fmt.Sprintf("current supply %d exceeds max supply %d", base.CurrentSupply, base.TotalMaxSupply), nil)
}
if tickerLen == 0 || tickerLen > 6 {
return coreerr.E("AssetDescriptorBase.Validate", fmt.Sprintf("ticker length %d out of range [1,6]", tickerLen), nil)
}
if fullNameLen == 0 || fullNameLen > 64 {
return coreerr.E("AssetDescriptorBase.Validate", fmt.Sprintf("full name length %d out of range [1,64]", fullNameLen), nil)
}
if base.OwnerKey.IsZero() {
return coreerr.E("AssetDescriptorBase.Validate", "owner key must be non-zero", nil)
}
return nil
}
// AssetDescriptorOperation represents a deploy/emit/update/burn operation.
// The wire format is parsed in wire/ as an opaque blob for round-tripping.
type AssetDescriptorOperation struct {
Version uint8
OperationType uint8
Descriptor *AssetDescriptorBase
AssetID Hash
AmountToEmit uint64
AmountToBurn uint64
Etc []byte
}
// Validate checks that the operation is structurally valid for HF5 parsing.
//
// op := types.AssetDescriptorOperation{Version: 1, OperationType: types.AssetOpRegister, Descriptor: &base}
// if err := op.Validate(); err != nil { ... }
func (op AssetDescriptorOperation) Validate() error {
switch op.Version {
case 0, 1:
default:
return coreerr.E("AssetDescriptorOperation.Validate", fmt.Sprintf("unsupported version %d", op.Version), nil)
}
switch op.OperationType {
case AssetOpRegister:
if !op.AssetID.IsZero() {
return coreerr.E("AssetDescriptorOperation.Validate", "register operation must not carry asset id", nil)
}
if op.Descriptor == nil {
return coreerr.E("AssetDescriptorOperation.Validate", "register operation missing descriptor", nil)
}
if err := op.Descriptor.Validate(); err != nil {
return err
}
if op.AmountToEmit != 0 || op.AmountToBurn != 0 {
return coreerr.E("AssetDescriptorOperation.Validate", "register operation must not include emission or burn amounts", nil)
}
case AssetOpEmit:
if op.AssetID.IsZero() {
return coreerr.E("AssetDescriptorOperation.Validate", "emit operation must carry asset id", nil)
}
if op.AmountToEmit == 0 {
return coreerr.E("AssetDescriptorOperation.Validate", "emit operation has zero amount", nil)
}
if op.Descriptor != nil {
return coreerr.E("AssetDescriptorOperation.Validate", "emit operation must not carry descriptor", nil)
}
case AssetOpUpdate:
if op.AssetID.IsZero() {
return coreerr.E("AssetDescriptorOperation.Validate", "update operation must carry asset id", nil)
}
if op.Descriptor == nil {
return coreerr.E("AssetDescriptorOperation.Validate", "update operation missing descriptor", nil)
}
if err := op.Descriptor.Validate(); err != nil {
return err
}
if op.AmountToEmit != 0 || op.AmountToBurn != 0 {
return coreerr.E("AssetDescriptorOperation.Validate", "update operation must not include emission or burn amounts", nil)
}
case AssetOpBurn, AssetOpPublicBurn:
if op.AssetID.IsZero() {
return coreerr.E("AssetDescriptorOperation.Validate", "burn operation must carry asset id", nil)
}
if op.AmountToBurn == 0 {
return coreerr.E("AssetDescriptorOperation.Validate", "burn operation has zero amount", nil)
}
if op.Descriptor != nil {
return coreerr.E("AssetDescriptorOperation.Validate", "burn operation must not carry descriptor", nil)
}
default:
return coreerr.E("AssetDescriptorOperation.Validate", fmt.Sprintf("unsupported operation type %d", op.OperationType), nil)
}
return nil
}

types/asset_test.go Normal file

@@ -0,0 +1,140 @@
// Copyright (c) 2017-2026 Lethean (https://lt.hn)
//
// Licensed under the European Union Public Licence (EUPL) version 1.2.
// SPDX-License-Identifier: EUPL-1.2
package types
import "testing"
func TestAssetOperationConstants_Good(t *testing.T) {
tests := []struct {
name string
got uint8
want uint8
}{
{"register", AssetOpRegister, 0},
{"emit", AssetOpEmit, 1},
{"update", AssetOpUpdate, 2},
{"burn", AssetOpBurn, 3},
{"public_burn", AssetOpPublicBurn, 4},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if tt.got != tt.want {
t.Fatalf("got %d, want %d", tt.got, tt.want)
}
})
}
}
func TestAssetDescriptorTypes_Good(t *testing.T) {
base := AssetDescriptorBase{
Ticker: "LTHN",
FullName: "Lethean",
TotalMaxSupply: 1000000,
CurrentSupply: 0,
DecimalPoint: 12,
MetaInfo: "{}",
OwnerKey: PublicKey{1},
Etc: []byte{1, 2, 3},
}
op := AssetDescriptorOperation{
Version: 1,
OperationType: AssetOpRegister,
Descriptor: &base,
AssetID: Hash{1},
AmountToEmit: 100,
AmountToBurn: 10,
Etc: []byte{4, 5, 6},
}
if op.Descriptor == nil || op.Descriptor.Ticker != "LTHN" {
t.Fatalf("unexpected descriptor: %+v", op.Descriptor)
}
if op.OperationType != AssetOpRegister || op.Version != 1 {
t.Fatalf("unexpected operation: %+v", op)
}
}
func TestAssetDescriptorBaseValidate_Good(t *testing.T) {
base := AssetDescriptorBase{
Ticker: "LTHN",
FullName: "Lethean",
TotalMaxSupply: 1000000,
CurrentSupply: 0,
DecimalPoint: 12,
MetaInfo: "{}",
OwnerKey: PublicKey{1},
}
if err := base.Validate(); err != nil {
t.Fatalf("Validate() error = %v", err)
}
}
func TestAssetDescriptorBaseValidate_Bad(t *testing.T) {
tests := []struct {
name string
base AssetDescriptorBase
}{
{"zero_supply", AssetDescriptorBase{Ticker: "LTHN", FullName: "Lethean", OwnerKey: PublicKey{1}}},
{"too_many_current", AssetDescriptorBase{Ticker: "LTHN", FullName: "Lethean", TotalMaxSupply: 10, CurrentSupply: 11, OwnerKey: PublicKey{1}}},
{"empty_ticker", AssetDescriptorBase{FullName: "Lethean", TotalMaxSupply: 10, OwnerKey: PublicKey{1}}},
{"empty_name", AssetDescriptorBase{Ticker: "LTHN", TotalMaxSupply: 10, OwnerKey: PublicKey{1}}},
{"zero_owner", AssetDescriptorBase{Ticker: "LTHN", FullName: "Lethean", TotalMaxSupply: 10}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if err := tt.base.Validate(); err == nil {
t.Fatal("Validate() error = nil, want error")
}
})
}
}
func TestAssetDescriptorOperationValidate_Good(t *testing.T) {
base := AssetDescriptorBase{
Ticker: "LTHN",
FullName: "Lethean",
TotalMaxSupply: 1000000,
CurrentSupply: 0,
DecimalPoint: 12,
MetaInfo: "{}",
OwnerKey: PublicKey{1},
}
op := AssetDescriptorOperation{
Version: 1,
OperationType: AssetOpRegister,
Descriptor: &base,
}
if err := op.Validate(); err != nil {
t.Fatalf("Validate() error = %v", err)
}
}
func TestAssetDescriptorOperationValidate_Bad(t *testing.T) {
tests := []struct {
name string
op AssetDescriptorOperation
}{
{"unsupported_version", AssetDescriptorOperation{Version: 2, OperationType: AssetOpRegister}},
{"register_missing_descriptor", AssetDescriptorOperation{Version: 1, OperationType: AssetOpRegister}},
{"emit_zero_amount", AssetDescriptorOperation{Version: 1, OperationType: AssetOpEmit, AssetID: Hash{1}}},
{"update_missing_descriptor", AssetDescriptorOperation{Version: 1, OperationType: AssetOpUpdate, AssetID: Hash{1}}},
{"burn_zero_amount", AssetDescriptorOperation{Version: 1, OperationType: AssetOpBurn, AssetID: Hash{1}}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if err := tt.op.Validate(); err == nil {
t.Fatal("Validate() error = nil, want error")
}
})
}
}


@@ -121,6 +121,15 @@ type TxOutTarget interface {
TargetType() uint8
}
// AsTxOutToKey returns target as a TxOutToKey when it is a standard
// transparent output target.
//
// toKey, ok := types.AsTxOutToKey(bare.Target)
func AsTxOutToKey(target TxOutTarget) (TxOutToKey, bool) {
v, ok := target.(TxOutToKey)
return v, ok
}
// TxOutToKey is the txout_to_key target variant. On the wire it is
// serialised as a 33-byte packed blob: 32-byte public key + 1-byte mix_attr.
type TxOutToKey struct {
@@ -219,7 +228,7 @@ func (t TxInputHTLC) InputType() uint8 { return InputTypeHTLC }
// TxInputMultisig spends from a multisig output (HF1+).
type TxInputMultisig struct {
Amount uint64
MultisigOutID Hash // 32-byte hash identifying the multisig output
SigsCount uint64
EtcDetails []byte // opaque variant vector
}
@@ -237,6 +246,19 @@ type TxOutputBare struct {
Target TxOutTarget
}
// SpendKey returns the standard transparent spend key when the target is
// TxOutToKey. Callers that only care about transparent key outputs can use
// this instead of repeating a type assertion.
//
// if key, ok := bareOutput.SpendKey(); ok { /* use key */ }
func (t TxOutputBare) SpendKey() (PublicKey, bool) {
target, ok := AsTxOutToKey(t.Target)
if !ok {
return PublicKey{}, false
}
return target.Key, true
}
// OutputType returns the wire variant tag for bare outputs.
func (t TxOutputBare) OutputType() uint8 { return OutputTypeBare }


@@ -14,6 +14,46 @@ func TestTxOutToKey_TargetType_Good(t *testing.T) {
}
}
func TestAsTxOutToKey_Good(t *testing.T) {
target, ok := AsTxOutToKey(TxOutToKey{Key: PublicKey{1}, MixAttr: 7})
if !ok {
t.Fatal("AsTxOutToKey: expected true for TxOutToKey target")
}
if target.Key != (PublicKey{1}) {
t.Errorf("Key: got %x, want %x", target.Key, PublicKey{1})
}
if target.MixAttr != 7 {
t.Errorf("MixAttr: got %d, want %d", target.MixAttr, 7)
}
}
func TestAsTxOutToKey_Bad(t *testing.T) {
if _, ok := AsTxOutToKey(TxOutHTLC{}); ok {
t.Fatal("AsTxOutToKey: expected false for non-to-key target")
}
}
func TestTxOutputBare_SpendKey_Good(t *testing.T) {
out := TxOutputBare{
Amount: 10,
Target: TxOutToKey{Key: PublicKey{0xAA, 0xBB}, MixAttr: 3},
}
key, ok := out.SpendKey()
if !ok {
t.Fatal("SpendKey: expected true for TxOutToKey target")
}
if key != (PublicKey{0xAA, 0xBB}) {
t.Errorf("SpendKey: got %x, want %x", key, PublicKey{0xAA, 0xBB})
}
}
func TestTxOutputBare_SpendKey_Bad(t *testing.T) {
out := TxOutputBare{Target: TxOutHTLC{Expiration: 100}}
if key, ok := out.SpendKey(); ok || key != (PublicKey{}) {
t.Fatalf("SpendKey: got (%x, %v), want zero key and false", key, ok)
}
}
func TestTxOutMultisig_TargetType_Good(t *testing.T) {
var target TxOutTarget = TxOutMultisig{MinimumSigs: 2, Keys: []PublicKey{{1}, {2}}}
if target.TargetType() != TargetTypeMultisig {


@@ -23,6 +23,7 @@ import (
store "dappco.re/go/core/store"
"dappco.re/go/core/blockchain/config"
"dappco.re/go/core/blockchain/crypto"
"dappco.re/go/core/blockchain/types"
)
@@ -107,6 +108,7 @@ func (a *Account) Address() types.Address {
return types.Address{
SpendPublicKey: a.SpendPublicKey,
ViewPublicKey: a.ViewPublicKey,
Prefix: config.AddressPrefix,
}
}


@@ -52,33 +52,33 @@ func (s *V1Scanner) ScanTransaction(tx *types.Transaction, txHash types.Hash,
isCoinbase := len(tx.Vin) > 0 && tx.Vin[0].InputType() == types.InputTypeGenesis
var transfers []Transfer
for i, out := range tx.Vout {
for i, output := range tx.Vout {
bare, ok := out.(types.TxOutputBare)
bare, ok := output.(types.TxOutputBare)
if !ok {
continue
}
expectedPub, err := crypto.DerivePublicKey(
expectedPublicKey, err := crypto.DerivePublicKey(
derivation, uint64(i), [32]byte(s.account.SpendPublicKey))
if err != nil {
continue
}
toKey, ok := bare.Target.(types.TxOutToKey)
targetKey, ok := bare.SpendKey()
if !ok {
continue
}
if types.PublicKey(expectedPub) != toKey.Key {
if types.PublicKey(expectedPublicKey) != targetKey {
continue
}
ephSec, err := crypto.DeriveSecretKey(
ephemeralSecretKey, err := crypto.DeriveSecretKey(
derivation, uint64(i), [32]byte(s.account.SpendSecretKey))
if err != nil {
continue
}
ki, err := crypto.GenerateKeyImage(expectedPub, ephSec)
keyImage, err := crypto.GenerateKeyImage(expectedPublicKey, ephemeralSecretKey)
if err != nil {
continue
}
@@ -89,10 +89,10 @@ func (s *V1Scanner) ScanTransaction(tx *types.Transaction, txHash types.Hash,
Amount: bare.Amount,
BlockHeight: blockHeight,
EphemeralKey: KeyPair{
Public: types.PublicKey(expectedPub),
Public: types.PublicKey(expectedPublicKey),
Secret: types.SecretKey(ephSec),
Secret: types.SecretKey(ephemeralSecretKey),
},
KeyImage: types.KeyImage(ki),
KeyImage: types.KeyImage(keyImage),
Coinbase: isCoinbase,
UnlockTime: extra.UnlockTime,
})


@@ -122,17 +122,23 @@ func (w *Wallet) scanTx(tx *types.Transaction, blockHeight uint64) error {
// Check key images for spend detection.
for _, vin := range tx.Vin {
toKey, ok := vin.(types.TxInputToKey)
if !ok {
var keyImage types.KeyImage
switch v := vin.(type) {
case types.TxInputToKey:
keyImage = v.KeyImage
case types.TxInputHTLC:
keyImage = v.KeyImage
default:
continue
}
// Try to mark any matching transfer as spent.
tr, err := getTransfer(w.store, toKey.KeyImage)
tr, err := getTransfer(w.store, keyImage)
if err != nil {
continue // not our transfer
}
if !tr.Spent {
markTransferSpent(w.store, toKey.KeyImage, blockHeight)
markTransferSpent(w.store, keyImage, blockHeight)
}
}


@@ -12,11 +12,11 @@ package wallet
import (
"testing"
store "dappco.re/go/core/store"
"dappco.re/go/core/blockchain/chain"
"dappco.re/go/core/blockchain/crypto"
"dappco.re/go/core/blockchain/types"
"dappco.re/go/core/blockchain/wire"
store "dappco.re/go/core/store"
)
func makeTestBlock(t *testing.T, height uint64, prevHash types.Hash,
@@ -133,3 +133,55 @@ func TestWalletTransfers(t *testing.T) {
t.Fatalf("got %d transfers, want 1", len(transfers))
}
}
func TestWalletScanTxMarksHTLCSpend(t *testing.T) {
s, err := store.New(":memory:")
if err != nil {
t.Fatal(err)
}
defer s.Close()
acc, err := GenerateAccount()
if err != nil {
t.Fatal(err)
}
ki := types.KeyImage{0x42}
if err := putTransfer(s, &Transfer{
KeyImage: ki,
Amount: 100,
BlockHeight: 1,
}); err != nil {
t.Fatal(err)
}
w := &Wallet{
store: s,
scanner: NewV1Scanner(acc),
}
tx := &types.Transaction{
Version: types.VersionPreHF4,
Vin: []types.TxInput{
types.TxInputHTLC{
Amount: 100,
KeyImage: ki,
},
},
}
if err := w.scanTx(tx, 10); err != nil {
t.Fatal(err)
}
got, err := getTransfer(s, ki)
if err != nil {
t.Fatal(err)
}
if !got.Spent {
t.Fatal("expected HTLC spend to be marked spent")
}
if got.SpentHeight != 10 {
t.Fatalf("spent height = %d, want 10", got.SpentHeight)
}
}


@@ -44,6 +44,8 @@ func BlockHashingBlob(b *types.Block) []byte {
// varint length prefix, so the actual hash input is:
//
// varint(len(blob)) || blob
//
// blockID := wire.BlockHash(&blk)
func BlockHash(b *types.Block) types.Hash {
blob := BlockHashingBlob(b)
var prefixed []byte
@@ -58,6 +60,8 @@ func BlockHash(b *types.Block) types.Hash {
// get_transaction_prefix_hash for all versions. The tx_id is always
// Keccak-256 of the serialised prefix (version + inputs + outputs + extra,
// in version-dependent field order).
//
// txID := wire.TransactionHash(&tx)
func TransactionHash(tx *types.Transaction) types.Hash {
return TransactionPrefixHash(tx)
}
@@ -65,6 +69,8 @@ func TransactionHash(tx *types.Transaction) types.Hash {
// TransactionPrefixHash computes the hash of a transaction prefix.
// This is Keccak-256 of the serialised transaction prefix (version + vin +
// vout + extra, in version-dependent order).
//
// prefixHash := wire.TransactionPrefixHash(&tx)
func TransactionPrefixHash(tx *types.Transaction) types.Hash {
var buf bytes.Buffer
enc := NewEncoder(&buf)


@@ -176,6 +176,9 @@ func encodeInputs(enc *Encoder, vin []types.TxInput) {
encodeKeyOffsets(enc, v.KeyOffsets)
enc.WriteBlob32((*[32]byte)(&v.KeyImage))
enc.WriteBytes(v.EtcDetails)
default:
enc.err = coreerr.E("encodeInputs", fmt.Sprintf("wire: unsupported input type %T", in), nil)
return
}
}
}
@@ -268,6 +271,63 @@ func decodeKeyOffsets(dec *Decoder) []types.TxOutRef {
return refs
}
func encodeTxOutTarget(enc *Encoder, target types.TxOutTarget, context string) bool {
switch t := target.(type) {
case types.TxOutToKey:
enc.WriteVariantTag(types.TargetTypeToKey)
enc.WriteBlob32((*[32]byte)(&t.Key))
enc.WriteUint8(t.MixAttr)
case types.TxOutMultisig:
enc.WriteVariantTag(types.TargetTypeMultisig)
enc.WriteVarint(t.MinimumSigs)
enc.WriteVarint(uint64(len(t.Keys)))
for i := range t.Keys {
enc.WriteBlob32((*[32]byte)(&t.Keys[i]))
}
case types.TxOutHTLC:
enc.WriteVariantTag(types.TargetTypeHTLC)
enc.WriteBlob32((*[32]byte)(&t.HTLCHash))
enc.WriteUint8(t.Flags)
enc.WriteVarint(t.Expiration)
enc.WriteBlob32((*[32]byte)(&t.PKRedeem))
enc.WriteBlob32((*[32]byte)(&t.PKRefund))
default:
enc.err = coreerr.E(context, fmt.Sprintf("wire: unsupported output target type %T", target), nil)
return false
}
return true
}
func decodeTxOutTarget(dec *Decoder, tag uint8, context string) types.TxOutTarget {
switch tag {
case types.TargetTypeToKey:
var t types.TxOutToKey
dec.ReadBlob32((*[32]byte)(&t.Key))
t.MixAttr = dec.ReadUint8()
return t
case types.TargetTypeMultisig:
var t types.TxOutMultisig
t.MinimumSigs = dec.ReadVarint()
keyCount := dec.ReadVarint()
t.Keys = make([]types.PublicKey, keyCount)
for i := uint64(0); i < keyCount; i++ {
dec.ReadBlob32((*[32]byte)(&t.Keys[i]))
}
return t
case types.TargetTypeHTLC:
var t types.TxOutHTLC
dec.ReadBlob32((*[32]byte)(&t.HTLCHash))
t.Flags = dec.ReadUint8()
t.Expiration = dec.ReadVarint()
dec.ReadBlob32((*[32]byte)(&t.PKRedeem))
dec.ReadBlob32((*[32]byte)(&t.PKRefund))
return t
default:
dec.err = coreerr.E(context, fmt.Sprintf("wire: unsupported target tag 0x%02x", tag), nil)
return nil
}
}
// --- outputs ---
// encodeOutputsV1 serialises v0/v1 outputs. In v0/v1, outputs are tx_out_bare
@@ -279,26 +339,12 @@ func encodeOutputsV1(enc *Encoder, vout []types.TxOutput) {
case types.TxOutputBare:
enc.WriteVarint(v.Amount)
// Target is a variant (txout_target_v)
switch t := v.Target.(type) {
case types.TxOutToKey:
enc.WriteVariantTag(types.TargetTypeToKey)
enc.WriteBlob32((*[32]byte)(&t.Key))
enc.WriteUint8(t.MixAttr)
case types.TxOutMultisig:
enc.WriteVariantTag(types.TargetTypeMultisig)
enc.WriteVarint(t.MinimumSigs)
enc.WriteVarint(uint64(len(t.Keys)))
for k := range t.Keys {
enc.WriteBlob32((*[32]byte)(&t.Keys[k]))
}
case types.TxOutHTLC:
enc.WriteVariantTag(types.TargetTypeHTLC)
enc.WriteBlob32((*[32]byte)(&t.HTLCHash))
enc.WriteUint8(t.Flags)
enc.WriteVarint(t.Expiration)
enc.WriteBlob32((*[32]byte)(&t.PKRedeem))
enc.WriteBlob32((*[32]byte)(&t.PKRefund))
}
if !encodeTxOutTarget(enc, v.Target, "encodeOutputsV1") {
return
}
default:
enc.err = coreerr.E("encodeOutputsV1", fmt.Sprintf("wire: unsupported output type %T", out), nil)
return
}
}
}
@@ -316,31 +362,8 @@ func decodeOutputsV1(dec *Decoder) []types.TxOutput {
if dec.Err() != nil {
return vout
}
switch tag {
case types.TargetTypeToKey:
var t types.TxOutToKey
dec.ReadBlob32((*[32]byte)(&t.Key))
t.MixAttr = dec.ReadUint8()
out.Target = t
case types.TargetTypeMultisig:
var t types.TxOutMultisig
t.MinimumSigs = dec.ReadVarint()
keyCount := dec.ReadVarint()
t.Keys = make([]types.PublicKey, keyCount)
for k := uint64(0); k < keyCount; k++ {
dec.ReadBlob32((*[32]byte)(&t.Keys[k]))
}
out.Target = t
case types.TargetTypeHTLC:
var t types.TxOutHTLC
dec.ReadBlob32((*[32]byte)(&t.HTLCHash))
t.Flags = dec.ReadUint8()
t.Expiration = dec.ReadVarint()
dec.ReadBlob32((*[32]byte)(&t.PKRedeem))
dec.ReadBlob32((*[32]byte)(&t.PKRefund))
out.Target = t
default:
dec.err = coreerr.E("decodeOutputsV1", fmt.Sprintf("wire: unsupported target tag 0x%02x", tag), nil)
return vout
}
out.Target = decodeTxOutTarget(dec, tag, "decodeOutputsV1")
if dec.Err() != nil {
return vout
}
vout = append(vout, out)
@@ -356,25 +379,8 @@ func encodeOutputsV2(enc *Encoder, vout []types.TxOutput) {
switch v := out.(type) {
case types.TxOutputBare:
enc.WriteVarint(v.Amount)
switch t := v.Target.(type) {
case types.TxOutToKey:
enc.WriteVariantTag(types.TargetTypeToKey)
enc.WriteBlob32((*[32]byte)(&t.Key))
enc.WriteUint8(t.MixAttr)
case types.TxOutMultisig:
enc.WriteVariantTag(types.TargetTypeMultisig)
enc.WriteVarint(t.MinimumSigs)
enc.WriteVarint(uint64(len(t.Keys)))
for k := range t.Keys {
enc.WriteBlob32((*[32]byte)(&t.Keys[k]))
}
case types.TxOutHTLC:
enc.WriteVariantTag(types.TargetTypeHTLC)
enc.WriteBlob32((*[32]byte)(&t.HTLCHash))
enc.WriteUint8(t.Flags)
enc.WriteVarint(t.Expiration)
enc.WriteBlob32((*[32]byte)(&t.PKRedeem))
enc.WriteBlob32((*[32]byte)(&t.PKRefund))
}
if !encodeTxOutTarget(enc, v.Target, "encodeOutputsV2") {
return
}
case types.TxOutputZarcanum:
enc.WriteBlob32((*[32]byte)(&v.StealthAddress))
@@ -383,6 +389,9 @@ func encodeOutputsV2(enc *Encoder, vout []types.TxOutput) {
enc.WriteBlob32((*[32]byte)(&v.BlindedAssetID))
enc.WriteUint64LE(v.EncryptedAmount)
enc.WriteUint8(v.MixAttr)
default:
enc.err = coreerr.E("encodeOutputsV2", fmt.Sprintf("wire: unsupported output type %T", out), nil)
return
}
}
}
@@ -403,31 +412,8 @@ func decodeOutputsV2(dec *Decoder) []types.TxOutput {
var out types.TxOutputBare
out.Amount = dec.ReadVarint()
targetTag := dec.ReadVariantTag()
switch targetTag {
case types.TargetTypeToKey:
var t types.TxOutToKey
dec.ReadBlob32((*[32]byte)(&t.Key))
t.MixAttr = dec.ReadUint8()
out.Target = t
case types.TargetTypeMultisig:
var t types.TxOutMultisig
t.MinimumSigs = dec.ReadVarint()
keyCount := dec.ReadVarint()
t.Keys = make([]types.PublicKey, keyCount)
for k := uint64(0); k < keyCount; k++ {
dec.ReadBlob32((*[32]byte)(&t.Keys[k]))
}
out.Target = t
case types.TargetTypeHTLC:
var t types.TxOutHTLC
dec.ReadBlob32((*[32]byte)(&t.HTLCHash))
t.Flags = dec.ReadUint8()
t.Expiration = dec.ReadVarint()
dec.ReadBlob32((*[32]byte)(&t.PKRedeem))
dec.ReadBlob32((*[32]byte)(&t.PKRefund))
out.Target = t
default:
dec.err = coreerr.E("decodeOutputsV2", fmt.Sprintf("wire: unsupported target tag 0x%02x", targetTag), nil)
return vout
}
out.Target = decodeTxOutTarget(dec, targetTag, "decodeOutputsV2")
if dec.Err() != nil {
return vout
}
vout = append(vout, out)
@@ -531,10 +517,10 @@ const (
tagZarcanumSig = 45 // zarcanum_sig — complex
// Asset operation tags (HF5 confidential assets).
tagAssetDescriptorOperation = 40 // asset_descriptor_operation
tagAssetDescriptorOperation = types.AssetDescriptorOperationTag // asset_descriptor_operation
tagAssetOperationProof = 49 // asset_operation_proof
tagAssetOperationOwnershipProof = 50 // asset_operation_ownership_proof
tagAssetOperationOwnershipProofETH = 51 // asset_operation_ownership_proof_eth
// Proof variant tags (proof_v).
tagZCAssetSurjectionProof = 46 // vector<BGE_proof_s>
@@ -586,6 +572,12 @@ func readVariantElementData(dec *Decoder, tag uint8) []byte {
case tagTxPayer, tagTxReceiver:
return readTxPayer(dec)
// Alias types
case tagExtraAliasEntryOld:
return readExtraAliasEntryOld(dec)
case tagExtraAliasEntry:
return readExtraAliasEntry(dec)
// Composite types
case tagExtraAttachmentInfo:
return readExtraAttachmentInfo(dec)
@@ -794,6 +786,96 @@ func readTxServiceAttachment(dec *Decoder) []byte {
return raw
}
// readExtraAliasEntryOld reads extra_alias_entry_old (tag 20).
// Structure: alias(string) + address(spend_key(32) + view_key(32)) +
// text_comment(string) + sign(vector of generic_schnorr_sig_s, each 64 bytes).
func readExtraAliasEntryOld(dec *Decoder) []byte {
var raw []byte
// m_alias: string
alias := readStringBlob(dec)
if dec.err != nil {
return nil
}
raw = append(raw, alias...)
// m_address: spend_public_key(32) + view_public_key(32) = 64 bytes
addr := dec.ReadBytes(64)
if dec.err != nil {
return nil
}
raw = append(raw, addr...)
// m_text_comment: string
comment := readStringBlob(dec)
if dec.err != nil {
return nil
}
raw = append(raw, comment...)
// m_sign: vector<generic_schnorr_sig_s> (each is 2 scalars = 64 bytes)
v := readVariantVectorFixed(dec, 64)
if dec.err != nil {
return nil
}
raw = append(raw, v...)
return raw
}
// readExtraAliasEntry reads extra_alias_entry (tag 33).
// Structure: alias(string) + address(spend_key(32) + view_key(32) + optional flag) +
// text_comment(string) + sign(vector of generic_schnorr_sig_s, each 64 bytes) +
// view_key(optional secret_key, 32 bytes).
func readExtraAliasEntry(dec *Decoder) []byte {
var raw []byte
// m_alias: string
alias := readStringBlob(dec)
if dec.err != nil {
return nil
}
raw = append(raw, alias...)
// m_address: account_public_address with optional is_auditable flag
// Same wire format as tx_payer (tag 31): spend_key(32) + view_key(32) + optional
addr := readTxPayer(dec)
if dec.err != nil {
return nil
}
raw = append(raw, addr...)
// m_text_comment: string
comment := readStringBlob(dec)
if dec.err != nil {
return nil
}
raw = append(raw, comment...)
// m_sign: vector<generic_schnorr_sig_s> (each is 2 scalars = 64 bytes)
v := readVariantVectorFixed(dec, 64)
if dec.err != nil {
return nil
}
raw = append(raw, v...)
// m_view_key: optional<crypto::secret_key> — uint8 marker + 32 bytes if present
marker := dec.ReadUint8()
if dec.err != nil {
return nil
}
raw = append(raw, marker)
if marker != 0 {
key := dec.ReadBytes(32)
if dec.err != nil {
return nil
}
raw = append(raw, key...)
}
return raw
}
// readSignedParts reads signed_parts (tag 17).
// Structure: n_outs (varint) + n_extras (varint).
func readSignedParts(dec *Decoder) []byte {
@@ -1043,113 +1125,7 @@ func readZarcanumSig(dec *Decoder) []byte {
// decimal_point(uint8) + meta_info(string) + owner_key(32 bytes) +
// etc(vector<uint8>).
func readAssetDescriptorOperation(dec *Decoder) []byte {
var raw []byte
raw, _ := parseAssetDescriptorOperation(dec)
// ver: uint8
ver := dec.ReadUint8()
if dec.err != nil {
return nil
}
raw = append(raw, ver)
// operation_type: uint8
opType := dec.ReadUint8()
if dec.err != nil {
return nil
}
raw = append(raw, opType)
// opt_asset_id: uint8 presence marker + 32 bytes if present
assetMarker := dec.ReadUint8()
if dec.err != nil {
return nil
}
raw = append(raw, assetMarker)
if assetMarker != 0 {
b := dec.ReadBytes(32)
if dec.err != nil {
return nil
}
raw = append(raw, b...)
}
// opt_descriptor: uint8 presence marker + descriptor if present
descMarker := dec.ReadUint8()
if dec.err != nil {
return nil
}
raw = append(raw, descMarker)
if descMarker != 0 {
// AssetDescriptorBase
// ticker: string
s := readStringBlob(dec)
if dec.err != nil {
return nil
}
raw = append(raw, s...)
// full_name: string
s = readStringBlob(dec)
if dec.err != nil {
return nil
}
raw = append(raw, s...)
// total_max_supply: uint64 LE
b := dec.ReadBytes(8)
if dec.err != nil {
return nil
}
raw = append(raw, b...)
// current_supply: uint64 LE
b = dec.ReadBytes(8)
if dec.err != nil {
return nil
}
raw = append(raw, b...)
// decimal_point: uint8
dp := dec.ReadUint8()
if dec.err != nil {
return nil
}
raw = append(raw, dp)
// meta_info: string
s = readStringBlob(dec)
if dec.err != nil {
return nil
}
raw = append(raw, s...)
// owner_key: 32 bytes
b = dec.ReadBytes(32)
if dec.err != nil {
return nil
}
raw = append(raw, b...)
// etc: vector<uint8>
v := readVariantVectorFixed(dec, 1)
if dec.err != nil {
return nil
}
raw = append(raw, v...)
}
// amount_to_emit: uint64 LE
b := dec.ReadBytes(8)
if dec.err != nil {
return nil
}
raw = append(raw, b...)
// amount_to_burn: uint64 LE
b = dec.ReadBytes(8)
if dec.err != nil {
return nil
}
raw = append(raw, b...)
// etc: vector<uint8>
v := readVariantVectorFixed(dec, 1)
if dec.err != nil {
return nil
}
raw = append(raw, v...)
return raw return raw
} }


@@ -987,3 +987,225 @@ func TestHTLCTargetV2RoundTrip_Good(t *testing.T) {
t.Errorf("round-trip mismatch:\n got: %x\n want: %x", rtBuf.Bytes(), buf.Bytes())
}
}
type unsupportedTxInput struct{}
func (unsupportedTxInput) InputType() uint8 { return 250 }
type unsupportedTxOutTarget struct{}
func (unsupportedTxOutTarget) TargetType() uint8 { return 250 }
type unsupportedTxOutput struct{}
func (unsupportedTxOutput) OutputType() uint8 { return 250 }
func TestEncodeTransaction_UnsupportedInput_Bad(t *testing.T) {
tx := types.Transaction{
Version: 1,
Vin: []types.TxInput{unsupportedTxInput{}},
Vout: []types.TxOutput{types.TxOutputBare{
Amount: 1,
Target: types.TxOutToKey{Key: types.PublicKey{1}},
}},
Extra: EncodeVarint(0),
}
var buf bytes.Buffer
enc := NewEncoder(&buf)
EncodeTransactionPrefix(enc, &tx)
if enc.Err() == nil {
t.Fatal("expected encode error for unsupported input type")
}
}
func TestEncodeTransaction_UnsupportedOutputTarget_Bad(t *testing.T) {
tests := []struct {
name string
version uint64
}{
{name: "v1", version: types.VersionPreHF4},
{name: "v2", version: types.VersionPostHF4},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tx := types.Transaction{
Version: tt.version,
Vin: []types.TxInput{types.TxInputGenesis{Height: 1}},
Vout: []types.TxOutput{types.TxOutputBare{
Amount: 1,
Target: unsupportedTxOutTarget{},
}},
Extra: EncodeVarint(0),
}
var buf bytes.Buffer
enc := NewEncoder(&buf)
EncodeTransactionPrefix(enc, &tx)
if enc.Err() == nil {
t.Fatal("expected encode error for unsupported output target type")
}
})
}
}
func TestEncodeTransaction_UnsupportedOutputType_Bad(t *testing.T) {
tests := []struct {
name string
version uint64
}{
{name: "v1", version: types.VersionPreHF4},
{name: "v2", version: types.VersionPostHF4},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tx := types.Transaction{
Version: tt.version,
Vin: []types.TxInput{types.TxInputGenesis{Height: 1}},
Vout: []types.TxOutput{unsupportedTxOutput{}},
Extra: EncodeVarint(0),
}
var buf bytes.Buffer
enc := NewEncoder(&buf)
EncodeTransactionPrefix(enc, &tx)
if enc.Err() == nil {
t.Fatal("expected encode error for unsupported output type")
}
})
}
}
// TestExtraAliasEntryOldRoundTrip_Good verifies that a variant vector
// containing an extra_alias_entry_old (tag 20) round-trips through
// decodeRawVariantVector without error.
func TestExtraAliasEntryOldRoundTrip_Good(t *testing.T) {
// Build a synthetic variant vector with one extra_alias_entry_old element.
// Format: count(1) + tag(20) + alias(string) + address(64 bytes) +
// text_comment(string) + sign(vector of 64-byte sigs).
var raw []byte
raw = append(raw, EncodeVarint(1)...) // 1 element
raw = append(raw, tagExtraAliasEntryOld)
// m_alias: "test.lthn"
alias := []byte("test.lthn")
raw = append(raw, EncodeVarint(uint64(len(alias)))...)
raw = append(raw, alias...)
// m_address: spend_key(32) + view_key(32) = 64 bytes
addr := make([]byte, 64)
for i := range addr {
addr[i] = byte(i)
}
raw = append(raw, addr...)
// m_text_comment: "hello"
comment := []byte("hello")
raw = append(raw, EncodeVarint(uint64(len(comment)))...)
raw = append(raw, comment...)
// m_sign: 1 signature (generic_schnorr_sig_s = 64 bytes)
raw = append(raw, EncodeVarint(1)...) // 1 signature
sig := make([]byte, 64)
for i := range sig {
sig[i] = byte(0xAA)
}
raw = append(raw, sig...)
// Decode and round-trip.
dec := NewDecoder(bytes.NewReader(raw))
decoded := decodeRawVariantVector(dec)
if dec.Err() != nil {
t.Fatalf("decode failed: %v", dec.Err())
}
if !bytes.Equal(decoded, raw) {
t.Fatalf("round-trip mismatch: got %d bytes, want %d bytes", len(decoded), len(raw))
}
}
// TestExtraAliasEntryRoundTrip_Good verifies that a variant vector
// containing an extra_alias_entry (tag 33) round-trips through
// decodeRawVariantVector without error.
func TestExtraAliasEntryRoundTrip_Good(t *testing.T) {
// Build a synthetic variant vector with one extra_alias_entry element.
// Format: count(1) + tag(33) + alias(string) + address(tx_payer format) +
// text_comment(string) + sign(vector) + view_key(optional).
var raw []byte
raw = append(raw, EncodeVarint(1)...) // 1 element
raw = append(raw, tagExtraAliasEntry)
// m_alias: "myalias"
alias := []byte("myalias")
raw = append(raw, EncodeVarint(uint64(len(alias)))...)
raw = append(raw, alias...)
// m_address: tx_payer format = spend_key(32) + view_key(32) + optional marker
addr := make([]byte, 64)
for i := range addr {
addr[i] = byte(i + 10)
}
raw = append(raw, addr...)
// is_auditable optional marker: 0 = not present
raw = append(raw, 0x00)
// m_text_comment: empty
raw = append(raw, EncodeVarint(0)...)
// m_sign: 0 signatures
raw = append(raw, EncodeVarint(0)...)
// m_view_key: optional, present (marker=1 + 32 bytes)
raw = append(raw, 0x01)
viewKey := make([]byte, 32)
for i := range viewKey {
viewKey[i] = byte(0xBB)
}
raw = append(raw, viewKey...)
// Decode and round-trip.
dec := NewDecoder(bytes.NewReader(raw))
decoded := decodeRawVariantVector(dec)
if dec.Err() != nil {
t.Fatalf("decode failed: %v", dec.Err())
}
if !bytes.Equal(decoded, raw) {
t.Fatalf("round-trip mismatch: got %d bytes, want %d bytes", len(decoded), len(raw))
}
}
// TestExtraAliasEntryNoViewKey_Good verifies extra_alias_entry with
// the optional view_key marker set to 0 (not present).
func TestExtraAliasEntryNoViewKey_Good(t *testing.T) {
var raw []byte
raw = append(raw, EncodeVarint(1)...) // 1 element
raw = append(raw, tagExtraAliasEntry)
// m_alias: "short"
alias := []byte("short")
raw = append(raw, EncodeVarint(uint64(len(alias)))...)
raw = append(raw, alias...)
// m_address: keys + no auditable flag
raw = append(raw, make([]byte, 64)...)
raw = append(raw, 0x00) // not auditable
// m_text_comment: empty
raw = append(raw, EncodeVarint(0)...)
// m_sign: 0 signatures
raw = append(raw, EncodeVarint(0)...)
// m_view_key: not present (marker=0)
raw = append(raw, 0x00)
dec := NewDecoder(bytes.NewReader(raw))
decoded := decodeRawVariantVector(dec)
if dec.Err() != nil {
t.Fatalf("decode failed: %v", dec.Err())
}
if !bytes.Equal(decoded, raw) {
t.Fatalf("round-trip mismatch: got %d bytes, want %d bytes", len(decoded), len(raw))
}
}


@@ -64,6 +64,23 @@ func TestReadAssetDescriptorOperation_Good(t *testing.T) {
if !bytes.Equal(got, blob) {
t.Fatalf("round-trip mismatch: got %d bytes, want %d bytes", len(got), len(blob))
}
op, err := DecodeAssetDescriptorOperation(blob)
if err != nil {
t.Fatalf("DecodeAssetDescriptorOperation failed: %v", err)
}
if op.Version != 1 || op.OperationType != 0 {
t.Fatalf("unexpected operation header: %+v", op)
}
if op.Descriptor == nil {
t.Fatal("expected descriptor to be present")
}
if op.Descriptor.Ticker != "LTHN" || op.Descriptor.FullName != "Lethean" {
t.Fatalf("unexpected descriptor contents: %+v", op.Descriptor)
}
if op.Descriptor.TotalMaxSupply != 1000000 || op.Descriptor.DecimalPoint != 12 {
t.Fatalf("unexpected descriptor values: %+v", op.Descriptor)
}
}
func TestReadAssetDescriptorOperation_Bad(t *testing.T) {
@@ -110,6 +127,23 @@ func TestReadAssetDescriptorOperationEmit_Good(t *testing.T) {
if !bytes.Equal(got, blob) {
t.Fatalf("round-trip mismatch: got %d bytes, want %d bytes", len(got), len(blob))
}
op, err := DecodeAssetDescriptorOperation(blob)
if err != nil {
t.Fatalf("DecodeAssetDescriptorOperation (emit) failed: %v", err)
}
if op.Version != 1 || op.OperationType != 1 {
t.Fatalf("unexpected operation header: %+v", op)
}
if op.Descriptor != nil {
t.Fatalf("emit operation should not carry descriptor: %+v", op)
}
if op.AmountToEmit != 500000 || op.AmountToBurn != 0 {
t.Fatalf("unexpected emit amounts: %+v", op)
}
if op.AssetID[0] != 0xAB || op.AssetID[31] != 0xAB {
t.Fatalf("unexpected asset id: %x", op.AssetID)
}
}
func TestVariantVectorWithTag40_Good(t *testing.T) {
@@ -136,6 +170,17 @@ func TestVariantVectorWithTag40_Good(t *testing.T) {
if !bytes.Equal(got, raw) {
t.Fatalf("round-trip mismatch: got %d bytes, want %d bytes", len(got), len(raw))
}
elements, err := DecodeVariantVector(raw)
if err != nil {
t.Fatalf("DecodeVariantVector failed: %v", err)
}
if len(elements) != 1 || elements[0].Tag != tagAssetDescriptorOperation {
t.Fatalf("unexpected elements: %+v", elements)
}
if !bytes.Equal(elements[0].Data, innerBlob) {
t.Fatalf("unexpected element payload length: got %d, want %d", len(elements[0].Data), len(innerBlob))
}
}
func buildAssetOperationProofBlob() []byte {
@@ -273,9 +318,9 @@ func TestV3TransactionRoundTrip_Good(t *testing.T) {
// version = 3
enc.WriteVarint(3)
// vin: 1 coinbase input
enc.WriteVarint(1) // input count
enc.WriteVariantTag(0) // txin_gen tag
enc.WriteVarint(201) // height
// extra: variant vector with 2 elements (public_key + zarcanum_tx_data_v1)
enc.WriteVarint(2)
@@ -289,13 +334,13 @@ func TestV3TransactionRoundTrip_Good(t *testing.T) {
// vout: 2 Zarcanum outputs
enc.WriteVarint(2)
for range 2 {
enc.WriteVariantTag(38) // OutputTypeZarcanum
enc.WriteBytes(make([]byte, 32)) // stealth_address
enc.WriteBytes(make([]byte, 32)) // concealing_point
enc.WriteBytes(make([]byte, 32)) // amount_commitment
enc.WriteBytes(make([]byte, 32)) // blinded_asset_id
enc.WriteUint64LE(0) // encrypted_amount
enc.WriteUint8(0) // mix_attr
}
// hardfork_id = 5

wire/variant.go (new file, 214 lines)

@ -0,0 +1,214 @@
// Copyright (c) 2017-2026 Lethean (https://lt.hn)
//
// Licensed under the European Union Public Licence (EUPL) version 1.2.
// SPDX-License-Identifier: EUPL-1.2
package wire
import (
"bytes"
"fmt"
coreerr "dappco.re/go/core/log"
"dappco.re/go/core/blockchain/types"
)
// VariantElement is one tagged element from a raw variant vector.
// Data contains the raw wire bytes for the element payload, without the tag.
type VariantElement struct {
Tag uint8
Data []byte
}
// DecodeVariantVector decodes a raw variant vector into tagged raw elements.
// It is useful for higher-level validation of raw transaction fields such as
// extra, attachment, signatures, and proofs.
func DecodeVariantVector(raw []byte) ([]VariantElement, error) {
dec := NewDecoder(bytes.NewReader(raw))
count := dec.ReadVarint()
if dec.Err() != nil {
return nil, dec.Err()
}
elems := make([]VariantElement, 0, int(count))
for i := uint64(0); i < count; i++ {
tag := dec.ReadUint8()
if dec.Err() != nil {
return nil, coreerr.E("DecodeVariantVector", fmt.Sprintf("read tag %d", i), dec.Err())
}
data := readVariantElementData(dec, tag)
if dec.Err() != nil {
return nil, coreerr.E("DecodeVariantVector", fmt.Sprintf("read element %d", i), dec.Err())
}
elems = append(elems, VariantElement{Tag: tag, Data: data})
}
return elems, nil
}
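The count + tag + payload layout that DecodeVariantVector walks can be sketched standalone. This is a minimal illustration of the wire shape, not the package's decoder: the hand-rolled varint reader and the example tag/payload are assumptions for demonstration, and the payload here is a string blob (varint length prefix followed by the bytes) as used by the tests above.

```go
package main

import "fmt"

// readVarint decodes a little-endian base-128 varint, returning the
// value and the number of bytes consumed (0 if the input is truncated).
func readVarint(b []byte) (uint64, int) {
	var v uint64
	for i, c := range b {
		v |= uint64(c&0x7F) << (7 * i)
		if c < 0x80 {
			return v, i + 1
		}
	}
	return 0, 0
}

func main() {
	// One element: count=1, tag=40, then a 3-byte string-blob payload.
	raw := []byte{0x01, 40, 0x03, 'a', 'b', 'c'}

	count, n := readVarint(raw)
	tag := raw[n]
	plen, m := readVarint(raw[n+1:])
	payload := raw[n+1+m : n+1+m+int(plen)]
	fmt.Println(count, tag, string(payload)) // 1 40 abc
}
```

A real decoder would dispatch on the tag to a per-type reader, as readVariantElementData does above; the sketch only shows why the tag byte must be consumed before the payload length can be read.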
// DecodeAssetDescriptorOperation decodes a raw asset_descriptor_operation
// payload into its typed representation.
func DecodeAssetDescriptorOperation(raw []byte) (types.AssetDescriptorOperation, error) {
dec := NewDecoder(bytes.NewReader(raw))
_, op := parseAssetDescriptorOperation(dec)
if dec.Err() != nil {
return types.AssetDescriptorOperation{}, coreerr.E("DecodeAssetDescriptorOperation", "decode asset descriptor operation", dec.Err())
}
return op, nil
}
// parseAssetDescriptorOperation is the single source of truth for both raw
// wire preservation and typed HF5 asset operation decoding.
func parseAssetDescriptorOperation(dec *Decoder) ([]byte, types.AssetDescriptorOperation) {
var raw []byte
var op types.AssetDescriptorOperation
appendByte := func(v uint8) {
raw = append(raw, v)
}
appendBytes := func(v []byte) {
raw = append(raw, v...)
}
op.Version = dec.ReadUint8()
if dec.Err() != nil {
return nil, types.AssetDescriptorOperation{}
}
appendByte(op.Version)
op.OperationType = dec.ReadUint8()
if dec.Err() != nil {
return nil, types.AssetDescriptorOperation{}
}
appendByte(op.OperationType)
assetMarker := dec.ReadUint8()
if dec.Err() != nil {
return nil, types.AssetDescriptorOperation{}
}
appendByte(assetMarker)
if assetMarker != 0 {
assetID := dec.ReadBytes(32)
if dec.Err() != nil {
return nil, types.AssetDescriptorOperation{}
}
copy(op.AssetID[:], assetID)
appendBytes(assetID)
}
descMarker := dec.ReadUint8()
if dec.Err() != nil {
return nil, types.AssetDescriptorOperation{}
}
appendByte(descMarker)
if descMarker != 0 {
desc := &types.AssetDescriptorBase{}
tickerRaw := readStringBlob(dec)
if dec.Err() != nil {
return nil, types.AssetDescriptorOperation{}
}
desc.Ticker = decodeStringBlob(tickerRaw)
appendBytes(tickerRaw)
fullNameRaw := readStringBlob(dec)
if dec.Err() != nil {
return nil, types.AssetDescriptorOperation{}
}
desc.FullName = decodeStringBlob(fullNameRaw)
appendBytes(fullNameRaw)
desc.TotalMaxSupply = dec.ReadUint64LE()
if dec.Err() != nil {
return nil, types.AssetDescriptorOperation{}
}
appendBytes(uint64LEBytes(desc.TotalMaxSupply))
desc.CurrentSupply = dec.ReadUint64LE()
if dec.Err() != nil {
return nil, types.AssetDescriptorOperation{}
}
appendBytes(uint64LEBytes(desc.CurrentSupply))
desc.DecimalPoint = dec.ReadUint8()
if dec.Err() != nil {
return nil, types.AssetDescriptorOperation{}
}
appendByte(desc.DecimalPoint)
metaInfoRaw := readStringBlob(dec)
if dec.Err() != nil {
return nil, types.AssetDescriptorOperation{}
}
desc.MetaInfo = decodeStringBlob(metaInfoRaw)
appendBytes(metaInfoRaw)
ownerKey := dec.ReadBytes(32)
if dec.Err() != nil {
return nil, types.AssetDescriptorOperation{}
}
copy(desc.OwnerKey[:], ownerKey)
appendBytes(ownerKey)
desc.Etc = readVariantVectorFixed(dec, 1)
if dec.Err() != nil {
return nil, types.AssetDescriptorOperation{}
}
appendBytes(desc.Etc)
op.Descriptor = desc
}
op.AmountToEmit = dec.ReadUint64LE()
if dec.Err() != nil {
return nil, types.AssetDescriptorOperation{}
}
appendBytes(uint64LEBytes(op.AmountToEmit))
op.AmountToBurn = dec.ReadUint64LE()
if dec.Err() != nil {
return nil, types.AssetDescriptorOperation{}
}
appendBytes(uint64LEBytes(op.AmountToBurn))
op.Etc = readVariantVectorFixed(dec, 1)
if dec.Err() != nil {
return nil, types.AssetDescriptorOperation{}
}
appendBytes(op.Etc)
return raw, op
}
func decodeStringBlob(raw []byte) string {
return string(raw[varintPrefixLen(raw):])
}
func varintPrefixLen(raw []byte) int {
n := 0
for n < len(raw) {
n++
if raw[n-1] < 0x80 {
return n
}
}
return len(raw)
}
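The helper pair above relies on the varint terminator rule: any byte below 0x80 ends the prefix. A standalone sketch of the same rule (the function name mirrors varintPrefixLen but is a rewrite, not the package's code):

```go
package main

import "fmt"

// prefixLen returns how many bytes the leading varint occupies.
// A byte below 0x80 terminates the varint; a truncated prefix
// consumes the whole slice, matching varintPrefixLen's fallback.
func prefixLen(raw []byte) int {
	for i, c := range raw {
		if c < 0x80 {
			return i + 1
		}
	}
	return len(raw)
}

func main() {
	// "hello" as a string blob: length 5 fits in one prefix byte.
	blob := []byte{0x05, 'h', 'e', 'l', 'l', 'o'}
	fmt.Println(string(blob[prefixLen(blob):])) // hello

	// A 300-byte payload needs a two-byte prefix: 300 = 0xAC 0x02.
	long := append([]byte{0xAC, 0x02}, make([]byte, 300)...)
	fmt.Println(prefixLen(long)) // 2
}
```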
func uint64LEBytes(v uint64) []byte {
return []byte{
byte(v),
byte(v >> 8),
byte(v >> 16),
byte(v >> 24),
byte(v >> 32),
byte(v >> 40),
byte(v >> 48),
byte(v >> 56),
}
}